[ { "msg_contents": "While working on something else, I noticed $SUBJECT, which I should\nhave updated in commit 27e1f1456. :-( There are two places that need\nto be updated, but in the first place the second one seemed a bit\nredundant to me, because it says the same thing as the first one, and\nis placed pretty close to the first one within 10 lines or so. So I\nrewrote the second one entirely into something much more simple like\nthe attached.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 6 Oct 2021 18:06:13 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: Obsolete comments in GetConnection()" }, { "msg_contents": "On Wed, Oct 6, 2021 at 2:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> While working on something else, I noticed $SUBJECT, which I should\n> have updated in commit 27e1f1456. :-( There are two places that need\n> to be updated, but in the first place the second one seemed a bit\n> redundant to me, because it says the same thing as the first one, and\n> is placed pretty close to the first one within 10 lines or so. So I\n> rewrote the second one entirely into something much more simple like\n> the attached.\n\n+1 for rewording the comments.
Here are my thoughts on the patch:\n\n1) Just to be consistent(we are using this word in the error message,\nand in other comments around there), how about\n+ * Determine whether to try to reestablish the connection.\ninstead of\n+ * Determine whether to try to remake the connection later.\n\n2) Just to be consistent, how about\n+ * cases where we're starting new transaction (not subtransaction),\nif a broken connection is\ninstead of\n+ * cases where we're out of all transactions, if a broken connection is\n\n3) IMO we don't need the word \"later\" here because we are immediately\nreestablishing the connection, if it is decided to do so.\n+ * Determine whether to try to remake the connection later.\nThe word \"later\" here in the comment below makes sense but not in the\nabove comment.\n+ * detected, we try to reestablish a new connection later.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 6 Oct 2021 15:06:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: Obsolete comments in GetConnection()" }, { "msg_contents": "Hi Bharath,\n\nOn Wed, Oct 6, 2021 at 6:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> +1 for rewording the comments. Here are my thoughts on the patch:\n>\n> 1) Just to be consistent(we are using this word in the error message,\n> and in other comments around there), how about\n> + * Determine whether to try to reestablish the connection.\n> instead of\n> + * Determine whether to try to remake the connection later.\n\nActually, we use the word “remake” as well in comments in\nconnection.c: e.g., “If the connection needs to be *remade* due to\ninvalidation, disconnect as soon as we're out of all transactions.” in\nGetConnection().
But I don’t have a strong opinion about that, so\nI’ll change the word as proposed.\n\n> 2) Just to be consistent, how about\n> + * cases where we're starting new transaction (not subtransaction),\n> if a broken connection is\n> instead of\n> + * cases where we're out of all transactions, if a broken connection is\n\nActually, I modified the comment to match existing comments like the\none mentioned above. I think the patch would actually be more\nconsistent.\n\n> 3) IMO we don't need the word \"later\" here because we are immediately\n> reestablishing the connection, if it is decided to do so.\n> + * Determine whether to try to remake the connection later.\n\nOk, I’ll drop the word “later”.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 6 Oct 2021 20:26:34 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: Obsolete comments in GetConnection()" }, { "msg_contents": "On Wed, Oct 6, 2021 at 4:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n> On Wed, Oct 6, 2021 at 6:37 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +1 for rewording the comments. Here are my thoughts on the patch:\n> >\n> > 1) Just to be consistent(we are using this word in the error message,\n> > and in other comments around there), how about\n> > + * Determine whether to try to reestablish the connection.\n> > instead of\n> > + * Determine whether to try to remake the connection later.\n>\n> Actually, we use the word “remake” as well in comments in\n> connection.c: e.g., “If the connection needs to be *remade* due to\n> invalidation, disconnect as soon as we're out of all transactions.” in\n> GetConnection().
But I don’t have a strong opinion about that, so\n> I’ll change the word as proposed.\n\nThanks.\n\n> > 2) Just to be consistent, how about\n> > + * cases where we're starting new transaction (not subtransaction),\n> > if a broken connection is\n> > instead of\n> > + * cases where we're out of all transactions, if a broken connection is\n>\n> Actually, I modified the comment to match existing comments like the\n> one mentioned above. I think the patch would actually be more\n> consistent.\n\nOkay.\n\n> > 3) IMO we don't need the word \"later\" here because we are immediately\n> > reestablishing the connection, if it is decided to do so.\n> > + * Determine whether to try to remake the connection later.\n>\n> Ok, I’ll drop the word “later”.\n\nThanks.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 6 Oct 2021 17:08:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: Obsolete comments in GetConnection()" }, { "msg_contents": "On Wed, Oct 6, 2021 at 8:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Oct 6, 2021 at 4:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Wed, Oct 6, 2021 at 6:37 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > +1 for rewording the comments. Here are my thoughts on the patch:\n> > >\n> > > 1) Just to be consistent(we are using this word in the error message,\n> > > and in other comments around there), how about\n> > > + * Determine whether to try to reestablish the connection.\n> > > instead of\n> > > + * Determine whether to try to remake the connection later.\n> >\n> > Actually, we use the word “remake” as well in comments in\n> > connection.c: e.g., “If the connection needs to be *remade* due to\n> > invalidation, disconnect as soon as we're out of all transactions.” in\n> > GetConnection().
But I don’t have a strong opinion about that, so\n> > I’ll change the word as proposed.\n>\n> Thanks.\n>\n> > > 2) Just to be consistent, how about\n> > > + * cases where we're starting new transaction (not subtransaction),\n> > > if a broken connection is\n> > > instead of\n> > > + * cases where we're out of all transactions, if a broken connection is\n> >\n> > Actually, I modified the comment to match existing comments like the\n> > one mentioned above. I think the patch would actually be more\n> > consistent.\n>\n> Okay.\n>\n> > > 3) IMO we don't need the word \"later\" here because we are immediately\n> > > reestablishing the connection, if it is decided to do so.\n> > > + * Determine whether to try to remake the connection later.\n> >\n> > Ok, I’ll drop the word “later”.\n>\n> Thanks.\n\nPushed after modifying the patch as such. Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 7 Oct 2021 18:27:44 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: Obsolete comments in GetConnection()" } ]
[ { "msg_contents": "Hello and thank you very much for the best open source database engine!\n\nThere's just this tiny but seemingly obvious issue that I can't believe I \nhaven't noticed until now: to_date(now(), 'TMmonth') returns 'october' in an \nEnglish locale (en_US.UTF-8 at least). Names of months and weekdays are proper \nnouns and as such *always* capitalized in English, so that seems wrong to me. \n\nIf you want to build an internationalized application, you want 'TMmonth' to \nreturn a month name that can be used in the middle of a sentence, capitalized \nor not depending on locale, and 'TMMonth' to return an always capitalized \nmonth name that can be used at the start of a sentence.\n\nThis was discussed back in 2008:\n\nhttps://www.postgresql.org/message-id/flat/\n47C34A98.7050102%40timbira.com#9593d90487976d28e2b612cff576545d\n\nThere is talk about how PostgreSQL has to do what Oracle does, but does it \nreally have to replicate bugs at this level of detail? Localized date and \nnumber formats are only for presentation, and not meant to be machine-\nreadable.\n\nI imagine that the reason it works the way it does is that the unlocalized \nformats exist, and there wouldn't be any difference between 'month' and \n'Month' if 'month' also capitalized month names due to English language rules, \nand as long as you're not building an internationalized application you can \nalways use 'Month' to get it right.\n\n(This is the situation in PostgreSQL 13, at least. I haven't tried PostgreSQL \n14, but there are no mentions of to_char() or localization in the release \nnotes, nothing in the documentation of to_char() suggesting any change, and I \nalso haven't found any more recent discussions.)\n\nThoughts?\n\n-- \nMagnus Holmgren\nMILLNET AB, Teknikringen 6, 583 30 Linköping\n\n\n\n\n-- \nVid e-postkontakt med Millnet är det normalt att åtminstone vissa \npersonuppgifter sparas om dig.
Du kan läsa mer om vilka uppgifter som \nsparas och hur vi hanterar dem på https://www.millnet.se/integritetspolicy/ \n<https://www.millnet.se/integritetspolicy/>.\n\n\n", "msg_date": "Wed, 06 Oct 2021 11:08:57 +0200", "msg_from": "Magnus Holmgren <magnus.holmgren@millnet.se>", "msg_from_op": true, "msg_subject": "Capitalization of localized month and day names (to_char() with\n 'TMmonth', 'TMday', etc.)" }, { "msg_contents": "On Wed, Oct 6, 2021 at 11:09 AM Magnus Holmgren <magnus.holmgren@millnet.se>\nwrote:\n\n>\n> There's just this tiny but seemingly obvious issue that I can't believe I\n> haven't noticed until now: to_date(now(), 'TMmonth') returns 'october' in\n> an\n> English locale (en_US.UTF-8 at least). Names of months and weekdays are\n> proper\n> nouns and as such *always* capitalized in English, so that seems wrong to\n> me.\n>\n> IMHO, the patterns of TO_CHAR() do as promised in the documentation [1]:\n\nMONTH full upper case month name (blank-padded to 9 chars)\nMonth full capitalized month name (blank-padded to 9 chars)\nmonth full lower case month name (blank-padded to 9 chars)\n\nWhat you are proposing looks more like a new feature than a bug.\n\n[1] https://www.postgresql.org/docs/current/functions-formatting.html\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Oct 6, 2021 at 11:09 AM Magnus Holmgren <magnus.holmgren@millnet.se> wrote:\nThere's just this tiny but seemingly obvious issue that I can't believe I \nhaven't noticed until now: to_date(now(), 'TMmonth') returns 'october' in an \nEnglish locale (en_US.UTF-8 at least). Names of months and weekdays are proper \nnouns and as such *always* capitalized in English, so that seems wrong to me.
IMHO, the patterns of TO_CHAR() do as promised in the documentation [1]:MONTH\tfull upper case month name (blank-padded to 9 chars)Month\tfull capitalized month name (blank-padded to 9 chars)month\tfull lower case month name (blank-padded to 9 chars)What you are proposing looks more like a new feature than a bug.[1] https://www.postgresql.org/docs/current/functions-formatting.html Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 6 Oct 2021 14:17:34 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Capitalization of localized month and day names (to_char() with\n 'TMmonth', 'TMday', etc.)" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Wed, Oct 6, 2021 at 11:09 AM Magnus Holmgren <magnus.holmgren@millnet.se>\n> wrote:\n>> There's just this tiny but seemingly obvious issue that I can't believe I\n>> haven't noticed until now: to_date(now(), 'TMmonth') returns 'october' in\n>> an\n>> English locale (en_US.UTF-8 at least). Names of months and weekdays are\n>> proper\n>> nouns and as such *always* capitalized in English, so that seems wrong to\n>> me.\n\n> IMHO, the patterns of TO_CHAR() do as promised in the documentation [1]:\n> MONTH full upper case month name (blank-padded to 9 chars)\n> Month full capitalized month name (blank-padded to 9 chars)\n> month full lower case month name (blank-padded to 9 chars)\n\n> What you are proposing looks more like a new feature than a bug.\n\nYeah, this is operating as designed and documented. The idea that\nthere should be a way to get \"month name as it'd be spelled mid-sentence\"\nis an interesting one, but I really doubt that anyone would thank us for\nchanging TMmonth to act that way.
(Perhaps a new format code or modifier\nwould be easier to swallow?)\n\nI also wonder exactly how the code would figure out what to do ---\nlanguage-specific conventions for this are not information available\nfrom the libc locale APIs, AFAIR.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Oct 2021 10:01:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Capitalization of localized month and day names (to_char() with\n 'TMmonth', 'TMday', etc.)" }, { "msg_contents": "onsdag 6 oktober 2021 kl. 16:01:49 CEST skrev du:\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= \n<juanjo.santamaria@gmail.com> writes:\n> > On Wed, Oct 6, 2021 at 11:09 AM Magnus Holmgren\n> > <magnus.holmgren@millnet.se>> \n> > wrote:\n> >> There's just this tiny but seemingly obvious issue that I can't believe I\n> >> haven't noticed until now: to_date(now(), 'TMmonth') returns 'october' in\n> >> an\n> >> English locale (en_US.UTF-8 at least). Names of months and weekdays are\n> >> proper\n> >> nouns and as such *always* capitalized in English, so that seems wrong to\n> >> me.\n> > \n> > IMHO, the patterns of TO_CHAR() do as promised in the documentation [1]:\n> > MONTH full upper case month name (blank-padded to 9 chars)\n> > Month full capitalized month name (blank-padded to 9 chars)\n> > month full lower case month name (blank-padded to 9 chars)\n> > \n> > What you are proposing looks more like a new feature than a bug.\n> \n> Yeah, this is operating as designed and documented. The idea that\n> there should be a way to get \"month name as it'd be spelled mid-sentence\"\n> is an interesting one, but I really doubt that anyone would thank us for\n> changing TMmonth to act that way. (Perhaps a new format code or modifier\n> would be easier to swallow?)\n\nYes, I see that it's working as designed and documented, but I contend that \nthe design is flawed for the reason I gave.
I mean, you can't deny that names \nof months and weekdays are always capitalized in English and certain other \nlanguages, whereas in another set of languages they are not, can you? Perhaps \nthis is a conscious design choice with some reason behind it, but if so, \nneither the PostgreSQL nor the Oracle documentation (https://docs.oracle.com/\ncd/B12037_01/server.101/b10759/sql_elements004.htm#i34510) reveal it. What is \nthe use case for linguistically incorrectly lowercased localized month and day \nnames? What would such a change break?\n\nI still suspect that whoever designed this didn't consider locale switching. \n(Interestingly, \"month\", \"mon\", \"day\", and \"dy\" are locale-specific by \nthemselves; there is no \"TM\" prefix needed.\n\n> I also wonder exactly how the code would figure out what to do ---\n> language-specific conventions for this are not information available\n> from the libc locale APIs, AFAIR.\n\nI checked the code, and it looks like cache_locale_time() in src/backend/\nutils/adt/pg_locale.c uses strftime(3) to produce the correctly capitalized \nday and month names and abbreviations (format codes %A, %B, %a, and %b). All \nthat would be needed is not to force them to lowercase in DCH_to_char() in \nsrc/backend/utils/adt/formatting.c.\n\nWhat could a new, separate format code that doesn't do this look like?\n\n-- \nMagnus Holmgren, developer\nMILLNET AB\n\n\n\n-- \nVid e-postkontakt med Millnet är det normalt att åtminstone vissa \npersonuppgifter sparas om dig. Du kan läsa mer om vilka uppgifter som \nsparas och hur vi hanterar dem på https://www.millnet.se/integritetspolicy/ \n<https://www.millnet.se/integritetspolicy/>.\n\n\n", "msg_date": "Fri, 08 Oct 2021 12:05:03 +0200", "msg_from": "Magnus Holmgren <magnus.holmgren@millnet.se>", "msg_from_op": true, "msg_subject": "Re: Capitalization of localized month and day names (to_char() with\n 'TMmonth', 'TMday', etc.)" } ]
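The capitalization question argued in the thread above is easy to see outside the server. As Magnus notes, cache_locale_time() obtains the localized names via strftime(3), and DCH_to_char() then forces them to lower case for the 'month'/'TMmonth' patterns. A minimal Python sketch of that distinction (illustrative only; Python's strftime stands in for the libc call, and this is not the actual server code):

```python
# libc's strftime("%B") yields the month name with the capitalization the
# locale itself provides ("October" in English locales), which is what the
# original poster argues 'TMmonth' should preserve. PostgreSQL's
# 'month'/'TMmonth' patterns instead force the result to lower case.
import datetime

d = datetime.date(2021, 10, 6)

name = d.strftime("%B")      # month name as the locale provides it
print(name)                  # "October" in the default C locale
print(name.lower())          # "october" -- what 'TMmonth' returns today
print(name.upper())          # "OCTOBER" -- the 'TMMONTH' pattern
```

In languages where month names are not proper nouns (Swedish "oktober", say), locale-provided and lower-cased forms coincide, which is exactly why the forced lower-casing only shows up as a problem in locales like en_US.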
[ { "msg_contents": "Hi All,\nsentPtr reported by WAL sender should usually never jump back, it\nshould always increase.\nI observed a strange behaviour with the WAL sender where sentPtr jumps\nback at the beginning. From code examination it looks like the\nfollowing behaviour is culprit.\n\nThe WAL sender reads WAL from restart_lsn which is what is set in\nreader->EndRecPtr in XLogBeginRead. So reader->EndRecPtr starts with\nrestart_lsn\n\nsentPtr starts with MyReplicationSlot->data.confirmed_flush in\nStartLogicalReplication(). Usually there will be some or other\nconcurrent transaction happening, so confirmed_flush is higher than\nrestart_lsn. After the first loop over send_data in WalSndLoop(), it\ngets set to reader->EndRecPtr. So when the first WAL record is read it\njumps back to the end of the first record starting at restart_lsn.\nEventually it will catch up to confirmed_lsn when the WAL sender reads\nWAL.\n\nThis seems to be harmless but the logical receiver may get confused if\nit receives an LSN lesser than confirmed_flush.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 6 Oct 2021 15:23:11 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "sentPtr jumping back at the beginning of logical replication" } ]
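The jump-back described in the message above can be modeled with a toy calculation: sentPtr starts at the slot's confirmed_flush, but reading starts at restart_lsn, so the first reader->EndRecPtr copied back into sentPtr can be smaller. A sketch with plain integers standing in for LSNs (variable names follow the message; this is not the actual walsender code):

```python
# Toy model of the sentPtr behavior described above.
restart_lsn = 1000          # slot's restart_lsn: XLogBeginRead starts here
confirmed_flush = 1500      # slot's confirmed_flush, ahead of restart_lsn
                            # due to concurrent transactions

sent_ptr = confirmed_flush  # StartLogicalReplication() initializes sentPtr
first_record_len = 100
end_rec_ptr = restart_lsn + first_record_len  # reader->EndRecPtr after the
                                              # first record is read

reported = [sent_ptr]
sent_ptr = end_rec_ptr      # WalSndLoop() sets sentPtr after send_data()
reported.append(sent_ptr)

print(reported)             # [1500, 1100]: sentPtr jumped backwards
assert reported[1] < reported[0]
```

The model shows only the ordering problem the message raises; as it says, sentPtr eventually catches back up to confirmed_flush as more WAL is read.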
[ { "msg_contents": "Dear Hackers,\n\nWhile reading source codes about timeouts and GUC and I found that\nstrange behavior about client_connection_check_interval.\n\nCurrently we did not an assign_hook about client_connection_check_interval,\nthat means a timeout will not turn on immediately if users change the GUC\nfrom zero to arbitrary positive integer.\nIn my understanding the timeout will fire only when:\n\n* before starting transaction\n* after firing the CLIENT_CONNECTION_CHECK_TIMEOUT timeout\n\nHence I thought following inconvenient scenario:\n\n1. set client_connection_check_interval = 0 in postgresql.conf\n2. start a tx\n3. SET LOCAL client_connection_check_interval to non-zero value\n in order to checking clients until the end of the tx\n4. users expect to firing the timeout, but it does not work\n because enable_timeout_after() will never execute in the tx\n\nIs this an expected behavior? If so, I think this spec should be documented.\nIf not, I think an assign_hook is needed for resolving the problem.\n\nHow do you think?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 03:07:33 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "Question about client_connection_check_interval" }, { "msg_contents": "Hello.\n\nAt Thu, 7 Oct 2021 03:07:33 +0000, \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear Hackers,\n> \n> While reading source codes about timeouts and GUC and I found that\n> strange behavior about client_connection_check_interval.\n> \n> Currently we did not an assign_hook about client_connection_check_interval,\n> that means a timeout will not turn on immediately if users change the GUC\n> from zero to arbitrary positive integer.\n> In my understanding the timeout will fire only when:\n> \n> * before starting transaction\n\nYou're misunderstanding here.
Maybe you saw that start_xact_command()\nstarts the timer but note that the function is called before every\ncommand execution.\n\n> * after firing the CLIENT_CONNECTION_CHECK_TIMEOUT timeout\n>\n> Hence I thought following inconvenient scenario:\n> \n> 1. set client_connection_check_interval = 0 in postgresql.conf\n> 2. start a tx\n> 3. SET LOCAL client_connection_check_interval to non-zero value\n> in order to checking clients until the end of the tx\n\n> 4. users expect to firing the timeout, but it does not work\n> because enable_timeout_after() will never execute in the tx\n\nSo this is wrong. I should see the check performed as expected. That\nbehavior would be clearly visualized if you inserted an elog() into\npq_check_connection().\n\n> Is this an expected behavior? If so, I think this spec should be documented.\n> If not, I think an assign_hook is needed for resolving the problem.\n> \n> How do you think?\n\nAnd it seems that the documentation describes the behavior correctly.\n\nhttps://www.postgresql.org/docs/14/runtime-config-connection.html\n\n> client_connection_check_interval (integer)\n>\n> Sets the time interval between optional checks that the client is\n> still connected, while running queries.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Oct 2021 09:56:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about client_connection_check_interval" }, { "msg_contents": "At Fri, 08 Oct 2021 09:56:32 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> At Thu, 7 Oct 2021 03:07:33 +0000, \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote in \n> > Dear Hackers,\n> > \n> > While reading source codes about timeouts and GUC and I found that\n> > strange behavior about client_connection_check_interval.\n> > \n> > Currently we did not an assign_hook about
client_connection_check_interval,\n> > that means a timeout will not turn on immediately if users change the GUC\n> > from zero to arbitrary positive integer.\n> > In my understanding the timeout will fire only when:\n> > \n> > * before starting transaction\n> \n> You're misunderstanding here. Maybe you saw that start_xact_command()\n> starts the timer but note that the function is called before every\n> command execution.\n> \n> > * after firing the CLIENT_CONNECTION_CHECK_TIMEOUT timeout\n> >\n> > Hence I thought following inconvenient scenario:\n> > \n> > 1. set client_connection_check_interval = 0 in postgresql.conf\n> > 2. start a tx\n> > 3. SET LOCAL client_connection_check_interval to non-zero value\n> > in order to checking clients until the end of the tx\n> \n> > 4. users expect to firing the timeout, but it does not work\n> > because enable_timeout_after() will never execute in the tx\n> \n> So this is wrong. I should see the check performed as expected. That\n\nI don't come up with how come I wrote this, but the \"*I* should\" is,\nof course, a typo of \"*You* should\".\n\nSo this is wrong. You should see the check performed as expected. That\n> behavior would be clearly visualized if you inserted an elog() into\n> pq_check_connection().\n> \n> > Is this an expected behavior?
If so, I think this spec should be documented.\n> > If not, I think an assign_hook is needed for resolving the problem.\n> > \n> > How do you think?\n> \n> And it seems that the documentation describes the behavior correctly.\n> \n> https://www.postgresql.org/docs/14/runtime-config-connection.html\n> \n> > client_connection_check_interval (integer)\n> >\n> > Sets the time interval between optional checks that the client is\n> > still connected, while running queries.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Oct 2021 09:59:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about client_connection_check_interval" }, { "msg_contents": "Dear Horiguchi-san,\n\nThank you for replying! I understood I was wrong. Sorry.\n\n> You're misunderstanding here. Maybe you saw that start_xact_command()\n> starts the timer but note that the function is called before every\n> command execution.\n\nBased on your advice I read codes again and I found that start_xact_command() is called\nfrom exec_XXX functions.\nThey are called when backend processes read first char from front-end,\nhence I agreed enable_timeout_after() will call very quickly if timeout is disabled.\n\n> So this is wrong. I should see the check performed as expected. That\n> behavior would be clearly visualized if you inserted an elog() into\n> pq_check_connection().\n\nRight. As mentioned above timeout is checked basically whenever reading commands.
\nI embedded elog() to ClientCheckTimeoutHandler() and visualized easily.\n\n> And it seems that the documentation describes the behavior correctly.\n> \n> https://www.postgresql.org/docs/14/runtime-config-connection.html\n>\n> > client_connection_check_interval (integer)\n> >\n> > Sets the time interval between optional checks that the client is\n> > still connected, while running queries.\n\nYeah I agreed that, I apologize for mistaking source and doc analysis.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Sun, 10 Oct 2021 23:51:57 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Question about client_connection_check_interval" } ]
[ { "msg_contents": "When producing a forked version of PostgreSQL, there is no \nstraightforward way to enforce that users don't accidentally load \nmodules built for the non-forked (standard, community) version. You can \nonly distinguish by PostgreSQL major version and a few compile-time \nsettings. (see internal_load_library(), Pg_magic_struct) Depending on \nthe details, mixing and matching might even work, until it doesn't, so \nthis is a bad experience.\n\nI'm thinking about adding two more int fields to Pg_magic_struct: a \nproduct or vendor magic number, and an ABI version that can be used \nfreely within a product/vendor.\n\nWould anyone else have use for this? Any thoughts?\n\n\n", "msg_date": "Thu, 7 Oct 2021 11:27:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "dfmgr additional ABI version fields" }, { "msg_contents": "čt 7. 10. 2021 v 11:28 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> When producing a forked version of PostgreSQL, there is no\n> straightforward way to enforce that users don't accidentally load\n> modules built for the non-forked (standard, community) version. You can\n> only distinguish by PostgreSQL major version and a few compile-time\n> settings. (see internal_load_library(), Pg_magic_struct) Depending on\n> the details, mixing and matching might even work, until it doesn't, so\n> this is a bad experience.\n>\n> I'm thinking about adding two more int fields to Pg_magic_struct: a\n> product or vendor magic number, and an ABI version that can be used\n> freely within a product/vendor.\n>\n> Would anyone else have use for this? Any thoughts?\n>\n\n+1\n\nPavel\n\nčt 7. 10.
2021 v 11:28 odesílatel Peter Eisentraut <peter.eisentraut@enterprisedb.com> napsal:When producing a forked version of PostgreSQL, there is no \nstraightforward way to enforce that users don't accidentally load \nmodules built for the non-forked (standard, community) version.  You can \nonly distinguish by PostgreSQL major version and a few compile-time \nsettings.  (see internal_load_library(), Pg_magic_struct)  Depending on \nthe details, mixing and matching might even work, until it doesn't, so \nthis is a bad experience.\n\nI'm thinking about adding two more int fields to Pg_magic_struct: a \nproduct or vendor magic number, and an ABI version that can be used \nfreely within a product/vendor.\n\nWould anyone else have use for this?  Any thoughts?+1Pavel", "msg_date": "Thu, 7 Oct 2021 12:02:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I'm thinking about adding two more int fields to Pg_magic_struct: a \n> product or vendor magic number, and an ABI version that can be used \n> freely within a product/vendor.\n\nWho would hand out these magic numbers?\n\nIf the answer is \"choose a random one, it probably won't collide\"\nthen I'm not sure why we need two fields. You can choose a new\nrandom number for each ABI version, if you're changing it faster\nthan once per PG major version.\n\nI'm also kind of unclear on why we need to do anything about this\nin the community version.
If someone has forked PG and changed\nAPIs to the extent that extensions are unlikely to work, there's\nnot much stopping them from also making the two-line change\nto fmgr.h that would be needed to guarantee that different magic\nstruct contents are needed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 11:49:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "Hi, \n\nOn October 7, 2021 8:49:57 AM PDT, Tom Lane \n>I'm also kind of unclear on why we need to do anything about this\n>in the community version. If someone has forked PG and changed\n>APIs to the extent that extensions are unlikely to work, there's\n>not much stopping them from also making the two-line change\n>to fmgr.h that would be needed to guarantee that different magic\n>struct contents are needed.\n\nI can see two reasons. First, it'd probably allow stock pg to generate a better error message when confronted with such a module. Second, there's some value in signaling forks that they should change (or think about changing), that field.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 07 Oct 2021 09:32:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On October 7, 2021 8:49:57 AM PDT, Tom Lane \n>> I'm also kind of unclear on why we need to do anything about this\n>> in the community version. If someone has forked PG and changed\n>> APIs to the extent that extensions are unlikely to work, there's\n>> not much stopping them from also making the two-line change\n>> to fmgr.h that would be needed to guarantee that different magic\n>> struct contents are needed.\n\n> I can see two reasons.
First, it'd probably allow stock pg to generate a better error message when confronted with such a module. Second, there's some value in signaling forks that they should change (or think about changing), that field.\n\nHmm, ok, I can buy the first of those arguments. Less sure about\nthe second, but the first is reason enough.\n\nCan we make the addition be a string not a number, so that we\ncould include something more useful than \"1234\" in the error\nmessage? Something like \"Module is built for EDB v1234.56\"\nseems like it'd be a lot more on-point to the average user,\nand it gets us out of having to design the ABI versioning scheme\nthat a fork should use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 12:42:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "On 10/07/21 12:42, Tom Lane wrote:\n\n> Can we make the addition be a string not a number, so that we\n> could include something more useful than \"1234\" in the error\n> message?\n\nI was wondering the same thing, just to sidestep the \"who hands out IDs\"\nquestion.\n\nJust using a string like \"EDB v\" + something would probably rule out\ncollisions in practice. To be more formal about it, something like\nthe tag URI scheme [0] could be recommended.
The scheme gives the developer\nan easy way to construct a meaningful and reliably non-colliding string.\n\nSurely loading libraries isn't a hot enough operation to begrudge\na strcmp.\n\nRegards,\n-Chap\n\n\n[0] https://datatracker.ietf.org/doc/html/rfc4151\n\n\n", "msg_date": "Thu, 7 Oct 2021 12:56:22 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 10/07/21 12:42, Tom Lane wrote:\n>> Can we make the addition be a string not a number, so that we\n>> could include something more useful than \"1234\" in the error\n>> message?\n\n> Just using a string like \"EDB v\" + something would probably rule out\n> collisions in practice. To be more formal about it, something like\n> the tag URI scheme [0] could be recommended.\n\nHmm. Personally I'm more interested in the string being comprehensible to\nend users than in whether there's any formal rule guaranteeing uniqueness.\nI really doubt that we will have any practical problem with collisions,\nso I'd rather go with something like \"EnterpriseDB v1.2.3\" than with\nsomething like \"tag:enterprisedb.com,2021:1.2.3\".\n\nConceivably we could have two strings, or a printable string and\na chosen-at-random unique number (the latter not meant to be shown\nto users). 
Not sure it's worth the trouble though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 15:15:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "On 07.10.21 21:15, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> On 10/07/21 12:42, Tom Lane wrote:\n>>> Can we make the addition be a string not a number, so that we\n>>> could include something more useful than \"1234\" in the error\n>>> message?\n> \n>> Just using a string like \"EDB v\" + something would probably rule out\n>> collisions in practice. To be more formal about it, something like\n>> the tag URI scheme [0] could be recommended.\n> \n> Hmm. Personally I'm more interested in the string being comprehensible to\n> end users than in whether there's any formal rule guaranteeing uniqueness.\n> I really doubt that we will have any practical problem with collisions,\n> so I'd rather go with something like \"EnterpriseDB v1.2.3\" than with\n> something like \"tag:enterprisedb.com,2021:1.2.3\".\n\nYeah, just a string should be fine.\n\n\n", "msg_date": "Fri, 8 Oct 2021 16:54:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "So here is a patch. This does what I had in mind as a use case. \nObviously, the naming and wording can be tuned. Input from other \nvendors is welcome.", "msg_date": "Tue, 12 Oct 2021 14:13:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "On Tue, Oct 12, 2021 at 8:13 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> So here is a patch. This does what I had in mind as a use case.\n> Obviously, the naming and wording can be tuned. 
Input from other\n> vendors is welcome.\n\nI'm not a different vendor, but I do work on different code than you\ndo, and I like this. Advanced Server accidentally dodges this problem\nat present by shipping with a different FUNC_MAX_ARGS value, but this\nis much cleaner.\n\nWould it be reasonable to consider something similar for the control\nfile, for the benefit of distributions that are not the same on disk?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:50:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "On Wed, Oct 13, 2021 at 12:50:38PM -0400, Robert Haas wrote:\n> I'm not a different vendor, but I do work on different code than you\n> do, and I like this. Advanced Server accidentally dodges this problem\n> at present by shipping with a different FUNC_MAX_ARGS value, but this\n> is much cleaner.\n\nI am pretty sure that Greenplum could benefit from something like\nthat. As a whole, using a string looks like a good idea for that.\n\n> Would it be reasonable to consider something similar for the control\n> file, for the benefit of distributions that are not the same on disk?\n\nHmm. Wouldn't that cause more harm than actual benefits?\n--\nMichael", "msg_date": "Fri, 19 Nov 2021 16:58:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "On 19.11.21 08:58, Michael Paquier wrote:\n> On Wed, Oct 13, 2021 at 12:50:38PM -0400, Robert Haas wrote:\n>> I'm not a different vendor, but I do work on different code than you\n>> do, and I like this. Advanced Server accidentally dodges this problem\n>> at present by shipping with a different FUNC_MAX_ARGS value, but this\n>> is much cleaner.\n> \n> I am pretty sure that Greenplum could benefit from something like\n> that. 
As a whole, using a string looks like a good idea for that.\n> \n>> Would it be reasonable to consider something similar for the control\n>> file, for the benefit of distributions that are not the same on disk?\n> \n> Hmm. Wouldn't that cause more harm than actual benefits?\n\nThe catalog version already serves this purpose.\n\n\n", "msg_date": "Fri, 19 Nov 2021 11:01:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On Wed, Oct 13, 2021 at 12:50:38PM -0400, Robert Haas wrote:\n>>> Would it be reasonable to consider something similar for the control\n>>> file, for the benefit of distributions that are not the same on disk?\n\n> The catalog version already serves this purpose.\n\nWe already have fields in pg_control for that, and fields to check\nendianness, maxalign, etc, ie the things that matter for data storage.\nPerhaps there is a need for more such fields, but I don't see that\nextension ABI questions are directly related.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 09:59:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dfmgr additional ABI version fields" }, { "msg_contents": "I have committed this patch as posted.\n\n\n", "msg_date": "Mon, 22 Nov 2021 08:23:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: dfmgr additional ABI version fields" } ]
[ { "msg_contents": "Hi,\n\nI was able to create a table in \"information_schema\" schema, but\npg_dump does not dump the table that was created in\n\"information_schema\" schema:\ncreate table information_schema.t1(c1 int);\n\nA similar problem exists in the case of CREATE PUBLICATION: we are able\nto create publications for tables present in \"information_schema\"\nschema, but pg_dump does not dump the publication to include the\ninformation_schema.t1 information.\ncreate publication pub1 for table information_schema.t1;\n\nShould tables be allowed to create in \"information_schema\" schema, if\nyes should the tables/publications be dumped while dumping database\ncontents?\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 7 Oct 2021 16:49:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "pg_dump does not dump tables created in information_schema schema" }, { "msg_contents": "On Thursday, October 7, 2021, vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> Should tables be allowed to create in \"information_schema\" schema, if\n> yes should the tables/publications be dumped while dumping database\n> contents?\n>\n>\nI presume you have to be superuser to do this. If so, this would seem to\nfit under the “we don’t stop you, but you shouldn’t” advice that we apply\nthroughout the system, like in say modifying stuff in pg_catalog.\nInformation_schema is an internal schema attached to, and static for, a given\nrelease.\n\nDavid J.\n", "msg_date": "Thu, 7 Oct 2021 07:08:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump does not dump tables created in information_schema schema" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thursday, October 7, 2021, vignesh C <vignesh21@gmail.com> wrote:\n>> Should tables be allowed to create in \"information_schema\" schema, if\n>> yes should the tables/publications be dumped while dumping database\n>> contents?\n\n> I presume you have to be superuser to do this. If so, this would seem to\n> fit under the “we don’t stop you, but you shouldn’t” advice that we apply\n> throughout the system, like in say modifying stuff in pg_catalog.\n> Information_schema is an internal schema attached to, and static for, a given\n> release.\n\nIt is (supposed to be) possible for a superuser to drop information_schema\npost-initdb and then recreate it by sourcing the information_schema.sql\nfile. In fact, I seem to recall that we've recommended doing so in past\nminor releases to correct errors in information_schema declarations.\nSo it's fairly hard to see how we could enforce prohibitions against\nchanging information_schema objects without breaking that use-case.\nOn the other hand, just because you did that doesn't mean that you want\ninformation_schema to start showing up in your dumps. 
Quite the opposite\nin fact, because then you'd have problems with trying to load the dump\ninto a newer PG version that might need different information_schema\ncontents.\n\nSo I agree: there's nothing to be done here, and the proposed scenario\nis a case of \"superusers should know better than to do that\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 12:00:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump does not dump tables created in information_schema schema" }, { "msg_contents": "On Thu, Oct 7, 2021 at 9:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Thursday, October 7, 2021, vignesh C <vignesh21@gmail.com> wrote:\n> >> Should tables be allowed to create in \"information_schema\" schema, if\n> >> yes should the tables/publications be dumped while dumping database\n> >> contents?\n>\n> > I presume you have to be superuser to do this. If so, this would seem to\n> > fit under the “we don’t stop you, but you shouldn’t” advice that we apply\n> > throughout the system, like in say modifying stuff in pg_catalog.\n> > Information_schema is an internal schema attached to an static for a given\n> > release.\n>\n> It is (supposed to be) possible for a superuser to drop information_schema\n> post-initdb and then recreate it by sourcing the information_schema.sql\n> file. In fact, I seem to recall that we've recommended doing so in past\n> minor releases to correct errors in information_schema declarations.\n> So it's fairly hard to see how we could enforce prohibitions against\n> changing information_schema objects without breaking that use-case.\n> On the other hand, just because you did that doesn't mean that you want\n> information_schema to start showing up in your dumps. 
Quite the opposite\n> in fact, because then you'd have problems with trying to load the dump\n> into a newer PG version that might need different information_schema\n> contents.\n>\n> So I agree: there's nothing to be done here, and the proposed scenario\n> is a case of \"superusers should know better than to do that\".\n\nThanks for the clarification.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 8 Oct 2021 08:59:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump does not dump tables created in information_schema schema" } ]
[ { "msg_contents": "Hi,\n\nIn a typical production environment, the user (not necessarily a\nsuperuser) sometimes wants to analyze the memory usage via\npg_backend_memory_contexts view or pg_log_backend_memory_contexts\nfunction which are accessible to only superusers. Isn't it better to\nallow non-superusers with an appropriate predefined role (I'm thinking\nof pg_monitor) to access them?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Oct 2021 23:11:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On 10/7/21, 10:42 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> In a typical production environment, the user (not necessarily a\r\n> superuser) sometimes wants to analyze the memory usage via\r\n> pg_backend_memory_contexts view or pg_log_backend_memory_contexts\r\n> function which are accessible to only superusers. Isn't it better to\r\n> allow non-superusers with an appropriate predefined role (I'm thinking\r\n> of pg_monitor) to access them?\r\n\r\nIt looks like this was discussed previously [0]. From the description\r\nof pg_monitor [1], I think it's definitely arguable that this view and\r\nfunction should be accessible by roles that are members of pg_monitor.\r\n\r\n The pg_monitor, pg_read_all_settings, pg_read_all_stats and\r\n pg_stat_scan_tables roles are intended to allow administrators\r\n to easily configure a role for the purpose of monitoring the\r\n database server. They grant a set of common privileges\r\n allowing the role to read various useful configuration\r\n settings, statistics and other system information normally\r\n restricted to superusers.\r\n\r\nAFAICT the current permissions were chosen as a safe default, but\r\nmaybe it can be revisited. 
The view and function appear to only\r\nreveal high level information about the memory contexts in use (e.g.,\r\nname, size, amount used), so I'm not seeing any obvious reason why\r\nthey should remain superuser-only.  pg_log_backend_memory_contexts()\r\ndirectly affects the server log, which might be a bit beyond what\r\npg_monitor should be able to do.  My current thinking is that we\r\nshould give pg_monitor access to pg_backend_memory_contexts (and maybe\r\neven pg_shmem_allocations).  However, one interesting thing I see is\r\nthat there is no mention of any predefined roles in system_views.sql.\r\nInstead, the convention seems to be to add hard-coded checks for\r\npredefined roles in the backing functions.  I don't know if that's a\r\nhard and fast rule, but I do see that predefined roles are given\r\nspecial privileges in system_functions.sql.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/a99bdd0e-7271-8176-f700-2553a51d4a27%40oss.nttdata.com#0f79f7cf6a6c3b3e3ccb4570870b3bd4\r\n[1] https://www.postgresql.org/docs/devel/predefined-roles.html\r\n\r\n", "msg_date": "Thu, 7 Oct 2021 18:57:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Fri, Oct 8, 2021 at 12:27 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/7/21, 10:42 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > In a typical production environment, the user (not necessarily a\n> > superuser) sometimes wants to analyze the memory usage via\n> > pg_backend_memory_contexts view or pg_log_backend_memory_contexts\n> > function which are accessible to only superusers. 
Isn't it better to\n> > allow non-superusers with an appropriate predefined role (I'm thinking\n> > of pg_monitor) to access them?\n>\n> It looks like this was discussed previously [0]. From the description\n> of pg_monitor [1], I think it's definitely arguable that this view and\n> function should be accessible by roles that are members of pg_monitor.\n>\n> The pg_monitor, pg_read_all_settings, pg_read_all_stats and\n> pg_stat_scan_tables roles are intended to allow administrators\n> to easily configure a role for the purpose of monitoring the\n> database server. They grant a set of common privileges\n> allowing the role to read various useful configuration\n> settings, statistics and other system information normally\n> restricted to superusers.\n\nHm.\n\n> AFAICT the current permissions were chosen as a safe default, but\n> maybe it can be revisited. The view and function appear to only\n> reveal high level information about the memory contexts in use (e.g.,\n> name, size, amount used), so I'm not seeing any obvious reason why\n> they should remain superuser-only. pg_log_backend_memory_contexts()\n> directly affects the server log, which might be a bit beyond what\n> pg_monitor should be able to do. My currently thinking is that we\n> should give pg_monitor access to pg_backend_memory_contexts (and maybe\n> even pg_shmem_allocations).\n\npg_shmem_allocations is also a good candidate.\n\n> However, one interesting thing I see is\n> that there is no mention of any predefined roles in system_views.sql.\n> Instead, the convention seems to be to add hard-coded checks for\n> predefined roles in the backing functions. 
I don't know if that's a\n> hard and fast rule, but I do see that predefined roles are given\n> special privileges in system_functions.sql.\n\nThere are two things: 1) We revoke the permissions for non-superusers in\nsystem_views.sql with the below\nREVOKE ALL ON pg_shmem_allocations FROM PUBLIC;\nREVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;\nREVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;\nREVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;\n\n2) We don't revoke any permissions in the system_views.sql, but we\nhave the following kind of check in the underlying function:\n/*\n* Only superusers or members of pg_monitor can <<see the details>>.\n*/\nif (!superuser() && !is_member_of_role(GetUserId(), ROLE_PG_MONITOR))\nereport(ERROR,\n(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\nerrmsg(\"must be a superuser or a member of the pg_monitor role to\n<<see the details>>\")));\n\nI think we can remove the below revoke statements from\nsystem_views.sql and place the checks shown at (2) in the underlying\nfunctions pg_get_shmem_allocations and pg_get_backend_memory_contexts,\nand also in pg_log_backend_memory_contexts.\n\nREVOKE ALL ON pg_shmem_allocations FROM PUBLIC;\nREVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;\nREVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;\nREVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Oct 2021 12:30:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "On 10/8/21, 12:01 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> I think we can remove the below revoke statements from\r\n> system_views.sql and place the checks shown at (2) in the underlying\r\n> functions pg_get_shmem_allocations, pg_get_backend_memory_contexts,\r\n> also in pg_log_backend_memory_contexts.\r\n>\r\n> REVOKE ALL ON pg_shmem_allocations FROM PUBLIC;\r\n> REVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;\r\n> REVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;\r\n> REVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;\r\n>\r\n> Thoughts?\r\n\r\nThis approach would add a restriction that a role must have SUPERUSER\r\nor be a member of pg_monitor to use the views/functions. I think\r\nthere is value in allowing any role to use them (if granted the proper\r\nprivileges). In any case, users may already depend on being able to\r\ndo that.\r\n\r\nInstead, I think we should just grant privileges to pg_monitor. I've\r\nattached a (basically untested) patch to demonstrate what I'm\r\nthinking.\r\n\r\nNathan", "msg_date": "Fri, 8 Oct 2021 17:11:12 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 10/8/21, 12:01 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I think we can remove the below revoke statements from\n> > system_views.sql and place the checks shown at (2) in the underlying\n> > functions pg_get_shmem_allocations, pg_get_backend_memory_contexts,\n> > also in pg_log_backend_memory_contexts.\n> >\n> > REVOKE ALL ON pg_shmem_allocations FROM PUBLIC;\n> > REVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;\n> > REVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;\n> > REVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;\n> >\n> > Thoughts?\n> \n> This approach would add a restriction that a role must have SUPERUSER\n> or be a member of pg_monitor to use the views/functions. I think\n> there is value in allowing any role to use them (if granted the proper\n> privileges). In any case, users may already depend on being able to\n> do that.\n> \n> Instead, I think we should just grant privileges to pg_monitor. 
I've\n> attached a (basically untested) patch to demonstrate what I'm\n> thinking.\n\nI'm not necessarily against this, but I will point out that we've stayed\naway, so far, from explicitly GRANT'ing privileges to pg_monitor itself,\nintending that to be a role which just combines privileges of certain\nother predefined roles together.\n\nI would think that these would fall under \"pg_read_all_stats\", in\nparticular, which is explicitly documented as: Read all pg_stat_* views\nand use various statistics related extensions, even those normally\nvisible only to superusers.\n\n(the last bit being particularly relevant in this case)\n\nThanks,\n\nStephen", "msg_date": "Fri, 8 Oct 2021 15:15:42 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Sat, Oct 9, 2021 at 12:45 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Bossart, Nathan (bossartn@amazon.com) wrote:\n> > On 10/8/21, 12:01 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > I think we can remove the below revoke statements from\n> > > system_views.sql and place the checks shown at (2) in the underlying\n> > > functions pg_get_shmem_allocations, pg_get_backend_memory_contexts,\n> > > also in pg_log_backend_memory_contexts.\n> > >\n> > > REVOKE ALL ON pg_shmem_allocations FROM PUBLIC;\n> > > REVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;\n> > > REVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;\n> > > REVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;\n> > >\n> > > Thoughts?\n> >\n> > This approach would add a restriction that a role must have SUPERUSER\n> > or be a member of pg_monitor to use the views/functions. 
I think\n> > there is value in allowing any role to use them (if granted the proper\n> > privileges). In any case, users may already depend on being able to\n> > do that.\n> >\n> > Instead, I think we should just grant privileges to pg_monitor. I've\n> > attached a (basically untested) patch to demonstrate what I'm\n> > thinking.\n>\n> I'm not necessarily against this, but I will point out that we've stayed\n> away, so far, from explicitly GRANT'ing privileges to pg_monitor itself,\n> intending that to be a role which just combines privileges of certain\n> other predefined roles together.\n>\n> I would think that these would fall under \"pg_read_all_stats\", in\n> particular, which is explicitly documented as: Read all pg_stat_* views\n> and use various statistics related extensions, even those normally\n> visible only to superusers.\n>\n> (the last bit being particularly relevant in this case)\n\n+1. I will prepare the patch with the pg_read_all_stats role.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 9 Oct 2021 03:56:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Sat, Oct 9, 2021 at 3:56 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I would think that these would fall under \"pg_read_all_stats\", in\n> > particular, which is explicitly documented as: Read all pg_stat_* views\n> > and use various statistics related extensions, even those normally\n> > visible only to superusers.\n> >\n> > (the last bit being particularly relevant in this case)\n>\n> +1. 
I will prepare the patch with the pg_read_all_stats role.\n\nHere's the v1, please review it further.\n\nI've also made a CF entry - https://commitfest.postgresql.org/35/3352/\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 9 Oct 2021 14:41:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On 10/9/21, 2:12 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Here's the v1, please review it further.\r\n\r\nThanks for the patch.\r\n\r\n-\t/* Only allow superusers to log memory contexts. */\r\n-\tif (!superuser())\r\n+\t/*\r\n+\t * Only superusers or members of pg_read_all_stats can log memory contexts.\r\n+\t */\r\n+\tif (!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\r\n\r\nI personally think pg_log_backend_memory_contexts() should remain\r\nrestricted to superusers since it directly impacts the server log.\r\nHowever, if we really did want to open it up to others, couldn't we\r\nadd GRANT/REVOKE statements in system_functions.sql and remove the\r\nhard-coded superuser check? I think that provides a bit more\r\nflexibility (e.g., permission to execute it can be granted to others\r\nwithout giving them pg_read_all_stats).\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 13 Oct 2021 00:26:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "Greetings,\n\nOn Tue, Oct 12, 2021 at 20:26 Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 10/9/21, 2:12 AM, \"Bharath Rupireddy\" <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Here's the v1, please review it further.\n>\n> Thanks for the patch.\n>\n> -       /* Only allow superusers to log memory contexts. */\n> -       if (!superuser())\n> +       /*\n> +        * Only superusers or members of pg_read_all_stats can log memory\n> contexts.\n> +        */\n> +       if (!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n>\n> I personally think pg_log_backend_memory_contexts() should remain\n> restricted to superusers since it directly impacts the server log.\n> However, if we really did want to open it up to others, couldn't we\n> add GRANT/REVOKE statements in system_functions.sql and remove the\n> hard-coded superuser check?  I think that provides a bit more\n> flexibility (e.g., permission to execute it can be granted to others\n> without giving them pg_read_all_stats).\n\n\nI would think we would do both…. That is- move to using GRANT/REVOKE, and\nthen just include a GRANT to pg_read_all_stats.\n\nOr not. I can see the argument that, because it just goes into the log,\nthat it doesn’t make sense to grant to a predefined role, since that role\nwouldn’t be able to see the results even if it had access.\n\nThanks,\n\nStephen\n", "msg_date": "Tue, 12 Oct 2021 20:33:19 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Tue, Oct 12, 2021 at 08:33:19PM -0400, Stephen Frost wrote:\n> I would think we would do both…. That is- move to using GRANT/REVOKE, and\n> then just include a GRANT to pg_read_all_stats.\n> \n> Or not. 
I can see the argument that, because it just goes into the log,\r\n>> that it doesn’t make sense to grant to a predefined role, since that role\r\n>> wouldn’t be able to see the results even if it had access.\r\n>\r\n> I don't think that this is a bad thing to remove the superuser() check\r\n> and replace it with a REVOKE FROM PUBLIC in this case, but linking the\r\n> logging of memory contexts with pg_read_all_stats does not seem right\r\n> to me.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 13 Oct 2021 02:18:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "On Wed, Oct 13, 2021 at 6:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 12, 2021 at 08:33:19PM -0400, Stephen Frost wrote:\n> > I would think we would do both…. That is- move to using GRANT/REVOKE, and\n> > then just include a GRANT to pg_read_all_stats.\n> >\n> > Or not. I can see the argument that, because it just goes into the log,\n> > that it doesn’t make sense to grant to a predefined role, since that role\n> > wouldn’t be able to see the results even if it had access.\n>\n> I don't think that this is a bad thing to remove the superuser() check\n> and replace it with a REVOKE FROM PUBLIC in this case,\n\nIMO, we can just retain the \"if (!superuser())\" check in the\npg_log_backend_memory_contexts as is. This would be more meaningful as\nthe error \"must be superuser to use raw page functions\" explicitly\nsays that a superuser is allowed. 
Whereas if we revoke the permissions\nin system_views.sql, then the error we get is not meaningful as the\nerror \"permission denied for function pg_log_backend_memory_contexts\"\nsays that permissions denied and the user will have to look at the\ndocumentation for what permissions this function requires.\n\nAnd, I see there are a lot of functions in the code base that does \"if\n(!superuser())\" check and emit \"must be superuser to XXX\" sort of\nerror.\n\n> but linking the\n> logging of memory contexts with pg_read_all_stats does not seem right\n> to me.\n\nAgreed. The user with pg_read_all_stats can't see the server logs so\nit doesn't make sense to make them call the function. I will remove\nthis change from the patch.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:15:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "On Wed, Oct 13, 2021 at 7:48 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/12/21, 6:26 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > On Tue, Oct 12, 2021 at 08:33:19PM -0400, Stephen Frost wrote:\n> >> I would think we would do both…. That is- move to using GRANT/REVOKE, and\n> >> then just include a GRANT to pg_read_all_stats.\n> >>\n> >> Or not. I can see the argument that, because it just goes into the log,\n> >> that it doesn’t make sense to grant to a predefined role, since that role\n> >> wouldn’t be able to see the results even if it had access.\n> >\n> > I don't think that this is a bad thing to remove the superuser() check\n> > and replace it with a REVOKE FROM PUBLIC in this case, but linking the\n> > logging of memory contexts with pg_read_all_stats does not seem right\n> > to me.\n>\n> +1\n\nHere comes the v2 patch. 
Note that I've retained superuser() check in\nthe pg_log_backend_memory_contexts(). Please review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 13 Oct 2021 11:43:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "On Wed, Oct 13, 2021 at 11:15:16AM +0530, Bharath Rupireddy wrote:\n> IMO, we can just retain the \"if (!superuser())\" check in the\n> pg_log_backend_memory_contexts as is. This would be more meaningful as\n> the error \"must be superuser to use raw page functions\" explicitly\n> says that a superuser is allowed. Whereas if we revoke the permissions\n> in system_views.sql, then the error we get is not meaningful as the\n> error \"permission denied for function pg_log_backend_memory_contexts\"\n> says that permissions denied and the user will have to look at the\n> documentation for what permissions this function requires.\n\nI don't really buy this argument with the \"superuser\" error message.\nWhen removing hardcoded superuser(), we just close the gap by adding\nin the documentation that the function execution can be granted\nafterwards. And nobody has complained about the difference in error\nmessage AFAIK. 
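In SQL terms the pattern is roughly the following sketch (the role name here is purely illustrative):

```sql
-- Instead of a hardcoded superuser() check in C, lock the function down
-- by default where it is created:
REVOKE EXECUTE ON FUNCTION pg_log_backend_memory_contexts(integer) FROM PUBLIC;

-- A superuser can then delegate it to whoever should have it:
GRANT EXECUTE ON FUNCTION pg_log_backend_memory_contexts(integer)
    TO monitoring_admin;

-- Callers without the grant get the generic
--   ERROR:  permission denied for function pg_log_backend_memory_contexts
-- rather than a "must be superuser ..." message.
```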
That's about extensibility.\n--\nMichael", "msg_date": "Wed, 13 Oct 2021 16:54:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "Greetings,\n\nOn Wed, Oct 13, 2021 at 03:54 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Oct 13, 2021 at 11:15:16AM +0530, Bharath Rupireddy wrote:\n> > IMO, we can just retain the \"if (!superuser())\" check in the\n> > pg_log_backend_memory_contexts as is. This would be more meaningful as\n> > the error \"must be superuser to use raw page functions\" explicitly\n> > says that a superuser is allowed. Whereas if we revoke the permissions\n> > in system_views.sql, then the error we get is not meaningful as the\n> > error \"permission denied for function pg_log_backend_memory_contexts\"\n> > says that permissions denied and the user will have to look at the\n> > documentation for what permissions this function requires.\n>\n> I don't really buy this argument with the \"superuser\" error message.\n> When removing hardcoded superuser(), we just close the gap by adding\n> in the documentation that the function execution can be granted\n> afterwards. And nobody has complained about the difference in error\n> message AFAIK. That's about extensibility.\n\n\nAgreed.\n\nThanks,\n\nStephen\n\n>\n\n", "msg_date": "Wed, 13 Oct 2021 04:00:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" 
That's about extensibility.\n\nI'm not against removing superuser() check in the\npg_log_backend_memory_contexts. However, there are a lot of functions\nwith the \"must be superuser to XXXXX\" kind of error [1]. I'm worried\nif someone proposes to change these as well with what we do for\npg_log_backend_memory_contexts.\n\nbrin_page_type\nbrin_page_items\nbrin_metapage_info\nbrin_revmap_data\nbt_page_stats_internal\nbt_page_items_internal\nbt_page_items_bytea\nbt_metap\nfsm_page_contents\ngin_metapage_info\ngin_page_opaque_info\nand the list goes on.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 13:44:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "Greeting,\n\nOn Wed, Oct 13, 2021 at 04:14 Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Oct 13, 2021 at 1:24 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Wed, Oct 13, 2021 at 11:15:16AM +0530, Bharath Rupireddy wrote:\n> > > IMO, we can just retain the \"if (!superuser())\" check in the\n> > > pg_log_backend_memory_contexts as is. This would be more meaningful as\n> > > the error \"must be superuser to use raw page functions\" explicitly\n> > > says that a superuser is allowed. 
Whereas if we revoke the permissions\n> > > in system_views.sql, then the error we get is not meaningful as the\n> > > error \"permission denied for function pg_log_backend_memory_contexts\"\n> > > says that permissions denied and the user will have to look at the\n> > > documentation for what permissions this function requires.\n> >\n> > I don't really buy this argument with the \"superuser\" error message.\n> > When removing hardcoded superuser(), we just close the gap by adding\n> > in the documentation that the function execution can be granted\n> > afterwards. And nobody has complained about the difference in error\n> > message AFAIK. That's about extensibility.\n>\n> I'm not against removing superuser() check in the\n> pg_log_backend_memory_contexts. However, there are a lot of functions\n> with the \"must be superuser to XXXXX\" kind of error [1]. I'm worried\n> if someone proposes to change these as well with what we do for\n> pg_log_backend_memory_contexts.\n>\n> brin_page_type\n> brin_page_items\n> brin_metapage_info\n> brin_revmap_data\n> bt_page_stats_internal\n> bt_page_items_internal\n> bt_page_items_bytea\n> bt_metap\n> fsm_page_contents\n> gin_metapage_info\n> gin_page_opaque_info\n> and the list goes on.\n\n\nYes, would generally be good to change at least some of those also, perhaps\nall of them.\n\nNot sure I see what the argument here is. We should really be trying to\nmove away from explicit superuser checks.\n\nThanks.\n\nStephen\n\n>\n\n", "msg_date": "Wed, 13 Oct 2021 04:48:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "On Wed, Oct 13, 2021 at 2:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greeting,\n>\n> On Wed, Oct 13, 2021 at 04:14 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, Oct 13, 2021 at 1:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> >\n>> > On Wed, Oct 13, 2021 at 11:15:16AM +0530, Bharath Rupireddy wrote:\n>> > > IMO, we can just retain the \"if (!superuser())\" check in the\n>> > > pg_log_backend_memory_contexts as is. This would be more meaningful as\n>> > > the error \"must be superuser to use raw page functions\" explicitly\n>> > > says that a superuser is allowed. Whereas if we revoke the permissions\n>> > > in system_views.sql, then the error we get is not meaningful as the\n>> > > error \"permission denied for function pg_log_backend_memory_contexts\"\n>> > > says that permissions denied and the user will have to look at the\n>> > > documentation for what permissions this function requires.\n>> >\n>> > I don't really buy this argument with the \"superuser\" error message.\n>> > When removing hardcoded superuser(), we just close the gap by adding\n>> > in the documentation that the function execution can be granted\n>> > afterwards. And nobody has complained about the difference in error\n>> > message AFAIK. That's about extensibility.\n>>\n>> I'm not against removing superuser() check in the\n>> pg_log_backend_memory_contexts. However, there are a lot of functions\n>> with the \"must be superuser to XXXXX\" kind of error [1]. 
I'm worried\n>> if someone proposes to change these as well with what we do for\n>> pg_log_backend_memory_contexts.\n>>\n>> brin_page_type\n>> brin_page_items\n>> brin_metapage_info\n>> brin_revmap_data\n>> bt_page_stats_internal\n>> bt_page_items_internal\n>> bt_page_items_bytea\n>> bt_metap\n>> fsm_page_contents\n>> gin_metapage_info\n>> gin_page_opaque_info\n>> and the list goes on.\n>\n>\n> Yes, would generally be good to change at least some of those also, perhaps all of them.\n\nHm. Let's deal with it separately, if required.\n\n> Not sure I see what the argument here is. We should really be trying to move away from explicit superuser checks.\n\nI will remove the superuser() for pg_log_backend_memory_context alone\nhere in the next version of patch.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 14:36:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?gr" }, { "msg_contents": "On Tue, Oct 12, 2021 at 8:33 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Or not. I can see the argument that, because it just goes into the log, that it doesn’t make sense to grant to a predefined role, since that role wouldn’t be able to see the results even if it had access.\n\nYeah. I think we should really only use predefined roles where it's\nnot practical to have people use GRANT/REVOKE.\n\nFor instance, it makes sense to have pg_execute_server_program because\nthere's no particular function (or other object) to which you could\ngrant permissions at the SQL level to achieve the same results. And\npg_read_all_stats doesn't just allow you to run more functions: it\nchanges which fields those functions populate in the returned data,\nand which they mask out for security reasons. 
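For instance, the masking is visible in pg_stat_activity; a rough sketch (assuming the two roles already exist — the names are illustrative):

```sql
-- A role that is not a member of pg_read_all_stats sees other sessions'
-- query text masked:
SET ROLE ordinary_user;
SELECT pid, usename, query FROM pg_stat_activity;
-- query reads "<insufficient privilege>" for backends of other users

RESET ROLE;
GRANT pg_read_all_stats TO stats_reader;
SET ROLE stats_reader;
SELECT pid, usename, query FROM pg_stat_activity;
-- the same rows now show the real query text
```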
So, GRANT/REVOKE\nwouldn't do it in that case.\n\nBut if there's one particular function that someone may or may not\nwant a non-superuser to be able to execute, let's just let them do\nthat. It doesn't need to be tied to a predefined role, and in fact\nit's more flexible if it isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 10:03:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Wed, 2021-10-13 at 10:03 -0400, Robert Haas wrote:\n> Yeah. I think we should really only use predefined roles where it's\n> not practical to have people use GRANT/REVOKE.\n\nThat sounds like a good rule.\n\nA minor complaint though: to grant on pg_backend_memory_contexts, you\nneed two grant statements:\n\n grant select on pg_backend_memory_contexts to foo;\n grant execute on function pg_get_backend_memory_contexts() to foo;\n\nThe second is more of an internal detail, and we don't really want\nusers to be relying on that undocumented function. Is there a good way\nto define a view kind of like a SECURITY DEFINER function so that the\nsuperuser would only need to issue a GRANT statement on the view?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 16:45:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Wed, Oct 13, 2021 at 7:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> users to be relying on that undocumented function. 
Is there a good way\n> to define a view kind of like a SECURITY DEFINER function so that the\n> superuser would only need to issue a GRANT statement on the view?\n\nAccording to https://www.postgresql.org/docs/current/sql-createview.html\nit always works like that: \"Access to tables referenced in the view is\ndetermined by permissions of the view owner. In some cases, this can\nbe used to provide secure but restricted access to the underlying\ntables.\"\n\nHmm, unless that rule is only being applied for *tables* and not for\n*functions*? I guess that could be true, but if so, it sure seems\ninconsistent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 09:11:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Thu, 14 Oct 2021 at 09:11, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> According to https://www.postgresql.org/docs/current/sql-createview.html\n> it always works like that: \"Access to tables referenced in the view is\n> determined by permissions of the view owner. In some cases, this can\n> be used to provide secure but restricted access to the underlying\n> tables.\"\n>\n> Hmm, unless that rule is only being applied for *tables* and not for\n> *functions*? I guess that could be true, but if so, it sure seems\n> inconsistent.\n>\n\nYes, I think this has come up before. It seems obvious to me that a view\nshould execute entirely in the context of its owner. I should be able to\nuse functions to define view columns without requiring that access to those\nfunctions be handed out to users of the view.\n\nI feel this might relate to the discussion of triggers, which I claim\nshould execute in the context of the table owner (or maybe the trigger\nowner, if that were a separate concept). 
There are lots of triggers one\nmight want to write that cannot be written because they execute in the\ncontext of the user of the table; my recollection is that it is harder to\nfind examples of non-malware triggers that depend on executing in the\ncontext of the user of the table.\n\n", "msg_date": "Thu, 14 Oct 2021 12:44:37 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> On Thu, 14 Oct 2021 at 09:11, Robert Haas <robertmhaas@gmail.com> wrote:\n> > According to https://www.postgresql.org/docs/current/sql-createview.html\n> > it always works like that: \"Access to tables referenced in the view is\n> > determined by permissions of the view owner. In some cases, this can\n> > be used to provide secure but restricted access to the underlying\n> > tables.\"\n> >\n> > Hmm, unless that rule is only being applied for *tables* and not for\n> > *functions*? I guess that could be true, but if so, it sure seems\n> > inconsistent.\n\nI'm not sure that it's really inconsistent- if you want the function to\nrun as someone else, define it as SECURITY DEFINER and it will. If the\nfunction is defined as SECURITY INVOKER then it'll run with the\nprivileges of the user invoking the function- which can be pretty handy\nif, say, the function references CURRENT_USER. Note that RLS policies\nwork in the same way.\n\n> Yes, I think this has come up before. It seems obvious to me that a view\n> should execute entirely in the context of its owner. I should be able to\n> use functions to define view columns without requiring that access to those\n> functions be handed out to users of the view.\n\nI don't know that it's all that obvious, particularly when you consider\nthat the function owner has the option of having the function run as the\ninvoker of the function or as the owner of the function.\n\n> I feel this might relate to the discussion of triggers, which I claim\n> should execute in the context of the table owner (or maybe the trigger\n> owner, if that were a separate concept). 
There are lots of triggers one\n> might want to write that cannot be written because they execute in the\n> context of the user of the table; my recollection is that it is harder to\n> find examples of non-malware triggers that depend on executing in the\n> context of the user of the table.\n\nTriggers can call security definer functions, so I'm not quite sure I\nunderstand what the issue here is.\n\nThanks,\n\nStephen", "msg_date": "Thu, 14 Oct 2021 13:43:21 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Wed, 2021-10-13 at 10:03 -0400, Robert Haas wrote:\n> > Yeah. I think we should really only use predefined roles where it's\n> > not practical to have people use GRANT/REVOKE.\n> \n> That sounds like a good rule.\n> \n> A minor complaint though: to grant on pg_backend_memory_contexts, you\n> need two grant statements:\n> \n> grant select on pg_backend_memory_contexts to foo;\n> grant execute on function pg_get_backend_memory_contexts() to foo;\n> \n> The second is more of an internal detail, and we don't really want\n> users to be relying on that undocumented function. Is there a good way\n> to define a view kind of like a SECURITY DEFINER function so that the\n> superuser would only need to issue a GRANT statement on the view?\n\nErm, surely the function should be documented...\n\nOther than that, grouping of privileges is generally done using roles.\nWe could possibly create a predefined role to assist with this but I\ndon't think it's a huge issue for users to do that themselves,\nparticularly since they're likely to grant other accesses to that role\ntoo. 
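A sketch of that kind of grouping, using the two grants from above (the role and user names are illustrative):

```sql
-- Bundle the two grants into one role...
CREATE ROLE memory_contexts_reader;
GRANT SELECT ON pg_backend_memory_contexts TO memory_contexts_reader;
GRANT EXECUTE ON FUNCTION pg_get_backend_memory_contexts()
    TO memory_contexts_reader;

-- ...so each user needs only a single grant, and any related
-- privileges can later be added to the same role in one place.
GRANT memory_contexts_reader TO alice;
```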
In some instances, it might make sense to grant such access to\nother predefined roles too (pg_monitor or the other ones), of course.\n\nI don't think we really want to be doing privilege checks with one role\n(view owner) for who is allowed to run the function, and then actually\nrunning the function with some other role when it's a security invoker\nfunction.\n\nThanks,\n\nStephen", "msg_date": "Thu, 14 Oct 2021 13:53:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Thu, 14 Oct 2021 at 13:43, Stephen Frost <sfrost@snowman.net> wrote:\n\n> I feel this might relate to the discussion of triggers, which I claim\n> > should execute in the context of the table owner (or maybe the trigger\n> > owner, if that were a separate concept). There are lots of triggers one\n> > might want to write that cannot be written because they execute in the\n> > context of the user of the table; my recollection is that it is harder to\n> > find examples of non-malware triggers that depend on executing in the\n> > context of the user of the table.\n>\n> Triggers can call security definer functions, so I'm not quite sure I\n> understand what the issue here is.\n>\n\nEven something as simple as a \"log all table updates\" cannot be implemented\nas far as I can tell.\n\nSo you have table T and T_log. Trigger on T causes all INSERT/UPDATE/DELETE\nactions to be logged to T_log. The only changes to T_log should be inserts\nresulting from the trigger. But now in order to make changes to T the user\nalso needs INSERT on T_log. OK, so use a security definer function. That\ndoesn't help; now instead of needing INSERT on T_log they need EXECUTE on\nthe function. 
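Spelled out as a sketch (the app_user role is illustrative):

```sql
CREATE TABLE t (id int PRIMARY KEY, val text);
CREATE TABLE t_log (at timestamptz DEFAULT now(), op text, id int);

CREATE FUNCTION t_log_trig() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO t_log (op, id) VALUES (TG_OP, OLD.id);
    ELSE
        INSERT INTO t_log (op, id) VALUES (TG_OP, NEW.id);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER ROW triggers
END $$;

CREATE TRIGGER t_audit AFTER INSERT OR UPDATE OR DELETE ON t
    FOR EACH ROW EXECUTE FUNCTION t_log_trig();

GRANT SELECT, INSERT, UPDATE, DELETE ON t TO app_user;
-- The trigger function runs as the user modifying t, so without the
-- next grant app_user's writes to t fail inside the trigger; with it,
-- app_user can also INSERT spurious rows into t_log directly:
GRANT INSERT ON t_log TO app_user;
```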
Either way, two privilege grants are required, and one of\nthem allows the user to make spurious entries in T_log.\n\nBut the desired behaviour is that the user has access *only* to T, and no\naccess whatsoever to T_log other than indirect changes by causing the\ntrigger to execute.\n\n", "msg_date": "Thu, 14 Oct 2021 14:03:11 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "On Thu, 2021-10-14 at 13:43 -0400, Stephen Frost wrote:\n> I'm not sure that it's really inconsistent- if you want the function\n> to\n> run as someone else, define it as SECURITY DEFINER and it will.\n\nThere are two issues:\n\n1. Does having permissions to read a view give the reader the ability\nto execute the function as a part of reading the view?\n\nHere it seems like we should allow the user to execute the function\nthat's a part of the view. If it's doing something that performs\nanother permission check, then it could fail, but at least they'd be\nable to execute it. That seems consistent with the ability to read\ntables as a part of reading the view.\n\n2. If the function is executed, is it SECURITY INVOKER or SECURITY\nDEFINER?\n\nI think here the answer is SECURITY INVOKER. SECURITY DEFINER doesn't\neven really make sense, because the definer might not be the owner of\nthe view. Maybe we need a concept where the function is executed as\nneither the invoker or the definer, but as the owner of the view (or\nsomething else), which sounds appealing, but sounds more like a new\nfeature.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 11:14:47 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Thu, Oct 14, 2021 at 1:43 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I'm not sure that it's really inconsistent- if you want the function to\n> run as someone else, define it as SECURITY DEFINER and it will. 
If the\n> function is defined as SECURITY INVOKER then it'll run with the\n> privileges of the user invoking the function- which can be pretty handy\n> if, say, the function references CURRENT_USER.\n\nThat presumes that (1) the user who owns the view also owns the\nfunction and (2) the user who created the view and the function wants\nto permit people who query the view to call the function with any\narguments, rather than only those arguments that would be passed by\nquerying the view. Neither of those things is necessarily true.\n\nI am not really sure that we can get away with changing this, since it\nis long-established behavior. At least, if we do, we are going to have\nto warn people to watch out for backward-compatibility issues, some of\nwhich may not be things breaking functionally but rather having a\ndifferent security profile. But, in a green field, I don't know why\nit's sane to suppose that if you query a view, the things in the view\nbehave partly as if the user querying the view were running them, and\npartly as if the user owning the view were one of them. It seems much\nmore logical for it to be one or the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:22:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Thu, 2021-10-14 at 14:22 -0400, Robert Haas wrote:\n> I am not really sure that we can get away with changing this, since\n> it\n> is long-established behavior. At least, if we do, we are going to\n> have\n> to warn people to watch out for backward-compatibility issues, some\n> of\n> which may not be things breaking functionally but rather having a\n> different security profile. 
But, in a green field, I don't know why\n> it's sane to suppose that if you query a view, the things in the view\n> behave partly as if the user querying the view were running them, and\n> partly as if the user owning the view were one of them. It seems much\n> more logical for it to be one or the other.\n\nHow do you feel about at least allowing the functions to execute (and\nif it's SECURITY INVOKER, possibly encountering a permissions failure\nduring execution)?\n\nThere are of course security implications with any change like that,\nbut it seems like a fairly minor one unless I'm missing something. Why\nwould an admin give someone the privileges to read a view if it will\nalways fail due to lack of execute privilege?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 12:02:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Thu, Oct 14, 2021 at 3:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> How do you feel about at least allowing the functions to execute (and\n> if it's SECURITY INVOKER, possibly encountering a permissions failure\n> during execution)?\n\nI think we'd at least need to check that the view owner has execute\npermission on the function. I'm not sure whether there are any other\ngotchas.\n\n> There are of course security implications with any change like that,\n> but it seems like a fairly minor one unless I'm missing something. 
Why\n> would an admin give someone the privileges to read a view if it will\n> always fail due to lack of execute privilege?\n\nAn excellent question.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Oct 2021 09:08:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Fri, 2021-10-15 at 09:08 -0400, Robert Haas wrote:\n> I think we'd at least need to check that the view owner has execute\n> permission on the function. I'm not sure whether there are any other\n> gotchas.\n\nRight, like we do for tables in a view now.\n\nThe alternative is not very appealing: that we have to document a lot\nof currently-undocumented internal functions like\npg_get_backend_memory_contexts(), pg_lock_status(), etc., so that users\ncan grant fine-grained permissions.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 10:26:16 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Fri, 2021-10-15 at 09:08 -0400, Robert Haas wrote:\n> > I think we'd at least need to check that the view owner has execute\n> > permission on the function. I'm not sure whether there are any other\n> > gotchas.\n> \n> Right, like we do for tables in a view now.\n> \n> The alternative is not very appealing: that we have to document a lot\n> of currently-undocumented internal functions like\n> pg_get_backend_memory_contexts(), pg_lock_status(), etc., so that users\n> can grant fine-grained permissions.\n\nBeing undocumented and being an 'internal function' aren't quite the\nsame thing.. 
pg_lock_status() is available for users to call and even\nhas a description which they can review with \\dfS+ and is \"view system\nlock information\", not to mention that it calls GetLockStatusData which\nis explicitly documented as \"for use in a user-level reporting\nfunction\".\n\nI do recongize that it's not in the formal documentation currently,\nthough I'm not quite sure I understand why that's the case when we\ndocument things like pg_stat_get_activity(). While I appreciate that it\nisn't really addressing the complaint you have that it'd be nice if we\nmade things simpler for administrators by making it so they don't have\nto GRANT access to both the view and the function, and I can see how\nthat would be nice, it seems like we should probably be documenting\nthese functions too and I don't know that it's correct to characterize\nthem as 'internal'. I can't say that I know exactly where the line is\nbetween being a user-level function and an 'internal' function is, but\nbeing used in a view that's created for users to query seems to me to\nmake it closer to user-level than, say, aclitemin.\n\nThanks,\n\nStephen", "msg_date": "Fri, 15 Oct 2021 13:52:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" 
}, { "msg_contents": "On Fri, 2021-10-15 at 13:52 -0400, Stephen Frost wrote:\n> While I appreciate that\n> it\n> isn't really addressing the complaint you have that it'd be nice if\n> we\n> made things simpler for administrators by making it so they don't\n> have\n> to GRANT access to both the view and the function, and I can see how\n> that would be nice, it seems like we should probably be documenting\n> these functions too and I don't know that it's correct to\n> characterize\n> them as 'internal'.\n\nI'm content with that explanation.\n\nIt would be nice if there was some kind of improvement here, but I\nwon't push too hard for it if there are security concerns.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 11:23:24 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Fri, Oct 15, 2021 at 11:53 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2021-10-15 at 13:52 -0400, Stephen Frost wrote:\n> > While I appreciate that\n> > it\n> > isn't really addressing the complaint you have that it'd be nice if\n> > we\n> > made things simpler for administrators by making it so they don't\n> > have\n> > to GRANT access to both the view and the function, and I can see how\n> > that would be nice, it seems like we should probably be documenting\n> > these functions too and I don't know that it's correct to\n> > characterize\n> > them as 'internal'.\n>\n> I'm content with that explanation.\n>\n> It would be nice if there was some kind of improvement here, but I\n> won't push too hard for it if there are security concerns.\n\nI tried to go through the discussion that happened upthread, following\nis what I could grasp:\n1) Documenting internal functions that are being used by some of the\nviews in system_views.sql: These functions have entries in the 
pg_proc\ncatalog and users are not restricted from using them. I agree that the\nsame permissions should be applied for the views and those functions.\nIf at all, others agree to document them, it should be discussed\nseparately and not in this thread as there are lots of functions.\nPersonally, I'm against documenting them all.\n2) Removal of superuser() checks in all (if possible) or some of the\nfunctions as suggested in [1]: actually the list of functions having\nsuperuser() checks is huge and I'm not sure all agree on this. It\nshould be discussed separately and not in this thread.\n\nI would like to confine this thread to allowing non-superusers with a\npredefined role (earlier suggestion was to use pg_read_all_stats) to\naccess views pg_backend_memory_contexts and pg_shmem_allocations and\nfunctions pg_get_backend_memory_contexts and pg_get_shmem_allocations.\nAttaching the previous v2 patch here for further review and thoughts.\n\n[1] - https://www.postgresql.org/message-id/CAOuzzgpp0dmOFjWC4JDvk57ZQGm8umCrFdR1at4b80xuF0XChw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 21 Oct 2021 12:14:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On 10/20/21, 11:44 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> I would like to confine this thread to allowing non-superusers with a\r\n> predefined role (earlier suggestion was to use pg_read_all_stats) to\r\n> access views pg_backend_memory_contexts and pg_shmem_allocations and\r\n> functions pg_get_backend_memory_contexts and pg_get_shmem_allocations.\r\n> Attaching the previous v2 patch here for further review and thoughts.\r\n\r\nI took a look at the new patch. The changes to system_views.sql look\r\ngood to me. 
Let's be sure to update doc/src/sgml/catalogs.sgml as\r\nwell.\r\n\r\n-SELECT * FROM pg_log_backend_memory_contexts(pg_backend_pid());\r\n+SELECT pg_log_backend_memory_contexts(pg_backend_pid());\r\n\r\nnitpick: Do we need to remove the \"* FROM\" here? This seems like an\r\nunrelated change.\r\n\r\n+-- test to check privileges of system views pg_shmem_allocations,\r\n+-- pg_backend_memory_contexts and function pg_log_backend_memory_contexts.\r\n\r\nI think the comment needs to be updated to remove the reference to\r\npg_log_backend_memory_contexts. It doesn't appear to be tested here.\r\n\r\n+SELECT name, ident, parent, level, total_bytes >= free_bytes\r\n+ FROM pg_backend_memory_contexts WHERE level = 0; -- permission denied error\r\n+SELECT COUNT(*) >= 0 AS ok FROM pg_shmem_allocations; -- permission denied error\r\n\r\nSince we're really just checking the basic permissions, could we just\r\ndo the \"count(*) >= 0\" check for both views?\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 21 Oct 2021 21:45:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Fri, Oct 22, 2021 at 3:15 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/20/21, 11:44 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I would like to confine this thread to allowing non-superusers with a\n> > predefined role (earlier suggestion was to use pg_read_all_stats) to\n> > access views pg_backend_memory_contexts and pg_shmem_allocations and\n> > functions pg_get_backend_memory_contexts and pg_get_shmem_allocations.\n> > Attaching the previous v2 patch here for further review and thoughts.\n>\n> I took a look at the new patch. 
The changes to system_views.sql look\n> good to me.\n\nThanks for reviewing.\n\n> Let's be sure to update doc/src/sgml/catalogs.sgml as\n> well.\n\nAdded.\n\n> -SELECT * FROM pg_log_backend_memory_contexts(pg_backend_pid());\n> +SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n>\n> nitpick: Do we need to remove the \"* FROM\" here? This seems like an\n> unrelated change.\n\nYes it's not mandatory, while we are on this I thought we could\ncombine them, I've also specified this in the commit message. IMO, we\ncan leave it to the committer.\n\n> +-- test to check privileges of system views pg_shmem_allocations,\n> +-- pg_backend_memory_contexts and function pg_log_backend_memory_contexts.\n>\n> I think the comment needs to be updated to remove the reference to\n> pg_log_backend_memory_contexts. It doesn't appear to be tested here.\n\nRemoved.\n\n> +SELECT name, ident, parent, level, total_bytes >= free_bytes\n> + FROM pg_backend_memory_contexts WHERE level = 0; -- permission denied error\n> +SELECT COUNT(*) >= 0 AS ok FROM pg_shmem_allocations; -- permission denied error\n>\n> Since we're really just checking the basic permissions, could we just\n> do the \"count(*) >= 0\" check for both views?\n\nDone.\n\nHere's v3 for further review.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 22 Oct 2021 07:20:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On 10/21/21, 6:51 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Here's v3 for further review.\r\n\r\nI've marked this as ready-for-committer. 
The only other feedback I\r\n would offer is nitpicking at the test code to clean it up a little\r\n bit, but I don't think it is necessary to block on that.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 22 Oct 2021 16:58:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" }, { "msg_contents": "On Wed, 2021-10-13 at 11:43 +0530, Bharath Rupireddy wrote:\n> Here comes the v2 patch. Note that I've retained superuser() check in\n> the pg_log_backend_memory_contexts(). Please review it.\n\nFYI: I submitted a separate patch here to allow pg_signal_backend to\nexecute pg_log_backend_memory_contexts():\n\n\nhttps://www.postgresql.org/message-id/flat/e5cf6684d17c8d1ef4904ae248605ccd6da03e72.camel@j-davis.com\n\nSo we can consider that patch separately.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 23 Oct 2021 13:02:18 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts\n function?" }, { "msg_contents": "On Fri, Oct 22, 2021 at 10:28 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/21/21, 6:51 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Here's v3 for further review.\n>\n> I've marked this as ready-for-committer.
The only other feedback I\n> would offer is nitpicking at the test code to clean it up a little\n> bit, but I don't think it is necessary to block on that.\n\nI forgot to change the CATALOG_VERSION_NO (it can be set to right\nvalue while committing the patch), I did it in the v4 attached here,\notherwise the patch remains same as v3.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 25 Oct 2021 15:30:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we allow users with a predefined role to access\n pg_backend_memory_contexts view and pg_log_backend_memory_contexts function?" } ]
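The permission change discussed in the thread above can be sketched as follows. This is a hypothetical illustration of the pg_read_all_stats approach described in the messages, not the literal contents of the v3/v4 patch; the exact REVOKE/GRANT statements and function signatures are assumptions for the sketch.

```sql
-- Hypothetical sketch of a system_views.sql-style change: lock the views
-- down from PUBLIC, then open them to the pg_read_all_stats predefined role.
REVOKE ALL ON pg_backend_memory_contexts FROM PUBLIC;
GRANT SELECT ON pg_backend_memory_contexts TO pg_read_all_stats;

REVOKE ALL ON pg_shmem_allocations FROM PUBLIC;
GRANT SELECT ON pg_shmem_allocations TO pg_read_all_stats;

-- As discussed earlier in the thread, being able to read a view does not by
-- itself grant the right to execute the function behind it, so the
-- underlying functions need matching EXECUTE grants.
REVOKE EXECUTE ON FUNCTION pg_get_backend_memory_contexts() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION pg_get_backend_memory_contexts() TO pg_read_all_stats;

REVOKE EXECUTE ON FUNCTION pg_get_shmem_allocations() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION pg_get_shmem_allocations() TO pg_read_all_stats;
```

With grants like these in place, a non-superuser member of pg_read_all_stats could run the regression-test-style check from the thread, e.g. `SELECT count(*) >= 0 AS ok FROM pg_shmem_allocations;`, while other roles would get a permission denied error.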
[ { "msg_contents": "Adjust configure to insist on Perl version >= 5.8.3.\n\nPreviously it only checked for version >= 5.8.0, although the\ndocumentation has said that the minimum version is 5.8.3 since\ncommit dea6ba939. Per the discussion leading up to that commit,\nI (tgl) left it that way intentionally because you could, at the\ntime, do some bare-bones stuff with 5.8.0. But we aren't actually\ntesting against anything older than 5.8.3, so who knows if that's\nstill true. It's pretty unlikely that anyone would care anyway,\nso let's just make configure's version check match the docs.\n\nDagfinn Ilmari Mannsåker\n\nDiscussion: https://postgr.es/m/87y278s6iq.fsf@wibble.ilmari.org\nDiscussion: https://postgr.es/m/16894.1501392088@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/92e6a98c3636948e7ece9a3260f9d89dd60da278\n\nModified Files\n--------------\nconfig/perl.m4 | 4 ++--\nconfigure | 6 +++---\n2 files changed, 5 insertions(+), 5 deletions(-)", "msg_date": "Thu, 07 Oct 2021 18:26:55 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "> On 7 Oct 2021, at 20:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Previously it only checked for version >= 5.8.0, although the\n> documentation has said that the minimum version is 5.8.3 since\n> commit dea6ba939.\n\nsrc/test/perl/README still claims \"5.8.0 and newer\", not sure how important\nthat is to fix but it seems a bit inconsistent now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 20:33:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 7 Oct 2021, at 20:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Previously it only checked for version >= 5.8.0, although the\n>> documentation has said that the minimum version is 5.8.3 since\n>> commit dea6ba939.\n\n> src/test/perl/README still claims \"5.8.0 and newer\", not sure how important\n> that is to fix but it seems a bit inconsistent now.\n\nAh, done. I grepped for other possible references to 5.8.x, and\nfound\n\nsrc/tools/msvc/gendef.pl:use 5.8.0;\n\nbut I don't think we need to change that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 14:46:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "I wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> src/test/perl/README still claims \"5.8.0 and newer\", not sure how important\n>> that is to fix but it seems a bit inconsistent now.\n\n> Ah, done.\n\nBTW, looking at that a second time, I wonder if that advice is\nreally of any use.\n\n(1) I'm distrustful of the idea that perl 5.8.x will compile\ncleanly, or at all, on modern platforms. Certainly Postgres\nreleases of similar vintage won't.\n\n(2) Unless perlbrew.pl is doing something a lot more magic than\nI think, you're going to end up with current-not-historical\nversions of whatever it has to pull from CPAN. That's going\nto include at least IPC::Run and Test::More if you want to run\nour TAP tests.\n\nSo maybe this advice is helpful, but I'm not very convinced.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 15:02:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "> On 7 Oct 2021, at 21:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> BTW, looking at that a second time, I wonder if that advice is\n> really of any use.\n\nYeah, I would have to agree. Reading that again I think what it perhaps should\nbe saying is that 5.8.3 is the Perl API level that the testcode must conform\nto, but they should run with basically whichever recent Perl you have handy as\nlong as the required modules are installed. Not that we expect developers to\nrun 5.8.3 when executing TAP tests.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 21:18:19 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "[ cc'ing Craig and Noah, as author/committer of the existing text ]\n\nDaniel Gustafsson <daniel@yesql.se> writes:\n> On 7 Oct 2021, at 21:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, looking at that a second time, I wonder if that advice is\n>> really of any use.\n\n> Yeah, I would have to agree. Reading that again I think what it perhaps should\n> be saying is that 5.8.3 is the Perl API level that the testcode must conform\n> to, but they should run with basically whichever recent Perl you have handy as\n> long as the required modules are installed. Not that we expect developers to\n> run 5.8.3 when executing TAP tests.\n\nYeah. I propose that what might be more useful than the existing last\nsection of src/test/perl/README is something along the lines of:\n\n Avoid using any bleeding-edge Perl features. 
We have buildfarm\n animals running Perl versions as old as 5.8.3, so your tests will\n be expected to pass on that.\n\n Also, do not use any non-core Perl modules except IPC::Run.\n Or, if you must do so for a particular test, arrange to skip\n the test when the needed module isn't present.\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 15:44:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "> On 7 Oct 2021, at 21:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> [ cc'ing Craig and Noah, as author/committer of the existing text ]\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 7 Oct 2021, at 21:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> BTW, looking at that a second time, I wonder if that advice is\n>>> really of any use.\n> \n>> Yeah, I would have to agree. Reading that again I think what it perhaps should\n>> be saying is that 5.8.3 is the Perl API level that the testcode must conform\n>> to, but they should run with basically whichever recent Perl you have handy as\n>> long as the required modules are installed. Not that we expect developers to\n>> run 5.8.3 when executing TAP tests.\n> \n> Yeah. I propose that what might be more useful than the existing last\n> section of src/test/perl/README is something along the lines of:\n> \n> Avoid using any bleeding-edge Perl features. We have buildfarm\n> animals running Perl versions as old as 5.8.3, so your tests will\n> be expected to pass on that.\n> \n> Also, do not use any non-core Perl modules except IPC::Run.\n> Or, if you must do so for a particular test, arrange to skip\n> the test when the needed module isn't present.\n\nAgreed, that's a lot more helpful. 
Since the set of core Perl modules change\nover time as modules are brought in (at least that's my understanding of it),\nthat last paragraph might want to discourage use of modules that aren't\nexpected to be in-core in commonly used systems? It might be overthinking it\nthough.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 21:51:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On 2021-Oct-07, Daniel Gustafsson wrote:\n\n> Agreed, that's a lot more helpful. Since the set of core Perl modules change\n> over time as modules are brought in (at least that's my understanding of it),\n> that last paragraph might want to discourage use of modules that aren't\n> expected to be in-core in commonly used systems? It might be overthinking it\n> though.\n\nMaybe we can mention `corelist -a` as a way to find out the module\nversions shipped with each Perl version.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n", "msg_date": "Thu, 7 Oct 2021 17:05:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Maybe we can mention `corelist -a` as a way to find out the module\n> versions shipped with each Perl version.\n\nHm, I don't see that on my RHEL box.\n\nIt does exist on my Mac, but the output is very unhelpful:\n\n$ which corelist\n/usr/bin/corelist\n$ corelist -a\n The contents of this script should normally never run! The perl wrapper\n should pick the correct script in /usr/bin by appending the appropriate\n version. 
You can try appending the appropriate perl version number. See\n perlmacosx.pod for more information about multiple version support in\n Mac OS X.\n\nThat hint leads me to notice\n\n$ ls /usr/bin/corelist*\n/usr/bin/corelist* /usr/bin/corelist5.18* /usr/bin/corelist5.30*\n\nbut all three of those print the same thing.\n\nSo this isn't looking promising :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 16:11:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On 2021-Oct-07, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Maybe we can mention `corelist -a` as a way to find out the module\n> > versions shipped with each Perl version.\n> \n> Hm, I don't see that on my RHEL box.\n\nOh, that's strange. It's installed by the perl package on my system, so\nI had assumed it was a standard part of a Perl install.\n\n> It does exist on my Mac, but the output is very unhelpful:\n\nWow, it looks like it's completely broken in macOS.\n\n> So this isn't looking promising :-(\n\nLooking in the archives, apparently people use\n perl -MModule::CoreList\nbut I see that that module, at least in Debian, is distributed even less\nwidely than corelist(1) itself, because it's a separate package -- even\nthough it seems to be part of Perl's core. Also, the module's interface\nappears less helpful than `corelist -a`.\n\nLet's leave it at that, then. Your original is a step forward in any\ncase.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 7 Oct 2021 17:26:17 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-07, Tom Lane wrote:\n>> So this isn't looking promising :-(\n\n> Looking in the archives, apparently people use\n> perl -MModule::CoreList\n> but I see that that module, at least in Debian, is distributed even less\n> widely than corelist(1) itself, because it's a separate package -- even\n> though it seems to be part of Perl's core. Also, the module's interface\n> appears less helpful than `corelist -a`.\n\nHmm. I do see that Module::CoreList knows not only which modules\nare in core but when they were brought in, so that does seem like\na really valuable reference to know about. Let's just say something\nlike \"You can consult Module::CoreList to find out whether and for\nlong a module has been present in the Perl core.\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 17:01:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On 2021-Oct-07, Tom Lane wrote:\n\n> Hmm. I do see that Module::CoreList knows not only which modules\n> are in core but when they were brought in, so that does seem like\n> a really valuable reference to know about. Let's just say something\n> like \"You can consult Module::CoreList to find out whether and for\n> long a module has been present in the Perl core.\"\n\nWFM.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n\n\n", "msg_date": "Thu, 7 Oct 2021 18:06:10 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-07, Tom Lane wrote:\n>> Hmm. I do see that Module::CoreList knows not only which modules\n>> are in core but when they were brought in, so that does seem like\n>> a really valuable reference to know about. Let's just say something\n>> like \"You can consult Module::CoreList to find out whether and for\n>> long a module has been present in the Perl core.\"\n\n> WFM.\n\nConcretely, then, I propose the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 07 Oct 2021 17:48:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "> On 7 Oct 2021, at 23:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Concretely, then, I propose the attached.\n\nLGTM. Good idea to change the section heading, Portability is a better title\nfor this.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 00:11:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On 2021-Oct-07, Tom Lane wrote:\n\n> +Portability\n> +-----------\n> +\n> +Avoid using any bleeding-edge Perl features. We have buildfarm animals\n> +running Perl versions as old as 5.8.3, so your tests will be expected\n> +to pass on that.\n> +\n> +Also, do not use any non-core Perl modules except IPC::Run. Or, if you\n> +must do so for a particular test, arrange to skip the test when the needed\n> +module isn't present. 
If unsure, you can consult Module::CoreList to find\n> +out whether a given module is part of the Perl core, and which module\n> +versions shipped with which Perl releases.\n\nLGTM, thanks.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Thu, 7 Oct 2021 19:39:17 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Thu, Oct 07, 2021 at 03:44:48PM -0400, Tom Lane wrote:\n> [ cc'ing Craig and Noah, as author/committer of the existing text ]\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > On 7 Oct 2021, at 21:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> BTW, looking at that a second time, I wonder if that advice is\n> >> really of any use.\n> >> \n> >> (1) I'm distrustful of the idea that perl 5.8.x will compile\n> >> cleanly, or at all, on modern platforms. Certainly Postgres\n> >> releases of similar vintage won't.\n\nperlbrew uses the patchperl system to build old Perl in modern environments.\nThis year, I used it to get 5.8.0. Building unpatched 5.8.0 does fail.\n\n> >> (2) Unless perlbrew.pl is doing something a lot more magic than\n> >> I think, you're going to end up with current-not-historical\n> >> versions of whatever it has to pull from CPAN. That's going\n> >> to include at least IPC::Run and Test::More if you want to run\n> >> our TAP tests.\n\nYes. If someone changed the recipe to install Test::More 0.87 and the\noldest-acceptable IPC::Run, we'd detect more portability problems. 
I'd regard\nsuch a change as an improvement.\n\n> >> So maybe this advice is helpful, but I'm not very convinced.\n\nThe rest of this thread is leaning on the above misconceptions:\n\n> I propose that what might be more useful than the existing last\n> section of src/test/perl/README is something along the lines of:\n> \n> Avoid using any bleeding-edge Perl features. We have buildfarm\n> animals running Perl versions as old as 5.8.3, so your tests will\n> be expected to pass on that.\n> \n> Also, do not use any non-core Perl modules except IPC::Run.\n> Or, if you must do so for a particular test, arrange to skip\n> the test when the needed module isn't present.\n\n-1. This would replace a useful recipe with, essentially, a restatement of\nthat recipe in English words. That just leaves the user to rediscover the\nactual recipe.\n\n\n", "msg_date": "Thu, 7 Oct 2021 19:56:59 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Thu, Oct 07, 2021 at 03:44:48PM -0400, Tom Lane wrote:\n>>> (1) I'm distrustful of the idea that perl 5.8.x will compile\n>>> cleanly, or at all, on modern platforms. Certainly Postgres\n>>> releases of similar vintage won't.\n\n> perlbrew uses the patchperl system to build old Perl in modern environments.\n> This year, I used it to get 5.8.0. Building unpatched 5.8.0 does fail.\n\nOh, cool.\n\n>> I propose that what might be more useful than the existing last\n>> section of src/test/perl/README is something along the lines of:\n\n> -1. This would replace a useful recipe with, essentially, a restatement of\n> that recipe in English words. That just leaves the user to rediscover the\n> actual recipe.\n\nWell, I think the existing text does the reader a disservice\nby stating a specific recipe without any context. 
Notably,\nit says nothing about restricting which Perl modules you use.\n\nWhat do you think of using my proposed text followed by\n\n One way to test against an old Perl version is to use\n perlbrew.\n << more or less the existing text here >>\n Bear in mind that you will still need to install IPC::Run,\n and what you will get is a current version not the one\n distributed with Perl 5.8.3. You will also need to update\n Test::More because the version distributed with Perl 5.8.3\n is too old to run our TAP tests. So this recipe does not create\n a perfect reproduction of a back-in-the-day Perl installation,\n but it will probably catch any problems that might surface in\n the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 23:39:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Thu, Oct 07, 2021 at 11:39:11PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Thu, Oct 07, 2021 at 03:44:48PM -0400, Tom Lane wrote:\n> >>> (1) I'm distrustful of the idea that perl 5.8.x will compile\n> >>> cleanly, or at all, on modern platforms. Certainly Postgres\n> >>> releases of similar vintage won't.\n> \n> > perlbrew uses the patchperl system to build old Perl in modern environments.\n> > This year, I used it to get 5.8.0. Building unpatched 5.8.0 does fail.\n> \n> Oh, cool.\n> \n> >> I propose that what might be more useful than the existing last\n> >> section of src/test/perl/README is something along the lines of:\n> \n> > -1. This would replace a useful recipe with, essentially, a restatement of\n> > that recipe in English words. That just leaves the user to rediscover the\n> > actual recipe.\n> \n> Well, I think the existing text does the reader a disservice\n> by stating a specific recipe without any context. 
Notably,\n> it says nothing about restricting which Perl modules you use.\n\nThat's obvious from \"cpanm install IPC::Run\". Surely if any other non-core\nmodule were allowed, the recipe would list it in a similar way. This is a\nsource tree README; it shouldn't try to hold the reader's hand like the\nuser-facing docs do. We've not had commits add usage of other modules, so\nthere's no evidence of actual doubt on this point.\n\n> What do you think of using my proposed text followed by\n> \n> One way to test against an old Perl version is to use\n> perlbrew.\n> << more or less the existing text here >>\n> Bear in mind that you will still need to install IPC::Run,\n> and what you will get is a current version not the one\n> distributed with Perl 5.8.3. You will also need to update\n> Test::More because the version distributed with Perl 5.8.3\n> is too old to run our TAP tests. So this recipe does not create\n> a perfect reproduction of a back-in-the-day Perl installation,\n> but it will probably catch any problems that might surface in\n> the buildfarm.\n\nI don't see an improvement in there. If there's something to change, it's\nimproving the actual recipe:\n\n--- a/src/test/perl/README\n+++ b/src/test/perl/README\n@@ -83,3 +83,4 @@ Just install and\n perlbrew install-cpanm\n- cpanm install IPC::Run\n+ cpanm install Test::More@0.87\n+ cpanm install IPC::Run@tbd_old_version\n\n\n", "msg_date": "Thu, 7 Oct 2021 21:24:54 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "> On 8 Oct 2021, at 06:24, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Thu, Oct 07, 2021 at 11:39:11PM -0400, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> On Thu, Oct 07, 2021 at 03:44:48PM -0400, Tom Lane wrote:\n>>>>> (1) I'm distrustful of the idea that perl 5.8.x will compile\n>>>>> cleanly, or at all, on modern platforms. 
Certainly Postgres\n>>>>> releases of similar vintage won't.\n>> \n>>> perlbrew uses the patchperl system to build old Perl in modern environments.\n>>> This year, I used it to get 5.8.0. Building unpatched 5.8.0 does fail.\n>> \n>> Oh, cool.\n>> \n>>>> I propose that what might be more useful than the existing last\n>>>> section of src/test/perl/README is something along the lines of:\n>> \n>>> -1. This would replace a useful recipe with, essentially, a restatement of\n>>> that recipe in English words. That just leaves the user to rediscover the\n>>> actual recipe.\n>> \n>> Well, I think the existing text does the reader a disservice\n>> by stating a specific recipe without any context. Notably,\n>> it says nothing about restricting which Perl modules you use.\n> \n> That's obvious from \"cpanm install IPC::Run\". Surely if any other non-core\n> module were allowed, the recipe would list it in a similar way.\n\nThe proposed changes talks about with core modules are allowed to use, I think\nthat's a different thing. The distinction between core and non-core modules\nmay not be known/clear to people who haven't used Perl in the past.\n\n> This is a source tree README; it shouldn't try to hold the reader's hand like\n> the user-facing docs do. We've not had commits add usage of other modules, so\n> there's no evidence of actual doubt on this point.\n\n\nThis README isn't primarily targeting committers though IMO, but new developers\nonboarding onto postgres who are trying to learn the dev environment.\n\n>> What do you think of using my proposed text followed by\n>> \n>> One way to test against an old Perl version is to use\n>> perlbrew.\n>> << more or less the existing text here >>\n>> Bear in mind that you will still need to install IPC::Run,\n>> and what you will get is a current version not the one\n>> distributed with Perl 5.8.3. You will also need to update\n>> Test::More because the version distributed with Perl 5.8.3\n>> is too old to run our TAP tests. 
So this recipe does not create\n>> a perfect reproduction of a back-in-the-day Perl installation,\n>> but it will probably catch any problems that might surface in\n>> the buildfarm.\n> \n> I don't see an improvement in there.\n\nI respectfully disagree, the current text reads as if 5.8.0 is required for\nrunning the test, not that using perlbrew is a great way to verify that your\ntests pass in all supported Perl versions.\n\n> If there's something to change, it's improving the actual recipe:\n\nThat we should do as well.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 10:03:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 8 Oct 2021, at 06:24, Noah Misch <noah@leadboat.com> wrote:\n>> That's obvious from \"cpanm install IPC::Run\". Surely if any other non-core\n>> module were allowed, the recipe would list it in a similar way.\n\n> The proposed changes talks about with core modules are allowed to use, I think\n> that's a different thing. The distinction between core and non-core modules\n> may not be known/clear to people who haven't used Perl in the past.\n\nYeah, I don't really think that this recipe makes it plain that we have\na policy. It certainly fails to explain that you're allowed to use\nadditional modules if you're willing to skip the relevant tests.\n\n> This README isn't primarily targeting committers though IMO, but new developers\n> onboarding onto postgres who are trying to learn the dev environment.\n\nRight.\n\n>> If there's something to change, it's improving the actual recipe:\n\n> That we should do as well.\n\nYou're not going to get far with \"improving the recipe\", because it's\njust not possible. 
To check this, I installed perlbrew on a Fedora 34\nmachine, and found that it actually can install a mostly-working 5.8.3\n(nice!). But as I suspected earlier, it can't reproduce the old module\nconfiguration:\n\n$ cpanm install Test::More@0.87\n--> Working on install\nFetching http://www.cpan.org/authors/id/D/DA/DAGOLDEN/install-0.01.tar.gz ... OK\n==> Found dependencies: ExtUtils::MakeMaker\n--> Working on ExtUtils::MakeMaker\nFetching http://www.cpan.org/authors/id/B/BI/BINGOS/ExtUtils-MakeMaker-7.62.tar.gz ... OK\nConfiguring ExtUtils-MakeMaker-7.62 ... OK\nBuilding and testing ExtUtils-MakeMaker-7.62 ... OK\nSuccessfully installed ExtUtils-MakeMaker-7.62 (upgraded from 6.17)\nConfiguring install-0.01 ... OK\nBuilding and testing install-0.01 ... OK\nSuccessfully installed install-0.01\n! Finding Test::More (== 0.87) on cpanmetadb failed.\nFound Test::More 1.302188 which doesn't satisfy == 0.87.\n2 distributions installed\n\nNot only is that a fail on Test::More itself, but as un-asked-for side\neffects, it upgraded ExtUtils::MakeMaker to current, and installed a\nmodule that should not have been there (which kinda defeats the point\nof the exercise).\n\nI did find I could install IPC::Run 0.79, which matches prairiedog's\nversion of that module:\n\n$ cpanm install IPC::Run@0.79 \ninstall is up to date. (0.01)\n! Finding IPC::Run (== 0.79) on cpanmetadb failed.\n--> Working on IPC::Run\nFetching http://cpan.metacpan.org/authors/id/R/RS/RSOD/IPC-Run-0.79.tar.gz ... OK\nConfiguring IPC-Run-0.79 ... OK\nBuilding and testing IPC-Run-0.79 ... OK\nSuccessfully installed IPC-Run-0.79\n1 distribution installed\n\nHowever, this just reflects the fact that prairiedog's installation\nis itself a bit Frankensteinan. What it has for Test::More and\nIPC::Run are just the oldest versions I could find on CPAN back in\n2017 when I built that installation. I can't claim that they have\nany historical relevance. 
They are, however, a lot more likely to\nstill be duplicatable from current CPAN than actually-old versions.\n\nSo while this recipe is a lot more useful than I thought, it can't\nentirely reproduce the Perl environment of older buildfarm members.\nI think we really ought to document that. I also think it is\nuseful to explicitly state the policy and then give this recipe\nas one way to (partially) test against the policy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Oct 2021 12:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Fri, Oct 08, 2021 at 12:03:41PM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > On 8 Oct 2021, at 06:24, Noah Misch <noah@leadboat.com> wrote:\n> >> That's obvious from \"cpanm install IPC::Run\". Surely if any other non-core\n> >> module were allowed, the recipe would list it in a similar way.\n> \n> > The proposed changes talks about with core modules are allowed to use, I think\n> > that's a different thing. The distinction between core and non-core modules\n> > may not be known/clear to people who haven't used Perl in the past.\n> \n> Yeah, I don't really think that this recipe makes it plain that we have\n> a policy. It certainly fails to explain that you're allowed to use\n> additional modules if you're willing to skip the relevant tests.\n\nTrue, +1 for mentioning that tests can use less-available modules if they skip\nwhen those modules are absent. I'm only -0 for adding the other English\n(unlike the -1 for the original proposal of removing the shell commands).\n\n> >> If there's something to change, it's improving the actual recipe:\n> \n> > That we should do as well.\n> \n> You're not going to get far with \"improving the recipe\", because it's\n> just not possible. 
To check this, I installed perlbrew on a Fedora 34\n\nYour test result is evidence that \"cpanm install Test::More@0.87\" is the wrong\nshell command, but it's quite a leap to \"just not possible\". Surely there\nexist other shell commands that install\nhttp://backpan.perl.org/modules/by-authors/id/M/MS/MSCHWERN/Test-Simple-0.87_03.tar.gz.\n(Perhaps none of us will care enough to identify them, but they exist.)\n\nBy the way, I suspect 93fb39e introduced a regression in the recipe. (I\nhaven't tested, though.) Before commit 93fb39e, \"cpanm install IPC::Run\"\nwould update Test::More. As of 5.8.3, the core version of Test::More is new\nenough for IPC::Run but not new enough for PostgreSQL. I recommend adding\n\"cpanm install Test::More\" to restore the pre-93fb39e functionality.\n\n\n", "msg_date": "Sat, 9 Oct 2021 12:23:33 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, Oct 08, 2021 at 12:03:41PM -0400, Tom Lane wrote:\n>> You're not going to get far with \"improving the recipe\", because it's\n>> just not possible. To check this, I installed perlbrew on a Fedora 34\n\n> Your test result is evidence that \"cpanm install Test::More@0.87\" is the wrong\n> shell command, but it's quite a leap to \"just not possible\". Surely there\n> exist other shell commands that install\n> http://backpan.perl.org/modules/by-authors/id/M/MS/MSCHWERN/Test-Simple-0.87_03.tar.gz.\n> (Perhaps none of us will care enough to identify them, but they exist.)\n\nOh, I never heard of backpan before. Now I'm tempted to see how far\nI can downgrade prairiedog before it breaks ;-). However, I agree\nthat most people won't care about that, and probably shouldn't need to.\n\n> By the way, I suspect 93fb39e introduced a regression in the recipe. (I\n> haven't tested, though.) 
Before commit 93fb39e, \"cpanm install IPC::Run\"\n> would update Test::More. As of 5.8.3, the core version of Test::More is new\n> enough for IPC::Run but not new enough for PostgreSQL. I recommend adding\n> \"cpanm install Test::More\" to restore the pre-93fb39e functionality.\n\nGood point. So how about like the attached?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 Oct 2021 15:44:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Sat, Oct 09, 2021 at 03:44:17PM -0400, Tom Lane wrote:\n> > By the way, I suspect 93fb39e introduced a regression in the recipe. (I\n> > haven't tested, though.) Before commit 93fb39e, \"cpanm install IPC::Run\"\n> > would update Test::More. As of 5.8.3, the core version of Test::More is new\n> > enough for IPC::Run but not new enough for PostgreSQL. I recommend adding\n> > \"cpanm install Test::More\" to restore the pre-93fb39e functionality.\n> \n> Good point. So how about like the attached?\n\nFine with me.\n\n\n", "msg_date": "Sat, 9 Oct 2021 12:59:47 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Hah ... your backpan link led me to realize the actual problem with\nTest::More. It got folded into Test::Simple at some point, and\nevidently cpanm isn't smart enough to handle a request for a back\nversion in such cases. But this works:\n\n$ cpanm install Test::Simple@0.87_01\n...\n$ perl -MTest::More -e 'print $Test::More::VERSION, \"\\n\";'\n0.8701\n\nSo we oughta recommend that instead. Now I'm wondering what\nversion of IPC::Run to recommend.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Oct 2021 16:34:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "On Sat, Oct 09, 2021 at 04:34:46PM -0400, Tom Lane wrote:\n> Hah ... your backpan link led me to realize the actual problem with\n> Test::More. It got folded into Test::Simple at some point, and\n> evidently cpanm isn't smart enough to handle a request for a back\n> version in such cases. But this works:\n> \n> $ cpanm install Test::Simple@0.87_01\n> ...\n> $ perl -MTest::More -e 'print $Test::More::VERSION, \"\\n\";'\n> 0.8701\n> \n> So we oughta recommend that instead. Now I'm wondering what\n> version of IPC::Run to recommend.\n\nYou mentioned prairiedog uses IPC::Run 0.79. That's from 2005. (Perl 5.8.3\nis from 2004, and Test::More 0.87 is from 2009.) I'd just use 0.79 in the\nREADME recipe. IPC::Run is easy to upgrade, so if we find cause to rely on a\nnewer version, I'd be fine updating that requirement.\n\n\n", "msg_date": "Sat, 9 Oct 2021 19:25:53 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Oct 09, 2021 at 04:34:46PM -0400, Tom Lane wrote:\n>> ... Now I'm wondering what\n>> version of IPC::Run to recommend.\n\n> You mentioned prairiedog uses IPC::Run 0.79. That's from 2005. (Perl 5.8.3\n> is from 2004, and Test::More 0.87 is from 2009.) I'd just use 0.79 in the\n> README recipe. IPC::Run is easy to upgrade, so if we find cause to rely on a\n> newer version, I'd be fine updating that requirement.\n\nYeah, since we know 0.79 works, there seems no reason to suggest a\nlater version. The only reason to suggest an earlier version would\nbe if some other buildfarm critter is using something older than 0.79.\n\nI'm tempted to propose adjusting configure to require IPC::Run >= 0.79,\nso that we can find out if that's true. If it isn't, that's still a\ngood change to codify what our minimum expectation is. 
As you say,\nwe can always move that goalpost in future if we find it necessary.\n\nHowever, back to the matter of the recipe. I'm feeling discouraged\nagain because experimentation shows that cpanm insists on updating\nthe ExtUtils suite to current while installing Test::Simple. You\ncan then downgrade that, but it's not a complete fix, because there\nare some new ExtUtils modules that don't get uninstalled. There's\nalso assorted CPAN infrastructure left behind.\n\nThe closest I can get to what we want using cpanm is with this recipe:\n\n cpanm install Test::Simple@0.87_01\n cpanm install IPC::Run@0.79\n cpanm install ExtUtils::MakeMaker@6.50 # downgrade\n\n(Note: the actual prerequisite of this release of Test::Simple seems\nto be \"> 6.30\", but the first such version that actually passes its\nown tests for me is 6.50. FWIW, prairiedog currently has 6.59.)\n\nAttached is the diff of module manifests between a raw perl 5.8.3\ninstallation and what this results in. Probably the added CPAN::Meta\nmodules are mostly harmless, but the forced addition of JSON::PP seems\nannoying.\n\nAFAICT the only way to get to precisely the minimum configuration\nis to do the extra module installs by hand, without using cpan or\ncpanm. I'm probably going to go and re-set-up prairiedog that way,\nbut it seems like a bit too much trouble to ask of most developers.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Oct 2021 13:17:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "I wrote:\n> The closest I can get to what we want using cpanm is with this recipe:\n\n> cpanm install Test::Simple@0.87_01\n> cpanm install IPC::Run@0.79\n> cpanm install ExtUtils::MakeMaker@6.50 # downgrade\n\nUpon trying to actually use the perlbrew installation, I discovered\nanother oversight in the recipe: at least with old perl versions,\nyou end up with a non-shared libperl, so that --with-perl fails.\n\nThat leads me to the attached revision...\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Oct 2021 14:42:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Sun, Oct 10, 2021 at 01:17:10PM -0400, Tom Lane wrote:\n> However, back to the matter of the recipe. I'm feeling discouraged\n> again because experimentation shows that cpanm insists on updating\n> the ExtUtils suite to current while installing Test::Simple. You\n> can then downgrade that, but it's not a complete fix, because there\n> are some new ExtUtils modules that don't get uninstalled. There's\n> also assorted CPAN infrastructure left behind.\n> \n> The closest I can get to what we want using cpanm is with this recipe:\n> \n> cpanm install Test::Simple@0.87_01\n> cpanm install IPC::Run@0.79\n> cpanm install ExtUtils::MakeMaker@6.50 # downgrade\n> \n> (Note: the actual prerequisite of this release of Test::Simple seems\n> to be \"> 6.30\", but the first such version that actually passes its\n> own tests for me is 6.50. FWIW, prairiedog currently has 6.59.)\n\nWhile the MakeMaker litter is annoying, I'm not too worried about it. The\nonly other thing I'd consider is doing the MakeMaker 6.50 install before\nTest::Simple, not after. 
Then you don't pull in additional dependencies of\npost-6.50 MakeMaker, if any.\n\nOn Sun, Oct 10, 2021 at 02:42:11PM -0400, Tom Lane wrote:\n> Upon trying to actually use the perlbrew installation, I discovered\n> another oversight in the recipe: at least with old perl versions,\n> you end up with a non-shared libperl, so that --with-perl fails.\n> \n> That leads me to the attached revision...\n\nLooks good. Thanks.\n\n\n", "msg_date": "Sun, 10 Oct 2021 15:55:43 -0400", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Oct 10, 2021 at 01:17:10PM -0400, Tom Lane wrote:\n>> The closest I can get to what we want using cpanm is with this recipe:\n>> cpanm install Test::Simple@0.87_01\n>> cpanm install IPC::Run@0.79\n>> cpanm install ExtUtils::MakeMaker@6.50 # downgrade\n\n> While the MakeMaker litter is annoying, I'm not too worried about it. The\n> only other thing I'd consider is doing the MakeMaker 6.50 install before\n> Test::Simple, not after.\n\nTried that to begin with, doesn't work. There are at least two problems:\n\n1. Before anything else, the first invocation of \"cpanm install\" wants\nto pull in \"install\". That seems to be a dummy module, but it's not\nwithout side-effects: it updates ExtUtils to current. If your first\nrequest is \"cpanm install ExtUtils::MakeMaker@6.50\", the version\nspecification is effectively ignored.\n\n2. I then tried doing a second \"cpanm install ExtUtils::MakeMaker@6.50\",\nand that did successfully downgrade to 6.50 ... but then the request\nto update Test::Simple upgraded it again. I'm not really sure why\nthat happened. It looks more like a cpanm bug than anything Test::Simple\nasked for.\n\nI didn't do exhaustive experimentation to see if putting the downgrade\nbefore \"install IPC::Run\" would work. 
I think we're best off assuming\nthat cpanm will cause that upgrade due to phase-of-the-moon conditions,\nso putting the downgrade last is the most robust recipe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Oct 2021 16:10:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "On Sun, Oct 10, 2021 at 04:10:38PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Sun, Oct 10, 2021 at 01:17:10PM -0400, Tom Lane wrote:\n> >> The closest I can get to what we want using cpanm is with this recipe:\n> >> cpanm install Test::Simple@0.87_01\n> >> cpanm install IPC::Run@0.79\n> >> cpanm install ExtUtils::MakeMaker@6.50 # downgrade\n> \n> > While the MakeMaker litter is annoying, I'm not too worried about it. The\n> > only other thing I'd consider is doing the MakeMaker 6.50 install before\n> > Test::Simple, not after.\n> \n> Tried that to begin with, doesn't work. There are at least two problems:\n> \n> 1. Before anything else, the first invocation of \"cpanm install\" wants\n> to pull in \"install\". That seems to be a dummy module, but it's not\n> without side-effects: it updates ExtUtils to current. If your first\n> request is \"cpanm install ExtUtils::MakeMaker@6.50\", the version\n> specification is effectively ignored.\n> \n> 2. I then tried doing a second \"cpanm install ExtUtils::MakeMaker@6.50\",\n> and that did successfully downgrade to 6.50 ... but then the request\n> to update Test::Simple upgraded it again. I'm not really sure why\n> that happened. It looks more like a cpanm bug than anything Test::Simple\n> asked for.\n> \n> I didn't do exhaustive experimentation to see if putting the downgrade\n> before \"install IPC::Run\" would work. 
I think we're best off assuming\n> that cpanm will cause that upgrade due to phase-of-the-moon conditions,\n> so putting the downgrade last is the most robust recipe.\n\nGot it. Good enough!\n\n\n", "msg_date": "Sun, 10 Oct 2021 16:43:48 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "\nOn 10/9/21 10:25 PM, Noah Misch wrote:\n> On Sat, Oct 09, 2021 at 04:34:46PM -0400, Tom Lane wrote:\n>> Hah ... your backpan link led me to realize the actual problem with\n>> Test::More. It got folded into Test::Simple at some point, and\n>> evidently cpanm isn't smart enough to handle a request for a back\n>> version in such cases. But this works:\n>>\n>> $ cpanm install Test::Simple@0.87_01\n>> ...\n>> $ perl -MTest::More -e 'print $Test::More::VERSION, \"\\n\";'\n>> 0.8701\n>>\n>> So we oughta recommend that instead. Now I'm wondering what\n>> version of IPC::Run to recommend.\n> You mentioned prairiedog uses IPC::Run 0.79. That's from 2005. (Perl 5.8.3\n> is from 2004, and Test::More 0.87 is from 2009.) I'd just use 0.79 in the\n> README recipe. IPC::Run is easy to upgrade, so if we find cause to rely on a\n> newer version, I'd be fine updating that requirement.\n>\n>\n\nWhy don't we specify the minimum versions required of these somewhere in\nthe perl code? Perl is pretty good at this.\n\n\ne.g.\n\n\n use IPC::Run 0.79;\n\n use Test::More 0.87;\n\n\nIt will choke if the supplied version is older.\n\n\nWe could even put lines like this in a small script that configure could\nrun.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 10:57:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." 
}, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 10/9/21 10:25 PM, Noah Misch wrote:\n>> You mentioned prairiedog uses IPC::Run 0.79. That's from 2005. (Perl 5.8.3\n>> is from 2004, and Test::More 0.87 is from 2009.) I'd just use 0.79 in the\n>> README recipe. IPC::Run is easy to upgrade, so if we find cause to rely on a\n>> newer version, I'd be fine updating that requirement.\n\n> Why don't we specify the minimum versions required of these somewhere in\n> the perl code? Perl is pretty good at this.\n\nconfigure already checks Test::More's version. I proposed downthread\nthat it should also check IPC::Run, but didn't pull that trigger yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 11:03:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." }, { "msg_contents": "I wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Why don't we specify the minimum versions required of these somewhere in\n>> the perl code? Perl is pretty good at this.\n\n> configure already checks Test::More's version. I proposed downthread\n> that it should also check IPC::Run, but didn't pull that trigger yet.\n\nDone now.\n\nI found an old note indicating that the reason I chose 0.79 for prairiedog\nback in 2017 is that 0.78 failed its self-test on that machine. 0.78 did\npass when I tried it just now on a perlbrew-on-Fedora-34 rig, so I'm not\nsure what that was about ... but in any case, it discourages me from\nworrying any further about whether a lower minimum could be sane.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:54:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Adjust configure to insist on Perl version >= 5.8.3." } ]
[ { "msg_contents": "Hi,\n\nI noticed that for NetBSD we only have one animal, and it's running\nEOL'd release 7. To give decent visibility of relevant portability\nproblems it'd be nice to have one of the current supported releases[1]\nin there. CC'ing owner; any interest in updating this animal to 9.x?\n\nFor FreeBSD the situation is better, we have HEAD (bleeding edge 14),\n13.x, and then loach running 10.3 which is dead. Given that 12.x and\n13.x are supported[2] (well, 11.4 is just about done), perhaps it'd\nmake sense to cover 12.x rather than 10.x?\n\nI don't know too much about DragonflyBSD, but I happened to be\nsurveying operating systems we support (by the \"it's in the build farm\nso we're going to keep it green\" definition) in the context of some\nAIO work, and I learned that they'd ripped the native AIO support out\nof this one at some point, which caused me to focus on the versions.\nAnimal conchuela is running 4.4 (2016) while 6.0 is current[3].\nAgain, if we're going to have one example of a rare OS that someone\ncares about, I think it'd be useful to have a current one?\n\nFor OpenBSD we have the current[4] and previous major releases\ncovered, so that's cool, and then there's a 5.9 system, which is long\ndead and could probably be put to better use, but at least we don't\nlack coverage there.\n\nIn all these cases there are more options that could be turned on, in\ncase someone is interested in extending what's tested. 
From my notes:\n\nFreeBSD 12:\n\npkg install -y gmake ccache git flex bison readline p5-IPC-Run llvm\npkgconf python3 libxslt openldap-client gettext tcl86 krb5\n\n --enable-cassert --enable-debug --enable-tap-tests --enable-nls\n--with-gssapi --with-icu --with-tcl --with-perl --with-python\n--with-pam --with-ldap --with-libxml --with-libxslt --with-lz4\n--with-openssl --with-llvm --with-libs=/usr/local/lib\n--with-includes=/usr/local/include CC=\"ccache cc\" CXX=\"ccache c++\"\nCLANG=\"ccache clang90\" LLVM_CONFIG=\"llvm-config90\"\n\nNetBSD 9:\n\npkgin -y install gmake git flex bison ccache readline\nmozilla-rootcerts p5-IPC-Run llvm clang pkg-config icu lz4 libxslt tcl\n\n--enable-cassert --enable-debug --enable-tap-tests --enable-nls\n--with-gssapi --with-icu --with-tcl --with-perl --with-python\n--with-pam --with-ldap --with-libxml --with-libxslt --with-lz4\n--with-openssl --with-llvm --with-includes=/usr/pkg/include CC=\"ccache\ncc\" CXX=\"ccache c++\" CLANG=\"ccache clang\" LLVM_CONFIG=\"llvm-config\"\nPYTHON=\"python3.8\" LDFLAGS=\"-R/usr/pkg/lib\"\n\nOpenBSD 6.9:\n\npkg_add -z ccache gmake git bison autoconf-2.69 readline screen--\np5-IPC-Run icu4c python3 libxml libxslt openldap-client--gssapi\ntcl-8.6.8 gettext-tools\n\n--enable-cassert --enable-debug --enable-tap-tests --enable-nls\n--with-icu --with-tcl --with-perl --with-python --with-bsd-auth\n--with-ldap --with-libxml --with-libxslt --with-lz4 --with-openssl\nCC=\"ccache cc\"\n\n(I have never succeeded in getting our LLVM stuff running on OpenBSD;\nthere's something wrong with their LLVM package the way we use it and\ntheir retpoline mitigation stuff, not investigated further.)\n\n(Hmm, in hindsight, I don't know why we need \"--with-bsd-auth\" instead\nof detecting it, but I don't plan to work on that...)\n\n[1] https://www.netbsd.org/releases/\n[2] https://www.freebsd.org/releases/\n[3] https://www.dragonflybsd.org/\n[4] https://www.openbsd.org/\n\n\n", "msg_date": "Fri, 8 Oct 2021 11:15:31 
+1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> (Hmm, in hindsight, I don't know why we need \"--with-bsd-auth\" instead\n> of detecting it, but I don't plan to work on that...)\n\nAs far as that goes, I thought we had a policy against auto-detecting\nuser-visible features. From memory, the rationale goes like \"if you\nwant feature X you should say so, so that the build will fail if we\ncan't provide it\". Thus we make you say something like --with-openssl\neven though it wouldn't be particularly hard to auto-detect. Peter E.\ncan probably advocate more strongly for this approach.\n\nBut anyway, +1 for your main point that it might be time to move up\nsome buildfarm animals, unless we want to scrape up extra resources\nto test both older and newer OS versions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Oct 2021 18:40:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On Fri, Oct 8, 2021 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > (Hmm, in hindsight, I don't know why we need \"--with-bsd-auth\" instead\n> > of detecting it, but I don't plan to work on that...)\n>\n> As far as that goes, I thought we had a policy against auto-detecting\n> user-visible features. From memory, the rationale goes like \"if you\n> want feature X you should say so, so that the build will fail if we\n> can't provide it\". Thus we make you say something like --with-openssl\n> even though it wouldn't be particularly hard to auto-detect. Peter E.\n> can probably advocate more strongly for this approach.\n\nOh, I see. 
I was thinking that operating system features were a bit\ndifferent from \"external packages\" (the purpose of --with according to\nthe autoconf docs), but that's a bit fuzzy and I see now that it's\nconsistent with our treatment of PAM which is very similar.\n\n\n", "msg_date": "Fri, 8 Oct 2021 13:17:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On 2021-10-08 00:15, Thomas Munro wrote:\n\n> I noticed that for NetBSD we only have one animal, and it's running\n> EOL'd release 7. To give decent visibility of relevant portability\n> problems it'd be nice to have one of the current supported releases[1]\n> in there. CC'ing owner; any interest in updating this animal to 9.x?\n\nYes, it's getting long in the tooth. I will upgrade the NetBSD 7 \n(sidewinder) to 9.2.\n\n\n> For FreeBSD the situation is better, we have HEAD (bleeding edge 14),\n> 13.x, and then loach running 10.3 which is dead. 
Given that 12.x and\n> 13.x are supported[2] (well, 11.4 is just about done), perhaps it'd\n> make sense to cover 12.x rather than 10.x?\n\nAnd I will also upgrade loach to 12.x if that's the version that is \nneeded the most.\n\n\n> I don't know too much about DragonflyBSD, but I happened to be\n> surveying operating systems we support (by the \"it's in the build farm\n> so we're going to keep it green\" definition) in the context of some\n> AIO work, and I learned that they'd ripped the native AIO support out\n> of this one at some point, which caused me to focus on the versions.\n> Animal conchuela is running 4.4 (2016) while 6.0 is current[3].\n> Again, if we're going to have one example of a rare OS that someone\n> cares about, I think it'd be useful to have a current one?\n\nI will upgrade conchuela to DragonFlyBSD 6.0.\n\n\n> For OpenBSD we have the current[4] and previous major releases\n> covered, so that's cool, and then there's a 5.9 system, which is long\n> dead and could probably be put to better use, but at least we don't\n> lack coverage there.\n\nI will remove the 5.9 (curculio) and upgrade the 6.5 (morepork) to 6.9.\n\nWould these changes be acceptable?\n\n/Mikael\n\n\n", "msg_date": "Fri, 8 Oct 2021 15:08:23 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> Yes, it's getting long in the tooth. I will upgrade the NetBSD 7 \n> (sidewinder) to 9.2.\n> And I will also upgrade loach to 12.x if that's the version that is \n> needed the most.\n> I will upgrade conchuela to DragonFlyBSD 6.0.\n> I will remove the 5.9 (curculio) and upgrade the 6.5 (morepork) to 6.9.\n\n+1 to all of that except retiring curculio. 
That one has shown us issues\nwe've not seen on other animals (recent examples at [1][2]), so unless\nyou're feeling resource-constrained I'd vote for keeping it going.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/YPAdf9r5aJbDoHoq@paquier.xyz\n[2] https://www.postgresql.org/message-id/CAA4eK1+uW1UGDHDz-HWMHMen76mKP7NJebOTZN4uwbyMjaYVww@mail.gmail.com\n\n\n", "msg_date": "Fri, 08 Oct 2021 12:12:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On 2021-10-08 18:12, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> Yes, it's getting long in the tooth. I will upgrade the NetBSD 7\n>> (sidewinder) to 9.2.\n>> And I will also upgrade loach to 12.x if that's the version that is\n>> needed the most.\n>> I will upgrade conchuela to DragonFlyBSD 6.0.\n>> I will remove the 5.9 (curculio) and upgrade the 6.5 (morepork) to 6.9.\n> \n> +1 to all of that except retiring curculio. That one has shown us issues\n> we've not seen on other animals (recent examples at [1][2]), so unless\n> you're feeling resource-constrained I'd vote for keeping it going.\n\nSure I can keep curculio as is. Will just upgrade morepork to OpenBSD \n6.9 then.\n\n/Mikael\n\n\n", "msg_date": "Fri, 8 Oct 2021 18:55:02 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On Sat, Oct 9, 2021 at 5:55 AM Mikael Kjellström\n<mikael.kjellstrom@mksoft.nu> wrote:\n> On 2021-10-08 18:12, Tom Lane wrote:\n> > =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> >> Yes, it's getting long in the tooth. 
I will upgrade the NetBSD 7\n> >> (sidewinder) to 9.2.\n> >> And I will also upgrade loach to 12.x if that's the version that is\n> >> needed the most.\n> >> I will upgrade conchuela to DragonFlyBSD 6.0.\n> >> I will remove the 5.9 (curculio) and upgrade the 6.5 (morepork) to 6.9.\n> >\n> > +1 to all of that except retiring curculio. That one has shown us issues\n> > we've not seen on other animals (recent examples at [1][2]), so unless\n> > you're feeling resource-constrained I'd vote for keeping it going.\n>\n> Sure I can keep curculio as is. Will just upgrade morepork to OpenBSD\n> 6.9 then.\n\nThanks very much for doing all these upgrades!\n\nHere's a nice recording of a morepork, a kind of owl that we often\nhear in the still of the night where I live. Supposedly it's saying\n'more pork!' but I don't hear that myself.\n\nhttps://www.doc.govt.nz/nature/native-animals/birds/birds-a-z/morepork-ruru/\n\n\n", "msg_date": "Sat, 9 Oct 2021 08:40:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On 08.10.21 00:40, Tom Lane wrote:\n> As far as that goes, I thought we had a policy against auto-detecting\n> user-visible features. From memory, the rationale goes like \"if you\n> want feature X you should say so, so that the build will fail if we\n> can't provide it\". Thus we make you say something like --with-openssl\n> even though it wouldn't be particularly hard to auto-detect. Peter E.\n> can probably advocate more strongly for this approach.\n\nYeah, there used to be RPMs shipped that accidentally omitted readline \nsupport.\n\n\n", "msg_date": "Sun, 10 Oct 2021 14:51:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" 
}, { "msg_contents": "\nOn 2021-10-08 21:40, Thomas Munro wrote:\n\n>> Sure I can keep curculio as is. Will just upgrade morepork to OpenBSD\n>> 6.9 then.\n> \n> Thanks very much for doing all these upgrades!\n\nNo problem.\n\nCurrent status is:\n\nloach: Upgraded to FreeBSD 12.2\nmorepork: Upgraded to OpenBSD 6.9\nconchuela: Upgraded to DragonFly BSD 6.0\nsidewinder: Upgraded to NetBSD 9.2\n\ncurculio: Is not able to connect to https://git.postgresql.org due to \nthe Let's Encrypt expired CA.\n\nWithout doing anything:\n\n$ git clone https://git.postgresql.org\nCloning into 'git.postgresql.org'...\nfatal: unable to access 'https://git.postgresql.org/': SSL certificate \nproblem: certificate has expired\n\nModifying /etc/ssl/certs.pem by removing expired DST Root CA X3:\n\n$ git clone https://git.postgresql.org\nCloning into 'git.postgresql.org'...\nfatal: unable to access 'https://git.postgresql.org/': SSL certificate \nproblem: unable to get local issuer certificate\n\nThen I tried to download the new CA and Intermediate from:\n\nhttps://letsencrypt.org/certificates/\n\nand adding them manually to /etc/ssl/cert.pem\n\nbut no dice. Only getting:\n\n$ git clone https://git.postgresql.org\nCloning into 'git.postgresql.org'...\nfatal: unable to access 'https://git.postgresql.org/': SSL certificate \nproblem: unable to get local issuer certificate\n\nIf anybody have any tips about how to get SSL-working again, I'll gladly \ntake it.\n\n/Mikael\n\n\n", "msg_date": "Sun, 10 Oct 2021 19:25:40 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" 
}, { "msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> curculio: Is not able to connect to https://git.postgresql.org due to \n> the Let's Encrypt expired CA.\n\nWe're working on fixing things so that git.postgresql.org will\nadvertise a cert chain that is compatible with older OpenSSL\nversions. I thought that was supposed to happen this weekend,\nbut evidently it hasn't yet. You will need an up-to-date\n(less than several years old) /etc/ssl/certs.pem, but no software\nmods should be needed. I'd counsel just waiting a day or two\nmore before trying to resurrect curculio.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Oct 2021 14:00:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On 2021-10-10 20:00, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> curculio: Is not able to connect to https://git.postgresql.org due to\n>> the Let's Encrypt expired CA.\n> \n> We're working on fixing things so that git.postgresql.org will\n> advertise a cert chain that is compatible with older OpenSSL\n> versions. I thought that was supposed to happen this weekend,\n> but evidently it hasn't yet. You will need an up-to-date\n> (less than several years old) /etc/ssl/certs.pem, but no software\n> mods should be needed. I'd counsel just waiting a day or two\n> more before trying to resurrect curculio.\n\nOK. Cool. Then I will just sit back and relax.\n\nAnother thing I used the update_personality.pl and after that the name \nof my animals and compiler settings looks, hmm, how to say this, not \nentirely correct.\n\nExample:\n\nDragonFly BSD DragonFly BSD 6.0 gcc gcc 8.3 x86_64\n\nalso the status page seems to be broken. 
It doesn't show any Flags anymore.\n\nBut maybe that is a known problem and someone is working on that?\n\n/Mikael\n\n\n", "msg_date": "Sun, 10 Oct 2021 20:04:51 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On Sun, Oct 10, 2021 at 8:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> > curculio: Is not able to connect to https://git.postgresql.org due to\n> > the Let's Encrypt expired CA.\n>\n> We're working on fixing things so that git.postgresql.org will\n> advertise a cert chain that is compatible with older OpenSSL\n> versions. I thought that was supposed to happen this weekend,\n> but evidently it hasn't yet. You will need an up-to-date\n> (less than several years old) /etc/ssl/certs.pem, but no software\n> mods should be needed. I'd counsel just waiting a day or two\n> more before trying to resurrect curculio.\n>\n\nIt was indeed supposed to, but didn't. It has now been done though, so\ngit.postgresql.org should now be compatible with ancient OpenSSL.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 11 Oct 2021 10:20:37 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" }, { "msg_contents": "On 2021-10-11 10:20, Magnus Hagander wrote:\n\n> It was indeed supposed to, but didn't. It has now been done though, so \n> git.postgresql.org <http://git.postgresql.org> should now be compatible \n> with ancient OpenSSL.\n\nAnd curculio is back to life and shows as all green on the status page.\n\nSo it's indeed working again.\n\n/Mikael\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:15:13 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Time to upgrade buildfarm coverage for some EOL'd OSes?" } ]
[ { "msg_contents": "We generally only expect amvacuumcleanup() routines to be called\nduring VACUUM. But some ginvacuumcleanup() calls are an exception to\nthat general rule -- these are calls made during autoanalyze, where\nginvacuumcleanup() does real pending list cleanup work (not just a\nno-op return). That'll only happen within an autoanalyze, and only\nwhen no VACUUM took place before the ANALYZE. The high level goal is\nto make sure that the av worker process won't neglect to call\nginvacuumcleanup() for pending list cleanup, even when there was no\nVACUUM. This behavior was added when the GIN fastupdate/pending list\nstuff was first introduced, in commit ff301d6e69.\n\nThe design of ANALYZE differs from the design of VACUUM in that only\nANALYZE will allocate an XID (typically in a call to\nupdate_attstats()). ANALYZE can also hold an MVCC snapshot. This is\nwhy ANALYZE holds back cleanup by VACUUM in another process, which\nsometimes causes problems (say during pgbench) -- this much is fairly\nwell known. But there is also a pretty nasty interaction between this\naspect of ANALYZE, and the special GIN pending list cleanup path I\nmentioned. This interaction makes the VACUUM-OldestXmin-held-back\nsituation far worse. The special analyze_only ginvacuumcleanup() calls\nhappen fairly late during the ANALYZE, during the window that ANALYZE\nholds back OldestXmin values in other VACUUMs. This greatly increases\nthe extent of the problem, in the obvious way. GIN index pending list\ncleanup will often take a great deal longer than the typical ANALYZE\ntasks take -- it's a pretty resource intensive maintenance operation.\nEspecially if there are a couple of GIN indexes on the table.\n\nThis issue was brought to my attention by Nikolay Samokhvalov. He\nreached out privately about it. He mentioned one problematic case\ninvolving an ANALYZE lasting 45 minutes, or longer (per\nlog_autovacuum_min_duration output for the autoanalyze). 
That was\ncorrelated with VACUUMs on other tables whose OldestXmin values were\nall held back to the same old XID. I think that this issue ought to be\ntreated as a bug.\n\nJaime Casanova wrote a patch that does pending list cleanup using the\nAV worker item infrastructure [1]. It's in the CF queue. Sounds like a\ngood idea to me. The goal of that patch is to take work out of the\ninsert path, when our gin_pending_list_limit-based limit is hit, but\noffhand I imagine that the same approach could be used as a fix for\nthis issue, at least on HEAD.\n\n[1] https://postgr.es/m/20210405063117.GA2478@ahch-to\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Oct 2021 22:34:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "GIN pending list cleanup during autoanalyze blocks cleanup by VACUUM" }, { "msg_contents": "On Thu, Oct 7, 2021 at 10:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This issue was brought to my attention by Nikolay Samokhvalov. He\n> reached out privately about it. He mentioned one problematic case\n> involving an ANALYZE lasting 45 minutes, or longer (per\n> log_autovacuum_min_duration output for the autoanalyze). That was\n> correlated with VACUUMs on other tables whose OldestXmin values were\n> all held back to the same old XID. I think that this issue ought to be\n> treated as a bug.\n\nIt's hard to think of a non-invasive bug fix. The obvious approach is\nto move the index_vacuum_cleanup()/ginvacuumcleanup() calls in\nanalyze.c to some earlier point in ANALYZE, in order to avoid doing\nlots of VACUUM-ish work while we hold an MVCC snapshot that blocks\ncleanup in other tables. The code in question is more or less supposed\nto be run during VACUUM already, and so the idea of moving it back to\nwhen the autoanalyze worker backend state \"still looks like the usual\nautovacuum case\" makes a certain amount of sense. 
But that doesn't\nwork, at least not without lots of invasive changes.\n\nWhile I'm no closer to a backpatchable fix than I was on Thursday, I\ndo have some more ideas about what to do on HEAD. I now lean towards\ncompletely ripping analyze_only calls out, there -- the whole idea of\ncalling amvacuumcleanup() routines during autoanalyze (but not plain\nANALYZE) seems bolted on. It's not just the risk of similar problems\ncropping up in the future -- it's that the whole approach seems\nobsolete. We now generally expect autovacuum to run against\ninsert-only tables. That might not be a perfect fit for this, but it\nstill seems far better.\n\nDoes anyone have any ideas for a targeted fix?\n\nHere's why the \"obvious\" fix is impractical, at least for backpatch:\n\nTo recap, a backend running VACUUM is generally able to avoid the need\nto be considered inside GetOldestNonRemovableTransactionId(), which is\npractically essential for any busy database -- without that, long\nrunning VACUUM operations would behave like conventional long running\ntransactions, causing all sorts of chaos. The problem here is that we\ncan have ginvacuumcleanup() calls that take place without avoiding the\nsame kind of chaos, just because they happen to take place during\nautoanalyze. It seems like the whole GIN autoanalyze mechanism design\nwas based on the assumption that it didn't make much difference *when*\nwe reach ginvacuumcleanup(), as long as it happened regularly. But\nthat's just not true.\n\nWe go to a lot of trouble to make VACUUM have this property. This\ncannot easily be extended or generalized to cover this special case\nduring ANALYZE. For one thing, the high level vacuum_rel() entry point\nsets things up carefully, using session-level locks for relations. 
For\nanother, it relies on special PROC_IN_VACUUM flag logic -- that status\nis stored in MyProc->statusFlags.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 9 Oct 2021 17:51:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GIN pending list cleanup during autoanalyze blocks cleanup by\n VACUUM" }, { "msg_contents": "On Sat, Oct 9, 2021 at 5:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> While I'm no closer to a backpatchable fix than I was on Thursday, I\n> do have some more ideas about what to do on HEAD. I now lean towards\n> completely ripping analyze_only calls out, there -- the whole idea of\n> calling amvacuumcleanup() routines during autoanalyze (but not plain\n> ANALYZE) seems bolted on. It's not just the risk of similar problems\n> cropping up in the future -- it's that the whole approach seems\n> obsolete. We now generally expect autovacuum to run against\n> insert-only tables.\n\nAttached patch removes calls to each index's amvacuumcleanup() routine\nthat take place during ANALYZE. These days we can just rely on\nautovacuum to run against insert-only tables (assuming the user didn't\ngo out of their way to disable that behavior).\n\nHaving thought about it some more, I have arrived at the conclusion\nthat we should backpatch this to Postgres 13, the first version that\nhad insert-driven autovacuums (following commit b07642db). This\napproach is unorthodox, because it amounts to disabling a\ntheoretically-working feature in the backbranches. Also, I'd be\ndrawing the line at Postgres 13, due only to the quite accidental fact\nthat that's the first major release that clearly doesn't need this\nmechanism. (As it happens Nikolay was on 12 anyway, so this won't work\nfor him, but he already has a workaround IIUC.)\n\nI reached this conclusion because I can't think of a non-invasive fix,\nand I really don't want to go there. 
At the same time, this behavior\nis barely documented, and is potentially very harmful indeed. I'm sure\nthat we should get rid of it on HEAD, but getting rid of it a couple\nof years earlier seems prudent.\n\nDoes anybody have any opinion on this, either in favor or against my\nbackpatch-to-13 proposal?\n\nAlthough this is technically the first problem report about this since\nthe GIN fastupdate stuff was introduced over a decade ago, I highly\ndoubt that that tells us much, given the specifics. We only added\ninstrumentation to autovacuum that showed each VACUUM's OldestXmin in\nPostgres 10 -- that's relatively recent. Nikolay is as sophisticated a\nPostgres user as anybody, and it was only through sheer luck that we\nmanaged to figure this out -- he had access to that OldestXmin\ninstrumentation, and also had access to my input on it. While the\nissue itself was very hard to spot, the negative ramifications\ncertainly were not.\n\nMany users bend over backwards to avoid long running transactions, and\nthe fact that there is this highly obscure path in which autoanalyze\ncreates very long running transactions carelessly is pretty\ndistressing to me. I remember hearing complaints about how slow GIN\npending list cleanup by VACUUM was years ago, back in my consulting\ndays. When the feature was relatively new. I just accepted the general\nwisdom at the time, which is that the mechanism itself is slow. 
But I\nnow suspect that that issue has far more to do with holding back\nVACUUM/other cleanup generally, and not with the efficiency of GIN\nitself.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 13 Oct 2021 15:58:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GIN pending list cleanup during autoanalyze blocks cleanup by\n VACUUM" }, { "msg_contents": "On Thu, Oct 14, 2021 at 7:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sat, Oct 9, 2021 at 5:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > While I'm no closer to a backpatchable fix than I was on Thursday, I\n> > do have some more ideas about what to do on HEAD. I now lean towards\n> > completely ripping analyze_only calls out, there -- the whole idea of\n> > calling amvacuumcleanup() routines during autoanalyze (but not plain\n> > ANALYZE) seems bolted on. It's not just the risk of similar problems\n> > cropping up in the future -- it's that the whole approach seems\n> > obsolete. We now generally expect autovacuum to run against\n> > insert-only tables.\n>\n> Attached patch removes calls to each index's amvacuumcleanup() routine\n> that take place during ANALYZE. These days we can just rely on\n> autovacuum to run against insert-only tables (assuming the user didn't\n> go out of their way to disable that behavior).\n\nLooking at the original commit, as you mentioned, ISTM performing\npending list cleanup during (auto)analyze (and analyze_only) was\nintroduced to perform the pending list cleanup on insert-only tables.\nNow that we have autovacuum_vacuum_insert_threshold, we don’t\nnecessarily need to rely on that.\n\nOn the other hand, I still see a little value in performing the\npending list cleanup during autoanalyze. For example, if the user\nwants to clean up the pending list frequently in the background (so\nthat it's not triggered in the INSERT path), it might be better to do\nthat during autoanalyze than autovacuum. 
If the table has garbage,\nautovacuum has to vacuum all indexes and the table, taking a very long\ntime. But autoanalyze can be done in a shorter time. If we trigger\nautoanalyze frequently and perform pending list cleanup, the pending\nlist cleanup can also be done in a relatively short time, preventing\nMVCC snapshots from being held for a long time.\n\nTherefore, I personally think that it's better to eliminate\nanalyze_only code after introducing a way that allows us to perform\nthe pending list cleanup more frequently. I think that the idea of\nJaime Casanova's patch is a good solution.\n\n>\n> Having thought about it some more, I have arrived at the conclusion\n> that we should backpatch this to Postgres 13, the first version that\n> had insert-driven autovacuums (following commit b07642db). This\n> approach is unorthodox, because it amounts to disabling a\n> theoretically-working feature in the backbranches. Also, I'd be\n> drawing the line at Postgres 13, due only to the quite accidental fact\n> that that's the first major release that clearly doesn't need this\n> mechanism. (As it happens Nikolay was on 12 anyway, so this won't work\n> for him, but he already has a workaround IIUC.)\n>\n> I reached this conclusion because I can't think of a non-invasive fix,\n> and I really don't want to go there. At the same time, this behavior\n> is barely documented, and is potentially very harmful indeed. I'm sure\n> that we should get rid of it on HEAD, but getting rid of it a couple\n> of years earlier seems prudent.\n>\n> Does anybody have any opinion on this, either in favor or against my\n> backpatch-to-13 proposal?\n\nI'm not very positive about back-patching. The first reason is what I\ndescribed above; I still see little value in performing pending list\ncleanup during autoanalyze. 
Another reason is that if the user relies\non autoanalyze to perform pending list cleanup, they have to enable\nautovacuum_vacuum_insert_threshold instead during the minor upgrade.\nSince it also means to trigger autovacuum in more cases I think it\nwill have a profound impact on the existing system.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 14 Oct 2021 16:49:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GIN pending list cleanup during autoanalyze blocks cleanup by\n VACUUM" }, { "msg_contents": "On Thu, Oct 14, 2021 at 12:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Looking at the original commit, as you mentioned, ISTM performing\n> pending list cleanup during (auto)analyze (and analyze_only) was\n> introduced to perform the pending list cleanup on insert-only tables.\n> Now that we have autovacuum_vacuum_insert_threshold, we don’t\n> necessarily need to rely on that.\n\nRight.\n\n> On the other hand, I still see a little value in performing the\n> pending list cleanup during autoanalyze. For example, if the user\n> wants to clean up the pending list frequently in the background (so\n> that it's not triggered in the INSERT path), it might be better to do\n> that during autoanalyze than autovacuum. If the table has garbage,\n> autovacuum has to vacuum all indexes and the table, taking a very long\n> time. But autoanalyze can be done in a shorter time. If we trigger\n> autoanalyze frequently and perform pending list cleanup, the pending\n> list cleanup can also be done in a relatively short time, preventing\n> MVCC snapshots from being held for a long time.\n\nI agree that that's true -- there is at least a little value. But,\nthat's just an accident of history.\n\nToday, ginvacuumcleanup() won't need to scan the whole index in the\nautoanalyze path that I'm concerned about - it will just do pending\nlist insertion. 
This does mean that the autoanalyze path taken within\nginvacuumcleanup() should be a lot faster than a similar cleanup-only\ncall to ginvacuumcleanup(). But...is there actually a good reason for\nthat? Why should a cleanup-only VACUUM ANALYZE (i.e. a V-A where the\nVACUUM does not see any heap-page LP_DEAD items) be so much slower\nthan a similar ANALYZE against the same table, under the same\nconditions? I see no good reason.\n\nIdeally, ginvacuumcleanup() would behave like btvacuumcleanup() and\nhashvacuumcleanup(). That is, it should not unnecessarily scan the\nindex (even when used by VACUUM). In other words, it should have\nsomething like the \"Skip full index scan\" mechanism that you added to\nnbtree in commit 857f9c36. That way it just wouldn't make sense to\nhave this autoanalyze path anymore -- it would no longer have this\naccidental advantage over a regular ginvacuumcleanup() call made from\nVACUUM.\n\nMore generally, I think it's a big problem that ginvacuumcleanup() has\nso many weird special cases. Why does the full_clean argument to\nginInsertCleanup() even exist? It makes the behavior inside\nginInsertCleanup() vary based on whether we're in autovacuum (or\nautoanalyze) for no reason at all. I think that the true reason for\nthis is simple: the code in ginInsertCleanup() is *bad*. full_clean\nwas just forgotten about by one of its many bug fixes since the code\nquality started to go down. Somebody (who shall remain nameless) was\njust careless when maintaining that code.\n\nVACUUM should be in charge of index AMs -- not the other way around.\nIt's good that the amvacuumcleanup() interface is so flexible, but I\nthink that GIN is over relying on this flexibility. Ideally, VACUUM\nwouldn't have to think about the specific index AMs involved at all --\nwhy should GIN be so different to GiST, nbtree, or hash? 
If GIN (or\nany other index AM) behaves like a special little snowflake, with its\nown unique performance characteristics, then it is harder to improve\ncertain important things inside VACUUM. For example, the conveyor belt\nindex vacuuming design from Robert Haas won't work as well as it\ncould.\n\n> Therefore, I personally think that it's better to eliminate\n> analyze_only code after introducing a way that allows us to perform\n> the pending list cleanup more frequently. I think that the idea of\n> Jaime Casanova's patch is a good solution.\n\nI now think that it would be better to fix ginvacuumcleanup() in the\nway that I described (I changed my mind). Jaime's patch does seem like\na good idea to me, but not for this. It makes sense to have that, for\nthe reasons that Jaime said himself.\n\n> I'm not very positive about back-patching. The first reason is what I\n> described above; I still see little value in performing pending list\n> cleanup during autoanalyze. Another reason is that if the user relies\n> on autoanalyze to perform pending list cleanup, they have to enable\n> autovacuum_vacuum_insert_threshold instead during the minor upgrade.\n> Since it also means to trigger autovacuum in more cases I think it\n> will have a profound impact on the existing system.\n\nI have to admit that when I wrote my earlier email, I was still a\nlittle shocked by the problem -- which is not a good state of mind\nwhen making a decision about backpatching. But, I now think that I\nunderappreciated the risk of making the problem worse in the\nbackbranches, rather than better. 
I won't be backpatching anything\nhere.\n\nThe problem report from Nikolay was probably an unusually bad case,\nwhere the pending list cleanup/insertion was particularly expensive.\nThis *is* really awful behavior, but that alone isn't a good enough\nreason to backpatch.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 16 Oct 2021 19:33:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GIN pending list cleanup during autoanalyze blocks cleanup by\n VACUUM" } ]
[ { "msg_contents": "Hi\n\nWhen 'ALTER FOREIGN DATA WRAPPER OPTIONS' is executed against \npostgres_fdw, the HINT message is printed as shown below, even though \nthere are no valid options in this context.\n\n=# ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv');\nERROR: invalid option \"format\"\nHINT: Valid options in this context are:\n\nI made a patch for this problem.\n\n\nregards,\nKosei Masumura", "msg_date": "Fri, 08 Oct 2021 16:18:02 +0900", "msg_from": "bt21masumurak <bt21masumurak@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Fri, Oct 8, 2021 at 12:48 PM bt21masumurak\n<bt21masumurak@oss.nttdata.com> wrote:\n>\n> Hi\n>\n> When 'ALTER FOREIGN DATA WRAPPER OPTIONS' is executed against\n> postgres_fdw, the HINT message is printed as shown below, even though\n> there are no valid options in this context.\n>\n> =# ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv');\n> ERROR: invalid option \"format\"\n> HINT: Valid options in this context are:\n>\n> I made a patch for this problem.\n\nGood catch. It seems like the change proposed for\npostgres_fdw_validator is similar to what file_fdw is doing in\nfile_fdw_validator. I think we also need to do the same change in\ndblink_fdw_validator and postgresql_fdw_validator as well.\n\nWhile on this, it's better to add test cases for the error message\n\"There are no valid options in this context.\" for all the three fdws\ni.e. 
file_fdw, postgres_fdw and dblink_fdw may be in their respective\ncontrib modules test files, and for postgresql_fdw_validator in\nsrc/test/regress/sql/foreign_data.sql.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Oct 2021 13:08:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "> On 8 Oct 2021, at 09:38, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Fri, Oct 8, 2021 at 12:48 PM bt21masumurak\n> <bt21masumurak@oss.nttdata.com> wrote:\n>> \n>> Hi\n>> \n>> When 'ALTER FOREIGN DATA WRAPPER OPTIONS' is executed against\n>> postgres_fdw, the HINT message is printed as shown below, even though\n>> there are no valid options in this context.\n>> \n>> =# ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv');\n>> ERROR: invalid option \"format\"\n>> HINT: Valid options in this context are:\n>> \n>> I made a patch for this problem.\n> \n> Good catch.\n\n+1\n\n> It seems like the change proposed for postgres_fdw_validator is similar to what\n> file_fdw is doing in file_fdw_validator. I think we also need to do the same\n> change in dblink_fdw_validator and postgresql_fdw_validator as well.\n\nAgreed.\n\n> While on this, it's better to add test cases for the error message\n> \"There are no valid options in this context.\" for all the three fdws\n> i.e. 
file_fdw, postgres_fdw and dblink_fdw may be in their respective\n> contrib modules test files, and for postgresql_fdw_validator in\n> src/test/regress/sql/foreign_data.sql.\n\n+1.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 10:13:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "Hi\nThank you for your comments.\n\n>> It seems like the change proposed for postgres_fdw_validator is \n>> similar to what\n>> file_fdw is doing in file_fdw_validator. I think we also need to do \n>> the same\n>> change in dblink_fdw_validator and postgresql_fdw_validator as well.\n> \n> Agreed.\n> \n>> While on this, it's better to add test cases for the error message\n>> \"There are no valid options in this context.\" for all the three fdws\n>> i.e. file_fdw, postgres_fdw and dblink_fdw may be in their respective\n>> contrib modules test files, and for postgresql_fdw_validator in\n>> src/test/regress/sql/foreign_data.sql.\n> \n> +1.\n\nI made new patch based on those comments.\n\nRegards,\nKosei Masumura\n\n\n2021-10-08 17:13 に Daniel Gustafsson さんは書きました:\n>> On 8 Oct 2021, at 09:38, Bharath Rupireddy \n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>> On Fri, Oct 8, 2021 at 12:48 PM bt21masumurak\n>> <bt21masumurak@oss.nttdata.com> wrote:\n>>> \n>>> Hi\n>>> \n>>> When 'ALTER FOREIGN DATA WRAPPER OPTIONS' is executed against\n>>> postgres_fdw, the HINT message is printed as shown below, even though\n>>> there are no valid options in this context.\n>>> \n>>> =# ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv');\n>>> ERROR: invalid option \"format\"\n>>> HINT: Valid options in this context are:\n>>> \n>>> I made a patch for this problem.\n>> \n>> Good catch.\n> \n> +1\n> \n>> It seems like the change proposed for postgres_fdw_validator is \n>> similar to what\n>> file_fdw is doing in 
file_fdw_validator. I think we also need to do \n>> the same\n>> change in dblink_fdw_validator and postgresql_fdw_validator as well.\n> \n> Agreed.\n> \n>> While on this, it's better to add test cases for the error message\n>> \"There are no valid options in this context.\" for all the three fdws\n>> i.e. file_fdw, postgres_fdw and dblink_fdw may be in their respective\n>> contrib modules test files, and for postgresql_fdw_validator in\n>> src/test/regress/sql/foreign_data.sql.\n> \n> +1.\n> \n> --\n> Daniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 12 Oct 2021 19:57:53 +0900", "msg_from": "bt21masumurak <bt21masumurak@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On 2021/10/12 19:57, bt21masumurak wrote:\n> I made new patch based on those comments.\n\nThanks for updating the patch!\n\n- errhint(\"HOGEHOGEValid options in this context are: %s\",\n- buf.data)));\n\nThe patch contains the garbage \"HOGEHOGE\" in the above,\nwhich causes the compiler to fail. Attached is the updated version\nof the patch. I got rid of the garbage.\n\n\n+--HINT test\n+ALTER FOREIGN DATA WRAPPER file_fdw OPTIONS (format 'csv');\n\nfile_fdw already has the test for ALTER FOREIGN DATA WRAPPER .. OPTIONS,\nso you don't need to add new test for the command into file_fdw.\nI removed that test from file_fdw.\n\n\nAlso I moved the tests for ALTER FOREIGN DATA WRAPPER .. OPTIONS,\nin the tests of postgres_fdw, dblink, and foreign data, into more proper\nplaces.\n\n\nBTW, I found file_fdw.c, dblink.c, postgres_fdw/option.c and foreign.c\nuse different error codes for the same error message as follows.\nThey should use the same error code? 
If yes, ISTM that\nERRCODE_FDW_INVALID_OPTION_NAME is better because\nthe error message is \"invalid option ...\".\n\n- ERRCODE_FDW_INVALID_OPTION_NAME (file_fdw.c)\n- ERRCODE_FDW_OPTION_NAME_NOT_FOUND (dblink.c)\n- ERRCODE_FDW_INVALID_OPTION_NAME (postgres_fdw/option.c)\n- ERRCODE_SYNTAX_ERROR (foreign.c)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 13 Oct 2021 02:41:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Tue, Oct 12, 2021 at 11:11 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> BTW, I found file_fdw.c, dblink.c, postgres_fdw/option.c and foreign.c\n> use different error codes for the same error message as follows.\n> They should use the same error code? If yes, ISTM that\n> ERRCODE_FDW_INVALID_OPTION_NAME is better because\n> the error message is \"invalid option ...\".\n>\n> - ERRCODE_FDW_INVALID_OPTION_NAME (file_fdw.c)\n> - ERRCODE_FDW_OPTION_NAME_NOT_FOUND (dblink.c)\n> - ERRCODE_FDW_INVALID_OPTION_NAME (postgres_fdw/option.c)\n> - ERRCODE_SYNTAX_ERROR (foreign.c)\n\nGood catch. ERRCODE_FDW_INVALID_OPTION_NAME seems reasonable to me. 
I\nthink we can remove the error code ERRCODE_FDW_OPTION_NAME_NOT_FOUND\n(it is being used only by dblink.c), instead use\nERRCODE_FDW_INVALID_OPTION_NAME for all option parsing and\nvalidations.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 10:30:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On 2021/10/13 14:00, Bharath Rupireddy wrote:\n> On Tue, Oct 12, 2021 at 11:11 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> BTW, I found file_fdw.c, dblink.c, postgres_fdw/option.c and foreign.c\n>> use different error codes for the same error message as follows.\n>> They should use the same error code? If yes, ISTM that\n>> ERRCODE_FDW_INVALID_OPTION_NAME is better because\n>> the error message is \"invalid option ...\".\n>>\n>> - ERRCODE_FDW_INVALID_OPTION_NAME (file_fdw.c)\n>> - ERRCODE_FDW_OPTION_NAME_NOT_FOUND (dblink.c)\n>> - ERRCODE_FDW_INVALID_OPTION_NAME (postgres_fdw/option.c)\n>> - ERRCODE_SYNTAX_ERROR (foreign.c)\n> \n> Good catch. ERRCODE_FDW_INVALID_OPTION_NAME seems reasonable to me. I\n> think we can remove the error code ERRCODE_FDW_OPTION_NAME_NOT_FOUND\n> (it is being used only by dblink.c), instead use\n> ERRCODE_FDW_INVALID_OPTION_NAME for all option parsing and\n> validations.\n\nAlvaro told me the difference of those error codes as follows at [1].\nThis makes me think that ERRCODE_FDW_OPTION_NAME_NOT_FOUND\nis more proper for the error message. 
Thought?\n\n-----------------------\nin SQL/MED compare GetServerOptByName: INVALID OPTION NAME is used\nwhen the buffer length does not match the option name length;\nOPTION NAME NOT FOUND is used when an option of that name is not found\n-----------------------\n\n[1] https://twitter.com/alvherre/status/1447991206286348302\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 14 Oct 2021 02:36:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Wed, Oct 13, 2021 at 11:06 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/13 14:00, Bharath Rupireddy wrote:\n> > On Tue, Oct 12, 2021 at 11:11 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >> BTW, I found file_fdw.c, dblink.c, postgres_fdw/option.c and foreign.c\n> >> use different error codes for the same error message as follows.\n> >> They should use the same error code? If yes, ISTM that\n> >> ERRCODE_FDW_INVALID_OPTION_NAME is better because\n> >> the error message is \"invalid option ...\".\n> >>\n> >> - ERRCODE_FDW_INVALID_OPTION_NAME (file_fdw.c)\n> >> - ERRCODE_FDW_OPTION_NAME_NOT_FOUND (dblink.c)\n> >> - ERRCODE_FDW_INVALID_OPTION_NAME (postgres_fdw/option.c)\n> >> - ERRCODE_SYNTAX_ERROR (foreign.c)\n> >\n> > Good catch. ERRCODE_FDW_INVALID_OPTION_NAME seems reasonable to me. I\n> > think we can remove the error code ERRCODE_FDW_OPTION_NAME_NOT_FOUND\n> > (it is being used only by dblink.c), instead use\n> > ERRCODE_FDW_INVALID_OPTION_NAME for all option parsing and\n> > validations.\n>\n> Alvaro told me the difference of those error codes as follows at [1].\n> This makes me think that ERRCODE_FDW_OPTION_NAME_NOT_FOUND\n> is more proper for the error message. 
Thought?\n>\n> -----------------------\n> in SQL/MED compare GetServerOptByName: INVALID OPTION NAME is used\n> when the buffer length does not match the option name length;\n> OPTION NAME NOT FOUND is used when an option of that name is not found\n> -----------------------\n>\n> [1] https://twitter.com/alvherre/status/1447991206286348302\n\nI'm fine with the distinction that's made, now I'm thinking about the\nappropriate areas where ERRCODE_FDW_INVALID_OPTION_NAME can be used.\nIs it correct to use ERRCODE_FDW_INVALID_OPTION_NAME in\npostgresImportForeignSchema where we don't check buffer length and\noption name length but throw the error when we don't find what's being\nexpected for IMPORT FOREIGN SCHEMA command? Isn't it the\nERRCODE_FDW_OPTION_NAME_NOT_FOUND right choice there? I've seen some\nof the option parsing logic(with the search text \"stmt->options)\" in\nthe code base), they are mostly using \"option \\\"%s\\\" not recognized\"\nwithout an error code or \"unrecognized XXXX option \\\"%s\\\"\" with\nERRCODE_SYNTAX_ERROR. I'm not sure which is right. 
If not in\npostgresImportForeignSchema, where else can\nERRCODE_FDW_INVALID_OPTION_NAME be used?\n\nIf we were to retain the error code ERRCODE_FDW_INVALID_OPTION_NAME,\ndo we need to maintain the difference in documentation or in code\ncomments or in the commit message at least?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 16 Oct 2021 16:13:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "\n\nOn 2021/10/16 19:43, Bharath Rupireddy wrote:\n> I'm fine with the distinction that's made, now I'm thinking about the\n> appropriate areas where ERRCODE_FDW_INVALID_OPTION_NAME can be used.\n> Is it correct to use ERRCODE_FDW_INVALID_OPTION_NAME in\n> postgresImportForeignSchema where we don't check buffer length and\n> option name length but throw the error when we don't find what's being\n> expected for IMPORT FOREIGN SCHEMA command? Isn't it the\n> ERRCODE_FDW_OPTION_NAME_NOT_FOUND right choice there? I've seen some\n> of the option parsing logic(with the search text \"stmt->options)\" in\n> the code base), they are mostly using \"option \\\"%s\\\" not recognized\"\n> without an error code or \"unrecognized XXXX option \\\"%s\\\"\" with\n> ERRCODE_SYNTAX_ERROR. I'm not sure which is right. If not in\n> postgresImportForeignSchema, where else can\n> ERRCODE_FDW_INVALID_OPTION_NAME be used?\n\nThese are good questions. But TBH I don't know the answers and have not\nfound good articles describing more detail definitions of those error codes.\nI'm tempted to improve the HINT message part at first because it has\nobviously an issue. 
And then we can consider what error code should be\nused in FDW layer if necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 Oct 2021 15:30:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Mon, Oct 25, 2021 at 12:00 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/10/16 19:43, Bharath Rupireddy wrote:\n> > I'm fine with the distinction that's made, now I'm thinking about the\n> > appropriate areas where ERRCODE_FDW_INVALID_OPTION_NAME can be used.\n> > Is it correct to use ERRCODE_FDW_INVALID_OPTION_NAME in\n> > postgresImportForeignSchema where we don't check buffer length and\n> > option name length but throw the error when we don't find what's being\n> > expected for IMPORT FOREIGN SCHEMA command? Isn't it the\n> > ERRCODE_FDW_OPTION_NAME_NOT_FOUND right choice there? I've seen some\n> > of the option parsing logic(with the search text \"stmt->options)\" in\n> > the code base), they are mostly using \"option \\\"%s\\\" not recognized\"\n> > without an error code or \"unrecognized XXXX option \\\"%s\\\"\" with\n> > ERRCODE_SYNTAX_ERROR. I'm not sure which is right. If not in\n> > postgresImportForeignSchema, where else can\n> > ERRCODE_FDW_INVALID_OPTION_NAME be used?\n>\n> These are good questions. But TBH I don't know the answers and have not\n> found good articles describing more detail definitions of those error codes.\n> I'm tempted to improve the HINT message part at first because it has\n> obviously an issue. And then we can consider what error code should be\n> used in FDW layer if necessary.\n\nYeah, let's focus on fixing the hint message here and the\nalter_foreign_data_wrapper_options_v3.patch LGTM.\n\nWhy didn't we have a test case for file_fdw? 
It looks like the\nfile_fdw contrib module doesn't have any test cases in its sql\ndirectory. I'm not sure if the module code is being covered in\nsomeother tests.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:14:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Mon, Oct 25, 2021 at 12:00 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/16 19:43, Bharath Rupireddy wrote:\n> > I'm fine with the distinction that's made, now I'm thinking about the\n> > appropriate areas where ERRCODE_FDW_INVALID_OPTION_NAME can be used.\n> > Is it correct to use ERRCODE_FDW_INVALID_OPTION_NAME in\n> > postgresImportForeignSchema where we don't check buffer length and\n> > option name length but throw the error when we don't find what's being\n> > expected for IMPORT FOREIGN SCHEMA command? Isn't it the\n> > ERRCODE_FDW_OPTION_NAME_NOT_FOUND right choice there? I've seen some\n> > of the option parsing logic(with the search text \"stmt->options)\" in\n> > the code base), they are mostly using \"option \\\"%s\\\" not recognized\"\n> > without an error code or \"unrecognized XXXX option \\\"%s\\\"\" with\n> > ERRCODE_SYNTAX_ERROR. I'm not sure which is right. If not in\n> > postgresImportForeignSchema, where else can\n> > ERRCODE_FDW_INVALID_OPTION_NAME be used?\n>\n> These are good questions. 
But TBH I don't know the answers and have not\n> found good articles describing more detail definitions of those error codes.\n> And then we can consider what error code should be\n> used in FDW layer if necessary.\n\nYeah, after this HINT message correction patch gets in, another thread\ncan be started for the error code usage discussion.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:20:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On 2021/10/25 16:44, Bharath Rupireddy wrote:\n> Yeah, let's focus on fixing the hint message here and the\n> alter_foreign_data_wrapper_options_v3.patch LGTM.\n\nThanks! But since v3 changed the error codes, I got rid of those\nchanges and made v4 patch. Attached.\n\n> Why didn't we have a test case for file_fdw? It looks like the\n> file_fdw contrib module doesn't have any test cases in its sql\n> directory. I'm not sure if the module code is being covered in\n> someother tests.\n\nYou can see the regression test for file_fdw,\nin contrib/file_fdw/input and output directories.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 25 Oct 2021 20:06:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "On Mon, Oct 25, 2021 at 4:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/25 16:44, Bharath Rupireddy wrote:\n> > Yeah, let's focus on fixing the hint message here and the\n> > alter_foreign_data_wrapper_options_v3.patch LGTM.\n>\n> Thanks! But since v3 changed the error codes, I got rid of those\n> changes and made v4 patch. 
Attached.\n\nThat's okay as we plan to deal with the error code stuff separately.\n\n> > Why didn't we have a test case for file_fdw? It looks like the\n> > file_fdw contrib module doesn't have any test cases in its sql\n> > directory. I'm not sure if the module code is being covered in\n> > someother tests.\n>\n> You can see the regression test for file_fdw,\n> in contrib/file_fdw/input and output directories.\n\nI missed it. Thanks. I see that there are already test cases covering\nerror message with hint - \"There are no valid options in this\ncontext.\"\n\nThe v4 patch LGTM.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 17:57:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" }, { "msg_contents": "\n\nOn 2021/10/25 21:27, Bharath Rupireddy wrote:\n> On Mon, Oct 25, 2021 at 4:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/10/25 16:44, Bharath Rupireddy wrote:\n>>> Yeah, let's focus on fixing the hint message here and the\n>>> alter_foreign_data_wrapper_options_v3.patch LGTM.\n>>\n>> Thanks! But since v3 changed the error codes, I got rid of those\n>> changes and made v4 patch. Attached.\n> \n> That's okay as we plan to deal with the error code stuff separately.\n> \n>>> Why didn't we have a test case for file_fdw? It looks like the\n>>> file_fdw contrib module doesn't have any test cases in its sql\n>>> directory. I'm not sure if the module code is being covered in\n>>> someother tests.\n>>\n>> You can see the regression test for file_fdw,\n>> in contrib/file_fdw/input and output directories.\n> \n> I missed it. Thanks. I see that there are already test cases covering\n> error message with hint - \"There are no valid options in this\n> context.\"\n> \n> The v4 patch LGTM.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 27 Oct 2021 01:04:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve the HINT message of the ALTER command for postgres_fdw" } ]
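The fix committed in the thread above follows the pattern file_fdw already used: build the list of valid options first, and only emit the "Valid options in this context are: …" hint when that list is non-empty, falling back to "There are no valid options in this context." otherwise. A rough standalone model of that branching — plain C with a string buffer standing in for the real ereport()/errhint() machinery in postgres_fdw's validator:

```c
#include <stdio.h>
#include <string.h>

/*
 * Standalone model of the hint-message branching discussed in the thread.
 * The real code lives in the FDW validators and goes through ereport();
 * this sketch only reproduces the "empty option list" special case.
 */
static void
format_invalid_option_hint(const char *valid_opts, char *out, size_t outlen)
{
    if (valid_opts != NULL && valid_opts[0] != '\0')
        snprintf(out, outlen,
                 "Valid options in this context are: %s", valid_opts);
    else
        snprintf(out, outlen,
                 "There are no valid options in this context.");
}
```

With an empty option list (the ALTER FOREIGN DATA WRAPPER case that started the thread), this produces the corrected hint rather than a dangling "Valid options in this context are:".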
[ { "msg_contents": "Hi hackers,\n\n== Background ==\n\nThis is a follow-up thread to `Add ZSON extension to /contrib/` [1].\nThe ZSON extension introduces a new type called ZSON, which is 100%\ncompatible with JSONB but uses a shared dictionary of strings most\nfrequently used by given JSONB documents for compression. See the\nthread for more details.\n\nAccording to the feedback I got, the community generally liked the\nidea of adding an across-rows and across-tables compression capability\nto JSONB. What the community didn't like was:\n\n1. Introducing a new data type in order to archive this;\n2. Updating compression dictionaries manually;\n3. Some implementation details of ZSON, such as limited dictionary\nsize (2 ** 16 entries) and an extensive usage of gettimeofday() system\ncall;\n\nThere was also a request for proof of the usefulness of this feature\nin practice.\n\nTo be honest with you I don't have solid proof that many users require\nthis feature, and how many users that would be exactly. ZSON was\noriginally developed because a customer of Postgres Professional\nrequested it back in 2016. People approach me with questions from time\nto time. E.g. one user asked me recently how the extension can be\ncompiled on Windows [2].\n\nAndrew Dunstan reported that 2nd Quadrant (now EDB) has a fork of ZSON\nwith some enhancements [3]. Konstantin Knizhnik reported that he\nworked on a similar extension [4]. Unfortunately, Andrew and\nKonstantin didn't give any more details (hopefully they will). But all\nin all, this indicates some demand.\n\nSpeaking of performance, some time ago I benchmarked ZSON on data\nsimilar to the one that the customer had [5]. The benchmark showed\n~10% performance improvements in terms of TPS and ~50% of saved disk\nspace. The extension saved the memory as well, which was known from\nthe implementation. 
The exact amount of saved memory was not measured.\nThis benchmark shouldn't be considered as proof that all users will\nnecessarily benefit from such a feature. But it indicates that some\nusers could.\n\n== Proposal ==\n\nThe proposal is to add the support of compression dictionaries to JSONB.\n\nIn order to do this, the SQL syntax should be modified. The proposed\nsyntax is based on Matthias van de Meent's idea [6]:\n\n```\nCREATE TYPE <type-name> AS JSONB_DICTIONARY (\n learn_by = { {\"table_a\", \"column_a\"}, {\"table_b\", \"column_b\"}, ... },\n autoupdate = false, -- true by default\n -- optional: other arguments, min/max string lengths, etc\n);\n```\n\nBasically, this is an equivalent of zson_learn [7]. It will create an\nid -> string dictionary in the PostgreSQL catalog. When the user\nchooses `autoupdate = true`, the dictionary will be updated\nautomatically by PostgreSQL (e.g. during the VACUUM). This is the\ndefault value. The dictionary can also be updated manually:\n\n```\nSELECT jsonb_update_dictionary(\"type-name\");\n```\n\nOther than that, the type works like a regular one. All the usual\nALTER TYPE / DROP TYPE semantics are applicable. All the operators\navailable to JSONB are also available to <type-name>.\n\nInternally <type-name> is represented similar to JSONB. However, the\nstrings from the dictionary are replaced with varints. This idea was\nborrowed from Tomas Vondra [8]. The dictionary size is limited to\n2**28 entries. The limit can be easily extended in the future if\nnecessary. Also <type-name> stores the version of the dictionary used\nto compress the data. All in all, this is similar to how ZSON works.\n\nThe first implementation always decompresses <type-name> entirely.\nPartial compression and decompression can always be added\ntransparently to the user.\n\n== Looking for a feedback ===\n\nI would appreciate your feedback on this RFC.\n\nIs anything missing in the description of the feature? Do you think\nusers need it? 
Can you think of a better user interface? Are there any\ncorner cases worth considering? Any other comments and questions are\nwelcome too!\n\nI would like to implement this when the consensus will be reached on\nhow the feature should look like (and whether we need it). Any help\n(from co-authors, REVIEWERS!!!, technical writers, ...) would be much\nappreciated.\n\n== Links ==\n\n[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TP3fCC9TNKJBQAcEf4c%3DL7XQZ7QvuUayLgjhNQMD_5M_A%40mail.gmail.com\n[2]: https://github.com/postgrespro/zson/issues?q=is%3Aissue+is%3Aclosed\n[3]: https://www.postgresql.org/message-id/6f3944ad-6924-5fed-580c-e72477733f04%40dunslane.net\n[4]: https://github.com/postgrespro/jsonb_schema\n[5]: https://github.com/postgrespro/zson/blob/master/docs/benchmark.md\n[6]: https://www.postgresql.org/message-id/CAEze2WheMusc73UZ5TpfiAGQ%3DrRwSSgr0y3j9DEVAQgQFwneRA%40mail.gmail.com\n[7]: https://github.com/postgrespro/zson#usage\n[8]: https://www.postgresql.org/message-id/77356556-0634-5cde-f55e-cce739dc09b9%40enterprisedb.com\n\n-- \nBest regards,\nAleksander Alekseev\nOpen-Source PostgreSQL Contributor at Timescale\n\n\n", "msg_date": "Fri, 8 Oct 2021 12:47:17 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "RFC: compression dictionaries for JSONB" }, { "msg_contents": "On Fri, 8 Oct 2021 at 11:47, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> This is a follow-up thread to `Add ZSON extension to /contrib/` [1].\n> The ZSON extension introduces a new type called ZSON, which is 100%\n> compatible with JSONB but uses a shared dictionary of strings most\n> frequently used by given JSONB documents for compression. See the\n> thread for more details.\n\nGreat to see that you're still working on this! It would be great if\nwe could get this into postgres. 
As such, I hope you can provide some\nclarifications on my questions and comments.\n\n> According to the feedback I got, the community generally liked the\n> idea of adding an across-rows and across-tables compression capability\n> to JSONB. What the community didn't like was:\n>\n> 1. Introducing a new data type in order to archive this;\n> 2. Updating compression dictionaries manually;\n\nWell, I for one would like access to manually add entries to the\ndictionary. What I'm not interested in is being required to manually\nupdate the dictionary; but the ability to manually insert into the\ndictionary however is much appreciated.\n\n> 3. Some implementation details of ZSON, such as limited dictionary\n> size (2 ** 16 entries) and an extensive usage of gettimeofday() system\n> call;\n>\n> There was also a request for proof of the usefulness of this feature\n> in practice.\n\nMore compact JSONB is never a bad idea: one reason to stick with JSON\nover JSONB is that JSON can use significantly less space than JSONB,\nif stored properly. So, improving the disk usage of JSONB is not\nreally a bad idea.\n\n>\n> == Proposal ==\n>\n> The proposal is to add the support of compression dictionaries to JSONB.\n>\n> In order to do this, the SQL syntax should be modified. The proposed\n> syntax is based on Matthias van de Meent's idea [6]:\n\nSeems fine\n\n> ```\n> CREATE TYPE <type-name> AS JSONB_DICTIONARY (\n> learn_by = { {\"table_a\", \"column_a\"}, {\"table_b\", \"column_b\"}, ... },\n\nI'm having trouble understanding how this learn_by field would be used:\n\nIf stored as strings, they would go out of date when tables or columns\nare renamed or dropped.\nSimilarly, you'd want to update the dictionary with common values in\ncolumns of that type; generally not columns of arbitrary other types.\nYou can't in advance know the names of tables and columns, so that\nwould add a burden of maintenance to the user when they add / change /\nremove a column of the dictionary type. 
Instead of storing 'use update\ndata from table X column Y' in the type, I think that adding it as a\ncolumn option would be the better choice.\n\nI agree with an option for auto-update, though I don't think we have\nenough information to determine the default value (I'd err to the side\nof caution, going with 'off').\n\n> autoupdate = false, -- true by default\n> -- optional: other arguments, min/max string lengths, etc\n> );\n> ```\n\nFor dump/restore I think it would be very useful to allow export &\nimport of these dictionaries, so that restored databases don't have\nthe problem of starting cold.\n\nAs such, `ALTER TYPE jsondict ADD ENTRY entry_value` would probably\nbe useful, and maybe even `CREATE TYPE dict AS JSONB_DICTIONARY\n('\"entry_1\"', '\"entry_two\"', '\"entry_three\"') WITH (option =\noptional)`\n\n> Basically, this is an equivalent of zson_learn [7]. It will create an\n> id -> string dictionary in the PostgreSQL catalog. When the user\n> chooses `autoupdate = true`, the dictionary will be updated\n> automatically by PostgreSQL (e.g. during the VACUUM). This is the\n> default value. The dictionary can also be updated manually:\n>\n> ```\n> SELECT jsonb_update_dictionary(\"type-name\");\n> ```\n\nI'm a bit on the fence about this. We do use this for sequences, but\nalternatively we might want to use ALTER TYPE jsondict;\n\n> Other than that, the type works like a regular one. All the usual\n> ALTER TYPE / DROP TYPE semantics are applicable. All the operators\n> available to JSONB are also available to <type-name>.\n>\n> Internally <type-name> is represented similar to JSONB. However, the\n> strings from the dictionary are replaced with varints.\n\nHow do you propose to differentiate actual integers with these keyed\nstrings, and / or actual strings with varints? Replacing _all_ strings\ndoesn't seem like such a great idea.\n\nRelated comment below.\n\n> This idea was\n> borrowed from Tomas Vondra [8]. 
The dictionary size is limited to\n> 2**28 entries. The limit can be easily extended in the future if\n> necessary. Also <type-name> stores the version of the dictionary used\n> to compress the data. All in all, this is similar to how ZSON works.\n\nI appreciate this idea, but using that varint implementation is not a\nchoice I'd make. In the jsonb storage format, we already encode the\nlength of each value, so varint shouldn't be necessary here. Next, as\neverything in jsonb storage is 4-byte aligned, a uint32 should\nsuffice, or if we're being adventurous, we might even fit a uint29\nidentifier in the length field instead (at the cost of full backwards\nincompatibility).\n\nLastly, we don't have a good format for varint now (numeric is close,\nbut has significant overhead), so I'd say we should go with a\nfixed-size integer and accept that limitation.\n\nMy own suggestion would be updating JSONB on-disk format with the following:\n\n```\n /* values stored in the type bits */\n #define JENTRY_ISSTRING 0x00000000\n #define JENTRY_ISNUMERIC 0x10000000\n #define JENTRY_ISBOOL_FALSE 0x20000000\n #define JENTRY_ISBOOL_TRUE 0x30000000\n #define JENTRY_ISNULL 0x40000000\n #define JENTRY_ISCONTAINER 0x50000000 /* array or object */\n+#define JENTRY_ISSYMBOL 0x60000000 /* Lookup in dictionary */\n```\n\nAnd then store the symbol in the JEntry (either in the\nJENTRY_OFFLENMASK or in the actual referred content), whilst maybe\nusing some bits in this for e.g. type hints (whether the item in the\ndictionary is an array, object, string or numeric).\n\nI really would like this to support non-string types, because jsonb\nstructures can grow quite large, even with only small strings: e.g.\n`{..., \"tags\": {\"customer\": \"blabla\"}}` could be dictionaried to\n`{..., \"tags\": {'1: '2}`, but potentially also to `{... 
\"tags\": '1}`.\nOf these, the second would be more efficient overall for storage and\nretrieval.\n\n> The first implementation always decompresses <type-name> entirely.\n> Partial compression and decompression can always be added\n> transparently to the user.\n\nAre you talking about the TOAST compression and decompression, or are\nyou talking about a different compression scheme? If a different\nscheme, is it only replacing the strings in the jsonb-tree with their\ndictionary identifiers, and replacing the symbols in the jsonb-tree\nwith text (all through the JSONB internals), or are you proposing an\nactual compression scheme over the stored jsonb bytes (effectively\nwrapping the jsonb IO functions)?\n\nOverall, I'm glad to see this take off, but I do want some\nclarifications regarding the direction this is going.\n\n\nKind regards,\n\nMatthias\n\n\n", "msg_date": "Fri, 8 Oct 2021 16:12:30 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On 2021-Oct-08, Matthias van de Meent wrote:\n\n> On Fri, 8 Oct 2021 at 11:47, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n\n> > In order to do this, the SQL syntax should be modified. The proposed\n> > syntax is based on Matthias van de Meent's idea [6]:\n> \n> Seems fine\n> \n> > ```\n> > CREATE TYPE <type-name> AS JSONB_DICTIONARY (\n> > learn_by = { {\"table_a\", \"column_a\"}, {\"table_b\", \"column_b\"}, ... },\n\nActually, why is it a JSONB_DICTIONARY and not like\n\nCREATE TYPE name AS DICTIONARY (\n base_type = JSONB, ...\n);\n\nso that it is possible to use the infrastructure for other things? For\nexample, perhaps PostGIS geometries could benefit from it -- or even\ntext or xml columns.\n\nThe pg_type entry would have to provide some support procedure that\nmakes use of the dictionary in some way. 
This seems better than tying\nthe SQL object to a specific type.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n", "msg_date": "Fri, 8 Oct 2021 12:19:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On Fri, 8 Oct 2021 at 17:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-08, Matthias van de Meent wrote:\n>\n> > On Fri, 8 Oct 2021 at 11:47, Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n>\n> > > In order to do this, the SQL syntax should be modified. The proposed\n> > > syntax is based on Matthias van de Meent's idea [6]:\n> >\n> > Seems fine\n> >\n> > > ```\n> > > CREATE TYPE <type-name> AS JSONB_DICTIONARY (\n> > > learn_by = { {\"table_a\", \"column_a\"}, {\"table_b\", \"column_b\"}, ... },\n>\n> Actually, why is it a JSONB_DICTIONARY and not like\n>\n> CREATE TYPE name AS DICTIONARY (\n> base_type = JSONB, ...\n> );\n\nThat's a good point, but if we're extending this syntax to allow the\nability of including other types, then I'd instead extend the syntax\nthat of below, so that the type of the dictionary entries is required\nin the syntax:\n\nCREATE TYPE name AS DICTIONARY OF jsonb [ ( ...entries ) ] [ WITH (\n...options ) ];\n\n> so that it is possible to use the infrastructure for other things? For\n> example, perhaps PostGIS geometries could benefit from it -- or even\n> text or xml columns.\n>\n> The pg_type entry would have to provide some support procedure that\n> makes use of the dictionary in some way. 
This seems better than tying\n> the SQL object to a specific type.\n\nAgreed, but this might mean that much more effort would be required to\nget such a useful quality-of-life feature committed.\n\nKind regards,\n\nMatthias\n\n\n", "msg_date": "Fri, 8 Oct 2021 19:17:39 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On 2021-Oct-08, Matthias van de Meent wrote:\n\n> That's a good point, but if we're extending this syntax to allow the\n> ability of including other types, then I'd instead extend the syntax\n> that of below, so that the type of the dictionary entries is required\n> in the syntax:\n> \n> CREATE TYPE name AS DICTIONARY OF jsonb [ ( ...entries ) ] [ WITH (\n> ...options ) ];\n\nI don't think this gives you any guarantees of the sort you seem to\nexpect. See CREATE AGGREGATE as a precedent where there are some\noptions in the parenthesized options list you cannot omit.\n\n> > The pg_type entry would have to provide some support procedure that\n> > makes use of the dictionary in some way. This seems better than tying\n> > the SQL object to a specific type.\n> \n> Agreed, but this might mean that much more effort would be required to\n> get such a useful quality-of-life feature committed.\n\nI don't understand what you mean by that. I'm not saying that the patch\nhas to provide support for any additional datatypes. 
Its only\nobligation would be to provide a new column in pg_type which is zero for\nall rows except jsonb, and in that row it is the OID of a\njsonb_dictionary() function that's called from all the right places and\nreceives all the right arguments.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n", "msg_date": "Fri, 8 Oct 2021 16:21:32 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On Fri, 8 Oct 2021 at 21:21, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-08, Matthias van de Meent wrote:\n>\n> > That's a good point, but if we're extending this syntax to allow the\n> > ability of including other types, then I'd instead extend the syntax\n> > that of below, so that the type of the dictionary entries is required\n> > in the syntax:\n> >\n> > CREATE TYPE name AS DICTIONARY OF jsonb [ ( ...entries ) ] [ WITH (\n> > ...options ) ];\n>\n> I don't think this gives you any guarantees of the sort you seem to\n> expect. See CREATE AGGREGATE as a precedent where there are some\n> options in the parenthesized options list you cannot omit.\n\nBikeshedding on syntax:\nI guess? I don't really like 'required options' patterns. If you're\nrequired to use/specify an option, then it's not optional, and should\nthus not be included in the group of 'options'.\n\n> > > The pg_type entry would have to provide some support procedure that\n> > > makes use of the dictionary in some way. This seems better than tying\n> > > the SQL object to a specific type.\n> >\n> > Agreed, but this might mean that much more effort would be required to\n> > get such a useful quality-of-life feature committed.\n>\n> I don't understand what you mean by that. I'm not saying that the patch\n> has to provide support for any additional datatypes. 
Its only\n> obligation would be to provide a new column in pg_type which is zero for\n> all rows except jsonb, and in that row it is the OID of a\n> jsonb_dictionary() function that's called from all the right places and\n> receives all the right arguments.\n\nThis seems feasible to do, but I still have limited knowledge on the\nintricacies of the type system, and as such I don't see how this part\nwould function:\n\nI was expecting something more along the lines of how array types seem to\nwork: Type _A is an array type, containing elements of Type A. Its\ncontaining type is defined in pg_type.typbasetype. No special\nfunctions are defined on base types to allow their respective array\ntypes, that part is handled by the array infrastructure. Same for\nDomain types.\n\nNow that I think about it, we should still provide the information on\n_how_ to find the type functions for the dictionaried type: Arrays and\ndomains are generic, but dictionaries will require deep understanding\nof the underlying type.\n\nSo, yes, you are correct, there should be one more function, which\nwould supply the necessary pg_type functions that CREATE TYPE\nDICTIONARY can then register in the pg_type entry of the dictionary\ntype. The alternative would initially be hardcoding this for the base\ntypes that have dictionary support, which definitely would be possible\nfor a first iteration, but wouldn't be great.\n\n\nKind regards,\n\nMatthias\n\n\n", "msg_date": "Sat, 9 Oct 2021 14:49:56 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "Matthias, Alvaro,\n\nMany thanks for your comments and suggestions!\n\n> Well, I for one would like access to manually add entries to the\n> dictionary. 
What I'm not interested in is being required to manually\n> update the dictionary; but the ability to manually insert into the\n> dictionary however is much appreciated.\n\nSure, I see no reason why we can't do it. This would also simplify\nsplitting the task into smaller ones. We could introduce only manually\nupdated dictionaries first, and then automate filling them for the\nusers who need this.\n\n> If stored as strings, they would go out of date when tables or columns\n> are renamed or dropped.\n> Similarly, you'd want to update the dictionary with common values in\n> columns of that type; generally not columns of arbitrary other types.\n> You can't in advance know the names of tables and columns, so that\n> would add a burden of maintenance to the user when they add / change /\n> remove a column of the dictionary type. Instead of storing 'use update\n> data from table X column Y' in the type, I think that adding it as a\n> column option would be the better choice.\n\nAgree, add / change / remove of a column should be handled\nautomatically. Just to clarify, by column option do you mean syntax\nlike ALTER TABLE ... ALTER COLUMN ... etc, right? I didn't think of\nextending this part of the syntax. That would be a better choice\nindeed.\n\n> I'm a bit on the fence about this. We do use this for sequences, but\n> alternatively we might want to use ALTER TYPE jsondict;\n\nAgree, ALTER TYPE seems to be a better choice than SELECT function().\nThis would make the interface more consistent.\n\n> Overall, I'm glad to see this take off, but I do want some\n> clarifications regarding the direction that this is going towards.\n> [...]\n> Actually, why is it a JSONB_DICTIONARY and not like:\n> CREATE TYPE name AS DICTIONARY (\n> base_type = JSONB, ...\n> );\n> so that it is possible to use the infrastructure for other things? 
For\n> example, perhaps PostGIS geometries could benefit from it -- or even\n> text or xml columns.\n\nSo the question is if we want to extend the capabilities of a single\ntype, i.e. JSONB, or to add a functionality that would work for the\nvarious types. I see the following pros and cons of both approaches.\n\nModifying JSONB may at some point allow to partially decompress only\nthe parts of the document that need to be decompressed for a given\nquery. However, the compression schema will be probably less\nefficient. There could also be difficulties in respect of backward\ncompatibility, and this is going to work only with JSONB.\n\nAn alternative approach, CREATE TYPE ... AS DICTIONARY OF <type> or\nsomething like this would work not only for JSONB, but also for TEXT,\nXML, arrays, and PostGIS. By the way, this was suggested before [1].\nAnother advantage here is that all things being equal the compression\nschema could be more efficient. The feature will not affect existing\ntypes. The main disadvantage is that implementing a partial\ndecompression would be very difficult and/or impractical.\n\nPersonally, I would say that the 2nd option, CREATE TYPE ... AS\nDICTIONARY OF <type>, seems to be more useful. To my knowledge, not\nmany users would care much about partial decompression, and this is\nthe only real advantage of the 1st option I see. Also, this is how\nZSON is implemented. It doesn't care about the underlying type and\ntreats it as a BLOB. Thus the proofs of usefulness I provided above\nare not quite valid for the 1st option. Probably unrelated, but 2nd\noption would be even easier for me to implement since I already solved\na similar task.\n\nAll in all, I suggest focusing on the 2nd option with universal\ncompression dictionaries. 
Naturally, the focus will be on JSONB first.\nBut we will be able to extend this functionality for other types as\nwell.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/CANP8%2BjLT8r03LJsw%3DdUSFxBh5pRB%2BUCKvS3BUT-dd4JPRDb3tg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:25:07 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On Mon, 11 Oct 2021 at 15:25, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Agree, add / change / remove of a column should be handled\n> automatically. Just to clarify, by column option do you mean syntax\n> like ALTER TABLE ... ALTER COLUMN ... etc, right?\n\nCorrect, either SET (option) or maybe using typmod (which is hack-ish,\nbut could save on some bytes of storage)\n\n> I didn't think of\n> extending this part of the syntax. That would be a better choice\n> indeed.\n\n> > Overall, I'm glad to see this take off, but I do want some\n> > clarifications regarding the direction that this is going towards.\n> > [...]\n> > Actually, why is it a JSONB_DICTIONARY and not like:\n> > CREATE TYPE name AS DICTIONARY (\n> > base_type = JSONB, ...\n> > );\n> > so that it is possible to use the infrastructure for other things? For\n> > example, perhaps PostGIS geometries could benefit from it -- or even\n> > text or xml columns.\n>\n> So the question is if we want to extend the capabilities of a single\n> type, i.e. JSONB, or to add a functionality that would work for the\n> various types. I see the following pros and cons of both approaches.\n>\n> Modifying JSONB may at some point allow to partially decompress only\n> the parts of the document that need to be decompressed for a given\n> query. However, the compression schema will be probably less\n> efficient. 
There could also be difficulties in respect of backward\n> compatibility, and this is going to work only with JSONB.\n\nAssuming this above is option 1. If I understand correctly, this\noption was 'adapt the data type so that it understands how to handle a\nshared dictionary, decreasing storage requirements'.\n\n> An alternative approach, CREATE TYPE ... AS DICTIONARY OF <type> or\n> something like this would work not only for JSONB, but also for TEXT,\n> XML, arrays, and PostGIS. By the way, this was suggested before [1].\n> Another advantage here is that all things being equal the compression\n> schema could be more efficient. The feature will not affect existing\n> types. The main disadvantage is that implementing a partial\n> decompression would be very difficult and/or impractical.\n\nAssuming this was the 2nd option. If I understand correctly, this\noption is effectively 'adapt or wrap TOAST to understand and handle\ndictionaries for dictionary encoding common values'.\n\n> Personally, I would say that the 2nd option, CREATE TYPE ... AS\n> DICTIONARY OF <type>, seems to be more useful. To my knowledge, not\n> many users would care much about partial decompression, and this is\n> the only real advantage of the 1st option I see. Also, this is how\n> ZSON is implemented. It doesn't care about the underlying type and\n> treats it as a BLOB. Thus the proofs of usefulness I provided above\n> are not quite valid for the 1st option. Probably unrelated, but 2nd\n> option would be even easier for me to implement since I already solved\n> a similar task.\n>\n> All in all, I suggest focusing on the 2nd option with universal\n> compression dictionaries. 
Naturally, the focus will be on JSONB first.\n> But we will be able to extend this functionality for other types as\n> well.\n>\n> Thoughts?\n\nI think that an 'universal dictionary encoder' would be useful, but\nthat a data type might also have good reason to implement their\nreplacement methods by themselves for better overall performance (such\nas maintaining partial detoast support in dictionaried items, or\noverall lower memory footprint, or ...). As such, I'd really\nappreciate it if Option 1 is not ruled out by any implementation of\nOption 2.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 11 Oct 2021 19:39:02 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "Hi Matthias,\n\n> Assuming this above is option 1. If I understand correctly, this\n> option was 'adapt the data type so that it understands how to handle a\n> shared dictionary, decreasing storage requirements'.\n> [...]\n> Assuming this was the 2nd option. If I understand correctly, this\n> option is effectively 'adapt or wrap TOAST to understand and handle\n> dictionaries for dictionary encoding common values'.\n\nYes, exactly.\n\n> I think that an 'universal dictionary encoder' would be useful, but\n> that a data type might also have good reason to implement their\n> replacement methods by themselves for better overall performance (such\n> as maintaining partial detoast support in dictionaried items, or\n> overall lower memory footprint, or ...). As such, I'd really\n> appreciate it if Option 1 is not ruled out by any implementation of\n> Option 2.\n\nI agree, having the benefits of two approaches in one feature would be\ngreat. However, I'm having some difficulties imagining how the\nimplementation would look like in light of the pros and cons stated\nabove. 
I could use some help here.\n\nOne approach I can think of is introducing a new entity, let's call it\n\"dictionary compression method\". The idea is similar to access methods\nand tableam's. There is a set of callbacks the dictionary compression\nmethod should implement, some are mandatory, some can be set to NULL.\nUsers can specify the compression method for the dictionary:\n\n```\nCREATE TYPE name AS DICTIONARY OF JSONB (\n compression_method = 'jsonb_best_compression'\n -- compression_methods = 'jsonb_fastest_partial_decompression'\n -- if not specified, some default compression method is used\n);\n```\n\nJSONB is maybe not the best example of the type for which people may\nneed multiple compression methods in practice. But I can imagine how\noverwriting a compression method for, let's say, arrays in an\nextension could be beneficial depending on the application.\n\nThis approach will make an API well-defined and, more importantly,\nextendable. In the future, we could add additional (optional) methods\nfor particular scenarios, like partial decompression.\n\nDoes it sound like a reasonable approach?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:48:08 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "On Wed, 13 Oct 2021 at 11:48, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Matthias,\n>\n> > Assuming this above is option 1. If I understand correctly, this\n> > option was 'adapt the data type so that it understands how to handle a\n> > shared dictionary, decreasing storage requirements'.\n> > [...]\n> > Assuming this was the 2nd option. 
If I understand correctly, this\n> > option is effectively 'adapt or wrap TOAST to understand and handle\n> > dictionaries for dictionary encoding common values'.\n>\n> Yes, exactly.\n>\n> > I think that an 'universal dictionary encoder' would be useful, but\n> > that a data type might also have good reason to implement their\n> > replacement methods by themselves for better overall performance (such\n> > as maintaining partial detoast support in dictionaried items, or\n> > overall lower memory footprint, or ...). As such, I'd really\n> > appreciate it if Option 1 is not ruled out by any implementation of\n> > Option 2.\n>\n> I agree, having the benefits of two approaches in one feature would be\n> great. However, I'm having some difficulties imagining how the\n> implementation would look like in light of the pros and cons stated\n> above. I could use some help here.\n>\n> One approach I can think of is introducing a new entity, let's call it\n> \"dictionary compression method\". The idea is similar to access methods\n> and tableam's. There is a set of callbacks the dictionary compression\n> method should implement, some are mandatory, some can be set to NULL.\n\nYou might also want to look into the 'pluggable compression support'\n[0] and 'Custom compression methods' [1] threads for inspiration, as\nthat seems very similar to what was originally proposed there. 
(†)\n\nOne important difference from those discussed at [0][1] is that the\ncompression proposed here is at the type level, while the compression\nproposed in both 'Pluggable compression support' and 'Custom\ncompression methods' is at the column / table / server level.\n\n> Users can specify the compression method for the dictionary:\n>\n> ```\n> CREATE TYPE name AS DICTIONARY OF JSONB (\n> compression_method = 'jsonb_best_compression'\n> -- compression_methods = 'jsonb_fastest_partial_decompression'\n> -- if not specified, some default compression method is used\n> );\n> ```\n>\n> JSONB is maybe not the best example of the type for which people may\n> need multiple compression methods in practice. But I can imagine how\n> overwriting a compression method for, let's say, arrays in an\n> extension could be beneficial depending on the application.\n>\n> This approach will make an API well-defined and, more importantly,\n> extendable. In the future, we could add additional (optional) methods\n> for particular scenarios, like partial decompression.\n>\n> Does it sound like a reasonable approach?\n\nYes, I think that's doable.\n\n\nKind regards,\n\nMatthias\n\n(†): 'Custom compression methods' eventually got committed in an\nentirely different state by the way of commit bbe0a81db, where LZ4 is\nnow a toast compression option that can be configured at the column /\nsystem level. 
This is a hard-coded compression method, so no\ninfrastructure (or at least, API) is available for custom compression\nmethods in that code.\n\n[0] https://www.postgresql.org/message-id/flat/20130614230142.GC19641%40awork2.anarazel.de\n[1] https://www.postgresql.org/message-id/flat/20170907194236.4cefce96@wp.localdomain\n\n\n", "msg_date": "Wed, 13 Oct 2021 23:25:35 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: compression dictionaries for JSONB" }, { "msg_contents": "Hi hackers,\n\nMany thanks for all your great feedback!\n\nPlease see the follow-up thread '[PATCH] Compression dictionaries for JSONB':\n\nhttps://postgr.es/m/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Apr 2022 11:33:56 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: RFC: compression dictionaries for JSONB" } ]
[ { "msg_contents": "Hi,\n\nThe commit [1] for the feature \"Remove temporary files after backend\ncrash\" introduced the following in the docs:\n+ <para>\n+ When set to <literal>on</literal>, which is the default,\n+ <productname>PostgreSQL</productname> will automatically remove\n+ temporary files after a backend crash. If disabled, the files will be\n+ retained and may be used for debugging, for example. Repeated crashes\n+ may however result in accumulation of useless files.\n+ </para>\n\nThe term backend means the user sessions (see the glossary, at\n[2]). It looks like the code introduced by the commit [1] i.e. the\ntemp file removal gets hit not only after the backend crash, but also\nafter checkpointer, bg writer, wal writer, auto vac launcher, logical\nrepl launcher and so on. It is sort of misleading to normal users.\nWith the commit [3] clarifying these processes in master branch [4],\ndo we also need to modify the doc added by commit [1] in PG master at\nleast?\n\n[1] commit cd91de0d17952b5763466cfa663e98318f26d357\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Thu Mar 18 16:05:03 2021 +0100\n\n Remove temporary files after backend crash\n\n[2] PG 14 - https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-BACKEND\n\n[3] commit d3014fff4cd4dcaf4b2764d96ad038f3be7413b0\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Mon Sep 20 12:22:02 2021 -0300\n\n Doc: add glossary term for \"auxiliary process\"\n\n[4] PG master - https://www.postgresql.org/docs/devel/glossary.html\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Oct 2021 16:27:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Reword docs of feature \"Remove temporary files after backend crash\"" }, { "msg_contents": "On Fri, Oct 8, 2021 at 4:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> The commit [1] for the feature \"Remove temporary files 
after backend\n> crash\" introduced following in the docs:\n> + <para>\n> + When set to <literal>on</literal>, which is the default,\n> + <productname>PostgreSQL</productname> will automatically remove\n> + temporary files after a backend crash. If disabled, the files will be\n> + retained and may be used for debugging, for example. Repeated crashes\n> + may however result in accumulation of useless files.\n> + </para>\n>\n> The term backend means the user sessions (see from the glossary, at\n> [2]). It looks like the code introduced by the commit [1] i.e. the\n> temp table removal gets hit not only after the backend crash, but also\n> after checkpointer, bg writer, wal writer, auto vac launcher, logical\n> repl launcher and so on. It is sort of misleading to the normal users.\n> With the commit [3] clarifying these processes in master branch [4],\n> do we also need to modify the doc added by commit [1] in PG master at\n> least?\n>\n> [1] commit cd91de0d17952b5763466cfa663e98318f26d357\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Thu Mar 18 16:05:03 2021 +0100\n>\n> Remove temporary files after backend crash\n>\n> [2] PG 14 - https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-BACKEND\n>\n> [3] commit d3014fff4cd4dcaf4b2764d96ad038f3be7413b0\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Mon Sep 20 12:22:02 2021 -0300\n>\n> Doc: add glossary term for \"auxiliary process\"\n>\n> [4] PG master - https://www.postgresql.org/docs/devel/glossary.html\n\nHere's the patch modifying the docs slightly. 
Please review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 9 Oct 2021 21:18:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Sat, Oct 09, 2021 at 09:18:24PM +0530, Bharath Rupireddy wrote:\n> On Fri, Oct 8, 2021 at 4:27 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > The commit [1] for the feature \"Remove temporary files after backend\n> > crash\" introduced following in the docs:\n> > + <para>\n> > + When set to <literal>on</literal>, which is the default,\n> > + <productname>PostgreSQL</productname> will automatically remove\n> > + temporary files after a backend crash. If disabled, the files will be\n> > + retained and may be used for debugging, for example. Repeated crashes\n> > + may however result in accumulation of useless files.\n> > + </para>\n> >\n> > The term backend means the user sessions (see from the glossary, at\n> > [2]). It looks like the code introduced by the commit [1] i.e. the\n> > temp table removal gets hit not only after the backend crash, but also\n> > after checkpointer, bg writer, wal writer, auto vac launcher, logical\n> > repl launcher and so on. 
It is sort of misleading to the normal users.\n> > With the commit [3] clarifying these processes in master branch [4],\n> > do we also need to modify the doc added by commit [1] in PG master at\n> > least?\n> >\n> > [1] commit cd91de0d17952b5763466cfa663e98318f26d357\n> > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > Date: Thu Mar 18 16:05:03 2021 +0100\n> >\n> > Remove temporary files after backend crash\n> >\n> > [2] PG 14 - https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-BACKEND\n> >\n> > [3] commit d3014fff4cd4dcaf4b2764d96ad038f3be7413b0\n> > Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > Date: Mon Sep 20 12:22:02 2021 -0300\n> >\n> > Doc: add glossary term for \"auxiliary process\"\n> >\n> > [4] PG master - https://www.postgresql.org/docs/devel/glossary.html\n> \n> Here's the patch modifying the docs slightly. Please review it.\n\nI doubt there's much confusion here - crashes are treated the same. I think\nthe fix would be to say \"server crash\" rather than backend crash.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 9 Oct 2021 11:12:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Sat, Oct 9, 2021 at 9:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I doubt there's much confusion here - crashes are treated the same. I think\n> the fix would be to say \"server crash\" rather than backend crash.\n\nIIUC, the \"server crash\" includes any backend, auxiliary process,\npostmaster crash i.e. database cluster/instance crash. The commit\ncd91de0d1 especially added the temp file cleanup support if any\nbackend or auxiliary process (except syslogger and stats collector)\ncrashes. 
The temp file cleanup in postmaster crash does exist prior to\nthe commit cd91de0d1.\n\nWhen we add the description about the new GUC introduced by the commit\ncd91de0d1, let's be clear as to which crash triggers the new temp file\ncleanup path.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 9 Oct 2021 21:55:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "\n\nOn 2021/10/10 1:25, Bharath Rupireddy wrote:\n> On Sat, Oct 9, 2021 at 9:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> I doubt there's much confusion here - crashes are treated the same. I think\n>> the fix would be to say \"server crash\" rather than backend crash.\n> \n> IIUC, the \"server crash\" includes any backend, auxiliary process,\n> postmaster crash i.e. database cluster/instance crash. The commit\n> cd91de0d1 especially added the temp file cleanup support if any\n> backend or auxiliary process (except syslogger and stats collector)\n\nAlso the startup process should be in this exception list?\n\n\n> crashes. 
The temp file cleanup in postmaster crash does exist prior to\n> the commit cd91de0d1.\n> \n> When we add the description about the new GUC introduced by the commit\n> cd91de0d1, let's be clear as to which crash triggers the new temp file\n> cleanup path.\n\nIf we really want to add this information, the description of\nrestart_after_crash seems more proper place to do that in.\nremove_temp_files_after_crash works only when the processes are\nreinitialized because restart_after_crash is enabled.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sun, 10 Oct 2021 12:42:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Sun, Oct 10, 2021 at 9:12 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/10/10 1:25, Bharath Rupireddy wrote:\n> > On Sat, Oct 9, 2021 at 9:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>\n> >> I doubt there's much confusion here - crashes are treated the same. I think\n> >> the fix would be to say \"server crash\" rather than backend crash.\n> >\n> > IIUC, the \"server crash\" includes any backend, auxiliary process,\n> > postmaster crash i.e. database cluster/instance crash. The commit\n> > cd91de0d1 especially added the temp file cleanup support if any\n> > backend or auxiliary process (except syslogger and stats collector)\n>\n> Also the startup process should be in this exception list?\n\nYes, if the startup process fails, neither restart_after_crash nor\nremove_temp_files_after_crash code path is hit.\n\n> > crashes. 
The temp file cleanup in postmaster crash does exist prior to\n> > the commit cd91de0d1.\n> >\n> > When we add the description about the new GUC introduced by the commit\n> > cd91de0d1, let's be clear as to which crash triggers the new temp file\n> > cleanup path.\n>\n> If we really want to add this information, the description of\n> restart_after_crash seems more proper place to do that in.\n> remove_temp_files_after_crash works only when the processes are\n> reinitialized because restart_after_crash is enabled.\n\nIMO, we can add the new description as proposed in my v1 patch (after\nadding startup process to the exception list) to both the GUCs\nrestart_after_crash and remove_temp_files_after_crash. And, in\nremove_temp_files_after_crash GUC description we can just add a note\nsaying \"this GUC is effective only when restart_after_crash is on\".\n\nAlso, I see that the restart_after_crash and\nremove_temp_files_after_crash descriptions in guc.c say \"Remove\ntemporary files after backend crash.\". I think we can also modify them\nto \"Remove temporary files after the backend or auxiliary process\n(except startup, syslogger and stats collector) crash.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 10 Oct 2021 19:03:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "\n\nOn 2021/10/10 22:33, Bharath Rupireddy wrote:\n>>> IIUC, the \"server crash\" includes any backend, auxiliary process,\n>>> postmaster crash i.e. database cluster/instance crash. The commit\n>>> cd91de0d1 especially added the temp file cleanup support if any\n>>> backend or auxiliary process (except syslogger and stats collector)\n\nWe should mention not only a backend and an auxiliary process\nbut also a background worker? 
Because, per Glossary, background worker\nis neither a backend nor an auxiliary process. Instead,\nmaybe it's better to use \"child processes\" or something rather than\nmentioning those three processes.\n\n> IMO, we can add the new description as proposed in my v1 patch (after\n> adding startup process to the exception list) to both the GUCs\n> restart_after_crash and remove_temp_files_after_crash. And, in\n> remove_temp_files_after_crash GUC description we can just add a note\n> saying \"this GUC is effective only when restart_after_crash is on\".\n\nOK.\n\n> Also, I see that the restart_after_crash and\n> remove_temp_files_after_crash descriptions in guc.c say \"Remove\n> temporary files after backend crash.\". I think we can also modify them\n> to \"Remove temporary files after the backend or auxiliary process\n> (except startup, syslogger and stats collector) crash.\n\nI'm not sure if we really need this long log message.\nIMO it's enough to add that information in the docs.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:07:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Mon, Oct 11, 2021 at 11:37 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/10 22:33, Bharath Rupireddy wrote:\n> >>> IIUC, the \"server crash\" includes any backend, auxiliary process,\n> >>> postmaster crash i.e. database cluster/instance crash. The commit\n> >>> cd91de0d1 especially added the temp file cleanup support if any\n> >>> backend or auxiliary process (except syslogger and stats collector)\n>\n> We should mention not only a backend and an auxiliary processe\n> but also background worker? Because, per Glossary, background worker\n> is neither a backend nor an auxiliary process. 
Instead,\n> maybe it's better to use \"child processes\" or something rather than\n> mentioning those three processes.\n\nIf we were to use child processes (a term the glossary doesn't\ndefine), we might end up saying postmaster child process crash, that's\nnot enough. We have to say things like \"child process (except startup,\nsyslogger and stats collector) crash.\" IMO, let's not introduce\nanother term for the processes, the glossary defines many kinds of\nprocesses already.\n\n> > Also, I see that the restart_after_crash and\n> > remove_temp_files_after_crash descriptions in guc.c say \"Remove\n> > temporary files after backend crash.\". I think we can also modify them\n> > to \"Remove temporary files after the backend or auxiliary process\n> > (except startup, syslogger and stats collector) crash.\n>\n> I'm not sure if we really need this long log message.\n> IMO it's enough to add that information in the docs.\n\nIMO let's be clear here as well for consistency reasons. I've seen\nsome of the long descriptions for GUCs [1]. 
And it seems like we don't\nhave any limit on their length.\nSo, the text for remove_temp_files_after_crash will be: \"Remove\ntemporary files after backend or auxiliary process (except startup,\nsyslogger and stats collector) or background worker crash.\"\nand for restart_after_crash: \"Reinitialize server after backend crash\nor auxiliary process (except startup, syslogger and stats collector)\nor background worker crash.\"\n\nI noticed another thing that the remove_temp_files_after_crash is\ncategorized as DEVELOPER_OPTIONS, shouldn't it be under\nRESOURCES_DISK?\n\n[1]\ngettext_noop(\"Sets whether a WAL receiver should create a temporary\nreplication slot if no permanent slot is configured.\"),\ngettext_noop(\"Writes full pages to WAL when first modified after a\ncheckpoint, even for a non-critical modification.\")\ngettext_noop(\"Enables backward compatibility mode for privilege checks\non large objects.\")\ngettext_noop(\"Forces a switch to the next WAL file if a \"\n \"new file has not been started within N seconds.\"),\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Oct 2021 12:50:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Mon, Oct 11, 2021 at 12:50:28PM +0530, Bharath Rupireddy wrote:\n> I noticed another thing that the remove_temp_files_after_crash is\n> categorized as DEVELOPER_OPTIONS, shouldn't it be under\n> RESOURCES_DISK?\n\nSee here:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=797b0fc0b078c7b4c46ef9f2d9e47aa2d98c6c63\n\nThe old behavior of leaving the tempfiles behind isn't expected to be useful to\nusers, and the only reason to keep them was to allow debugging.\n\nPutting it in DEVELOPER means that it's not in the user-facing\npostgresql.conf.sample.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 Oct 2021 04:23:50 -0500", "msg_from": "Justin Pryzby 
<pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Mon, Oct 11, 2021 at 2:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Oct 11, 2021 at 12:50:28PM +0530, Bharath Rupireddy wrote:\n> > I noticed another thing that the remove_temp_files_after_crash is\n> > categorized as DEVELOPER_OPTIONS, shouldn't it be under\n> > RESOURCES_DISK?\n>\n> See here:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=797b0fc0b078c7b4c46ef9f2d9e47aa2d98c6c63\n>\n> The old behavior of leaving the tempfiles behind isn't expected to be useful to\n> uses, and the only reason to keep them was to allow debugging.\n>\n> Putting it in DEVELOPER means that it's not in the user-facing\n> postgresql.conf.sample.\n\nThanks.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:29:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "On Mon, Oct 11, 2021 at 11:37 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > IMO, we can add the new description as proposed in my v1 patch (after\n> > adding startup process to the exception list) to both the GUCs\n> > restart_after_crash and remove_temp_files_after_crash. And, in\n> > remove_temp_files_after_crash GUC description we can just add a note\n> > saying \"this GUC is effective only when restart_after_crash is on\".\n>\n> OK.\n\nHere's a v2 patch that I could come up with. 
Please review it further.\n\nI've also made a CF entry - https://commitfest.postgresql.org/35/3356/\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 12 Oct 2021 12:10:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "> On 12 Oct 2021, at 08:40, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Here's a v2 patch that I could come up with. Please review it further.\n\n+ debugging, for example. Repeated crashes may however result in\n+ accumulation of useless files. This parameter can only be set in the\n\nI think \"useless\" is a bit too strong and subjective given that it's describing\nan unknown situation out of the ordinary. How about \"outdated\" or \"redundant\"\n(or something else entirely which is even better)?\n\n> I've also made a CF entry - https://commitfest.postgresql.org/35/3356/\n\nThis has been sitting in the CF for quite some time, time to make a decision on\nit. 
It also seems accurate to me.\n\n> > I've also made a CF entry - https://commitfest.postgresql.org/35/3356/\n>\n> This has been sitting the CF for quite some time, time to make a decision on\n> it. I think it makes sense, having detailed docs around debugging is rarely a\n> bad thing. Does anyone else have opinions?\n\nI don't like it. It seems to me that it will result in a lot of\nduplication in the docs, because every time we talk about something\nthat happens in connection with a crash, we'll need to talk about this\nsame list of exceptions. It would be reasonable to document which\nconditions trigger a crash-and-restart cycle and which do not in some\ncentralized place, but not in every place where we mention crashing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 09:57:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" }, { "msg_contents": "> On 1 Apr 2022, at 15:57, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 1, 2022 at 9:42 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> This has been sitting the CF for quite some time, time to make a decision on\n>> it. I think it makes sense, having detailed docs around debugging is rarely a\n>> bad thing. Does anyone else have opinions?\n> \n> I don't like it. It seems to me that it will result in a lot of\n> duplication in the docs, because every time we talk about something\n> that happens in connection with a crash, we'll need to talk about this\n> same list of exceptions.\n\nFair enough, that's a valid concern. 
Unless others object I then think we\nshould close this patch in the CF as rejected.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 14:29:40 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Reword docs of feature \"Remove temporary files after backend\n crash\"" } ]
[ { "msg_contents": "Hi,\n\nAt times, users want to know what are the files (snapshot and mapping\nfiles) that are available under pg_logical directory and also the\nspill files that are under pg_replslot directory and how much space\nthey occupy. This will help to better know the storage usage pattern\nof these directories. Can we have two new functions pg_ls_logicaldir\nand pg_ls_replslotdir on the similar lines of pg_ls_logdir,\npg_ls_logdir,pg_ls_tmpdir, pg_ls_archive_statusdir [1]?\n\n[1] - https://www.postgresql.org/docs/devel/functions-admin.html\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Oct 2021 16:39:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Fri, Oct 8, 2021 at 4:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> At times, users want to know what are the files (snapshot and mapping\n> files) that are available under pg_logical directory and also the\n> spill files that are under pg_replslot directory and how much space\n> they occupy.\n>\n\nWhy can't you use pg_ls_dir to see the contents of pg_replslot? 
To\nknow the space taken by spilling, you might want to check\npg_stat_replication_slots[1] as that gives information about\nspill_bytes.\n\n[1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Oct 2021 15:17:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Fri, Oct 22, 2021 at 3:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 8, 2021 at 4:39 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > At times, users want to know what are the files (snapshot and mapping\n> > files) that are available under pg_logical directory and also the\n> > spill files that are under pg_replslot directory and how much space\n> > they occupy.\n> >\n>\n> Why can't you use pg_ls_dir to see the contents of pg_replslot? To\n> know the space taken by spilling, you might want to check\n> pg_stat_replication_slots[1] as that gives information about\n> spill_bytes.\n>\n> [1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n\nThanks Amit!\n\npg_ls_dir gives the list of directories and files, but not their\nsizes. And it looks like the spill_bytes from\npg_stat_replication_slots is the accumulated byte count (see [1]), not\nthe current size of the spill files, so it's not representing the\nspill files and their size at that moment.\n\nIf we have pg_ls_logicaldir and pg_ls_replslotdir returning the\nfiles, sizes, and last modified times, it will be useful in production\nenvironments to see the disk usage of those files at the current\nmoment. 
The data from these functions can be fed to an external\nanalytics tool invoking the functions at regular intervals of time and\nreport the disk usage of these folders. This will be super useful to\nanalyze the questions like: Was the disk usage more at time t1? What\nhappened to my database system at that time? etc. And, these\nfunctions can run independent of the stats collector process which is\ncurrently required for the pg_stat_replication_slots view.\n\nThoughts?\n\nI plan to work on a patch if okay.\n\n[1]\npostgres=# select\npg_ls_dir('/home/bharath/postgres/inst/bin/data/pg_replslot/mysub');\n pg_ls_dir\n-----------\n state\n(1 row)\n\npostgres=# select * from pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\nstream_count | stream_bytes | total_txns | total_bytes | stats_reset\n-----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+-------------\n mysub | 3 | 6 | 396000000 | 0 |\n 0 | 0 | 5 | 396001128 |\n(1 row)\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 22 Oct 2021 16:18:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Fri, Oct 22, 2021 at 04:18:04PM +0530, Bharath Rupireddy wrote:\n> On Fri, Oct 22, 2021 at 3:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Oct 8, 2021 at 4:39 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > At times, users want to know what are the files (snapshot and mapping\n> > > files) that are available under pg_logical directory and also the\n> > > spill files that are under pg_replslot directory and how much space\n> > > they occupy.\n> >\n> > Why can't you use pg_ls_dir to see the contents of pg_replslot? 
To\n> \n> Thanks Amit!\n> \n> pg_ls_dir gives the list of directories and files, but not their sizes.\n\nReturning sizes is already possible by using pg_stat_file:\n\nts=# SELECT dd, a, ls, stat.* FROM (SELECT current_setting('data_directory') AS dd, 'pg_logical' AS a) AS a, pg_ls_dir(a) AS ls, pg_stat_file(dd ||'/'|| a ||'/'|| ls) AS stat ;\n dd | a | ls | size | access | modification | change | creation | isdir \n------------------------+------------+-----------------------+------+------------------------+------------------------+------------------------+----------+-------\n /var/lib/pgsql/14/data | pg_logical | replorigin_checkpoint | 8 | 2021-10-22 08:20:30-06 | 2021-10-22 08:20:30-06 | 2021-10-22 08:20:30-06 | | f\n /var/lib/pgsql/14/data | pg_logical | mappings | 4096 | 2021-10-21 19:54:19-06 | 2021-10-15 19:50:35-06 | 2021-10-15 19:50:35-06 | | t\n /var/lib/pgsql/14/data | pg_logical | snapshots | 4096 | 2021-10-21 19:54:19-06 | 2021-10-15 19:50:35-06 | 2021-10-15 19:50:35-06 | | t\n\nI agree that this isn't a very friendly query, so I had created a patch adding\npg_ls_dir_metadata():\nhttps://commitfest.postgresql.org/33/2377/\n\npostgres=# SELECT * FROM pg_ls_dir_metadata('pg_logical');\n filename | size | access | modification | change | creation | type | path \n-----------------------+------+------------------------+------------------------+------------------------+----------+------+----------------------------------\n mappings | 4096 | 2021-10-22 09:15:29-05 | 2021-10-22 09:15:29-05 | 2021-10-22 09:15:29-05 | | d | pg_logical/mappings\n replorigin_checkpoint | 8 | 2021-10-22 09:15:47-05 | 2021-10-22 09:15:45-05 | 2021-10-22 09:15:45-05 | | - | pg_logical/replorigin_checkpoint\n . | 4096 | 2021-10-22 09:16:23-05 | 2021-10-22 09:15:45-05 | 2021-10-22 09:15:45-05 | | d | pg_logical/.\n .. 
| 4096 | 2021-10-22 09:16:01-05 | 2021-10-22 09:15:47-05 | 2021-10-22 09:15:47-05 | | d | pg_logical/..\n snapshots | 4096 | 2021-10-22 09:15:29-05 | 2021-10-22 09:15:29-05 | 2021-10-22 09:15:29-05 | | d | pg_logical/snapshots\n(5 rows)\n\nI concluded that it's better to add a function to list metadata of an arbitrary\ndir, rather than adding more functions to handle specific, hardcoded dirs:\nhttps://www.postgresql.org/message-id/flat/20191227170220.GE12890@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:56:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Fri, Oct 22, 2021 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I concluded that it's better to add a function to list metadata of an arbitrary\n> dir, rather than adding more functions to handle specific, hardcoded dirs:\n> https://www.postgresql.org/message-id/flat/20191227170220.GE12890@telsasoft.com\n\nI just had a quick look at the pg_ls_dir_metadata() patch(I didn't\nlook at the other patches). While it's a good idea to have a single\nfunction for all the PGDATA directories, I'm not sure if one would\never need the info like type, change, creation path etc. If we do\nthis, the function will become the linux equivalent command. I don't\nsee the difference between modification and change time stamps. For\ndebugging or analytical purposes in production environments, one would\nmajorly look at the file name, it's size on the disk, modification\ntime (to decide whether the file is stale or not, creation time (to\ndecide how old is the file), file/directory(maybe?). 
I'm not sure if\nyour patch has a recursive option for pg_ls_dir_metadata(), if it has,\nI think it's more complex from a usability perspective.\n\nAnd the functions like pg_ls_tmpdir, pg_ls_tmpdir, pg_ls_waldir etc.\n(existing) and pg_ls_logicaldir, pg_ls_replslotdir (yet to have) will\nprovide the better usability compared to a generic function. Having\nsaid this, I don't oppose having a generic function returning the file\nname, file size, modification time, creation time, but not other info,\nplease. If one is interested in knowing the other information file\ntype, path etc. they can go run linux/windows/OS commands.\n\nIn summary what I think at this point is:\n1) pg_ls_logicaldir, pg_ls_replslotdir - better for usability and\nserving special purpose like their peers\n2) modify pg_ls_dir such that it returns the file name, file size,\nmodification time, creation time, for directories, to be simple, it\nshouldn't go recursively over all the directories, it should just\nreturn the directory name, size, modification time, creation time.\n\nIf okay, I'm ready to spend time implementing them.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 23 Oct 2021 23:10:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Sat, Oct 23, 2021 at 11:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Oct 22, 2021 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I concluded that it's better to add a function to list metadata of an arbitrary\n> > dir, rather than adding more functions to handle specific, hardcoded dirs:\n> > https://www.postgresql.org/message-id/flat/20191227170220.GE12890@telsasoft.com\n>\n> I just had a quick look at the pg_ls_dir_metadata() patch(I didn't\n> look at the other patches). 
While it's a good idea to have a single\n> function for all the PGDATA directories, I'm not sure if one would\n> ever need the info like type, change, creation path etc. If we do\n> this, the function will become the linux equivalent command. I don't\n> see the difference between modification and change time stamps. For\n> debugging or analytical purposes in production environments, one would\n> majorly look at the file name, it's size on the disk, modification\n> time (to decide whether the file is stale or not, creation time (to\n> decide how old is the file), file/directory(maybe?). I'm not sure if\n> your patch has a recursive option for pg_ls_dir_metadata(), if it has,\n> I think it's more complex from a usability perspective.\n>\n> And the functions like pg_ls_tmpdir, pg_ls_tmpdir, pg_ls_waldir etc.\n> (existing) and pg_ls_logicaldir, pg_ls_replslotdir (yet to have) will\n> provide the better usability compared to a generic function. Having\n> said this, I don't oppose having a generic function returning the file\n> name, file size, modification time, creation time, but not other info,\n> please. If one is interested in knowing the other information file\n> type, path etc. they can go run linux/windows/OS commands.\n>\n> In summary what I think at this point is:\n> 1) pg_ls_logicaldir, pg_ls_replslotdir - better for usability and\n> serving special purpose like their peers\n\nI've added 3 functions pg_ls_logicalsnapdir, pg_ls_logicalmapdir,\npg_ls_replslotdir, and attached the patch. The sample output looks\nlike [1]. 
Please review it further.\n\nHere's the CF entry - https://commitfest.postgresql.org/35/3390/\n\n[1]\npostgres=# select pg_ls_logicalsnapdir();\n pg_ls_logicalsnapdir\n-----------------------------------------------\n (0-14A50C0.snap,128,\"2021-10-30 09:15:56+00\")\n (0-14C46D8.snap,128,\"2021-10-30 09:16:05+00\")\n (0-14C97C8.snap,132,\"2021-10-30 09:16:20+00\")\n\npostgres=# select pg_ls_logicalmapdir();\n pg_ls_logicalmapdir\n---------------------------------------------------------------\n (map-31d5-4eb-0_CDDDE88-2d9-2db,108,\"2021-10-30 09:24:34+00\")\n (map-31d5-4eb-0_CDDDE88-2da-2db,108,\"2021-10-30 09:24:34+00\")\n (map-31d5-4eb-0_CE48038-2dc-2de,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE6BAF0-2dd-2df,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CD97DE0-2d9-2d9,36,\"2021-10-30 09:24:30+00\")\n (map-31d5-4eb-0_CE24808-2da-2dd,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE01200-2dc-2dc,36,\"2021-10-30 09:24:34+00\")\n (map-31d5-4eb-0_CDDDE88-2db-2db,36,\"2021-10-30 09:24:34+00\")\n (map-31d5-4eb-0_CE6BAF0-2dc-2df,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CDBA920-2d9-2da,108,\"2021-10-30 09:24:32+00\")\n (map-31d5-4eb-0_CE01200-2da-2dc,108,\"2021-10-30 09:24:34+00\")\n (map-31d5-4eb-0_CE6BAF0-2d9-2df,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE24808-2db-2dd,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE6BAF0-2db-2df,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE24808-2dd-2dd,36,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CE24808-2dc-2dd,108,\"2021-10-30 09:24:35+00\")\n (map-31d5-4eb-0_CD74E48-2d8-2d8,36,\"2021-10-30 09:24:25+00\")\n (map-31d5-4eb-0_CE24808-2d9-2dd,108,\"2021-10-30 09:24:35+00\")\n\n postgres=# select pg_ls_replslotdir('mysub');\n pg_ls_replslotdir\n-----------------------------------------------------------------\n (xid-722-lsn-0-2000000.spill,36592640,\"2021-10-30 09:18:29+00\")\n (xid-722-lsn-0-5000000.spill,4577860,\"2021-10-30 09:18:32+00\")\n (state,200,\"2021-10-30 09:18:25+00\")\n 
(xid-722-lsn-0-1000000.spill,25644220,\"2021-10-30 09:18:29+00\")\n (xid-722-lsn-0-4000000.spill,36592640,\"2021-10-30 09:18:32+00\")\n (xid-722-lsn-0-3000000.spill,36592640,\"2021-10-30 09:18:32+00\")\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 30 Oct 2021 15:01:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On 10/30/21, 2:36 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> I've added 3 functions pg_ls_logicalsnapdir, pg_ls_logicalmapdir,\r\n> pg_ls_replslotdir, and attached the patch. The sample output looks\r\n> like [1]. Please review it further.\r\n\r\nI took a look at the patch.\r\n\r\n+\tchar\t\tpath[MAXPGPATH + 11];\r\n\r\nWhy are you adding 11 to MAXPGPATH here? I would think that MAXPGPATH\r\nis sufficient.\r\n\r\n+\tfilename = text_to_cstring(filename_t);\r\n+\tsnprintf(path, sizeof(path), \"%s/%s\", \"pg_replslot\", filename);\r\n+\treturn pg_ls_dir_files(fcinfo, path, false);\r\n\r\nI think we need to do some additional input validation here. 
It's\r\npretty easy to use this to see the contents of other directories.\r\n\r\n postgres=# SELECT * FROM pg_ls_replslotdir('../');\r\n name | size | modification\r\n ----------------------+-------+------------------------\r\n postgresql.conf | 28995 | 2021-11-17 18:40:33+00\r\n pg_hba.conf | 4789 | 2021-11-17 18:40:33+00\r\n postmaster.opts | 39 | 2021-11-17 18:43:07+00\r\n postgresql.auto.conf | 88 | 2021-11-17 18:40:33+00\r\n pg_ident.conf | 1636 | 2021-11-17 18:40:33+00\r\n postmaster.pid | 95 | 2021-11-17 18:43:07+00\r\n PG_VERSION | 3 | 2021-11-17 18:40:33+00\r\n (7 rows)\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 17 Nov 2021 18:46:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Wed, Nov 17, 2021 at 06:46:47PM +0000, Bossart, Nathan wrote:\n> On 10/30/21, 2:36 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I've added 3 functions pg_ls_logicalsnapdir, pg_ls_logicalmapdir,\n> > pg_ls_replslotdir, and attached the patch. The sample output looks\n> > like [1]. Please review it further.\n> \n> I took a look at the patch.\n> \n> +\tchar\t\tpath[MAXPGPATH + 11];\n> +\tfilename = text_to_cstring(filename_t);\n> +\tsnprintf(path, sizeof(path), \"%s/%s\", \"pg_replslot\", filename);\n> +\treturn pg_ls_dir_files(fcinfo, path, false);\n> \n> Why are you adding 11 to MAXPGPATH here? I would think that MAXPGPATH\n> is sufficient.\n\nI suppose it's for \"pg_replslot\" - but it forgot about the \"/\".\n\nMAXPGPATH isn't sufficient (even if you add 12), since it's a user-supplied\nstring. 
snprintf keeps it from overflowing the buffer, but its return value\nisn't checked, so it could (hypothetically) return a result for the wrong slot,\nif the slot name were very long, or MAXPGPATH were very short..\n\n+ text *filename_t = PG_GETARG_TEXT_PP(0);\n\n> I think we need to do some additional input validation here. It's\n> pretty easy to use this to see the contents of other directories.\n\nActually, limiting the dir seems like a valid reason to add this function,\nsince it would allow GRANTing privileges for just those directories.\n\nSo now I agree that this patch should be included. But I suggest to add it\nafter my \"ls\" patches, which change the output fields, and include directories.\nDirectories might normally not be present, but an extension might put them\nthere. (And it may be important to show things that aren't supposed to be\nthere, too).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 19:23:31 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Thu, Nov 18, 2021 at 12:16 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/30/21, 2:36 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I've added 3 functions pg_ls_logicalsnapdir, pg_ls_logicalmapdir,\n> > pg_ls_replslotdir, and attached the patch. The sample output looks\n> > like [1]. Please review it further.\n>\n> I took a look at the patch.\n>\n> + char path[MAXPGPATH + 11];\n>\n> Why are you adding 11 to MAXPGPATH here? I would think that MAXPGPATH\n> is sufficient.\n\nYeah, MAXPGPATH is sufficient. 
Note that the replication slot name can be\nat most NAMEDATALEN (64 bytes) in size\n(ReplicationSlotPersistentData->name) and what we pass to the\npg_ls_dir_files is\n\"pg_replslot/<<user_entered_slot_name_with_max_64_bytes>>\", so it can\nnever cross MAXPGPATH (1024).\n\n> + filename = text_to_cstring(filename_t);\n> + snprintf(path, sizeof(path), \"%s/%s\", \"pg_replslot\", filename);\n> + return pg_ls_dir_files(fcinfo, path, false);\n>\n> I think we need to do some additional input validation here. It's\n> pretty easy to use this to see the contents of other directories.\n\nDone. Checking if the entered slot exists or not, if not throwing an\nerror, see [1].\n\nPlease review the attached v2.\n\n[1]\npostgres=# select * from pg_ls_replslotdir('');\nERROR: replication slot \"\" does not exist\npostgres=# select * from pg_ls_replslotdir('../');\nERROR: replication slot \"../\" does not exist\npostgres=# select pg_ls_replslotdir('mysub');\n pg_ls_replslotdir\n-----------------------------------------------------------------\n (xid-722-lsn-0-2000000.spill,36592640,\"2021-11-18 07:34:40+00\")\n (xid-722-lsn-0-5000000.spill,36592640,\"2021-11-18 07:34:43+00\")\n (xid-722-lsn-0-A000000.spill,29910720,\"2021-11-18 07:34:48+00\")\n (xid-722-lsn-0-7000000.spill,36592640,\"2021-11-18 07:34:45+00\")\n (xid-722-lsn-0-9000000.spill,36592640,\"2021-11-18 07:34:47+00\")\n (state,200,\"2021-11-18 07:34:36+00\")\n (xid-722-lsn-0-8000000.spill,36592500,\"2021-11-18 07:34:46+00\")\n (xid-722-lsn-0-6000000.spill,36592640,\"2021-11-18 07:34:44+00\")\n (xid-722-lsn-0-1000000.spill,11171300,\"2021-11-18 07:34:39+00\")\n (xid-722-lsn-0-4000000.spill,36592500,\"2021-11-18 07:34:42+00\")\n (xid-722-lsn-0-3000000.spill,36592640,\"2021-11-18 07:34:42+00\")\n(11 rows)\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 18 Nov 2021 13:08:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding/replication: new 
functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On 11/17/21, 11:39 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Please review the attached v2.\r\n\r\nLGTM. I've marked this one as ready-for-committer.\r\n\r\nNathan\r\n\r\n", "msg_date": "Sat, 20 Nov 2021 00:29:51 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Sat, Nov 20, 2021 at 12:29:51AM +0000, Bossart, Nathan wrote:\n> On 11/17/21, 11:39 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Please review the attached v2.\n> \n> LGTM. I've marked this one as ready-for-committer.\n\nOne issue that I have with this patch is that there are zero\nregression tests. Could you add a couple of things in\nmisc_functions.sql (for the negative tests perhaps) or\ncontrib/test_decoding/, taking advantage of places where slots are\nalready created? You may want to look after the non-superuser case\nwhere the calls should fail, and the second case where a role is part\nof pg_monitor where the call succeeds. Note that any roles created in\nthe tests have to be prefixed with \"regress_\".\n\n+ snprintf(path, sizeof(path), \"%s/%s\", \"pg_replslot\", slotname);\n+ return pg_ls_dir_files(fcinfo, path, false);\n\"pg_replslot\" could be part of the third argument here. 
There is no\nneed to separate it.\n\n+ ordinary file in the server's pg_logical/mappings directory.\nPaths had better have <filename> markups around them, no?\n--\nMichael", "msg_date": "Sun, 21 Nov 2021 10:28:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Sun, Nov 21, 2021 at 6:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Nov 20, 2021 at 12:29:51AM +0000, Bossart, Nathan wrote:\n> > On 11/17/21, 11:39 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Please review the attached v2.\n> >\n> > LGTM. I've marked this one as ready-for-committer.\n>\n> One issue that I have with this patch is that there are zero\n> regression tests. Could you add a couple of things in\n> misc_functions.sql (for the negative tests perhaps) or\n> contrib/test_decoding/, taking advantage of places where slots are\n> already created? You may want to look after the non-superuser case\n> where the calls should fail, and the second case where a role is part\n> of pg_monitor where the call succeeds. Note that any roles created in\n> the tests have to be prefixed with \"regress_\".\n\nI don't think we need to go far to contrib/test_decoding/, even if we\nadd it there we can't test it for the outputs of these functions, so\nI've added the tests in misc_functions.sql itself.\n\n> + snprintf(path, sizeof(path), \"%s/%s\", \"pg_replslot\", slotname);\n> + return pg_ls_dir_files(fcinfo, path, false);\n> \"pg_replslot\" could be part of the third argument here. 
There is no\n> need to separate it.\n\nDone.\n\n> + ordinary file in the server's pg_logical/mappings directory.\n> Paths had better have <filename> markups around them, no?\n\nDone.\n\nAttached v3 patch, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 21 Nov 2021 08:45:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" }, { "msg_contents": "On Sun, Nov 21, 2021 at 08:45:52AM +0530, Bharath Rupireddy wrote:\n> I don't think we need to go far to contrib/test_decoding/, even if we\n> add it there we can't test it for the outputs of these functions, so\n> I've added the tests in misc_functinos.sql itself.\n\n+SELECT COUNT(*) >= 0 AS OK FROM pg_ls_replslotdir('slot_dir_funcs');\n+ ok\n+----\n+ t\n+(1 row)\nCreating a slot within the main regression test suite is something we\nshould avoid as it impacts the portability of the tests (note that we\ndon't have tests creating slots in src/test/regress/, and we'd require\nmax_replication_slots > 0 with this version of the patch). This was\nthe point I was trying to make upthread about using test_decoding/\nwhere we already have slots.\n\nA second thing I have noticed is the set of OIDs used by the patch\nwhich was incorrect. On a development branch, we require new features\nto use OIDs between 8000-9999 (unused_oids would recommend a random\nrange of them). A third thing was that pg_proc.dat had an incorrect\ndescription for pg_ls_replslotdir(), and that it was in need of\nindentation.\n\nI have tweaked a bit the tests and the docs, and the result looked\nfine at the end. Hence, applied.\n--\nMichael", "msg_date": "Tue, 23 Nov 2021 19:33:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: logical decoding/replication: new functions pg_ls_logicaldir and\n pg_ls_replslotdir" } ]
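[Editor's note] The path-handling pitfalls raised in the thread above (a user-supplied slot name reaching `snprintf`, silent truncation, and `'../'` traversal) can be sketched in isolation. The helper names, constants, and checks below are illustrative assumptions, not PostgreSQL's actual code -- the committed fix instead validates that the named replication slot exists, which subsumes these checks:

```c
#include <assert.h>   /* for the accompanying assertions */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the limits discussed above; the real
 * values live in the PostgreSQL headers. */
#define SKETCH_MAXPGPATH 1024
#define SKETCH_NAMEDATALEN 64

/* Hypothetical validation helper: reject user-supplied slot names that
 * could escape the pg_replslot directory. */
static bool
slot_name_is_safe(const char *name)
{
    if (name[0] == '\0' || strlen(name) >= SKETCH_NAMEDATALEN)
        return false;
    /* '/' (or a ".." component) would allow directory traversal */
    if (strchr(name, '/') != NULL || strstr(name, "..") != NULL)
        return false;
    return true;
}

/* Build "pg_replslot/<name>", checking snprintf's return value: it
 * reports the length it *wanted* to write, so truncation is detectable
 * even though the buffer itself never overflows. */
static bool
build_slot_path(char *buf, size_t buflen, const char *name)
{
    int n;

    if (!slot_name_is_safe(name))
        return false;
    n = snprintf(buf, buflen, "pg_replslot/%s", name);
    return n >= 0 && (size_t) n < buflen;
}
```

With checks of this shape, the `'../'` example from the review is rejected before any filesystem access is attempted.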
[ { "msg_contents": "Hi, hackers\n\n\nI notice that postgres use constraints to optimize/rewrite queries in\nlimited cases as\nhttps://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION\n.\n\n\nI propose two new types of query rewrites using constraints here\n\n1) Remove DISTINCT\n\nA simple example is SELECT DISTINCT(name) FROM R. If there is a unique\nconstraint on the name column. The DISTINCT keyword can be removed safely.\nQuery plans without the DISTINCT keyword might be much cheaper since\nDISTINCT is expensive.\n\n\n2) Add LIMIT 1\n\nAn example of this optimization will be SELECT name from R WHERE name =\n‘foo’. If there is a unique constraint on the name column, the selection\nresult has at most one record. Therefore, we can add LIMIT 1 safely. If the\noriginal query plan performs a sequential scan on the R, adding LIMIT 1\nmight speed up the query because of the early return.\n\n\nWe designed an algorithm to decide if 1), 2) can be performed safely.\nRewriting queries manually and experimenting on a table with 10K records\nshows 2X ~ 3X improvement for both rewrites. We have some other rewrite\nrules, but the two are most obvious ones. With this feature, the optimizer\ncan consider the query plans both before and after the rewrite and choose\nthe one with minimum cost.\n\n\nWill that feature be useful? How hard to implement the feature in the\ncurrent system? Any thoughts or comments are highly appreciated!\n\n\nBest,\n\nLily", "msg_date": "Fri, 8 Oct 2021 10:24:33 -0700", "msg_from": "Lily Liu <lilyliupku@gmail.com>", "msg_from_op": true, "msg_subject": "Query rewrite(optimization) using constraints" }, { "msg_contents": "On Fri, Oct 08, 2021 at 10:24:33AM -0700, Lily Liu wrote:\n> 1) Remove DISTINCT\n> \n> A simple example is SELECT DISTINCT(name) FROM R. If there is a unique\n> constraint on the name column. 
The DISTINCT keyword can be removed safely.\n> Query plans without the DISTINCT keyword might be much cheaper since\n> DISTINCT is expensive.\n\nThere's an ongoing discussion and patches for this here.\n\nErase the distinctClause if the result is unique by definition\nhttps://commitfest.postgresql.org/35/2433/\n\nPerhaps you could help to test or review the patch ?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 8 Oct 2021 13:01:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Query rewrite(optimization) using constraints" }, { "msg_contents": "Lily Liu <lilyliupku@gmail.com> writes:\n> I propose two new types of query rewrites using constraints here\n\n> 1) Remove DISTINCT\n> A simple example is SELECT DISTINCT(name) FROM R. If there is a unique\n> constraint on the name column. The DISTINCT keyword can be removed safely.\n> Query plans without the DISTINCT keyword might be much cheaper since\n> DISTINCT is expensive.\n\nThere's already been a fair amount of effort in that direction, cf [1].\nHowever, I can't avoid the impression that that patch series is adding\nway too much overcomplicated infrastructure. If you have an idea for\nan easier way, let's hear it.\n\n> 2) Add LIMIT 1\n> An example of this optimization will be SELECT name from R WHERE name =\n> ‘foo’. If there is a unique constraint on the name column, the selection\n> result has at most one record. Therefore, we can add LIMIT 1 safely. If the\n> original query plan performs a sequential scan on the R, adding LIMIT 1\n> might speed up the query because of the early return.\n\nI strongly suspect that this idea isn't actually useful. If there is a\nmatching unique constraint, the planner will almost surely choose an\nindexscan on the unique index, and adding an additional plan node to that\nis likely to add more overhead than it removes. 
For example,\n\nregression=# explain analyze select * from tenk1 where unique1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (unique1 = 42)\n Planning Time: 0.059 ms\n Execution Time: 0.030 ms\n(4 rows)\n\nregression=# explain analyze select * from tenk1 where unique1 = 42 limit 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.29..8.30 rows=1 width=244) (actual time=0.013..0.013 rows=1 loops=1)\n -> Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244) (actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: (unique1 = 42)\n Planning Time: 0.067 ms\n Execution Time: 0.034 ms\n(5 rows)\n\nThis test case shouldn't be taken too seriously, because it was just a\nquick one-off check with little attempt to control for noise, besides\nwhich it's a debug-enabled build. Nonetheless, if you want to pursue\nthis idea I think you first need to prove that it actually is a win.\n\"Might speed up\" won't cut it.\n\nAnother concern here is that the planner is famously bad about making\ngood decisions for queries involving LIMIT. The cost estimates for\nthat require a bunch of assumptions that we don't need elsewhere, and\nsome of them aren't very trustworthy. Now maybe for the restricted\ncases where you want to add LIMIT, this isn't really a problem. 
But\nyou haven't shown that to be true, so I'm afraid that this transformation\nwill sometimes lead us into worse plan choices than we'd make otherwise.\nI'm pretty leery of adding LIMIT when we don't have to.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/35/2433/\n\n\n", "msg_date": "Fri, 08 Oct 2021 14:17:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query rewrite(optimization) using constraints" } ]
[ { "msg_contents": "Hi,\n\nwe have a replica product that accesses WAL files with logical replication.\n\nAfter a reboot following a database fault we receive the following issue.\n\n2021-10-08 16:07:31.829 CEST:127.0.0.1(49880):cdcadm@REPLICA:[12976]:\nERROR: unexpected duplicate for tablespace 16389, relfilenode 484036\n\nBut the object is not duplicated:\n\nSELECT pg_filenode_relation (16389,484036);\n pg_filenode_relation\n----------------------\n mobile.vw_cella_site\n\nThanks for the support\n\nMassimo", "msg_date": "Fri, 8 Oct 2021 20:00:23 +0200", "msg_from": "Max Shore <maxriva@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: unexpected duplicate for tablespace 16389, relfilenode 484036" } ]
[ { "msg_contents": "Greetings hackers,\n\nI'm seeing some odd behavior around string prefix searching -\nhopefully I've missed something here (thanks to Nino Floris for\noriginally flagging this).\n\nIn PostgreSQL 11, a starts_with function and a ^@ operator were added\nfor string prefix checking, as an alternative to LIKE 'foo%' [1] [2].\nI've run a few scenarios and have seen the following behavior:\n\nQueries tested:\n\n1. EXPLAIN SELECT * FROM data WHERE name LIKE 'foo10%';\n2. EXPLAIN SELECT * FROM data WHERE name ^@ 'foo10';\n3. EXPLAIN SELECT * FROM data WHERE starts_with(name, 'foo10');\n\n... running against a table with 500k rows and enable_seqscan turned\noff. Results:\n\nIndex | Operator class | LIKE 'X%' | ^@ | starts_with\n------ | ---------------- | ----------------- | ----------------- | -----------\nbtree | text_ops | Parallel seq scan | Parallel seq scan | Seq scan\nbtree | text_pattern_ops | Index scan | Parallel seq scan | Seq scan\nspgist | | Index scan | Index Scan | Seq scan\n\nFirst, starts_with doesn't seem to use SP-GIST indexes, contrary to\nthe patch description (and also doesn't trigger a parallel seq scan) -\nis this intentional? 
The function is listed front-and-center on the\nstring functions and operators page[3], and receives mention on the\npattern matching page[4], without any mention of it being so\nproblematic.\n\nNote that ^@ isn't documented on the string functions and operators,\nso it's not very discoverable; if added to the docs, I'd recommend\nadding a note on SP-GIST being required, since uninformed new users\nwould probably expect a default btree index to work as well.\n\nShay\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=710d90da1fd8c1d028215ecaf7402062079e99e9\n[2] https://www.postgresql.org/message-id/flat/03300255-cff2-b508-50f4-f00cca0a57a1%40sigaev.ru#38d2020edf92f96d204cd2679d362c38\n[3] https://www.postgresql.org/docs/current/functions-string.html\n[4] https://www.postgresql.org/docs/current/functions-matching.html\n\n\n", "msg_date": "Sat, 9 Oct 2021 10:01:25 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "starts_with, ^@ and index usage" }, { "msg_contents": "Shay Rojansky <roji@roji.org> writes:\n> In PostgreSQL 11, a starts_with function and a ^@ operators were added\n> for string prefix checking, as an alternative to LIKE 'foo%' [1] [2].\n\n> First, starts_with doesn't seem to use SP-GIST indexes, contrary to\n> the patch description (and also doesn't trigger a parallel seq scan) -\n> is this intentional? The function is listed front-and-center on the\n> string functions and operators page[3], and receives mention on the\n> pattern matching page[4], without any mention of it being so\n> problematic.\n\nIt seems like it didn't occur to anybody to tie starts_with() into\nthe machinery for derived index operators. 
That wouldn't be hard,\nbut it wasn't done.\n\nBefore (I think) v12, function invocations never could be converted\nto indexquals anyway, so it's not surprising that a v11-era patch\nwouldn't have thought it needed to address that point.\n\nI do see that starts_with() is marked parallel safe, so it's not clear\nwhy it wouldn't be amenable to a parallel seqscan. The function (as\nopposed to the operator) isn't tied into selectivity estimation either,\nso maybe that has something to do with using a default selectivity\nestimate for it? But said estimate would almost always be too high,\nwhich doesn't seem like the direction that would discourage parallelism.\n\n> Note that ^@ isn't documented on the string functions and operators,\n\nThat's another oversight.\n\nIt seems clear that the original patch author was pretty narrowly focused\non use of the operator with SP-GIST, and didn't think about how it should\nfit into the larger ecosystem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Oct 2021 10:44:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: starts_with, ^@ and index usage" }, { "msg_contents": "I wrote:\n> Shay Rojansky <roji@roji.org> writes:\n>> First, starts_with doesn't seem to use SP-GIST indexes, contrary to\n>> the patch description (and also doesn't trigger a parallel seq scan) -\n>> is this intentional?\n\n> It seems like it didn't occur to anybody to tie starts_with() into\n> the machinery for derived index operators. That wouldn't be hard,\n> but it wasn't done.\n\nI've started another thread with a patch for that [1].\n\n>> Note that ^@ isn't documented on the string functions and operators,\n\n> That's another oversight.\n\nWell, \"oversight\" might be too strong a word. 
AFAICS from a quick look\nin pg_operator, most operators on type text are comparisons, pattern\nmatch, or text search, none of which do I want to fold into section 9.4.\nThe only exceptions are ||, which we do document there under SQL\noperators, and ^@. Commit 710d90da1 apparently decided to treat ^@ as a\npattern match operator, which I guess it could be if you hold your head\nat the right angle, but I doubt most people would think to look for it\nin section 9.7. I guess the most practical answer is to rename table\n9.10 from \"Other String Functions\" to \"Other String Functions and\nOperators\", which is more parallel to table 9.9 anyway. Just as in 9.9,\nit would look weird to have a one-entry table of operators. (Maybe\nsomeday in the far future it'd make sense to split 9.10 into two\ntables.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/232599.1633800229%40sss.pgh.pa.us\n\n\n", "msg_date": "Sat, 09 Oct 2021 13:59:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: starts_with, ^@ and index usage" } ]
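[Editor's note] For reference, the prefix semantics discussed in the thread above amount to a plain byte-wise comparison; a minimal sketch (`sketch_starts_with` is a hypothetical stand-in, not the server's implementation, which also deals with varlena headers and collation considerations):

```c
#include <assert.h>   /* for the accompanying assertions */
#include <stdbool.h>
#include <string.h>

/* starts_with(str, p) / str ^@ p is a byte-wise prefix test.  It
 * matches LIKE 'p%' only when p contains no LIKE metacharacters
 * (% and _) and nothing needs escaping -- which is exactly why a
 * dedicated prefix operator is attractive. */
static bool
sketch_starts_with(const char *str, const char *prefix)
{
    size_t plen = strlen(prefix);

    return strncmp(str, prefix, plen) == 0;
}
```

An empty prefix matches every string, mirroring the SQL function's behavior for ''.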
[ { "msg_contents": "Hi,\n\nIt looks like auxiliary processes will not have a valid MyBackendId as\nthey don't call InitPostgres() and SharedInvalBackendInit() unlike\nbackends. But the startup process (which is an auxiliary process) in\nhot standby mode seems to be different in the way that it does have a\nvalid MyBackendId as SharedInvalBackendInit() gets called from\nInitRecoveryTransactionEnvironment(). The SharedInvalBackendInit()\nusually stores the MyBackendId in the caller's PGPROC structure i.e.\nMyProc->backendId. The auxiliary processes (including the startup\nprocess) usually register themselves in procsignal array with\nProcSignalInit(MaxBackends + MyAuxProcType + 1) unlike the backends\nwith ProcSignalInit(MyBackendId) after SharedInvalBackendInit() in\nInitPostgres().\n\nThe problem comes when a postgres process wants to send a multiplexed\nSIGUSR1 signal (probably using SendProcSignal()) to the startup\nprocess after receiving its ProcSignal->psh_slot[] with its backendId\nfrom the PGPROC (the postgres process can get the startup process\nPGPROC structure from AuxiliaryPidGetProc()). Remember the startup\nprocess has registered in the procsignal array with\nProcSignalInit(MaxBackends + MyAuxProcType + 1), not with the\nProcSignalInit(MyBackendId) like the backends did. So, the postgres\nprocess, wanting to send SIGUSR1 to the startup process, refers to the\nwrong ProcSignal->psh_slot[] and may not send the signal.\n\nIs this inconsistency of MyBackendId for a startup process a problem\nat all? 
Thoughts?\n\nThese are the following ways I think we can fix it, if at all some\nother hacker agrees that it is actually an issue:\n\n1) Fix the startup process code, probably by unregistering the\nprocsignal array entry that was made with ProcSignalInit(MaxBackends +\nMyAuxProcType + 1) in AuxiliaryProcessMain() and register with\nProcSignalInit(MyBackendId) immediately after SharedInvalBackendInit()\ncalculates the MyBackendId in\nInitRecoveryTransactionEnvironment(). This seems risky to me as\nunregistering and registering ProcSignal array involves some barriers\nand during the registering and unregistering window, the startup\nprocess may miss the SIGUSR1.\n\n2) Ensure that the process, that wants to send the startup process\nSIGUSR1 signal, doesn't use the backendId from the startup process\nPGPROC, in which case it has to loop over all the entries of\nProcSignal->psh_slot[] array to find the entry with the startup\nprocess PID. It seems easier and less risky, but the only caveat is that\nthe sending process shouldn't look at the backendId from auxiliary\nprocess PGPROC, instead it should just traverse the entire proc signal\narray to find the right slot.\n\n3) Add a comment around AuxiliaryPidGetProc() that says \"auxiliary\nprocesses don't have valid backend ids, so don't use the backendId\nfrom the returned PGPROC\".\n\n(2) and (3) seem reasonable to me. Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 9 Oct 2021 18:52:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Inconsistency in startup process's MyBackendId and procsignal array\n registration with ProcSignalInit()" }, { "msg_contents": "\n\nOn 2021/10/09 22:22, Bharath Rupireddy wrote:\n> Hi,\n> \n> It looks like auxiliary processes will not have a valid MyBackendId as\n> they don't call InitPostgres() and SharedInvalBackendInit() unlike\n> backends. 
But the startup process (which is an auxiliary process) in\n> hot standby mode seems to be different in the way that it does have a\n> valid MyBackendId as SharedInvalBackendInit() gets called from\n> InitRecoveryTransactionEnvironment(). The SharedInvalBackendInit()\n> usually stores the MyBackendId in the caller's PGPROC structure i.e.\n> MyProc->backendId. The auxiliary processes (including the startup\n> process) usually register themselves in procsignal array with\n> ProcSignalInit(MaxBackends + MyAuxProcType + 1) unlike the backends\n> with ProcSignalInit(MyBackendId) after SharedInvalBackendInit() in\n> InitPostgres().\n> \n> The problem comes when a postgres process wants to send a multiplexed\n> SIGUSR1 signal (probably using SendProcSignal()) to the startup\n> process after receiving its ProcSignal->psh_slot[] with its backendId\n> from the PGPROC (the postgres process can get the startup process\n> PGPROC structure from AuxiliaryPidGetProc()). Remember the startup\n> process has registered in the procsignal array with\n> ProcSignalInit(MaxBackends + MyAuxProcType + 1), not with the\n> ProcSignalInit(MyBackendId) like the backends did. So, the postgres\n> process, wanting to send SIGUSR1 to the startup process, refers to the\n> wrong ProcSignal->psh_slot[] and may not send the signal.\n> \n> Is this inconsistency of MyBackendId for a startup process a problem\n> at all? Thoughts?\n> \n> These are the following ways I think we can fix it, if at all some\n> other hacker agrees that it is actually an issue:\n> \n> 1) Fix the startup process code, probably by unregistering the\n> procsignal array entry that was made with ProcSignalInit(MaxBackends +\n> MyAuxProcType + 1) in AuxiliaryProcessMain() and register with\n> ProcSignalInit(MyBackendId) immediately after SharedInvalBackendInit()\n> calculates the MyBackendId in with SharedInvalBackendInit() in\n> InitRecoveryTransactionEnvironment(). 
This seems risky to me as\n> unregistering and registering ProcSignal array involves some barriers\n> and during the registering and unregistering window, the startup\n> process may miss the SIGUSR1.\n> \n> 2) Ensure that the process, that wants to send the startup process\n> SIGUSR1 signal, doesn't use the backendId from the startup process\n> PGPROC, in which case it has to loop over all the entries of\n> ProcSignal->psh_slot[] array to find the entry with the startup\n> process PID. It seems easier and less riskier but only caveat is that\n> the sending process shouldn't look at the backendId from auxiliary\n> process PGPROC, instead it should just traverse the entire proc signal\n> array to find the right slot.\n> \n> 3) Add a comment around AuxiliaryPidGetProc() that says \"auxiliary\n> processes don't have valid backend ids, so don't use the backendId\n> from the returned PGPROC\".\n> \n> (2) and (3) seem reasonable to me. Thoughts?\n\nHow about modifying SharedInvalBackendInit() so that it accepts\nBackendId as an argument and allocates the ProcState entry of\nthe specified BackendId? 
That is, the startup process determines\nthat its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) + 1\"\nin AuxiliaryProcessMain(), and then it passes that BackendId to\nSharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n\nMaybe you need to enlarge ProcState array so that it also handles\nauxiliary processes if it does not for now.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:24:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Mon, 11 Oct 2021 15:24:46 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/10/09 22:22, Bharath Rupireddy wrote:\n> > Hi,\n> > It looks like auxiliary processes will not have a valid MyBackendId as\n> > they don't call InitPostgres() and SharedInvalBackendInit() unlike\n> > backends. But the startup process (which is an auxiliary process) in\n> > hot standby mode seems to be different in the way that it does have a\n> > valid MyBackendId as SharedInvalBackendInit() gets called from\n> > InitRecoveryTransactionEnvironment(). The SharedInvalBackendInit()\n> > usually stores the MyBackendId in the caller's PGPROC structure i.e.\n> > MyProc->backendId. 
The auxiliary processes (including the startup\n> > process) usually register themselves in procsignal array with\n> > ProcSignalInit(MaxBackends + MyAuxProcType + 1) unlike the backends\n> > with ProcSignalInit(MyBackendId) after SharedInvalBackendInit() in\n> > InitPostgres().\n> > The problem comes when a postgres process wants to send a multiplexed\n> > SIGUSR1 signal (probably using SendProcSignal()) to the startup\n> > process after receiving its ProcSignal->psh_slot[] with its backendId\n> > from the PGPROC (the postgres process can get the startup process\n> > PGPROC structure from AuxiliaryPidGetProc()). Remember the startup\n> > process has registered in the procsignal array with\n> > ProcSignalInit(MaxBackends + MyAuxProcType + 1), not with the\n> > ProcSignalInit(MyBackendId) like the backends did. So, the postgres\n> > process, wanting to send SIGUSR1 to the startup process, refers to the\n> > wrong ProcSignal->psh_slot[] and may not send the signal.\n> > Is this inconsistency of MyBackendId for a startup process a problem\n> > at all? Thoughts?\n> > These are the following ways I think we can fix it, if at all some\n> > other hacker agrees that it is actually an issue:\n> > 1) Fix the startup process code, probably by unregistering the\n> > procsignal array entry that was made with ProcSignalInit(MaxBackends +\n> > MyAuxProcType + 1) in AuxiliaryProcessMain() and register with\n> > ProcSignalInit(MyBackendId) immediately after SharedInvalBackendInit()\n> > calculates the MyBackendId in with SharedInvalBackendInit() in\n> > InitRecoveryTransactionEnvironment(). 
This seems risky to me as\n> > unregistering and registering ProcSignal array involves some barriers\n> > and during the registering and unregistering window, the startup\n> > process may miss the SIGUSR1.\n> > 2) Ensure that the process, that wants to send the startup process\n> > SIGUSR1 signal, doesn't use the backendId from the startup process\n> > PGPROC, in which case it has to loop over all the entries of\n> > ProcSignal->psh_slot[] array to find the entry with the startup\n> > process PID. It seems easier and less riskier but only caveat is that\n> > the sending process shouldn't look at the backendId from auxiliary\n> > process PGPROC, instead it should just traverse the entire proc signal\n> > array to find the right slot.\n> > 3) Add a comment around AuxiliaryPidGetProc() that says \"auxiliary\n> > processes don't have valid backend ids, so don't use the backendId\n> > from the returned PGPROC\".\n> > (2) and (3) seem reasonable to me. Thoughts?\n\n(I'm not sure how the trouble happens.)\n2 and 3 look like fixing an inconsistency with another inconsistency.\nI'm not sure 1 is acceptable.\n\n> How about modifying SharedInvalBackendInit() so that it accepts\n> BackendId as an argument and allocates the ProcState entry of\n> the specified BackendId? That is, the startup process determines\n> that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) +\n> 1\"\n> in AuxiliaryProcessMain(), and then it passes that BackendId to\n> SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n> \n> Maybe you need to enlarge ProcState array so that it also handles\n> auxiliary processes if it does not for now.\n\nIt seems to me that the backendId on the startup process is used only for\nvxid generation. Actually \"make check-world\" doesn't fail by skipping\nSharedInvalBackendInit() (and disabling an assertion).\n\nI thought that we could decouple vxid from backend ID (or sinval code)\nby using pgprocno for vxid generation instead of backend ID. 
\"make\ncheck-world\" doesn't fail with that change, too. (I don't think\n\"doesn't fail\" ncecessarily mean that that change is correct, though),\nbut vxid gets somewhat odd after the change..\n\n=# select distinct virtualxid from pg_locks;\n virtualxid \n------------\n \n 116/1 # startup\n 99/48 # backend 1\n...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:11:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Mon, Oct 11, 2021 at 12:41 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > > These are the following ways I think we can fix it, if at all some\n> > > other hacker agrees that it is actually an issue:\n> > > 1) Fix the startup process code, probably by unregistering the\n> > > procsignal array entry that was made with ProcSignalInit(MaxBackends +\n> > > MyAuxProcType + 1) in AuxiliaryProcessMain() and register with\n> > > ProcSignalInit(MyBackendId) immediately after SharedInvalBackendInit()\n> > > calculates the MyBackendId in with SharedInvalBackendInit() in\n> > > InitRecoveryTransactionEnvironment(). This seems risky to me as\n> > > unregistering and registering ProcSignal array involves some barriers\n> > > and during the registering and unregistering window, the startup\n> > > process may miss the SIGUSR1.\n> > > 2) Ensure that the process, that wants to send the startup process\n> > > SIGUSR1 signal, doesn't use the backendId from the startup process\n> > > PGPROC, in which case it has to loop over all the entries of\n> > > ProcSignal->psh_slot[] array to find the entry with the startup\n> > > process PID. 
It seems easier and less riskier but only caveat is that\n> > > the sending process shouldn't look at the backendId from auxiliary\n> > > process PGPROC, instead it should just traverse the entire proc signal\n> > > array to find the right slot.\n> > > 3) Add a comment around AuxiliaryPidGetProc() that says \"auxiliary\n> > > processes don't have valid backend ids, so don't use the backendId\n> > > from the returned PGPROC\".\n> > > (2) and (3) seem reasonable to me. Thoughts?\n>\n> (I'm not sure how the trouble happens.)\n\nThe order in which SharedInvalBackendInit and ProcSignalInit should be\nused to keep procState and ProcSignal array entries for backend and\nauxiliary processes is as follows:\ncall SharedInvalBackendInit() and let it calculate the MyBackendId\ncall ProcSignalInit(MyBackendId);\n\nBut for the startup process it does the opposite way, so the procState\nand ProcSignal array entries are not in sync.\ncall ProcSignalInit(MaxBackends + MyAuxProcType + 1);\ncall SharedInvalBackendInit() and let it calculate the MyBackendId\n\n If some process wants to send the startup process SIGUSR1 with the\nPGPROC->backendId and SendProcSignal(PGPROC->pid, XXXXX,\nPGPROC->backendId) after getting the PGPROC entry from\nAuxiliaryPidGetProc(), then the signal isn't sent. To understand this\nissue, please use a sample patch at [1], have a standby setup, call\npg_log_backend_memory_contexts with startup process pid on the\nstandby, the error \"could not send signal to process\" is shown, see\n[2].\n\n[1]\ndiff --git a/src/backend/utils/adt/mcxtfuncs.c\nb/src/backend/utils/adt/mcxtfuncs.c\nindex 0d52613bc3..2739591edc 100644\n--- a/src/backend/utils/adt/mcxtfuncs.c\n+++ b/src/backend/utils/adt/mcxtfuncs.c\n@@ -185,6 +185,10 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n\n proc = BackendPidGetProc(pid);\n\n+ /* see if the given process is an auxiliary process. 
*/\n+ if (proc == NULL)\n+ proc = AuxiliaryPidGetProc(pid);\n+\n /*\n * BackendPidGetProc returns NULL if the pid isn't valid; but\nby the time\n * we reach kill(), a process for which we get a valid proc here might\n\n[2]\npostgres=# select pg_log_backend_memory_contexts(726901);\nWARNING: could not send signal to process 726901: No such process\n pg_log_backend_memory_contexts\n--------------------------------\n f\n(1 row)\n\n> > How about modifying SharedInvalBackendInit() so that it accepts\n> > BackendId as an argument and allocates the ProcState entry of\n> > the specified BackendId? That is, the startup process determines\n> > that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) +\n> > 1\"\n> > in AuxiliaryProcessMain(), and then it passes that BackendId to\n> > SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n> >\n> > Maybe you need to enlarge ProcState array so that it also handles\n> > auxiliary processes if it does not for now.\n>\n> It seems to me that the backendId on startup process is used only for\n> vxid generation. Actually \"make check-world\" doesn't fail by skipping\n> SharedInvalBackendinit() (and disabling an assertion).\n>\n> I thought that we could decouple vxid from backend ID (or sinval code)\n> by using pgprocno for vxid generation instead of backend ID. \"make\n> check-world\" doesn't fail with that change, too. (I don't think\n> \"doesn't fail\" ncecessarily mean that that change is correct, though),\n> but vxid gets somewhat odd after the change..\n>\n> =# select distinct virtualxid from pg_locks;\n> virtualxid\n> ------------\n>\n> 116/1 # startup\n> 99/48 # backend 1\n\nI'm not sure we should go down that path. 
Others may have better thoughts.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:03:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Mon, Oct 11, 2021 at 11:54 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> How about modifying SharedInvalBackendInit() so that it accepts\n> BackendId as an argument and allocates the ProcState entry of\n> the specified BackendId? That is, the startup process determines\n> that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) + 1\"\n> in AuxiliaryProcessMain(), and then it passes that BackendId to\n> SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n\nIf we do the above, then the problem might arise if somebody calls\nSICleanupQueue and wants to signal the startup process, the below code\n(from SICleanupQueue) can't get the startup process backend id. So,\nthe backend id calculation for the startup process can't just be\nMaxBackends + MyAuxProcType + 1.\nBackendId his_backendId = (needSig - &segP->procState[0]) + 1;\n\n> Maybe you need to enlarge ProcState array so that it also handles\n> auxiliary processes if it does not for now.\n\nIt looks like we need to increase the size of the ProcState array by 1\nat least (for the startup process). Currently the ProcState array\ndoesn't have entries for auxiliary processes, it does have entries for\nMaxBackends. The startup process is eating up one slot from\nMaxBackends. Since we need only an extra ProcState array slot for the\nstartup process I think we could just extend its size by 1. Instead of\nmodifying the MaxBackends definition, we can just add 1 (and a comment\nsaying this 1 is for startup process) to shmInvalBuffer->maxBackends\nin SInvalShmemSize, CreateSharedInvalidationState. 
IMO, this has to go\nin a separate patch and probably in a separate thread. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:16:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On 2021/10/11 19:46, Bharath Rupireddy wrote:\n> If we do the above, then the problem might arise if somebody calls\n> SICleanupQueue and wants to signal the startup process, the below code\n> (from SICleanupQueue) can't get the startup process backend id. So,\n> the backend id calculation for the startup process can't just be\n> MaxBackends + MyAuxProcType + 1.\n> BackendId his_backendId = (needSig - &segP->procState[0]) + 1;\n\nAttached POC patch illustrates what I have in mind. ISTM this change\ndoesn't prevent SICleanupQueue() from getting the right backend ID\nof the startup process. Thought?\n\n\n> It looks like we need to increase the size of the ProcState array by 1\n> at least (for the startup process). Currently the ProcState array\n> doesn't have entries for auxiliary processes, it does have entries for\n> MaxBackends. The startup process is eating up one slot from\n> MaxBackends. Since we need only an extra ProcState array slot for the\n> startup process I think we could just extend its size by 1. Instead of\n> modifying the MaxBackends definition, we can just add 1 (and a comment\n> saying this 1 is for startup process) to shmInvalBuffer->maxBackends\n> in SInvalShmemSize, CreateSharedInvalidationState. IMO, this has to go\n> in a separate patch and probably in a separate thread. 
Thoughts?\n\nAgreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 12 Oct 2021 00:29:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Mon, Oct 11, 2021 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/10/11 19:46, Bharath Rupireddy wrote:\n> > If we do the above, then the problem might arise if somebody calls\n> > SICleanupQueue and wants to signal the startup process, the below code\n> > (from SICleanupQueue) can't get the startup process backend id. So,\n> > the backend id calculation for the startup process can't just be\n> > MaxBackends + MyAuxProcType + 1.\n> > BackendId his_backendId = (needSig - &segP->procState[0]) + 1;\n>\n> Attached POC patch illustrates what I'm in mind. ISTM this change\n> doesn't prevent SICleanupQueue() from getting right backend ID\n> of the startup process. Thought?\n\nI will take a look at it a bit later.\n\n> > It looks like we need to increase the size of the ProcState array by 1\n> > at least (for the startup process). Currently the ProcState array\n> > doesn't have entries for auxiliary processes, it does have entries for\n> > MaxBackends. The startup process is eating up one slot from\n> > MaxBackends. Since we need only an extra ProcState array slot for the\n> > startup process I think we could just extend its size by 1. Instead of\n> > modifying the MaxBackends definition, we can just add 1 (and a comment\n> > saying this 1 is for startup process) to shmInvalBuffer->maxBackends\n> > in SInvalShmemSize, CreateSharedInvalidationState. IMO, this has to go\n> > in a separate patch and probably in a separate thread. 
Thoughts?\n>\n> Agreed.\n\nPosted a patch in a separate thread, please review it.\nhttps://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 12 Oct 2021 00:39:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Mon, Oct 11, 2021 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/10/11 19:46, Bharath Rupireddy wrote:\n> > If we do the above, then the problem might arise if somebody calls\n> > SICleanupQueue and wants to signal the startup process, the below code\n> > (from SICleanupQueue) can't get the startup process backend id. So,\n> > the backend id calculation for the startup process can't just be\n> > MaxBackends + MyAuxProcType + 1.\n> > BackendId his_backendId = (needSig - &segP->procState[0]) + 1;\n>\n> Attached POC patch illustrates what I'm in mind. ISTM this change\n> doesn't prevent SICleanupQueue() from getting right backend ID\n> of the startup process. Thought?\n\nThe patch looks good to me unless I'm missing something badly.\n\n+ Assert(MyBackendId == InvalidBackendId ||\n+ MyBackendId <= segP->maxBackends);\n+\nIn the above assertion, we can just do MyBackendId ==\nsegP->maxBackends, instead of <=, as the startup process is the only\nprocess that calls SharedInvalBackendInit with pre-calculated\nMyBackendId = MaxBackends + MyAuxProcType + 1; and it will always\noccupy the last slot in the procState array.\n\nOtherwise, we could discard defining MyBackendId in auxprocess.c and\ndefine the MyBackendId in the SharedInvalBackendInit itself as this is\nthe function that defines the MyBackendId for everyone who\nrequires it. 
I prefer this approach over what's done in PoC patch.\n\nIn SharedInvalBackendInit:\nAssert(MyBackendId == InvalidBackendId);\n/*\n* The startup process requires a valid BackendId for the SI message\n* buffer and virtual transaction id, so define it here with the value with\n* which the procsignal array slot was allocated in AuxiliaryProcessMain.\n* All other auxiliary processes don't need it.\n*/\nif (MyAuxProcType == StartupProcess)\n MyBackendId = MaxBackends + MyAuxProcType + 1;\n\nI think this solution, coupled with the one proposed at [1], should\nsolve this startup process's inconsistency in MyBackendId, procState\nand ProcSignal array slot problems.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:09:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Mon, 11 Oct 2021 16:03:57 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Oct 11, 2021 at 12:41 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > (I'm not sure how the trouble happens.)\n...\n> If some process wants to send the startup process SIGUSR1 with the\n> PGPROC->backendId and SendProcSignal(PGPROC->pid, XXXXX,\n> PGPROC->backendId) after getting the PGPROC entry from\n> AuxiliaryPidGetProc(), then the signal isn't sent. 
To understand this\n> issue, please use a sample patch at [1], have a standby setup, call\n> pg_log_backend_memory_contexts with startup process pid on the\n> standby, the error \"could not send signal to process\" is shown, see\n> [2].\n\nThanks, I understand that this doesn't happen on vanilla PG.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 Oct 2021 17:09:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Tue, 12 Oct 2021 13:09:47 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Oct 11, 2021 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2021/10/11 19:46, Bharath Rupireddy wrote:\n> > > If we do the above, then the problem might arise if somebody calls\n> > > SICleanupQueue and wants to signal the startup process, the below code\n> > > (from SICleanupQueue) can't get the startup process backend id. So,\n> > > the backend id calculation for the startup process can't just be\n> > > MaxBackends + MyAuxProcType + 1.\n> > > BackendId his_backendId = (needSig - &segP->procState[0]) + 1;\n> >\n> > Attached POC patch illustrates what I'm in mind. ISTM this change\n> > doesn't prevent SICleanupQueue() from getting right backend ID\n> > of the startup process. 
Thought?\n> \n> The patch looks good to me unless I'm missing something badly.\n> \n> + Assert(MyBackendId == InvalidBackendId ||\n> + MyBackendId <= segP->maxBackends);\n> +\n> In the above assertion, we can just do MyBackendId ==\n> segP->maxBackends, instead of <= as the startup process is the only\n> process that calls SharedInvalBackendInit with pre-calculated\n> MyBackendId = MaxBackends + MyAuxProcType + 1; and it will always\n> occupy the last slot in the procState array.\n\n+1 for not allowing to explicitly specify the \"auto-assigned\"\nbackendid range,\n\n> Otherwise, we could discard defining MyBackendId in auxprocess.c and\n> define the MyBackendId in the SharedInvalBackendInit itself as this is\n> the function that defines the MyBackendId for everyone whoever\n> requires it. I prefer this approach over what's done in PoC patch.\n> \n> In SharedInvalBackendInit:\n> Assert(MyBackendId == InvalidBackendId);\n> /*\n> * The startup process requires a valid BackendId for the SI message\n> * buffer and virtual transaction id, so define it here with the value with\n> * which the procsignal array slot was allocated in AuxiliaryProcessMain.\n> * All other auxiliary processes don't need it.\n> */\n> if (MyAuxProcType == StartupProcess)\n> MyBackendId = MaxBackends + MyAuxProcType + 1;\n> \n> I think this solution, coupled with the one proposed at [1], should\n> solve this startup process's inconsistency in MyBackendId, procState\n> and ProcSignal array slot problems.\n> \n> [1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n\nThe patch does this:\n\n \t\tcase StartupProcess:\n+\t\t\tMyBackendId = MaxBackends + MyAuxProcType + 1;\n\nas well as this:\n\n+\tshmInvalBuffer->maxBackends = MaxBackends + 1;\n\nThese don't seem to be in the strict correspondence. 
I'd prefer\nsomething like the following.\n\n+ /* currently only StartupProcess uses nailed SI slot */\n+\tshmInvalBuffer->maxBackends = MaxBackends + StartupProcess + 1;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 Oct 2021 17:33:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Tue, Oct 12, 2021 at 2:03 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > [1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n>\n> The patch does this:\n>\n> case StartupProcess:\n> + MyBackendId = MaxBackends + MyAuxProcType + 1;\n>\n> as well as this:\n>\n> + shmInvalBuffer->maxBackends = MaxBackends + 1;\n>\n> These don't seem to be in the strict correspondence. I'd prefer\n> something like the following.\n>\n> + /* currently only StartupProcess uses nailed SI slot */\n> + shmInvalBuffer->maxBackends = MaxBackends + StartupProcess + 1;\n\nI don't think it is a good idea to use macro StartupProcess (because\nthe macro might get changed to a different value later). What we\nessentially need to do for procState array is to extend its size by 1\n(for startup process) which is being handled separately in [1]. 
Once\nthe patch at [1] gets in, the patch proposed here will not have the\nabove change at all.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:57:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "\n\nOn 2021/10/12 16:39, Bharath Rupireddy wrote:\n> Otherwise, we could discard defining MyBackendId in auxprocess.c and\n> define the MyBackendId in the SharedInvalBackendInit itself as this is\n> the function that defines the MyBackendId for everyone whoever\n> requires it. I prefer this approach over what's done in PoC patch.\n> \n> In SharedInvalBackendInit:\n> Assert(MyBackendId == InvalidBackendId);\n> /*\n> * The startup process requires a valid BackendId for the SI message\n> * buffer and virtual transaction id, so define it here with the value with\n> * which the procsignal array slot was allocated in AuxiliaryProcessMain.\n> * All other auxiliary processes don't need it.\n> */\n> if (MyAuxProcType == StartupProcess)\n> MyBackendId = MaxBackends + MyAuxProcType + 1;\n\nYes, this is an option.\n\nBut, at [1], you're proposing to enhance pg_log_backend_memory_contexts()\nso that it can send the request to even auxiliary processes. If we need to\nassign a backend ID to an auxiliary process other than the startup process\nand use it to send the signal promptly to those auxiliary processes,\nthis design might not be good. Since those auxiliary processes don't call\nSharedInvalBackendInit(), backend IDs for them might need to be assigned\noutside SharedInvalBackendInit(). 
Thought?\n\n\n[1]\nhttps://postgr.es/m/CALj2ACU1nBzpacOK2q=a65S_4+Oaz_rLTsU1Ri0gf7YUmnmhfQ@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 13 Oct 2021 03:10:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "Hi,\n\nOn 2021-10-11 15:24:46 +0900, Fujii Masao wrote:\n> How about modifying SharedInvalBackendInit() so that it accepts\n> BackendId as an argument and allocates the ProcState entry of\n> the specified BackendId? That is, the startup process determines\n> that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) + 1\"\n> in AuxiliaryProcessMain(), and then it passes that BackendId to\n> SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n\nIf I understand correctly what you're proposing, I think that's going in the\nwrong direction. We should work towards getting rid of BackendIds\ninstead. 
This whole complication vanishes if we make sinvaladt use pgprocno.\n\nSee https://postgr.es/m/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\nfor some discussion of this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:55:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Tue, 12 Oct 2021 14:57:58 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Oct 12, 2021 at 2:03 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > [1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n> >\n> > The patch does this:\n> >\n> > case StartupProcess:\n> > + MyBackendId = MaxBackends + MyAuxProcType + 1;\n> >\n> > as well as this:\n> >\n> > + shmInvalBuffer->maxBackends = MaxBackends + 1;\n> >\n> > These don't seem to be in the strict correspondence. I'd prefer\n> > something like the following.\n> >\n> > + /* currently only StartupProcess uses nailed SI slot */\n> > + shmInvalBuffer->maxBackends = MaxBackends + StartupProcess + 1;\n> \n> I don't think it is a good idea to use macro StartupProcess (because\n> the macro might get changed to a different value later). What we\n\nIf so, we shouldn't use MyAuxProcType at the \"case StartupProcess\".\n\n> essentially need to do for procState array is to extend its size by 1\n> (for startup process) which is being handled separately in [1]. 
Once\n> the patch at [1] gets in, the patch proposed here will not have the\n> above change at all.\n> \n> [1] - https://www.postgresql.org/message-id/CALj2ACXZ_o7rcOb7-Rs96P0d%3DEi%2Bnvf_WZ-Meky7Vv%2BnQNFYjQ%40mail.gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 Oct 2021 09:16:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Wed, Oct 13, 2021 at 4:25 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-10-11 15:24:46 +0900, Fujii Masao wrote:\n> > How about modifying SharedInvalBackendInit() so that it accepts\n> > BackendId as an argument and allocates the ProcState entry of\n> > the specified BackendId? That is, the startup process determines\n> > that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) + 1\"\n> > in AuxiliaryProcessMain(), and then it passes that BackendId to\n> > SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n>\n> If I understand correctly what you're proposing, I think that's going in the\n> wrong direction. We should work towards getting rid of BackendIds\n> instead. This whole complication vanishes if we make sinvaladt use pgprocno.\n>\n> See https://postgr.es/m/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\n> for some discussion of this.\n\nWill any of the backends get pgprocno greater than MaxBackends? The\npgprocno can range from 0 to ((MaxBackends + NUM_AUXILIARY_PROCS +\nmax_prepared_xacts) - 1) and the ProcState array size is MaxBackends.\nHow do we put a backend with pgprocno > MaxBackends, into the\nProcState array? Is it that we also increase ProcState array size to\n(MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts)? 
Probably\nthis is the dumbest thing as some slots are unused\n(NUM_AUXILIARY_PROCS - 1 + max_prepared_xacts slots. -1 because the\nstartup process needs a ProcState slot) and the shared memory is\nwasted.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 13:39:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Wed, 13 Oct 2021 13:39:24 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Oct 13, 2021 at 4:25 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-10-11 15:24:46 +0900, Fujii Masao wrote:\n> > > How about modifying SharedInvalBackendInit() so that it accepts\n> > > BackendId as an argument and allocates the ProcState entry of\n> > > the specified BackendId? That is, the startup process determines\n> > > that its BackendId is \"MaxBackends + MyAuxProcType (=StartupProcess) + 1\"\n> > > in AuxiliaryProcessMain(), and then it passes that BackendId to\n> > > SharedInvalBackendInit() in InitRecoveryTransactionEnvironment().\n> >\n> > If I understand correctly what you're proposing, I think that's going in the\n> > wrong direction. We should work towards getting rid of BackendIds\n> > instead. This whole complication vanishes if we make sinvaladt use pgprocno.\n> >\n> > See https://postgr.es/m/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\n> > for some discussion of this.\n\nI feel this is the right direction. I understand sinvaladt needs the\nbackend id but I think it's wrong that the backend id is used widely.\nAnd, in the current direction, the procState array gets very sparse and\nthe performance of sinval gets degraded.\n\n> Will any of the backends get pgprocno greater than MaxBackends? 
The\n> pgprocno can range from 0 to ((MaxBackends + NUM_AUXILIARY_PROCS +\n> max_prepared_xacts) - 1) and the ProcState array size is MaxBackends.\n> How do we put a backend with pgprocno > MaxBackends, into the\n> ProcState array? Is it that we also increase ProcState array size to\n> (MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts)? Probably\n> this is the dumbest thing as some slots are unused\n> (NUM_AUXILIARY_PROCS - 1 + max_prepared_xacts slots. -1 because the\n> startup process needs a ProcState slot) and the shared memory is\n> wasted.\n\nThe elements for max_prepared_xacts can be placed at the last part of\npgprocno and we can actually omit allocating memory for them. I don't\nthink NUM_AUXILIARY_PROCS is that large. It's only 5 and won't increase\nmuch in the future. Since we know that startup is the only user of\nprocsig, we can save the space for four of them. In short, no extra\nmemory is required by using pgprocno for now.\n\nThe actual problem is that if we simply use pgprocno in sinvaladt, we will\nsee performance degradation caused by the super sparse procState\narray. As Andres mentioned in the URL above, it can be avoided if we\ncan get rid of looping over the array.\n\nAlthough it needs a bit of care for the difference of invalid values\nbetween the two, BackendId can be easily replaced with pgprocno almost\nmechanically except sinvaladt. Therefore, we can confine the current\nbackend ID within sinvaladt isolating it from the other parts. 
The ids\ndedicated for sinvaladt can be packed into a small range and performance\nwon't be damaged.\n\nIn the future, if we can get rid of looping over the procState array,\nsinvaladt - the last user of the current backend ID - can move to\npgprocno and we will say good-bye to the current backend ID.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:52:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Wed, 13 Oct 2021 19:52:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Although it needs a bit of care for the difference of invalid values\n> between the two, BackendId can be easily replaced with pgprocno almost\n> mechanically except sinvaladt. Therefore, we can confine the current\n> backend ID within sinvaladt isolating it from the other parts. The ids\n> dedicated for sinvaladt can be packed into a small range and performance\n> won't be damaged.\n\nSince I said it was \"mechanically doable\", I did just that. FWIW, the attached is exactly that. All behavioral differences come from the difference of the valid range and the invalid value between the old BackendId and pgprocno.\n\n- Only sinvaladt uses the packed old backendid internally.\n So procsignal can be sent to auxiliary processes.\n\n- All other parts use pgprocno but it is named as backendid so as to\n reduce the diff size.\n\n- vxid's backendid part starts from 0, not 1, and invalid backendid\n becomes -1 to PG_INT32_MAX. Since it is merely a cosmetic issue, we\n can replace PG_INT32_MAX with -1 on printing.\n\n- The name of an exported snapshot changes. The first part starts from\n 0, not 1.\n\n- Prepared transactions' vxid is changed so that its backendid part\n has a valid value. Previously it was invalid id (-1). 
I'm not sure\n  whether it does any harm, but I faced no trouble with make check-world.\n  (MarkAsPreparingGuts)\n\n- With only 0002, backendid starts from 99 (when max_connections is\n  100) and is then decremented to 0, which is quite odd. 0001 reverses\n  the order of the freelist.\n\n> In the future, if we can get rid of looping over the procState array,\n> sinvaladt - the last user of the current backend ID - can move to\n> pgprocno and we will say good-bye to the current backend ID.\n\nThe attached files are named *.txt so that bots don't recognize\nthem as patches.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From dd8187954eb72edc3fbe0a807ce531b438412030 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Wed, 13 Oct 2021 10:31:26 +0900\nSubject: [PATCH 1/2] procfreelist in ascending order\n\n---\n src/backend/storage/lmgr/proc.c | 62 +++++++++++++++++++--------------\n 1 file changed, 36 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex b7d9da0aa9..78e05976a4 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -163,6 +163,9 @@ InitProcGlobal(void)\n \t\t\t\tj;\n \tbool\t\tfound;\n \tuint32\t\tTotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n+\tPGPROC\t **freelist;\n+\tint\t\t\tswitchpoint;\n+\tint\t\t\tarraykind;\n \n \t/* Create the ProcGlobal shared structure */\n \tProcGlobal = (PROC_HDR *)\n@@ -214,6 +217,8 @@ InitProcGlobal(void)\n \tProcGlobal->statusFlags = (uint8 *) ShmemAlloc(TotalProcs * sizeof(*ProcGlobal->statusFlags));\n \tMemSet(ProcGlobal->statusFlags, 0, TotalProcs * sizeof(*ProcGlobal->statusFlags));\n \n+\tswitchpoint = 0;\n+\tarraykind = 0;\n \tfor (i = 0; i < TotalProcs; i++)\n \t{\n \t\t/* Common initialization for all PGPROCs, regardless of type. */\n@@ -239,33 +244,38 @@ InitProcGlobal(void)\n \t\t * linear search. 
PGPROCs for prepared transactions are added to a\n \t\t * free list by TwoPhaseShmemInit().\n \t\t */\n-\t\tif (i < MaxConnections)\n+\t\tif (i < MaxBackends)\n \t\t{\n-\t\t\t/* PGPROC for normal backend, add to freeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->freeProcs;\n-\t\t\tProcGlobal->freeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->freeProcs;\n-\t\t}\n-\t\telse if (i < MaxConnections + autovacuum_max_workers + 1)\n-\t\t{\n-\t\t\t/* PGPROC for AV launcher/worker, add to autovacFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->autovacFreeProcs;\n-\t\t\tProcGlobal->autovacFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->autovacFreeProcs;\n-\t\t}\n-\t\telse if (i < MaxConnections + autovacuum_max_workers + 1 + max_worker_processes)\n-\t\t{\n-\t\t\t/* PGPROC for bgworker, add to bgworkerFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->bgworkerFreeProcs;\n-\t\t\tProcGlobal->bgworkerFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->bgworkerFreeProcs;\n-\t\t}\n-\t\telse if (i < MaxBackends)\n-\t\t{\n-\t\t\t/* PGPROC for walsender, add to walsenderFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->walsenderFreeProcs;\n-\t\t\tProcGlobal->walsenderFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->walsenderFreeProcs;\n+\t\t\tif (i == switchpoint)\n+\t\t\t{\n+\t\t\t\tswitch (arraykind++)\n+\t\t\t\t{\n+\t\t\t\t\tcase 0:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->freeProcs;\n+\t\t\t\t\t\tswitchpoint += MaxConnections;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 1:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->autovacFreeProcs;\n+\t\t\t\t\t\tswitchpoint += autovacuum_max_workers + 1;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 2:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->bgworkerFreeProcs;\n+\t\t\t\t\t\tswitchpoint += max_worker_processes;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 3:\n+\t\t\t\t\t\tfreelist = 
&ProcGlobal->walsenderFreeProcs;\n+\t\t\t\t}\n+\n+\t\t\t\t/* link the element to the just-switched freelist */\n+\t\t\t\t*freelist = &procs[i];\n+\t\t\t}\n+\t\t\telse\n+\t\t\t\tprocs[i - 1].links.next = (SHM_QUEUE *) &procs[i];\n+\n+\t\t\tprocs[i].procgloballist = freelist;\n \t\t}\n \n \t\t/* Initialize myProcLocks[] shared memory queues. */\n-- \n2.27.0\n\n\n From 24d58b74691c5011fd26a9d430fe79e6f66079ea Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 14 Oct 2021 17:05:24 +0900\nSubject: [PATCH 2/2] Remove BackendId\n\nBackendId was generated by sinvaladt.c and widely used as ids packed\nin narrow range. However, the characteristics of narrow-packed is not\nrequired other than sinvaladt.c and the use of the id in other places\nrather harm. Get rid of the backend id in most of the tree and use\npgprocno as new BackendId.\n\nHowever, sinvaladt still needs such packed id so the old backend id is\nconfined to the module.\n---\n src/backend/access/transam/multixact.c | 23 +++++-----\n src/backend/access/transam/twophase.c | 4 +-\n src/backend/access/transam/xact.c | 2 +-\n src/backend/catalog/namespace.c | 2 +-\n src/backend/commands/async.c | 21 +++++----\n src/backend/commands/indexcmds.c | 3 +-\n src/backend/postmaster/auxprocess.c | 2 +-\n src/backend/postmaster/pgstat.c | 2 +-\n src/backend/storage/ipc/procarray.c | 9 ++--\n src/backend/storage/ipc/procsignal.c | 25 ++++-------\n src/backend/storage/ipc/sinvaladt.c | 49 ++++-----------------\n src/backend/storage/ipc/standby.c | 2 +-\n src/backend/storage/lmgr/lmgr.c | 3 +-\n src/backend/storage/lmgr/lock.c | 20 ++++-----\n src/backend/storage/lmgr/proc.c | 29 +++++++++---\n src/backend/utils/activity/backend_status.c | 22 +--------\n src/backend/utils/adt/dbsize.c | 5 ++-\n src/backend/utils/adt/mcxtfuncs.c | 2 +-\n src/backend/utils/cache/relcache.c | 1 +\n src/backend/utils/error/elog.c | 10 ++---\n src/backend/utils/init/globals.c | 2 -\n 
src/backend/utils/init/postinit.c | 7 +--\n src/backend/utils/time/snapmgr.c | 4 +-\n src/include/storage/backendid.h | 3 +-\n src/include/storage/lock.h | 6 +--\n src/include/storage/proc.h | 11 +++--\n src/include/storage/procsignal.h | 4 +-\n src/include/storage/smgr.h | 2 +-\n src/include/utils/rel.h | 2 +-\n 29 files changed, 117 insertions(+), 160 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex e6c70ed0bc..585dc50858 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -238,9 +238,9 @@ typedef struct MultiXactStateData\n \t * immediately following the MultiXactStateData struct. Each is indexed by\n \t * BackendId.\n \t *\n-\t * In both arrays, there's a slot for all normal backends (1..MaxBackends)\n-\t * followed by a slot for max_prepared_xacts prepared transactions. Valid\n-\t * BackendIds start from 1; element zero of each array is never used.\n+\t * In both arrays, there's a slot for all normal backends\n+\t * (0..MaxBackends-1) followed by a slot for max_prepared_xacts prepared\n+\t * transactions.\n \t *\n \t * OldestMemberMXactId[k] is the oldest MultiXactId each backend's current\n \t * transaction(s) could possibly be a member of, or InvalidMultiXactId\n@@ -283,9 +283,9 @@ typedef struct MultiXactStateData\n \n /*\n * Last element of OldestMemberMXactId and OldestVisibleMXactId arrays.\n- * Valid elements are (1..MaxOldestSlot); element 0 is never used.\n+ * Valid elements are (0..MaxOldestSlot).\n */\n-#define MaxOldestSlot\t(MaxBackends + max_prepared_xacts)\n+#define MaxOldestSlot\t(MaxBackends + max_prepared_xacts - 1)\n \n /* Pointers to the state data in shared memory */\n static MultiXactStateData *MultiXactState;\n@@ -697,7 +697,7 @@ MultiXactIdSetOldestVisible(void)\n \t\tif (oldestMXact < FirstMultiXactId)\n \t\t\toldestMXact = FirstMultiXactId;\n \n-\t\tfor (i = 1; i <= MaxOldestSlot; i++)\n+\t\tfor (i = 0 ; i <= 
MaxOldestSlot; i++)\n \t\t{\n \t\t\tMultiXactId thisoldest = OldestMemberMXactId[i];\n \n@@ -1828,10 +1828,10 @@ MultiXactShmemSize(void)\n {\n \tSize\t\tsize;\n \n-\t/* We need 2*MaxOldestSlot + 1 perBackendXactIds[] entries */\n+\t/* We need 2*(MaxOldestSlot + 1) + 1 perBackendXactIds[] entries */\n #define SHARED_MULTIXACT_STATE_SIZE \\\n \tadd_size(offsetof(MultiXactStateData, perBackendXactIds) + sizeof(MultiXactId), \\\n-\t\t\t mul_size(sizeof(MultiXactId) * 2, MaxOldestSlot))\n+\t\t\t mul_size(sizeof(MultiXactId) * 2, MaxOldestSlot + 1))\n \n \tsize = SHARED_MULTIXACT_STATE_SIZE;\n \tsize = add_size(size, SimpleLruShmemSize(NUM_MULTIXACTOFFSET_BUFFERS, 0));\n@@ -1878,11 +1878,10 @@ MultiXactShmemInit(void)\n \t\tAssert(found);\n \n \t/*\n-\t * Set up array pointers. Note that perBackendXactIds[0] is wasted space\n-\t * since we only use indexes 1..MaxOldestSlot in each array.\n+\t * Set up array pointers.\n \t */\n \tOldestMemberMXactId = MultiXactState->perBackendXactIds;\n-\tOldestVisibleMXactId = OldestMemberMXactId + MaxOldestSlot;\n+\tOldestVisibleMXactId = OldestMemberMXactId + MaxOldestSlot + 1;\n }\n \n /*\n@@ -2525,7 +2524,7 @@ GetOldestMultiXactId(void)\n \t\tnextMXact = FirstMultiXactId;\n \n \toldestMXact = nextMXact;\n-\tfor (i = 1; i <= MaxOldestSlot; i++)\n+\tfor (i = 0; i <= MaxOldestSlot; i++)\n \t{\n \t\tMultiXactId thisoldest;\n \ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 2156de187c..76d5bb55d6 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -293,7 +293,7 @@ TwoPhaseShmemInit(void)\n \t\t\t * prepared transaction. 
Currently multixact.c uses that\n \t\t\t * technique.\n \t\t\t */\n-\t\t\tgxacts[i].dummyBackendId = MaxBackends + 1 + i;\n+\t\t\tgxacts[i].dummyBackendId = MaxBackends + i;\n \t\t}\n \t}\n \telse\n@@ -459,14 +459,12 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \tproc->pgprocno = gxact->pgprocno;\n \tSHMQueueElemInit(&(proc->links));\n \tproc->waitStatus = PROC_WAIT_STATUS_OK;\n-\t/* We set up the gxact's VXID as InvalidBackendId/XID */\n \tproc->lxid = (LocalTransactionId) xid;\n \tproc->xid = xid;\n \tAssert(proc->xmin == InvalidTransactionId);\n \tproc->delayChkpt = false;\n \tproc->statusFlags = 0;\n \tproc->pid = 0;\n-\tproc->backendId = InvalidBackendId;\n \tproc->databaseId = databaseid;\n \tproc->roleId = owner;\n \tproc->tempNamespaceId = InvalidOid;\ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 4cc38f0d85..9a8b0686bd 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -2020,7 +2020,7 @@ StartTransaction(void)\n \t * Advertise it in the proc array. We assume assignment of\n \t * localTransactionId is atomic, and the backendId should be set already.\n \t */\n-\tAssert(MyProc->backendId == vxid.backendId);\n+\tAssert(MyProc->pgprocno == vxid.backendId);\n \tMyProc->lxid = vxid.localTransactionId;\n \n \tTRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);\ndiff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c\nindex 4de8400fd0..f450c46f4d 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -3293,7 +3293,7 @@ checkTempNamespaceStatus(Oid namespaceId)\n \t\treturn TEMP_NAMESPACE_NOT_TEMP;\n \n \t/* Is the backend alive? 
*/\n-\tproc = BackendIdGetProc(backendId);\n+\tproc = GetProcIfAlive(backendId);\n \tif (proc == NULL)\n \t\treturn TEMP_NAMESPACE_IDLE;\n \ndiff --git a/src/backend/commands/async.c b/src/backend/commands/async.c\nindex 8557008545..e007c17f51 100644\n--- a/src/backend/commands/async.c\n+++ b/src/backend/commands/async.c\n@@ -272,8 +272,8 @@ typedef struct QueueBackendStatus\n * NotifyQueueTailLock, then NotifyQueueLock, and lastly NotifySLRULock.\n *\n * Each backend uses the backend[] array entry with index equal to its\n- * BackendId (which can range from 1 to MaxBackends). We rely on this to make\n- * SendProcSignal fast.\n+ * BackendId (which can range from 0 to MaxBackends - 1). We rely on this to\n+ * make SendProcSignal fast.\n *\n * The backend[] array entries for actively-listening backends are threaded\n * together using firstListener and the nextListener links, so that we can\n@@ -1122,7 +1122,8 @@ Exec_ListenPreCommit(void)\n \thead = QUEUE_HEAD;\n \tmax = QUEUE_TAIL;\n \tprevListener = InvalidBackendId;\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tif (QUEUE_BACKEND_DBOID(i) == MyDatabaseId)\n \t\t\tmax = QUEUE_POS_MAX(max, QUEUE_BACKEND_POS(i));\n@@ -1134,7 +1135,7 @@ Exec_ListenPreCommit(void)\n \tQUEUE_BACKEND_PID(MyBackendId) = MyProcPid;\n \tQUEUE_BACKEND_DBOID(MyBackendId) = MyDatabaseId;\n \t/* Insert backend into list of listeners at correct position */\n-\tif (prevListener > 0)\n+\tif (prevListener != InvalidBackendId)\n \t{\n \t\tQUEUE_NEXT_LISTENER(MyBackendId) = QUEUE_NEXT_LISTENER(prevListener);\n \t\tQUEUE_NEXT_LISTENER(prevListener) = MyBackendId;\n@@ -1281,7 +1282,8 @@ asyncQueueUnregister(void)\n \t\tQUEUE_FIRST_LISTENER = QUEUE_NEXT_LISTENER(MyBackendId);\n \telse\n \t{\n-\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\t\tfor (BackendId i = 
QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t\t i = QUEUE_NEXT_LISTENER(i))\n \t\t{\n \t\t\tif (QUEUE_NEXT_LISTENER(i) == MyBackendId)\n \t\t\t{\n@@ -1590,7 +1592,8 @@ asyncQueueFillWarning(void)\n \t\tQueuePosition min = QUEUE_HEAD;\n \t\tint32\t\tminPid = InvalidPid;\n \n-\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId;\n+\t\t\t i = QUEUE_NEXT_LISTENER(i))\n \t\t{\n \t\t\tAssert(QUEUE_BACKEND_PID(i) != InvalidPid);\n \t\t\tmin = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));\n@@ -1646,7 +1649,8 @@ SignalBackends(void)\n \tcount = 0;\n \n \tLWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tint32\t\tpid = QUEUE_BACKEND_PID(i);\n \t\tQueuePosition pos;\n@@ -2183,7 +2187,8 @@ asyncQueueAdvanceTail(void)\n \t */\n \tLWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);\n \tmin = QUEUE_HEAD;\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (int i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tAssert(QUEUE_BACKEND_PID(i) != InvalidPid);\n \t\tmin = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex c14ca27c5e..fa70eed559 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -463,7 +463,8 @@ WaitForOlderSnapshots(TransactionId limitXmin, bool progress)\n \t\t\t/* If requested, publish who we're going to wait for. 
*/\n \t\t\tif (progress)\n \t\t\t{\n-\t\t\t\tPGPROC\t *holder = BackendIdGetProc(old_snapshots[i].backendId);\n+\t\t\t\tPGPROC\t *holder =\n+\t\t\t\t\t&ProcGlobal->allProcs[old_snapshots[i].backendId];\n \n \t\t\t\tif (holder)\n \t\t\t\t\tpgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\ndiff --git a/src/backend/postmaster/auxprocess.c b/src/backend/postmaster/auxprocess.c\nindex 7452f908b2..4c7784b036 100644\n--- a/src/backend/postmaster/auxprocess.c\n+++ b/src/backend/postmaster/auxprocess.c\n@@ -116,7 +116,7 @@ AuxiliaryProcessMain(AuxProcType auxtype)\n \t * This will need rethinking if we ever want more than one of a particular\n \t * auxiliary process type.\n \t */\n-\tProcSignalInit(MaxBackends + MyAuxProcType + 1);\n+\tProcSignalInit();\n \n \t/*\n \t * Auxiliary processes don't run transactions, but they may need a\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex b7d0fbaefd..b1cd23f661 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -54,7 +54,7 @@\n #include \"postmaster/postmaster.h\"\n #include \"replication/slot.h\"\n #include \"replication/walsender.h\"\n-#include \"storage/backendid.h\"\n+//#include \"storage/backendid.h\"\n #include \"storage/dsm.h\"\n #include \"storage/fd.h\"\n #include \"storage/ipc.h\"\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex bd3c7a47fe..100c0dae8c 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2592,7 +2592,7 @@ ProcArrayInstallImportedXmin(TransactionId xmin,\n \t\t\tcontinue;\n \n \t\t/* We are only interested in the specific virtual transaction. 
*/\n-\t\tif (proc->backendId != sourcevxid->backendId)\n+\t\tif (proc->pgprocno != sourcevxid->backendId)\n \t\t\tcontinue;\n \t\tif (proc->lxid != sourcevxid->localTransactionId)\n \t\t\tcontinue;\n@@ -3454,7 +3454,7 @@ SignalVirtualTransaction(VirtualTransactionId vxid, ProcSignalReason sigmode,\n \t\t\t\t * Kill the pid if it's still here. If not, that's what we\n \t\t\t\t * wanted so ignore any errors.\n \t\t\t\t */\n-\t\t\t\t(void) SendProcSignal(pid, sigmode, vxid.backendId);\n+\t\t\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n \t\t\t}\n \t\t\tbreak;\n \t\t}\n@@ -3604,11 +3604,8 @@ CancelDBBackends(Oid databaseid, ProcSignalReason sigmode, bool conflictPending)\n \n \t\tif (databaseid == InvalidOid || proc->databaseId == databaseid)\n \t\t{\n-\t\t\tVirtualTransactionId procvxid;\n \t\t\tpid_t\t\tpid;\n \n-\t\t\tGET_VXID_FROM_PGPROC(procvxid, *proc);\n-\n \t\t\tproc->recoveryConflictPending = conflictPending;\n \t\t\tpid = proc->pid;\n \t\t\tif (pid != 0)\n@@ -3617,7 +3614,7 @@ CancelDBBackends(Oid databaseid, ProcSignalReason sigmode, bool conflictPending)\n \t\t\t\t * Kill the pid if it's still here. If not, that's what we\n \t\t\t\t * wanted so ignore any errors.\n \t\t\t\t */\n-\t\t\t\t(void) SendProcSignal(pid, sigmode, procvxid.backendId);\n+\t\t\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n \t\t\t}\n \t\t}\n \t}\ndiff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c\nindex defb75aa26..ffe6939780 100644\n--- a/src/backend/storage/ipc/procsignal.c\n+++ b/src/backend/storage/ipc/procsignal.c\n@@ -81,9 +81,8 @@ typedef struct\n } ProcSignalHeader;\n \n /*\n- * We reserve a slot for each possible BackendId, plus one for each\n- * possible auxiliary process type. (This scheme assumes there is not\n- * more than one of any auxiliary process type at a time.)\n+ * We reserve a slot for PGPROCs for backends and auxiliary processes. 
Not all\n+ * of auxiliary processes use this but allocate for them for safety.\n */\n #define NumProcSignalSlots\t(MaxBackends + NUM_AUXPROCTYPES)\n \n@@ -153,24 +152,19 @@ ProcSignalShmemInit(void)\n /*\n * ProcSignalInit\n *\t\tRegister the current process in the procsignal array\n- *\n- * The passed index should be my BackendId if the process has one,\n- * or MaxBackends + aux process type if not.\n */\n void\n-ProcSignalInit(int pss_idx)\n+ProcSignalInit(void)\n {\n \tProcSignalSlot *slot;\n \tuint64\t\tbarrier_generation;\n \n-\tAssert(pss_idx >= 1 && pss_idx <= NumProcSignalSlots);\n-\n-\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n+\tslot = &ProcSignal->psh_slot[MyBackendId];\n \n \t/* sanity check */\n \tif (slot->pss_pid != 0)\n \t\telog(LOG, \"process %d taking over ProcSignal slot %d, but it's not empty\",\n-\t\t\t MyProcPid, pss_idx);\n+\t\t\t MyProcPid, MyBackendId);\n \n \t/* Clear out any leftover signal reasons */\n \tMemSet(slot->pss_signalFlags, 0, NUM_PROCSIGNALS * sizeof(sig_atomic_t));\n@@ -199,7 +193,7 @@ ProcSignalInit(int pss_idx)\n \tMyProcSignalSlot = slot;\n \n \t/* Set up to release the slot on process exit */\n-\ton_shmem_exit(CleanupProcSignalState, Int32GetDatum(pss_idx));\n+\ton_shmem_exit(CleanupProcSignalState, (Datum) 0);\n }\n \n /*\n@@ -211,10 +205,9 @@ ProcSignalInit(int pss_idx)\n static void\n CleanupProcSignalState(int status, Datum arg)\n {\n-\tint\t\t\tpss_idx = DatumGetInt32(arg);\n \tProcSignalSlot *slot;\n \n-\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n+\tslot = &ProcSignal->psh_slot[MyBackendId];\n \tAssert(slot == MyProcSignalSlot);\n \n \t/*\n@@ -232,7 +225,7 @@ CleanupProcSignalState(int status, Datum arg)\n \t\t * infinite loop trying to exit\n \t\t */\n \t\telog(LOG, \"process %d releasing ProcSignal slot %d, but it contains %d\",\n-\t\t\t MyProcPid, pss_idx, (int) slot->pss_pid);\n+\t\t\t MyProcPid, MyProcPid, (int) slot->pss_pid);\n \t\treturn;\t\t\t\t\t/* XXX better to zero the slot anyway? 
*/\n \t}\n \n@@ -264,7 +257,7 @@ SendProcSignal(pid_t pid, ProcSignalReason reason, BackendId backendId)\n \n \tif (backendId != InvalidBackendId)\n \t{\n-\t\tslot = &ProcSignal->psh_slot[backendId - 1];\n+\t\tslot = &ProcSignal->psh_slot[backendId];\n \n \t\t/*\n \t\t * Note: Since there's no locking, it's possible that the target\ndiff --git a/src/backend/storage/ipc/sinvaladt.c b/src/backend/storage/ipc/sinvaladt.c\nindex 946bd8e3cb..e5cfba5f96 100644\n--- a/src/backend/storage/ipc/sinvaladt.c\n+++ b/src/backend/storage/ipc/sinvaladt.c\n@@ -19,7 +19,6 @@\n \n #include \"access/transam.h\"\n #include \"miscadmin.h\"\n-#include \"storage/backendid.h\"\n #include \"storage/ipc.h\"\n #include \"storage/proc.h\"\n #include \"storage/procsignal.h\"\n@@ -157,7 +156,7 @@ typedef struct ProcState\n \t/*\n \t * Next LocalTransactionId to use for each idle backend slot. We keep\n \t * this here because it is indexed by BackendId and it is convenient to\n-\t * copy the value to and from local memory when MyBackendId is set. It's\n+\t * copy the value to and from local memory when MybackendId is set. 
It's\n \t * meaningless in an active ProcState entry.\n \t */\n \tLocalTransactionId nextLXID;\n@@ -195,6 +194,7 @@ static LocalTransactionId nextLocalTransactionId;\n \n static void CleanupInvalidationState(int status, Datum arg);\n \n+static int siindex;\n \n /*\n * SInvalShmemSize --- return shared-memory space needed\n@@ -290,7 +290,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t\t\t/*\n \t\t\t * out of procState slots: MaxBackends exceeded -- report normally\n \t\t\t */\n-\t\t\tMyBackendId = InvalidBackendId;\n+\t\t\tsiindex = -1;\n \t\t\tLWLockRelease(SInvalWriteLock);\n \t\t\tereport(FATAL,\n \t\t\t\t\t(errcode(ERRCODE_TOO_MANY_CONNECTIONS),\n@@ -298,10 +298,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t\t}\n \t}\n \n-\tMyBackendId = (stateP - &segP->procState[0]) + 1;\n-\n-\t/* Advertise assigned backend ID in MyProc */\n-\tMyProc->backendId = MyBackendId;\n+\tsiindex = (stateP - &segP->procState[0]);\n \n \t/* Fetch next local transaction ID into local memory */\n \tnextLocalTransactionId = stateP->nextLXID;\n@@ -320,7 +317,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t/* register exit routine to mark my entry inactive at exit */\n \ton_shmem_exit(CleanupInvalidationState, PointerGetDatum(segP));\n \n-\telog(DEBUG4, \"my backend ID is %d\", MyBackendId);\n+\telog(DEBUG4, \"my SI slot index is %d\", siindex);\n }\n \n /*\n@@ -342,7 +339,7 @@ CleanupInvalidationState(int status, Datum arg)\n \n \tLWLockAcquire(SInvalWriteLock, LW_EXCLUSIVE);\n \n-\tstateP = &segP->procState[MyBackendId - 1];\n+\tstateP = &segP->procState[siindex];\n \n \t/* Update next local transaction ID for next holder of this backendID */\n \tstateP->nextLXID = nextLocalTransactionId;\n@@ -365,34 +362,6 @@ CleanupInvalidationState(int status, Datum arg)\n \tLWLockRelease(SInvalWriteLock);\n }\n \n-/*\n- * BackendIdGetProc\n- *\t\tGet the PGPROC structure for a backend, given the backend ID.\n- *\t\tThe result may be out of date arbitrarily quickly, so the caller\n- *\t\tmust be careful 
about how this information is used. NULL is\n- *\t\treturned if the backend is not active.\n- */\n-PGPROC *\n-BackendIdGetProc(int backendID)\n-{\n-\tPGPROC\t *result = NULL;\n-\tSISeg\t *segP = shmInvalBuffer;\n-\n-\t/* Need to lock out additions/removals of backends */\n-\tLWLockAcquire(SInvalWriteLock, LW_SHARED);\n-\n-\tif (backendID > 0 && backendID <= segP->lastBackend)\n-\t{\n-\t\tProcState *stateP = &segP->procState[backendID - 1];\n-\n-\t\tresult = stateP->proc;\n-\t}\n-\n-\tLWLockRelease(SInvalWriteLock);\n-\n-\treturn result;\n-}\n-\n /*\n * BackendIdGetTransactionIds\n *\t\tGet the xid and xmin of the backend. The result may be out of date\n@@ -541,7 +510,7 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)\n \tint\t\t\tn;\n \n \tsegP = shmInvalBuffer;\n-\tstateP = &segP->procState[MyBackendId - 1];\n+\tstateP = &segP->procState[siindex];\n \n \t/*\n \t * Before starting to take locks, do a quick, unlocked test to see whether\n@@ -730,13 +699,13 @@ SICleanupQueue(bool callerHasWriteLock, int minFree)\n \tif (needSig)\n \t{\n \t\tpid_t\t\this_pid = needSig->procPid;\n-\t\tBackendId\this_backendId = (needSig - &segP->procState[0]) + 1;\n+\t\tint\t\t\this_pgprocno = needSig->proc->pgprocno;\n \n \t\tneedSig->signaled = true;\n \t\tLWLockRelease(SInvalReadLock);\n \t\tLWLockRelease(SInvalWriteLock);\n \t\telog(DEBUG4, \"sending sinval catchup signal to PID %d\", (int) his_pid);\n-\t\tSendProcSignal(his_pid, PROCSIG_CATCHUP_INTERRUPT, his_backendId);\n+\t\tSendProcSignal(his_pid, PROCSIG_CATCHUP_INTERRUPT, his_pgprocno);\n \t\tif (callerHasWriteLock)\n \t\t\tLWLockAcquire(SInvalWriteLock, LW_EXCLUSIVE);\n \t}\ndiff --git a/src/backend/storage/ipc/standby.c b/src/backend/storage/ipc/standby.c\nindex b17326bc20..032f0428aa 100644\n--- a/src/backend/storage/ipc/standby.c\n+++ b/src/backend/storage/ipc/standby.c\n@@ -274,7 +274,7 @@ LogRecoveryConflict(ProcSignalReason reason, TimestampTz wait_start,\n \t\tvxids = wait_list;\n \t\twhile 
(VirtualTransactionIdIsValid(*vxids))\n \t\t{\n-\t\t\tPGPROC\t *proc = BackendIdGetProc(vxids->backendId);\n+\t\t\tPGPROC\t *proc = &ProcGlobal->allProcs[vxids->backendId];\n \n \t\t\t/* proc can be NULL if the target backend is not active */\n \t\t\tif (proc)\ndiff --git a/src/backend/storage/lmgr/lmgr.c b/src/backend/storage/lmgr/lmgr.c\nindex cdf2266d6d..f0208531e0 100644\n--- a/src/backend/storage/lmgr/lmgr.c\n+++ b/src/backend/storage/lmgr/lmgr.c\n@@ -918,7 +918,8 @@ WaitForLockersMultiple(List *locktags, LOCKMODE lockmode, bool progress)\n \t\t\t/* If requested, publish who we're going to wait for. */\n \t\t\tif (progress)\n \t\t\t{\n-\t\t\t\tPGPROC\t *holder = BackendIdGetProc(lockholders->backendId);\n+\t\t\t\tPGPROC\t *holder =\n+\t\t\t\t\t&ProcGlobal->allProcs[lockholders->backendId];\n \n \t\t\t\tif (holder)\n \t\t\t\t\tpgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\ndiff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c\nindex 364654e106..3edb7d6fd5 100644\n--- a/src/backend/storage/lmgr/lock.c\n+++ b/src/backend/storage/lmgr/lock.c\n@@ -3695,7 +3695,7 @@ GetLockStatusData(void)\n \t\t\t\t\t\t\t\t proc->fpRelId[f]);\n \t\t\tinstance->holdMask = lockbits << FAST_PATH_LOCKNUMBER_OFFSET;\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\t\tinstance->backend = proc->backendId;\n+\t\t\tinstance->backend = proc->pgprocno;\n \t\t\tinstance->lxid = proc->lxid;\n \t\t\tinstance->pid = proc->pid;\n \t\t\tinstance->leaderPid = proc->pid;\n@@ -3722,14 +3722,14 @@ GetLockStatusData(void)\n \t\t\t\t\trepalloc(data->locks, sizeof(LockInstanceData) * els);\n \t\t\t}\n \n-\t\t\tvxid.backendId = proc->backendId;\n+\t\t\tvxid.backendId = proc->pgprocno;\n \t\t\tvxid.localTransactionId = proc->fpLocalTransactionId;\n \n \t\t\tinstance = &data->locks[el];\n \t\t\tSET_LOCKTAG_VIRTUALTRANSACTION(instance->locktag, vxid);\n \t\t\tinstance->holdMask = LOCKBIT_ON(ExclusiveLock);\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\t\tinstance->backend = 
proc->backendId;\n+\t\t\tinstance->backend = proc->pgprocno;\n \t\t\tinstance->lxid = proc->lxid;\n \t\t\tinstance->pid = proc->pid;\n \t\t\tinstance->leaderPid = proc->pid;\n@@ -3782,7 +3782,7 @@ GetLockStatusData(void)\n \t\t\tinstance->waitLockMode = proc->waitLockMode;\n \t\telse\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\tinstance->backend = proc->backendId;\n+\t\tinstance->backend = proc->pgprocno;\n \t\tinstance->lxid = proc->lxid;\n \t\tinstance->pid = proc->pid;\n \t\tinstance->leaderPid = proclock->groupLeader->pid;\n@@ -3961,7 +3961,7 @@ GetSingleProcBlockerStatusData(PGPROC *blocked_proc, BlockedProcsData *data)\n \t\t\tinstance->waitLockMode = proc->waitLockMode;\n \t\telse\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\tinstance->backend = proc->backendId;\n+\t\tinstance->backend = proc->pgprocno;\n \t\tinstance->lxid = proc->lxid;\n \t\tinstance->pid = proc->pid;\n \t\tinstance->leaderPid = proclock->groupLeader->pid;\n@@ -4475,7 +4475,7 @@ VirtualXactLockTableInsert(VirtualTransactionId vxid)\n \n \tLWLockAcquire(&MyProc->fpInfoLock, LW_EXCLUSIVE);\n \n-\tAssert(MyProc->backendId == vxid.backendId);\n+\tAssert(MyProc->pgprocno == vxid.backendId);\n \tAssert(MyProc->fpLocalTransactionId == InvalidLocalTransactionId);\n \tAssert(MyProc->fpVXIDLock == false);\n \n@@ -4497,8 +4497,6 @@ VirtualXactLockTableCleanup(void)\n \tbool\t\tfastpath;\n \tLocalTransactionId lxid;\n \n-\tAssert(MyProc->backendId != InvalidBackendId);\n-\n \t/*\n \t * Clean up shared memory state.\n \t */\n@@ -4520,7 +4518,7 @@ VirtualXactLockTableCleanup(void)\n \t\tVirtualTransactionId vxid;\n \t\tLOCKTAG\t\tlocktag;\n \n-\t\tvxid.backendId = MyBackendId;\n+\t\tvxid.backendId = MyProc->pgprocno;\n \t\tvxid.localTransactionId = lxid;\n \t\tSET_LOCKTAG_VIRTUALTRANSACTION(locktag, vxid);\n \n@@ -4571,7 +4569,7 @@ VirtualXactLock(VirtualTransactionId vxid, bool wait)\n \t * relevant lxid is no longer running here, that's enough to prove that\n \t * it's no longer running anywhere.\n 
\t */\n-\tproc = BackendIdGetProc(vxid.backendId);\n+\tproc = &ProcGlobal->allProcs[vxid.backendId];\n \tif (proc == NULL)\n \t\treturn true;\n \n@@ -4583,7 +4581,7 @@ VirtualXactLock(VirtualTransactionId vxid, bool wait)\n \tLWLockAcquire(&proc->fpInfoLock, LW_EXCLUSIVE);\n \n \t/* If the transaction has ended, our work here is done. */\n-\tif (proc->backendId != vxid.backendId\n+\tif (proc->pgprocno != vxid.backendId\n \t\t|| proc->fpLocalTransactionId != vxid.localTransactionId)\n \t{\n \t\tLWLockRelease(&proc->fpInfoLock);\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 78e05976a4..0ee6627e20 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -386,6 +386,8 @@ InitProcess(void)\n \tif (IsUnderPostmaster && !IsAutoVacuumLauncherProcess())\n \t\tMarkPostmasterChildActive();\n \n+\tMyBackendId = MyProc->pgprocno;\n+\n \t/*\n \t * Initialize all fields of MyProc, except for those previously\n \t * initialized by InitProcGlobal.\n@@ -398,14 +400,13 @@ InitProcess(void)\n \tMyProc->xid = InvalidTransactionId;\n \tMyProc->xmin = InvalidTransactionId;\n \tMyProc->pid = MyProcPid;\n-\t/* backendId, databaseId and roleId will be filled in later */\n-\tMyProc->backendId = InvalidBackendId;\n+\t/* databaseId and roleId will be filled in later */\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n \tMyProc->delayChkpt = false;\n-\tMyProc->statusFlags = 0;\n+\tMyProc->statusFlags = PROC_IS_ACTIVE;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n \t\tMyProc->statusFlags |= PROC_IS_AUTOVACUUM;\n@@ -570,6 +571,7 @@ InitAuxiliaryProcess(void)\n \t((volatile PGPROC *) auxproc)->pid = MyProcPid;\n \n \tMyProc = auxproc;\n+\tMyBackendId = MyProc->pgprocno;\n \n \tSpinLockRelease(ProcStructLock);\n \n@@ -584,13 +586,12 @@ 
InitAuxiliaryProcess(void)\n \tMyProc->fpLocalTransactionId = InvalidLocalTransactionId;\n \tMyProc->xid = InvalidTransactionId;\n \tMyProc->xmin = InvalidTransactionId;\n-\tMyProc->backendId = InvalidBackendId;\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n \tMyProc->delayChkpt = false;\n-\tMyProc->statusFlags = 0;\n+\tMyProc->statusFlags = PROC_IS_ACTIVE;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\n \tMyProc->waitLock = NULL;\n@@ -913,6 +914,9 @@ ProcKill(int code, Datum arg)\n \tMyProc = NULL;\n \tDisownLatch(&proc->procLatch);\n \n+\t/* mark this process dead */\n+\tproc->statusFlags &= ~PROC_IS_ACTIVE;\n+\n \tprocgloballist = proc->procgloballist;\n \tSpinLockAcquire(ProcStructLock);\n \n@@ -985,6 +989,7 @@ AuxiliaryProcKill(int code, Datum arg)\n \n \t/* Mark auxiliary proc no longer in use */\n \tproc->pid = 0;\n+\tproc->statusFlags &= ~PROC_IS_ACTIVE;\n \n \t/* Update shared estimate of spins_per_delay */\n \tProcGlobal->spins_per_delay = update_spins_per_delay(ProcGlobal->spins_per_delay);\n@@ -2020,3 +2025,17 @@ BecomeLockGroupMember(PGPROC *leader, int pid)\n \n \treturn ok;\n }\n+\n+/*\n+ * Return PGPROC if it is alive.\n+ */\n+PGPROC *\n+GetProcIfAlive(BackendId backend)\n+{\n+\tPGPROC *proc = &ProcGlobal->allProcs[backend];\n+\n+\tif (proc->statusFlags & PROC_IS_ACTIVE)\n+\t\treturn proc;\n+\n+\treturn NULL;\n+}\ndiff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\nindex 7229598822..0490a3a8b2 100644\n--- a/src/backend/utils/activity/backend_status.c\n+++ b/src/backend/utils/activity/backend_status.c\n@@ -249,26 +249,8 @@ void\n pgstat_beinit(void)\n {\n \t/* Initialize MyBEEntry */\n-\tif (MyBackendId != InvalidBackendId)\n-\t{\n-\t\tAssert(MyBackendId >= 1 && MyBackendId <= MaxBackends);\n-\t\tMyBEEntry = &BackendStatusArray[MyBackendId - 
1];\n-\t}\n-\telse\n-\t{\n-\t\t/* Must be an auxiliary process */\n-\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n-\n-\t\t/*\n-\t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n-\t\t * have a BackendId, the slot is statically allocated based on the\n-\t\t * auxiliary process type (MyAuxProcType). Backends use slots indexed\n-\t\t * in the range from 1 to MaxBackends (inclusive), so we use\n-\t\t * MaxBackends + AuxBackendType + 1 as the index of the slot for an\n-\t\t * auxiliary process.\n-\t\t */\n-\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n-\t}\n+\tAssert(MyBackendId >= 0 && MyBackendId < NumBackendStatSlots);\n+\tMyBEEntry = &BackendStatusArray[MyBackendId];\n \n \t/* Set up a process-exit hook to clean up */\n \ton_shmem_exit(pgstat_beshutdown_hook, 0);\ndiff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\nindex d5a7fb13f3..16acb0c335 100644\n--- a/src/backend/utils/adt/dbsize.c\n+++ b/src/backend/utils/adt/dbsize.c\n@@ -22,6 +22,7 @@\n #include \"commands/dbcommands.h\"\n #include \"commands/tablespace.h\"\n #include \"miscadmin.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/fd.h\"\n #include \"utils/acl.h\"\n #include \"utils/builtins.h\"\n@@ -292,7 +293,7 @@ pg_tablespace_size_name(PG_FUNCTION_ARGS)\n * is no check here or at the call sites for that.\n */\n static int64\n-calculate_relation_size(RelFileNode *rfn, BackendId backend, ForkNumber forknum)\n+calculate_relation_size(RelFileNode *rfn, int backend, ForkNumber forknum)\n {\n \tint64\t\ttotalsize = 0;\n \tchar\t *relationpath;\n@@ -925,7 +926,7 @@ pg_relation_filepath(PG_FUNCTION_ARGS)\n \tHeapTuple\ttuple;\n \tForm_pg_class relform;\n \tRelFileNode rnode;\n-\tBackendId\tbackend;\n+\tint\t\t\tbackend;\n \tchar\t *path;\n \n \ttuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\ndiff --git a/src/backend/utils/adt/mcxtfuncs.c b/src/backend/utils/adt/mcxtfuncs.c\nindex 0d52613bc3..fca87448cf 100644\n--- 
a/src/backend/utils/adt/mcxtfuncs.c\n+++ b/src/backend/utils/adt/mcxtfuncs.c\n@@ -205,7 +205,7 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n \t\tPG_RETURN_BOOL(false);\n \t}\n \n-\tif (SendProcSignal(pid, PROCSIG_LOG_MEMORY_CONTEXT, proc->backendId) < 0)\n+\tif (SendProcSignal(pid, PROCSIG_LOG_MEMORY_CONTEXT, proc->pgprocno) < 0)\n \t{\n \t\t/* Again, just a warning to allow loops */\n \t\tereport(WARNING,\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex 13d9994af3..00036ec9c0 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -73,6 +73,7 @@\n #include \"optimizer/optimizer.h\"\n #include \"rewrite/rewriteDefine.h\"\n #include \"rewrite/rowsecurity.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/lmgr.h\"\n #include \"storage/smgr.h\"\n #include \"utils/array.h\"\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex f33729513a..0f927751d7 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -2685,18 +2685,18 @@ log_line_prefix(StringInfo buf, ErrorData *edata)\n \t\t\t\tbreak;\n \t\t\tcase 'v':\n \t\t\t\t/* keep VXID format in sync with lockfuncs.c */\n-\t\t\t\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n+\t\t\t\tif (MyProc != NULL && MyProc->pgprocno != InvalidBackendId)\n \t\t\t\t{\n \t\t\t\t\tif (padding != 0)\n \t\t\t\t\t{\n \t\t\t\t\t\tchar\t\tstrfbuf[128];\n \n \t\t\t\t\t\tsnprintf(strfbuf, sizeof(strfbuf) - 1, \"%d/%u\",\n-\t\t\t\t\t\t\t\t MyProc->backendId, MyProc->lxid);\n+\t\t\t\t\t\t\t\t MyProc->pgprocno, MyProc->lxid);\n \t\t\t\t\t\tappendStringInfo(buf, \"%*s\", padding, strfbuf);\n \t\t\t\t\t}\n \t\t\t\t\telse\n-\t\t\t\t\t\tappendStringInfo(buf, \"%d/%u\", MyProc->backendId, MyProc->lxid);\n+\t\t\t\t\t\tappendStringInfo(buf, \"%d/%u\", MyProc->pgprocno, MyProc->lxid);\n \t\t\t\t}\n \t\t\t\telse if (padding != 0)\n \t\t\t\t\tappendStringInfoSpaces(buf,\n@@ -2860,8 
+2860,8 @@ write_csvlog(ErrorData *edata)\n \n \t/* Virtual transaction id */\n \t/* keep VXID format in sync with lockfuncs.c */\n-\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n-\t\tappendStringInfo(&buf, \"%d/%u\", MyProc->backendId, MyProc->lxid);\n+\tif (MyProc != NULL && MyProc->pgprocno != InvalidBackendId)\n+\t\tappendStringInfo(&buf, \"%d/%u\", MyProc->pgprocno, MyProc->lxid);\n \tappendStringInfoChar(&buf, ',');\n \n \t/* Transaction id */\ndiff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\nindex 381d9e548d..4ba1914472 100644\n--- a/src/backend/utils/init/globals.c\n+++ b/src/backend/utils/init/globals.c\n@@ -81,8 +81,6 @@ char\t\tpostgres_exec_path[MAXPGPATH];\t/* full path to backend */\n /* note: currently this is not valid in backend processes */\n #endif\n \n-BackendId\tMyBackendId = InvalidBackendId;\n-\n BackendId\tParallelLeaderBackendId = InvalidBackendId;\n \n Oid\t\t\tMyDatabaseId = InvalidOid;\ndiff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\nindex 78bc64671e..7f5b8e12ee 100644\n--- a/src/backend/utils/init/postinit.c\n+++ b/src/backend/utils/init/postinit.c\n@@ -592,15 +592,10 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n \t *\n \t * Sets up MyBackendId, a unique backend identifier.\n \t */\n-\tMyBackendId = InvalidBackendId;\n-\n \tSharedInvalBackendInit(false);\n \n-\tif (MyBackendId > MaxBackends || MyBackendId <= 0)\n-\t\telog(FATAL, \"bad backend ID: %d\", MyBackendId);\n-\n \t/* Now that we have a BackendId, we can participate in ProcSignal */\n-\tProcSignalInit(MyBackendId);\n+\tProcSignalInit();\n \n \t/*\n \t * Also set up timeout handlers needed for backend operation. 
We need\ndiff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\nindex 5001efdf7a..df8476d177 100644\n--- a/src/backend/utils/time/snapmgr.c\n+++ b/src/backend/utils/time/snapmgr.c\n@@ -1173,7 +1173,7 @@ ExportSnapshot(Snapshot snapshot)\n \t * inside the transaction from 1.\n \t */\n \tsnprintf(path, sizeof(path), SNAPSHOT_EXPORT_DIR \"/%08X-%08X-%d\",\n-\t\t\t MyProc->backendId, MyProc->lxid, list_length(exportedSnapshots) + 1);\n+\t\t\t MyProc->pgprocno, MyProc->lxid, list_length(exportedSnapshots) + 1);\n \n \t/*\n \t * Copy the snapshot into TopTransactionContext, add it to the\n@@ -1200,7 +1200,7 @@ ExportSnapshot(Snapshot snapshot)\n \t */\n \tinitStringInfo(&buf);\n \n-\tappendStringInfo(&buf, \"vxid:%d/%u\\n\", MyProc->backendId, MyProc->lxid);\n+\tappendStringInfo(&buf, \"vxid:%d/%u\\n\", MyProc->pgprocno, MyProc->lxid);\n \tappendStringInfo(&buf, \"pid:%d\\n\", MyProcPid);\n \tappendStringInfo(&buf, \"dbid:%u\\n\", MyDatabaseId);\n \tappendStringInfo(&buf, \"iso:%d\\n\", XactIsoLevel);\ndiff --git a/src/include/storage/backendid.h b/src/include/storage/backendid.h\nindex 7aa3936899..3772e2b4a2 100644\n--- a/src/include/storage/backendid.h\n+++ b/src/include/storage/backendid.h\n@@ -20,7 +20,8 @@\n */\n typedef int BackendId;\t\t\t/* unique currently active backend identifier */\n \n-#define InvalidBackendId\t\t(-1)\n+#define INVALID_PGPROCNO\t\tPG_INT32_MAX\n+#define InvalidBackendId\t\tINVALID_PGPROCNO\n \n extern PGDLLIMPORT BackendId MyBackendId;\t/* backend id of this backend */\n \ndiff --git a/src/include/storage/lock.h b/src/include/storage/lock.h\nindex 9b2a421c32..13b9352704 100644\n--- a/src/include/storage/lock.h\n+++ b/src/include/storage/lock.h\n@@ -62,7 +62,7 @@ extern bool Debug_deadlocks;\n */\n typedef struct\n {\n-\tBackendId\tbackendId;\t\t/* backendId from PGPROC */\n+\tBackendId\tbackendId;\t\t/* pgprocno from PGPROC */\n \tLocalTransactionId localTransactionId;\t/* lxid from PGPROC */\n } 
VirtualTransactionId;\n \n@@ -79,7 +79,7 @@ typedef struct\n \t((vxid).backendId = InvalidBackendId, \\\n \t (vxid).localTransactionId = InvalidLocalTransactionId)\n #define GET_VXID_FROM_PGPROC(vxid, proc) \\\n-\t((vxid).backendId = (proc).backendId, \\\n+\t((vxid).backendId = (proc).pgprocno, \\\n \t (vxid).localTransactionId = (proc).lxid)\n \n /* MAX_LOCKMODES cannot be larger than the # of bits in LOCKMASK */\n@@ -445,7 +445,7 @@ typedef struct LockInstanceData\n \tLOCKTAG\t\tlocktag;\t\t/* tag for locked object */\n \tLOCKMASK\tholdMask;\t\t/* locks held by this PGPROC */\n \tLOCKMODE\twaitLockMode;\t/* lock awaited by this PGPROC, if any */\n-\tBackendId\tbackend;\t\t/* backend ID of this PGPROC */\n+\tBackendId\tbackend;\t\t/* pgprocno of this PGPROC */\n \tLocalTransactionId lxid;\t/* local transaction ID of this PGPROC */\n \tTimestampTz waitStart;\t\t/* time at which this PGPROC started waiting\n \t\t\t\t\t\t\t\t * for lock */\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex be67d8a861..2f3d9cfb17 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -17,6 +17,7 @@\n #include \"access/clog.h\"\n #include \"access/xlogdefs.h\"\n #include \"lib/ilist.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/latch.h\"\n #include \"storage/lock.h\"\n #include \"storage/pg_sema.h\"\n@@ -60,6 +61,7 @@ struct XidCache\n #define\t\tPROC_VACUUM_FOR_WRAPAROUND\t0x08\t/* set by autovac only */\n #define\t\tPROC_IN_LOGICAL_DECODING\t0x10\t/* currently doing logical\n \t\t\t\t\t\t\t\t\t\t\t\t * decoding outside xact */\n+#define\t\tPROC_IS_ACTIVE\t\t\t\t0x20\t/* This process is active */\n \n /* flags reset at EOXact */\n #define\t\tPROC_VACUUM_STATE_MASK \\\n@@ -73,11 +75,7 @@ struct XidCache\n */\n #define\t\tFP_LOCK_SLOTS_PER_BACKEND 16\n \n-/*\n- * An invalid pgprocno. Must be larger than the maximum number of PGPROC\n- * structures we could possibly have. 
See comments for MAX_BACKENDS.\n- */\n-#define INVALID_PGPROCNO\t\tPG_INT32_MAX\n+BackendId MyBackendId;\n \n typedef enum\n {\n@@ -150,7 +148,6 @@ struct PGPROC\n \tint\t\t\tpgprocno;\n \n \t/* These fields are zero while a backend is still starting up: */\n-\tBackendId\tbackendId;\t\t/* This backend's backend ID (if assigned) */\n \tOid\t\t\tdatabaseId;\t\t/* OID of database this backend is using */\n \tOid\t\t\troleId;\t\t\t/* OID of role using this backend */\n \n@@ -418,4 +415,6 @@ extern PGPROC *AuxiliaryPidGetProc(int pid);\n extern void BecomeLockGroupLeader(void);\n extern bool BecomeLockGroupMember(PGPROC *leader, int pid);\n \n+extern PGPROC *GetProcIfAlive(BackendId backend);\n+\n #endif\t\t\t\t\t\t\t/* _PROC_H_ */\ndiff --git a/src/include/storage/procsignal.h b/src/include/storage/procsignal.h\nindex eec186be2e..34a7fff271 100644\n--- a/src/include/storage/procsignal.h\n+++ b/src/include/storage/procsignal.h\n@@ -63,9 +63,9 @@ typedef enum\n extern Size ProcSignalShmemSize(void);\n extern void ProcSignalShmemInit(void);\n \n-extern void ProcSignalInit(int pss_idx);\n+extern void ProcSignalInit(void);\n extern int\tSendProcSignal(pid_t pid, ProcSignalReason reason,\n-\t\t\t\t\t\t BackendId backendId);\n+\t\t\t\t\t\t int\tpgprocno);\n \n extern uint64 EmitProcSignalBarrier(ProcSignalBarrierType type);\n extern void WaitForProcSignalBarrier(uint64 generation);\ndiff --git a/src/include/storage/smgr.h b/src/include/storage/smgr.h\nindex a6fbf7b6a6..7c1063f9f9 100644\n--- a/src/include/storage/smgr.h\n+++ b/src/include/storage/smgr.h\n@@ -78,7 +78,7 @@ typedef SMgrRelationData *SMgrRelation;\n \tRelFileNodeBackendIsTemp((smgr)->smgr_rnode)\n \n extern void smgrinit(void);\n-extern SMgrRelation smgropen(RelFileNode rnode, BackendId backend);\n+extern SMgrRelation smgropen(RelFileNode rnode, int backend);\n extern bool smgrexists(SMgrRelation reln, ForkNumber forknum);\n extern void smgrsetowner(SMgrRelation *owner, SMgrRelation reln);\n extern void 
smgrclearowner(SMgrRelation *owner, SMgrRelation reln);\ndiff --git a/src/include/utils/rel.h b/src/include/utils/rel.h\nindex b4faa1c123..1ef0571201 100644\n--- a/src/include/utils/rel.h\n+++ b/src/include/utils/rel.h\n@@ -56,7 +56,7 @@ typedef struct RelationData\n \tRelFileNode rd_node;\t\t/* relation physical identifier */\n \tSMgrRelation rd_smgr;\t\t/* cached file handle, or NULL */\n \tint\t\t\trd_refcnt;\t\t/* reference count */\n-\tBackendId\trd_backend;\t\t/* owning backend id, if temporary relation */\n+\tint\t\t\trd_backend;\t\t/* owning backend id, if temporary relation */\n \tbool\t\trd_islocaltemp; /* rel is a temp rel of this session */\n \tbool\t\trd_isnailed;\t/* rel is nailed in cache */\n \tbool\t\trd_isvalid;\t\t/* relcache entry is valid */\n-- \n2.27.0", "msg_date": "Thu, 14 Oct 2021 17:28:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 17:28:34 +0900, Kyotaro Horiguchi wrote:\n> At Wed, 13 Oct 2021 19:52:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > Although needing a bit of care for the difference of invalid values\n> > for both though, BackendId can be easily replaced with pgprocno almost\n> > mechanically except sinvaladt. Therefore, we can confine the current\n> > backend ID within sinvaladt isolating from other part. The ids\n> > dedicated for sinvaladt can be packed to small range and perfomance\n> > won't be damaged.\n\nFWIW, I don't actually think there's necessarily that strong a need for\ndensity in sinvaladt. 
With a few relatively small changes we can get rid of the O(n)\nwork in the most crucial paths.\n\nIn https://www.postgresql.org/message-id/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\nI wrote:\n> Another approach to deal with this could be to simply not do the O(n) work in\n> SIInsertDataEntries(). It's not obvious that ->hasMessages is actually\n> necessary - we could atomically read maxMsgNum without acquiring a lock\n> instead of needing the per-backend ->hasMessages. I don't think the density would\n> be a relevant factor in SICleanupQueue().\n\nThis'd get rid of the need for density *and* make SIInsertDataEntries()\ncheaper.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:53:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "At Thu, 14 Oct 2021 10:53:06 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2021-10-14 17:28:34 +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 13 Oct 2021 19:52:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > Although needing a bit of care for the difference of invalid values\n> > > for both though, BackendId can be easily replaced with pgprocno almost\n> > > mechanically except sinvaladt. Therefore, we can confine the current\n> > > backend ID within sinvaladt isolating from other part. The ids\n> > > dedicated for sinvaladt can be packed to small range and perfomance\n> > > won't be damaged.\n> \n> FWIW, I don't actually think there's necessarily that strong a need for\n> density in sinvaladt. With a few relatively small changes we can get rid of the O(n)\n> work in the most crucial paths.\n\nRight. 
So I left it for the \"future\" :p\n\n> In https://www.postgresql.org/message-id/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\n> I wrote:\n> > Another approach to deal with this could be to simply not do the O(n) work in\n> > SIInsertDataEntries(). It's not obvious that ->hasMessages is actually\n> > necessary - we could atomically read maxMsgNum without acquiring a lock\n> > instead of needing the per-backend ->hasMessages. I don't think the density would\n> > be a relevant factor in SICleanupQueue().\n> \n> This'd get rid of the need for density *and* make SIInsertDataEntries()\n> cheaper.\n\nYes. So I tried that. The only part where memory-flush timing is\ncrucial seems to be between writing messages and setting maxMsgNum.\nBy placing a memory barrier between them it seems *to me* we can read\nmaxMsgNum safely without locks.\n\nI reread that thread and found we can get rid of O(N) behavior in\ntwo places, SignalVirtualTransaction and GetVirtualXIDsDelayingChkpt.\n\nFinally, I got rid of the siindex (the old BackendId) from sinvaladt.c\nentirely. CleanupInvalidationState and SICleanupQueue still have O(N)\nbehavior, but they are executed rarely or end in a short time in most\ncases.\n\n\n0001: Reverses the proc freelist so that the backend id is assigned in\n ascending order.\n\n0002: Replaces the current BackendId - which is generated by\n sinvaladt.c intending to pack the ids into a narrow range - with\n pgprocno in most of the tree. The old BackendID is now used only in\n sinvaladt.c.\n\n0003: Removes O(N) behavior from SIInsertDataEntries. 
I'm not sure it\n is correctly revised, though...\n\n0004: Gets rid of O(N) behavior, or reduces O(N^2) to O(N), in\n HaveVirtualXIDsDelayingChkpt and SignalVirtualTransaction.\n\n0005: Gets rid of the old BackendID entirely from sinvaladt.c.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From 2b28c29d68eb2c37137aef66c7634b65aa640520 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Wed, 13 Oct 2021 10:31:26 +0900\nSubject: [PATCH v2 1/5] procfreelist in ascending order\n\n---\n src/backend/storage/lmgr/proc.c | 62 +++++++++++++++++++--------------\n 1 file changed, 36 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex b7d9da0aa9..78e05976a4 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -163,6 +163,9 @@ InitProcGlobal(void)\n \t\t\t\tj;\n \tbool\t\tfound;\n \tuint32\t\tTotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n+\tPGPROC\t **freelist;\n+\tint\t\t\tswitchpoint;\n+\tint\t\t\tarraykind;\n \n \t/* Create the ProcGlobal shared structure */\n \tProcGlobal = (PROC_HDR *)\n@@ -214,6 +217,8 @@ InitProcGlobal(void)\n \tProcGlobal->statusFlags = (uint8 *) ShmemAlloc(TotalProcs * sizeof(*ProcGlobal->statusFlags));\n \tMemSet(ProcGlobal->statusFlags, 0, TotalProcs * sizeof(*ProcGlobal->statusFlags));\n \n+\tswitchpoint = 0;\n+\tarraykind = 0;\n \tfor (i = 0; i < TotalProcs; i++)\n \t{\n \t\t/* Common initialization for all PGPROCs, regardless of type. */\n@@ -239,33 +244,38 @@ InitProcGlobal(void)\n \t\t * linear search. 
PGPROCs for prepared transactions are added to a\n \t\t * free list by TwoPhaseShmemInit().\n \t\t */\n-\t\tif (i < MaxConnections)\n+\t\tif (i < MaxBackends)\n \t\t{\n-\t\t\t/* PGPROC for normal backend, add to freeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->freeProcs;\n-\t\t\tProcGlobal->freeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->freeProcs;\n-\t\t}\n-\t\telse if (i < MaxConnections + autovacuum_max_workers + 1)\n-\t\t{\n-\t\t\t/* PGPROC for AV launcher/worker, add to autovacFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->autovacFreeProcs;\n-\t\t\tProcGlobal->autovacFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->autovacFreeProcs;\n-\t\t}\n-\t\telse if (i < MaxConnections + autovacuum_max_workers + 1 + max_worker_processes)\n-\t\t{\n-\t\t\t/* PGPROC for bgworker, add to bgworkerFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->bgworkerFreeProcs;\n-\t\t\tProcGlobal->bgworkerFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->bgworkerFreeProcs;\n-\t\t}\n-\t\telse if (i < MaxBackends)\n-\t\t{\n-\t\t\t/* PGPROC for walsender, add to walsenderFreeProcs list */\n-\t\t\tprocs[i].links.next = (SHM_QUEUE *) ProcGlobal->walsenderFreeProcs;\n-\t\t\tProcGlobal->walsenderFreeProcs = &procs[i];\n-\t\t\tprocs[i].procgloballist = &ProcGlobal->walsenderFreeProcs;\n+\t\t\tif (i == switchpoint)\n+\t\t\t{\n+\t\t\t\tswitch (arraykind++)\n+\t\t\t\t{\n+\t\t\t\t\tcase 0:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->freeProcs;\n+\t\t\t\t\t\tswitchpoint += MaxConnections;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 1:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->autovacFreeProcs;\n+\t\t\t\t\t\tswitchpoint += autovacuum_max_workers + 1;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 2:\n+\t\t\t\t\t\tfreelist = &ProcGlobal->bgworkerFreeProcs;\n+\t\t\t\t\t\tswitchpoint += max_worker_processes;\n+\t\t\t\t\t\tbreak;\n+\n+\t\t\t\t\tcase 3:\n+\t\t\t\t\t\tfreelist = 
&ProcGlobal->walsenderFreeProcs;\n+\t\t\t\t}\n+\n+\t\t\t\t/* link the element to the just-switched freelist */\n+\t\t\t\t*freelist = &procs[i];\n+\t\t\t}\n+\t\t\telse\n+\t\t\t\tprocs[i - 1].links.next = (SHM_QUEUE *) &procs[i];\n+\n+\t\t\tprocs[i].procgloballist = freelist;\n \t\t}\n \n \t\t/* Initialize myProcLocks[] shared memory queues. */\n-- \n2.27.0\n\n\n From d7cc2b6b95dd90683b47e5692b2acbc370223ada Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 14 Oct 2021 17:05:24 +0900\nSubject: [PATCH v2 2/5] Remove BackendId\n\nBackendId was generated by sinvaladt.c and widely used as ids packed\nin narrow range. However, the characteristics of narrow-packed is not\nrequired other than sinvaladt.c and the use of the id in other places\nrather harm. Get rid of the backend id in most of the tree and use\npgprocno as new BackendId.\n\nHowever, sinvaladt still needs such packed id so the old backend id is\nconfined to the module.\n---\n src/backend/access/transam/multixact.c | 23 +++++-----\n src/backend/access/transam/twophase.c | 4 +-\n src/backend/access/transam/xact.c | 2 +-\n src/backend/catalog/namespace.c | 2 +-\n src/backend/commands/async.c | 21 +++++----\n src/backend/commands/indexcmds.c | 3 +-\n src/backend/postmaster/auxprocess.c | 2 +-\n src/backend/postmaster/pgstat.c | 2 +-\n src/backend/storage/ipc/procarray.c | 9 ++--\n src/backend/storage/ipc/procsignal.c | 25 ++++-------\n src/backend/storage/ipc/sinvaladt.c | 47 ++++-----------------\n src/backend/storage/ipc/standby.c | 2 +-\n src/backend/storage/lmgr/lmgr.c | 3 +-\n src/backend/storage/lmgr/lock.c | 20 ++++-----\n src/backend/storage/lmgr/proc.c | 29 ++++++++++---\n src/backend/utils/activity/backend_status.c | 22 +---------\n src/backend/utils/adt/dbsize.c | 5 ++-\n src/backend/utils/adt/mcxtfuncs.c | 2 +-\n src/backend/utils/cache/relcache.c | 1 +\n src/backend/utils/error/elog.c | 10 ++---\n src/backend/utils/init/globals.c | 2 -\n 
src/backend/utils/init/postinit.c | 7 +--\n src/backend/utils/time/snapmgr.c | 4 +-\n src/include/storage/backendid.h | 3 +-\n src/include/storage/lock.h | 6 +--\n src/include/storage/proc.h | 11 +++--\n src/include/storage/procsignal.h | 4 +-\n src/include/storage/smgr.h | 2 +-\n src/include/utils/rel.h | 2 +-\n 29 files changed, 116 insertions(+), 159 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex e6c70ed0bc..585dc50858 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -238,9 +238,9 @@ typedef struct MultiXactStateData\n \t * immediately following the MultiXactStateData struct. Each is indexed by\n \t * BackendId.\n \t *\n-\t * In both arrays, there's a slot for all normal backends (1..MaxBackends)\n-\t * followed by a slot for max_prepared_xacts prepared transactions. Valid\n-\t * BackendIds start from 1; element zero of each array is never used.\n+\t * In both arrays, there's a slot for all normal backends\n+\t * (0..MaxBackends-1) followed by a slot for max_prepared_xacts prepared\n+\t * transactions.\n \t *\n \t * OldestMemberMXactId[k] is the oldest MultiXactId each backend's current\n \t * transaction(s) could possibly be a member of, or InvalidMultiXactId\n@@ -283,9 +283,9 @@ typedef struct MultiXactStateData\n \n /*\n * Last element of OldestMemberMXactId and OldestVisibleMXactId arrays.\n- * Valid elements are (1..MaxOldestSlot); element 0 is never used.\n+ * Valid elements are (0..MaxOldestSlot).\n */\n-#define MaxOldestSlot\t(MaxBackends + max_prepared_xacts)\n+#define MaxOldestSlot\t(MaxBackends + max_prepared_xacts - 1)\n \n /* Pointers to the state data in shared memory */\n static MultiXactStateData *MultiXactState;\n@@ -697,7 +697,7 @@ MultiXactIdSetOldestVisible(void)\n \t\tif (oldestMXact < FirstMultiXactId)\n \t\t\toldestMXact = FirstMultiXactId;\n \n-\t\tfor (i = 1; i <= MaxOldestSlot; i++)\n+\t\tfor (i = 0 ; i <= 
MaxOldestSlot; i++)\n \t\t{\n \t\t\tMultiXactId thisoldest = OldestMemberMXactId[i];\n \n@@ -1828,10 +1828,10 @@ MultiXactShmemSize(void)\n {\n \tSize\t\tsize;\n \n-\t/* We need 2*MaxOldestSlot + 1 perBackendXactIds[] entries */\n+\t/* We need 2*(MaxOldestSlot + 1) + 1 perBackendXactIds[] entries */\n #define SHARED_MULTIXACT_STATE_SIZE \\\n \tadd_size(offsetof(MultiXactStateData, perBackendXactIds) + sizeof(MultiXactId), \\\n-\t\t\t mul_size(sizeof(MultiXactId) * 2, MaxOldestSlot))\n+\t\t\t mul_size(sizeof(MultiXactId) * 2, MaxOldestSlot + 1))\n \n \tsize = SHARED_MULTIXACT_STATE_SIZE;\n \tsize = add_size(size, SimpleLruShmemSize(NUM_MULTIXACTOFFSET_BUFFERS, 0));\n@@ -1878,11 +1878,10 @@ MultiXactShmemInit(void)\n \t\tAssert(found);\n \n \t/*\n-\t * Set up array pointers. Note that perBackendXactIds[0] is wasted space\n-\t * since we only use indexes 1..MaxOldestSlot in each array.\n+\t * Set up array pointers.\n \t */\n \tOldestMemberMXactId = MultiXactState->perBackendXactIds;\n-\tOldestVisibleMXactId = OldestMemberMXactId + MaxOldestSlot;\n+\tOldestVisibleMXactId = OldestMemberMXactId + MaxOldestSlot + 1;\n }\n \n /*\n@@ -2525,7 +2524,7 @@ GetOldestMultiXactId(void)\n \t\tnextMXact = FirstMultiXactId;\n \n \toldestMXact = nextMXact;\n-\tfor (i = 1; i <= MaxOldestSlot; i++)\n+\tfor (i = 0; i <= MaxOldestSlot; i++)\n \t{\n \t\tMultiXactId thisoldest;\n \ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 2156de187c..76d5bb55d6 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -293,7 +293,7 @@ TwoPhaseShmemInit(void)\n \t\t\t * prepared transaction. 
Currently multixact.c uses that\n \t\t\t * technique.\n \t\t\t */\n-\t\t\tgxacts[i].dummyBackendId = MaxBackends + 1 + i;\n+\t\t\tgxacts[i].dummyBackendId = MaxBackends + i;\n \t\t}\n \t}\n \telse\n@@ -459,14 +459,12 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \tproc->pgprocno = gxact->pgprocno;\n \tSHMQueueElemInit(&(proc->links));\n \tproc->waitStatus = PROC_WAIT_STATUS_OK;\n-\t/* We set up the gxact's VXID as InvalidBackendId/XID */\n \tproc->lxid = (LocalTransactionId) xid;\n \tproc->xid = xid;\n \tAssert(proc->xmin == InvalidTransactionId);\n \tproc->delayChkpt = false;\n \tproc->statusFlags = 0;\n \tproc->pid = 0;\n-\tproc->backendId = InvalidBackendId;\n \tproc->databaseId = databaseid;\n \tproc->roleId = owner;\n \tproc->tempNamespaceId = InvalidOid;\ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 4cc38f0d85..9a8b0686bd 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -2020,7 +2020,7 @@ StartTransaction(void)\n \t * Advertise it in the proc array. We assume assignment of\n \t * localTransactionId is atomic, and the backendId should be set already.\n \t */\n-\tAssert(MyProc->backendId == vxid.backendId);\n+\tAssert(MyProc->pgprocno == vxid.backendId);\n \tMyProc->lxid = vxid.localTransactionId;\n \n \tTRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);\ndiff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c\nindex 4de8400fd0..f450c46f4d 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -3293,7 +3293,7 @@ checkTempNamespaceStatus(Oid namespaceId)\n \t\treturn TEMP_NAMESPACE_NOT_TEMP;\n \n \t/* Is the backend alive? 
*/\n-\tproc = BackendIdGetProc(backendId);\n+\tproc = GetProcIfAlive(backendId);\n \tif (proc == NULL)\n \t\treturn TEMP_NAMESPACE_IDLE;\n \ndiff --git a/src/backend/commands/async.c b/src/backend/commands/async.c\nindex 8557008545..e007c17f51 100644\n--- a/src/backend/commands/async.c\n+++ b/src/backend/commands/async.c\n@@ -272,8 +272,8 @@ typedef struct QueueBackendStatus\n * NotifyQueueTailLock, then NotifyQueueLock, and lastly NotifySLRULock.\n *\n * Each backend uses the backend[] array entry with index equal to its\n- * BackendId (which can range from 1 to MaxBackends). We rely on this to make\n- * SendProcSignal fast.\n+ * BackendId (which can range from 0 to MaxBackends - 1). We rely on this to\n+ * make SendProcSignal fast.\n *\n * The backend[] array entries for actively-listening backends are threaded\n * together using firstListener and the nextListener links, so that we can\n@@ -1122,7 +1122,8 @@ Exec_ListenPreCommit(void)\n \thead = QUEUE_HEAD;\n \tmax = QUEUE_TAIL;\n \tprevListener = InvalidBackendId;\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tif (QUEUE_BACKEND_DBOID(i) == MyDatabaseId)\n \t\t\tmax = QUEUE_POS_MAX(max, QUEUE_BACKEND_POS(i));\n@@ -1134,7 +1135,7 @@ Exec_ListenPreCommit(void)\n \tQUEUE_BACKEND_PID(MyBackendId) = MyProcPid;\n \tQUEUE_BACKEND_DBOID(MyBackendId) = MyDatabaseId;\n \t/* Insert backend into list of listeners at correct position */\n-\tif (prevListener > 0)\n+\tif (prevListener != InvalidBackendId)\n \t{\n \t\tQUEUE_NEXT_LISTENER(MyBackendId) = QUEUE_NEXT_LISTENER(prevListener);\n \t\tQUEUE_NEXT_LISTENER(prevListener) = MyBackendId;\n@@ -1281,7 +1282,8 @@ asyncQueueUnregister(void)\n \t\tQUEUE_FIRST_LISTENER = QUEUE_NEXT_LISTENER(MyBackendId);\n \telse\n \t{\n-\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\t\tfor (BackendId i = 
QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t\t i = QUEUE_NEXT_LISTENER(i))\n \t\t{\n \t\t\tif (QUEUE_NEXT_LISTENER(i) == MyBackendId)\n \t\t\t{\n@@ -1590,7 +1592,8 @@ asyncQueueFillWarning(void)\n \t\tQueuePosition min = QUEUE_HEAD;\n \t\tint32\t\tminPid = InvalidPid;\n \n-\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\t\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId;\n+\t\t\t i = QUEUE_NEXT_LISTENER(i))\n \t\t{\n \t\t\tAssert(QUEUE_BACKEND_PID(i) != InvalidPid);\n \t\t\tmin = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));\n@@ -1646,7 +1649,8 @@ SignalBackends(void)\n \tcount = 0;\n \n \tLWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (BackendId i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tint32\t\tpid = QUEUE_BACKEND_PID(i);\n \t\tQueuePosition pos;\n@@ -2183,7 +2187,8 @@ asyncQueueAdvanceTail(void)\n \t */\n \tLWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);\n \tmin = QUEUE_HEAD;\n-\tfor (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))\n+\tfor (int i = QUEUE_FIRST_LISTENER; i != InvalidBackendId ;\n+\t\t i = QUEUE_NEXT_LISTENER(i))\n \t{\n \t\tAssert(QUEUE_BACKEND_PID(i) != InvalidPid);\n \t\tmin = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex c14ca27c5e..fa70eed559 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -463,7 +463,8 @@ WaitForOlderSnapshots(TransactionId limitXmin, bool progress)\n \t\t\t/* If requested, publish who we're going to wait for. 
*/\n \t\t\tif (progress)\n \t\t\t{\n-\t\t\t\tPGPROC\t *holder = BackendIdGetProc(old_snapshots[i].backendId);\n+\t\t\t\tPGPROC\t *holder =\n+\t\t\t\t\t&ProcGlobal->allProcs[old_snapshots[i].backendId];\n \n \t\t\t\tif (holder)\n \t\t\t\t\tpgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\ndiff --git a/src/backend/postmaster/auxprocess.c b/src/backend/postmaster/auxprocess.c\nindex 7452f908b2..4c7784b036 100644\n--- a/src/backend/postmaster/auxprocess.c\n+++ b/src/backend/postmaster/auxprocess.c\n@@ -116,7 +116,7 @@ AuxiliaryProcessMain(AuxProcType auxtype)\n \t * This will need rethinking if we ever want more than one of a particular\n \t * auxiliary process type.\n \t */\n-\tProcSignalInit(MaxBackends + MyAuxProcType + 1);\n+\tProcSignalInit();\n \n \t/*\n \t * Auxiliary processes don't run transactions, but they may need a\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex b7d0fbaefd..b1cd23f661 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -54,7 +54,7 @@\n #include \"postmaster/postmaster.h\"\n #include \"replication/slot.h\"\n #include \"replication/walsender.h\"\n-#include \"storage/backendid.h\"\n+//#include \"storage/backendid.h\"\n #include \"storage/dsm.h\"\n #include \"storage/fd.h\"\n #include \"storage/ipc.h\"\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex bd3c7a47fe..100c0dae8c 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2592,7 +2592,7 @@ ProcArrayInstallImportedXmin(TransactionId xmin,\n \t\t\tcontinue;\n \n \t\t/* We are only interested in the specific virtual transaction. 
*/\n-\t\tif (proc->backendId != sourcevxid->backendId)\n+\t\tif (proc->pgprocno != sourcevxid->backendId)\n \t\t\tcontinue;\n \t\tif (proc->lxid != sourcevxid->localTransactionId)\n \t\t\tcontinue;\n@@ -3454,7 +3454,7 @@ SignalVirtualTransaction(VirtualTransactionId vxid, ProcSignalReason sigmode,\n \t\t\t\t * Kill the pid if it's still here. If not, that's what we\n \t\t\t\t * wanted so ignore any errors.\n \t\t\t\t */\n-\t\t\t\t(void) SendProcSignal(pid, sigmode, vxid.backendId);\n+\t\t\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n \t\t\t}\n \t\t\tbreak;\n \t\t}\n@@ -3604,11 +3604,8 @@ CancelDBBackends(Oid databaseid, ProcSignalReason sigmode, bool conflictPending)\n \n \t\tif (databaseid == InvalidOid || proc->databaseId == databaseid)\n \t\t{\n-\t\t\tVirtualTransactionId procvxid;\n \t\t\tpid_t\t\tpid;\n \n-\t\t\tGET_VXID_FROM_PGPROC(procvxid, *proc);\n-\n \t\t\tproc->recoveryConflictPending = conflictPending;\n \t\t\tpid = proc->pid;\n \t\t\tif (pid != 0)\n@@ -3617,7 +3614,7 @@ CancelDBBackends(Oid databaseid, ProcSignalReason sigmode, bool conflictPending)\n \t\t\t\t * Kill the pid if it's still here. If not, that's what we\n \t\t\t\t * wanted so ignore any errors.\n \t\t\t\t */\n-\t\t\t\t(void) SendProcSignal(pid, sigmode, procvxid.backendId);\n+\t\t\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n \t\t\t}\n \t\t}\n \t}\ndiff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c\nindex defb75aa26..ffe6939780 100644\n--- a/src/backend/storage/ipc/procsignal.c\n+++ b/src/backend/storage/ipc/procsignal.c\n@@ -81,9 +81,8 @@ typedef struct\n } ProcSignalHeader;\n \n /*\n- * We reserve a slot for each possible BackendId, plus one for each\n- * possible auxiliary process type. (This scheme assumes there is not\n- * more than one of any auxiliary process type at a time.)\n+ * We reserve a slot for each PGPROC, both backends and auxiliary processes. 
Not all\n+ * auxiliary processes use this, but we allocate slots for them for safety.\n */\n #define NumProcSignalSlots\t(MaxBackends + NUM_AUXPROCTYPES)\n \n@@ -153,24 +152,21 @@ ProcSignalShmemInit(void)\n /*\n * ProcSignalInit\n *\t\tRegister the current process in the procsignal array\n- *\n- * The passed index should be my BackendId if the process has one,\n- * or MaxBackends + aux process type if not.\n */\n void\n-ProcSignalInit(int pss_idx)\n+ProcSignalInit(void)\n {\n \tProcSignalSlot *slot;\n \tuint64\t\tbarrier_generation;\n \n-\tAssert(pss_idx >= 1 && pss_idx <= NumProcSignalSlots);\n-\n-\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n+\tAssert(MyBackendId >= 0 && MyBackendId < NumProcSignalSlots);\n+\n+\tslot = &ProcSignal->psh_slot[MyBackendId];\n \n \t/* sanity check */\n \tif (slot->pss_pid != 0)\n \t\telog(LOG, \"process %d taking over ProcSignal slot %d, but it's not empty\",\n-\t\t\t MyProcPid, pss_idx);\n+\t\t\t MyProcPid, MyBackendId);\n \n \t/* Clear out any leftover signal reasons */\n \tMemSet(slot->pss_signalFlags, 0, NUM_PROCSIGNALS * sizeof(sig_atomic_t));\n@@ -199,7 +193,7 @@ ProcSignalInit(int pss_idx)\n \tMyProcSignalSlot = slot;\n \n \t/* Set up to release the slot on process exit */\n-\ton_shmem_exit(CleanupProcSignalState, Int32GetDatum(pss_idx));\n+\ton_shmem_exit(CleanupProcSignalState, (Datum) 0);\n }\n \n /*\n@@ -211,10 +205,9 @@ ProcSignalInit(int pss_idx)\n static void\n CleanupProcSignalState(int status, Datum arg)\n {\n-\tint\t\t\tpss_idx = DatumGetInt32(arg);\n \tProcSignalSlot *slot;\n \n-\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n+\tslot = &ProcSignal->psh_slot[MyBackendId];\n \tAssert(slot == MyProcSignalSlot);\n \n \t/*\n@@ -232,7 +225,7 @@ CleanupProcSignalState(int status, Datum arg)\n \t\t * infinite loop trying to exit\n \t\t */\n \t\telog(LOG, \"process %d releasing ProcSignal slot %d, but it contains %d\",\n-\t\t\t MyProcPid, pss_idx, (int) slot->pss_pid);\n+\t\t\t MyProcPid, MyBackendId, (int) slot->pss_pid);\n \t\treturn;\t\t\t\t\t/* XXX better to zero the slot anyway? 
*/\n \t}\n \n@@ -264,7 +257,7 @@ SendProcSignal(pid_t pid, ProcSignalReason reason, BackendId backendId)\n \n \tif (backendId != InvalidBackendId)\n \t{\n-\t\tslot = &ProcSignal->psh_slot[backendId - 1];\n+\t\tslot = &ProcSignal->psh_slot[backendId];\n \n \t\t/*\n \t\t * Note: Since there's no locking, it's possible that the target\ndiff --git a/src/backend/storage/ipc/sinvaladt.c b/src/backend/storage/ipc/sinvaladt.c\nindex 946bd8e3cb..a90e9920e7 100644\n--- a/src/backend/storage/ipc/sinvaladt.c\n+++ b/src/backend/storage/ipc/sinvaladt.c\n@@ -19,7 +19,6 @@\n \n #include \"access/transam.h\"\n #include \"miscadmin.h\"\n-#include \"storage/backendid.h\"\n #include \"storage/ipc.h\"\n #include \"storage/proc.h\"\n #include \"storage/procsignal.h\"\n@@ -157,7 +156,7 @@ typedef struct ProcState\n \t/*\n \t * Next LocalTransactionId to use for each idle backend slot. We keep\n \t * this here because it is indexed by BackendId and it is convenient to\n-\t * copy the value to and from local memory when MyBackendId is set. It's\n+\t * copy the value to and from local memory when MyBackendId is set. 
It's\n \t * meaningless in an active ProcState entry.\n \t */\n \tLocalTransactionId nextLXID;\n@@ -195,6 +194,7 @@ static LocalTransactionId nextLocalTransactionId;\n \n static void CleanupInvalidationState(int status, Datum arg);\n \n+static int\tsiindex;\t\t\t/* this backend's index in procState array */\n \n /*\n * SInvalShmemSize --- return shared-memory space needed\n@@ -290,7 +290,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t\t\t/*\n \t\t\t * out of procState slots: MaxBackends exceeded -- report normally\n \t\t\t */\n-\t\t\tMyBackendId = InvalidBackendId;\n+\t\t\tsiindex = -1;\n \t\t\tLWLockRelease(SInvalWriteLock);\n \t\t\tereport(FATAL,\n \t\t\t\t\t(errcode(ERRCODE_TOO_MANY_CONNECTIONS),\n@@ -298,10 +298,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t\t}\n \t}\n \n-\tMyBackendId = (stateP - &segP->procState[0]) + 1;\n-\n-\t/* Advertise assigned backend ID in MyProc */\n-\tMyProc->backendId = MyBackendId;\n+\tsiindex = (stateP - &segP->procState[0]);\n \n \t/* Fetch next local transaction ID into local memory */\n \tnextLocalTransactionId = stateP->nextLXID;\n@@ -320,7 +317,7 @@ SharedInvalBackendInit(bool sendOnly)\n \t/* register exit routine to mark my entry inactive at exit */\n \ton_shmem_exit(CleanupInvalidationState, PointerGetDatum(segP));\n \n-\telog(DEBUG4, \"my backend ID is %d\", MyBackendId);\n+\telog(DEBUG4, \"my SI slot index is %d\", siindex);\n }\n \n /*\n@@ -342,7 +339,7 @@ CleanupInvalidationState(int status, Datum arg)\n \n \tLWLockAcquire(SInvalWriteLock, LW_EXCLUSIVE);\n \n-\tstateP = &segP->procState[MyBackendId - 1];\n+\tstateP = &segP->procState[siindex];\n \n \t/* Update next local transaction ID for next holder of this backendID */\n \tstateP->nextLXID = nextLocalTransactionId;\n@@ -365,34 +362,6 @@ CleanupInvalidationState(int status, Datum arg)\n \tLWLockRelease(SInvalWriteLock);\n }\n \n-/*\n- * BackendIdGetProc\n- *\t\tGet the PGPROC structure for a backend, given the backend ID.\n- *\t\tThe result may be out of date arbitrarily quickly, so the caller\n- *\t\tmust be careful 
about how this information is used. NULL is\n-\t\treturned if the backend is not active.\n- */\n-PGPROC *\n-BackendIdGetProc(int backendID)\n-{\n-\tPGPROC\t *result = NULL;\n-\tSISeg\t *segP = shmInvalBuffer;\n-\n-\t/* Need to lock out additions/removals of backends */\n-\tLWLockAcquire(SInvalWriteLock, LW_SHARED);\n-\n-\tif (backendID > 0 && backendID <= segP->lastBackend)\n-\t{\n-\t\tProcState *stateP = &segP->procState[backendID - 1];\n-\n-\t\tresult = stateP->proc;\n-\t}\n-\n-\tLWLockRelease(SInvalWriteLock);\n-\n-\treturn result;\n-}\n-\n /*\n * BackendIdGetTransactionIds\n *\t\tGet the xid and xmin of the backend. The result may be out of date\n@@ -541,7 +510,7 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)\n \tint\t\t\tn;\n \n \tsegP = shmInvalBuffer;\n-\tstateP = &segP->procState[MyBackendId - 1];\n+\tstateP = &segP->procState[siindex];\n \n \t/*\n \t * Before starting to take locks, do a quick, unlocked test to see whether\n@@ -730,7 +699,7 @@ SICleanupQueue(bool callerHasWriteLock, int minFree)\n \tif (needSig)\n \t{\n \t\tpid_t\t\this_pid = needSig->procPid;\n-\t\tBackendId\this_backendId = (needSig - &segP->procState[0]) + 1;\n+\t\tint\t\t\this_backendId = needSig->proc->pgprocno;\n \n \t\tneedSig->signaled = true;\n \t\tLWLockRelease(SInvalReadLock);\ndiff --git a/src/backend/storage/ipc/standby.c b/src/backend/storage/ipc/standby.c\nindex b17326bc20..032f0428aa 100644\n--- a/src/backend/storage/ipc/standby.c\n+++ b/src/backend/storage/ipc/standby.c\n@@ -274,7 +274,7 @@ LogRecoveryConflict(ProcSignalReason reason, TimestampTz wait_start,\n \t\tvxids = wait_list;\n \t\twhile (VirtualTransactionIdIsValid(*vxids))\n \t\t{\n-\t\t\tPGPROC\t *proc = BackendIdGetProc(vxids->backendId);\n+\t\t\tPGPROC\t *proc = GetProcIfAlive(vxids->backendId);\n \n \t\t\t/* proc can be NULL if the target backend is not active */\n \t\t\tif (proc)\ndiff --git a/src/backend/storage/lmgr/lmgr.c b/src/backend/storage/lmgr/lmgr.c\nindex 
cdf2266d6d..f0208531e0 100644\n--- a/src/backend/storage/lmgr/lmgr.c\n+++ b/src/backend/storage/lmgr/lmgr.c\n@@ -918,7 +918,8 @@ WaitForLockersMultiple(List *locktags, LOCKMODE lockmode, bool progress)\n \t\t\t/* If requested, publish who we're going to wait for. */\n \t\t\tif (progress)\n \t\t\t{\n-\t\t\t\tPGPROC\t *holder = BackendIdGetProc(lockholders->backendId);\n+\t\t\t\tPGPROC\t *holder =\n+\t\t\t\t\tGetProcIfAlive(lockholders->backendId);\n \n \t\t\t\tif (holder)\n \t\t\t\t\tpgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\ndiff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c\nindex 364654e106..3edb7d6fd5 100644\n--- a/src/backend/storage/lmgr/lock.c\n+++ b/src/backend/storage/lmgr/lock.c\n@@ -3695,7 +3695,7 @@ GetLockStatusData(void)\n \t\t\t\t\t\t\t\t proc->fpRelId[f]);\n \t\t\tinstance->holdMask = lockbits << FAST_PATH_LOCKNUMBER_OFFSET;\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\t\tinstance->backend = proc->backendId;\n+\t\t\tinstance->backend = proc->pgprocno;\n \t\t\tinstance->lxid = proc->lxid;\n \t\t\tinstance->pid = proc->pid;\n \t\t\tinstance->leaderPid = proc->pid;\n@@ -3722,14 +3722,14 @@ GetLockStatusData(void)\n \t\t\t\t\trepalloc(data->locks, sizeof(LockInstanceData) * els);\n \t\t\t}\n \n-\t\t\tvxid.backendId = proc->backendId;\n+\t\t\tvxid.backendId = proc->pgprocno;\n \t\t\tvxid.localTransactionId = proc->fpLocalTransactionId;\n \n \t\t\tinstance = &data->locks[el];\n \t\t\tSET_LOCKTAG_VIRTUALTRANSACTION(instance->locktag, vxid);\n \t\t\tinstance->holdMask = LOCKBIT_ON(ExclusiveLock);\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\t\tinstance->backend = proc->backendId;\n+\t\t\tinstance->backend = proc->pgprocno;\n \t\t\tinstance->lxid = proc->lxid;\n \t\t\tinstance->pid = proc->pid;\n \t\t\tinstance->leaderPid = proc->pid;\n@@ -3782,7 +3782,7 @@ GetLockStatusData(void)\n \t\t\tinstance->waitLockMode = proc->waitLockMode;\n \t\telse\n \t\t\tinstance->waitLockMode = 
NoLock;\n-\t\tinstance->backend = proc->backendId;\n+\t\tinstance->backend = proc->pgprocno;\n \t\tinstance->lxid = proc->lxid;\n \t\tinstance->pid = proc->pid;\n \t\tinstance->leaderPid = proclock->groupLeader->pid;\n@@ -3961,7 +3961,7 @@ GetSingleProcBlockerStatusData(PGPROC *blocked_proc, BlockedProcsData *data)\n \t\t\tinstance->waitLockMode = proc->waitLockMode;\n \t\telse\n \t\t\tinstance->waitLockMode = NoLock;\n-\t\tinstance->backend = proc->backendId;\n+\t\tinstance->backend = proc->pgprocno;\n \t\tinstance->lxid = proc->lxid;\n \t\tinstance->pid = proc->pid;\n \t\tinstance->leaderPid = proclock->groupLeader->pid;\n@@ -4475,7 +4475,7 @@ VirtualXactLockTableInsert(VirtualTransactionId vxid)\n \n \tLWLockAcquire(&MyProc->fpInfoLock, LW_EXCLUSIVE);\n \n-\tAssert(MyProc->backendId == vxid.backendId);\n+\tAssert(MyProc->pgprocno == vxid.backendId);\n \tAssert(MyProc->fpLocalTransactionId == InvalidLocalTransactionId);\n \tAssert(MyProc->fpVXIDLock == false);\n \n@@ -4497,8 +4497,6 @@ VirtualXactLockTableCleanup(void)\n \tbool\t\tfastpath;\n \tLocalTransactionId lxid;\n \n-\tAssert(MyProc->backendId != InvalidBackendId);\n-\n \t/*\n \t * Clean up shared memory state.\n \t */\n@@ -4520,7 +4518,7 @@ VirtualXactLockTableCleanup(void)\n \t\tVirtualTransactionId vxid;\n \t\tLOCKTAG\t\tlocktag;\n \n-\t\tvxid.backendId = MyBackendId;\n+\t\tvxid.backendId = MyProc->pgprocno;\n \t\tvxid.localTransactionId = lxid;\n \t\tSET_LOCKTAG_VIRTUALTRANSACTION(locktag, vxid);\n \n@@ -4571,7 +4569,7 @@ VirtualXactLock(VirtualTransactionId vxid, bool wait)\n \t * relevant lxid is no longer running here, that's enough to prove that\n \t * it's no longer running anywhere.\n \t */\n-\tproc = BackendIdGetProc(vxid.backendId);\n+\tproc = GetProcIfAlive(vxid.backendId);\n \tif (proc == NULL)\n \t\treturn true;\n \n@@ -4583,7 +4581,7 @@ VirtualXactLock(VirtualTransactionId vxid, bool wait)\n \tLWLockAcquire(&proc->fpInfoLock, LW_EXCLUSIVE);\n \n \t/* If the transaction has ended, our 
work here is done. */\n-\tif (proc->backendId != vxid.backendId\n+\tif (proc->pgprocno != vxid.backendId\n \t\t|| proc->fpLocalTransactionId != vxid.localTransactionId)\n \t{\n \t\tLWLockRelease(&proc->fpInfoLock);\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 78e05976a4..0ee6627e20 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -386,6 +386,8 @@ InitProcess(void)\n \tif (IsUnderPostmaster && !IsAutoVacuumLauncherProcess())\n \t\tMarkPostmasterChildActive();\n \n+\tMyBackendId = MyProc->pgprocno;\n+\n \t/*\n \t * Initialize all fields of MyProc, except for those previously\n \t * initialized by InitProcGlobal.\n@@ -398,14 +400,13 @@ InitProcess(void)\n \tMyProc->xid = InvalidTransactionId;\n \tMyProc->xmin = InvalidTransactionId;\n \tMyProc->pid = MyProcPid;\n-\t/* backendId, databaseId and roleId will be filled in later */\n-\tMyProc->backendId = InvalidBackendId;\n+\t/* databaseId and roleId will be filled in later */\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n \tMyProc->delayChkpt = false;\n-\tMyProc->statusFlags = 0;\n+\tMyProc->statusFlags = PROC_IS_ACTIVE;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n \t\tMyProc->statusFlags |= PROC_IS_AUTOVACUUM;\n@@ -570,6 +571,7 @@ InitAuxiliaryProcess(void)\n \t((volatile PGPROC *) auxproc)->pid = MyProcPid;\n \n \tMyProc = auxproc;\n+\tMyBackendId = MyProc->pgprocno;\n \n \tSpinLockRelease(ProcStructLock);\n \n@@ -584,13 +586,12 @@ InitAuxiliaryProcess(void)\n \tMyProc->fpLocalTransactionId = InvalidLocalTransactionId;\n \tMyProc->xid = InvalidTransactionId;\n \tMyProc->xmin = InvalidTransactionId;\n-\tMyProc->backendId = InvalidBackendId;\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n 
\tMyProc->isBackgroundWorker = IsBackgroundWorker;\n \tMyProc->delayChkpt = false;\n-\tMyProc->statusFlags = 0;\n+\tMyProc->statusFlags = PROC_IS_ACTIVE;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\n \tMyProc->waitLock = NULL;\n@@ -913,6 +914,9 @@ ProcKill(int code, Datum arg)\n \tMyProc = NULL;\n \tDisownLatch(&proc->procLatch);\n \n+\t/* mark this process dead */\n+\tproc->statusFlags &= ~PROC_IS_ACTIVE;\n+\n \tprocgloballist = proc->procgloballist;\n \tSpinLockAcquire(ProcStructLock);\n \n@@ -985,6 +989,7 @@ AuxiliaryProcKill(int code, Datum arg)\n \n \t/* Mark auxiliary proc no longer in use */\n \tproc->pid = 0;\n+\tproc->statusFlags &= ~PROC_IS_ACTIVE;\n \n \t/* Update shared estimate of spins_per_delay */\n \tProcGlobal->spins_per_delay = update_spins_per_delay(ProcGlobal->spins_per_delay);\n@@ -2020,3 +2025,20 @@ BecomeLockGroupMember(PGPROC *leader, int pid)\n \n \treturn ok;\n }\n+\n+/*\n+ * Return the PGPROC for the given backend if it is alive, else NULL.\n+ *\n+ * The result may be out of date arbitrarily quickly, so the caller must be\n+ * careful about how this information is used.\n+ */\n+PGPROC *\n+GetProcIfAlive(BackendId backend)\n+{\n+\tPGPROC\t *proc = &ProcGlobal->allProcs[backend];\n+\n+\tif (proc->statusFlags & PROC_IS_ACTIVE)\n+\t\treturn proc;\n+\n+\treturn NULL;\n+}\ndiff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\nindex 7229598822..0490a3a8b2 100644\n--- a/src/backend/utils/activity/backend_status.c\n+++ b/src/backend/utils/activity/backend_status.c\n@@ -249,26 +249,8 @@ void\n pgstat_beinit(void)\n {\n \t/* Initialize MyBEEntry */\n-\tif (MyBackendId != InvalidBackendId)\n-\t{\n-\t\tAssert(MyBackendId >= 1 && MyBackendId <= MaxBackends);\n-\t\tMyBEEntry = &BackendStatusArray[MyBackendId - 1];\n-\t}\n-\telse\n-\t{\n-\t\t/* Must be an auxiliary process */\n-\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n-\n-\t\t/*\n-\t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n-\t\t * have a BackendId, the slot is statically allocated based on the\n-\t\t * auxiliary process type (MyAuxProcType). 
Backends use slots indexed\n-\t\t * in the range from 1 to MaxBackends (inclusive), so we use\n-\t\t * MaxBackends + AuxBackendType + 1 as the index of the slot for an\n-\t\t * auxiliary process.\n-\t\t */\n-\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n-\t}\n+\tAssert(MyBackendId >= 0 && MyBackendId < NumBackendStatSlots);\n+\tMyBEEntry = &BackendStatusArray[MyBackendId];\n \n \t/* Set up a process-exit hook to clean up */\n \ton_shmem_exit(pgstat_beshutdown_hook, 0);\ndiff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\nindex d5a7fb13f3..16acb0c335 100644\n--- a/src/backend/utils/adt/dbsize.c\n+++ b/src/backend/utils/adt/dbsize.c\n@@ -22,6 +22,7 @@\n #include \"commands/dbcommands.h\"\n #include \"commands/tablespace.h\"\n #include \"miscadmin.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/fd.h\"\n #include \"utils/acl.h\"\n #include \"utils/builtins.h\"\n@@ -292,7 +293,7 @@ pg_tablespace_size_name(PG_FUNCTION_ARGS)\n * is no check here or at the call sites for that.\n */\n static int64\n-calculate_relation_size(RelFileNode *rfn, BackendId backend, ForkNumber forknum)\n+calculate_relation_size(RelFileNode *rfn, int backend, ForkNumber forknum)\n {\n \tint64\t\ttotalsize = 0;\n \tchar\t *relationpath;\n@@ -925,7 +926,7 @@ pg_relation_filepath(PG_FUNCTION_ARGS)\n \tHeapTuple\ttuple;\n \tForm_pg_class relform;\n \tRelFileNode rnode;\n-\tBackendId\tbackend;\n+\tint\t\t\tbackend;\n \tchar\t *path;\n \n \ttuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\ndiff --git a/src/backend/utils/adt/mcxtfuncs.c b/src/backend/utils/adt/mcxtfuncs.c\nindex 0d52613bc3..fca87448cf 100644\n--- a/src/backend/utils/adt/mcxtfuncs.c\n+++ b/src/backend/utils/adt/mcxtfuncs.c\n@@ -205,7 +205,7 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n \t\tPG_RETURN_BOOL(false);\n \t}\n \n-\tif (SendProcSignal(pid, PROCSIG_LOG_MEMORY_CONTEXT, proc->backendId) < 0)\n+\tif (SendProcSignal(pid, PROCSIG_LOG_MEMORY_CONTEXT, 
proc->pgprocno) < 0)\n \t{\n \t\t/* Again, just a warning to allow loops */\n \t\tereport(WARNING,\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex 13d9994af3..00036ec9c0 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -73,6 +73,7 @@\n #include \"optimizer/optimizer.h\"\n #include \"rewrite/rewriteDefine.h\"\n #include \"rewrite/rowsecurity.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/lmgr.h\"\n #include \"storage/smgr.h\"\n #include \"utils/array.h\"\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex f33729513a..0f927751d7 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -2685,18 +2685,18 @@ log_line_prefix(StringInfo buf, ErrorData *edata)\n \t\t\t\tbreak;\n \t\t\tcase 'v':\n \t\t\t\t/* keep VXID format in sync with lockfuncs.c */\n-\t\t\t\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n+\t\t\t\tif (MyProc != NULL && MyProc->pgprocno != InvalidBackendId)\n \t\t\t\t{\n \t\t\t\t\tif (padding != 0)\n \t\t\t\t\t{\n \t\t\t\t\t\tchar\t\tstrfbuf[128];\n \n \t\t\t\t\t\tsnprintf(strfbuf, sizeof(strfbuf) - 1, \"%d/%u\",\n-\t\t\t\t\t\t\t\t MyProc->backendId, MyProc->lxid);\n+\t\t\t\t\t\t\t\t MyProc->pgprocno, MyProc->lxid);\n \t\t\t\t\t\tappendStringInfo(buf, \"%*s\", padding, strfbuf);\n \t\t\t\t\t}\n \t\t\t\t\telse\n-\t\t\t\t\t\tappendStringInfo(buf, \"%d/%u\", MyProc->backendId, MyProc->lxid);\n+\t\t\t\t\t\tappendStringInfo(buf, \"%d/%u\", MyProc->pgprocno, MyProc->lxid);\n \t\t\t\t}\n \t\t\t\telse if (padding != 0)\n \t\t\t\t\tappendStringInfoSpaces(buf,\n@@ -2860,8 +2860,8 @@ write_csvlog(ErrorData *edata)\n \n \t/* Virtual transaction id */\n \t/* keep VXID format in sync with lockfuncs.c */\n-\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n-\t\tappendStringInfo(&buf, \"%d/%u\", MyProc->backendId, MyProc->lxid);\n+\tif (MyProc != NULL && MyProc->pgprocno != 
InvalidBackendId)\n+\t\tappendStringInfo(&buf, \"%d/%u\", MyProc->pgprocno, MyProc->lxid);\n \tappendStringInfoChar(&buf, ',');\n \n \t/* Transaction id */\ndiff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\nindex 78bc64671e..7f5b8e12ee 100644\n--- a/src/backend/utils/init/postinit.c\n+++ b/src/backend/utils/init/postinit.c\n@@ -592,15 +592,10 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n \t *\n \t * Sets up MyBackendId, a unique backend identifier.\n \t */\n-\tMyBackendId = InvalidBackendId;\n-\n \tSharedInvalBackendInit(false);\n \n-\tif (MyBackendId > MaxBackends || MyBackendId <= 0)\n-\t\telog(FATAL, \"bad backend ID: %d\", MyBackendId);\n-\n \t/* Now that we have a BackendId, we can participate in ProcSignal */\n-\tProcSignalInit(MyBackendId);\n+\tProcSignalInit();\n \n \t/*\n \t * Also set up timeout handlers needed for backend operation. 
We need\ndiff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\nindex 5001efdf7a..df8476d177 100644\n--- a/src/backend/utils/time/snapmgr.c\n+++ b/src/backend/utils/time/snapmgr.c\n@@ -1173,7 +1173,7 @@ ExportSnapshot(Snapshot snapshot)\n \t * inside the transaction from 1.\n \t */\n \tsnprintf(path, sizeof(path), SNAPSHOT_EXPORT_DIR \"/%08X-%08X-%d\",\n-\t\t\t MyProc->backendId, MyProc->lxid, list_length(exportedSnapshots) + 1);\n+\t\t\t MyProc->pgprocno, MyProc->lxid, list_length(exportedSnapshots) + 1);\n \n \t/*\n \t * Copy the snapshot into TopTransactionContext, add it to the\n@@ -1200,7 +1200,7 @@ ExportSnapshot(Snapshot snapshot)\n \t */\n \tinitStringInfo(&buf);\n \n-\tappendStringInfo(&buf, \"vxid:%d/%u\\n\", MyProc->backendId, MyProc->lxid);\n+\tappendStringInfo(&buf, \"vxid:%d/%u\\n\", MyProc->pgprocno, MyProc->lxid);\n \tappendStringInfo(&buf, \"pid:%d\\n\", MyProcPid);\n \tappendStringInfo(&buf, \"dbid:%u\\n\", MyDatabaseId);\n \tappendStringInfo(&buf, \"iso:%d\\n\", XactIsoLevel);\ndiff --git a/src/include/storage/backendid.h b/src/include/storage/backendid.h\nindex 7aa3936899..3772e2b4a2 100644\n--- a/src/include/storage/backendid.h\n+++ b/src/include/storage/backendid.h\n@@ -20,7 +20,8 @@\n */\n typedef int BackendId;\t\t\t/* unique currently active backend identifier */\n \n-#define InvalidBackendId\t\t(-1)\n+#define INVALID_PGPROCNO\t\tPG_INT32_MAX\n+#define InvalidBackendId\t\tINVALID_PGPROCNO\n \n extern PGDLLIMPORT BackendId MyBackendId;\t/* backend id of this backend */\n \ndiff --git a/src/include/storage/lock.h b/src/include/storage/lock.h\nindex 9b2a421c32..13b9352704 100644\n--- a/src/include/storage/lock.h\n+++ b/src/include/storage/lock.h\n@@ -62,7 +62,7 @@ extern bool Debug_deadlocks;\n */\n typedef struct\n {\n-\tBackendId\tbackendId;\t\t/* backendId from PGPROC */\n+\tBackendId\tbackendId;\t\t/* pgprocno from PGPROC */\n \tLocalTransactionId localTransactionId;\t/* lxid from PGPROC */\n } 
VirtualTransactionId;\n \n@@ -79,7 +79,7 @@ typedef struct\n \t((vxid).backendId = InvalidBackendId, \\\n \t (vxid).localTransactionId = InvalidLocalTransactionId)\n #define GET_VXID_FROM_PGPROC(vxid, proc) \\\n-\t((vxid).backendId = (proc).backendId, \\\n+\t((vxid).backendId = (proc).pgprocno, \\\n \t (vxid).localTransactionId = (proc).lxid)\n \n /* MAX_LOCKMODES cannot be larger than the # of bits in LOCKMASK */\n@@ -445,7 +445,7 @@ typedef struct LockInstanceData\n \tLOCKTAG\t\tlocktag;\t\t/* tag for locked object */\n \tLOCKMASK\tholdMask;\t\t/* locks held by this PGPROC */\n \tLOCKMODE\twaitLockMode;\t/* lock awaited by this PGPROC, if any */\n-\tBackendId\tbackend;\t\t/* backend ID of this PGPROC */\n+\tBackendId\tbackend;\t\t/* pgprocno of this PGPROC */\n \tLocalTransactionId lxid;\t/* local transaction ID of this PGPROC */\n \tTimestampTz waitStart;\t\t/* time at which this PGPROC started waiting\n \t\t\t\t\t\t\t\t * for lock */\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex be67d8a861..2f3d9cfb17 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -17,6 +17,7 @@\n #include \"access/clog.h\"\n #include \"access/xlogdefs.h\"\n #include \"lib/ilist.h\"\n+#include \"storage/backendid.h\"\n #include \"storage/latch.h\"\n #include \"storage/lock.h\"\n #include \"storage/pg_sema.h\"\n@@ -60,6 +61,7 @@ struct XidCache\n #define\t\tPROC_VACUUM_FOR_WRAPAROUND\t0x08\t/* set by autovac only */\n #define\t\tPROC_IN_LOGICAL_DECODING\t0x10\t/* currently doing logical\n \t\t\t\t\t\t\t\t\t\t\t\t * decoding outside xact */\n+#define\t\tPROC_IS_ACTIVE\t\t\t\t0x20\t/* This process is active */\n \n /* flags reset at EOXact */\n #define\t\tPROC_VACUUM_STATE_MASK \\\n@@ -73,11 +75,7 @@ struct XidCache\n */\n #define\t\tFP_LOCK_SLOTS_PER_BACKEND 16\n \n-/*\n- * An invalid pgprocno. Must be larger than the maximum number of PGPROC\n- * structures we could possibly have. 
See comments for MAX_BACKENDS.\n- */\n-#define INVALID_PGPROCNO\t\tPG_INT32_MAX\n+extern PGDLLIMPORT BackendId MyBackendId;\n \n typedef enum\n {\n@@ -150,7 +148,6 @@ struct PGPROC\n \tint\t\t\tpgprocno;\n \n \t/* These fields are zero while a backend is still starting up: */\n-\tBackendId\tbackendId;\t\t/* This backend's backend ID (if assigned) */\n \tOid\t\t\tdatabaseId;\t\t/* OID of database this backend is using */\n \tOid\t\t\troleId;\t\t\t/* OID of role using this backend */\n \n@@ -418,4 +415,6 @@ extern PGPROC *AuxiliaryPidGetProc(int pid);\n extern void BecomeLockGroupLeader(void);\n extern bool BecomeLockGroupMember(PGPROC *leader, int pid);\n \n+extern PGPROC *GetProcIfAlive(BackendId backend);\n+\n #endif\t\t\t\t\t\t\t/* _PROC_H_ */\ndiff --git a/src/include/storage/procsignal.h b/src/include/storage/procsignal.h\nindex eec186be2e..34a7fff271 100644\n--- a/src/include/storage/procsignal.h\n+++ b/src/include/storage/procsignal.h\n@@ -63,9 +63,9 @@ typedef enum\n extern Size ProcSignalShmemSize(void);\n extern void ProcSignalShmemInit(void);\n \n-extern void ProcSignalInit(int pss_idx);\n+extern void ProcSignalInit(void);\n extern int\tSendProcSignal(pid_t pid, ProcSignalReason reason,\n-\t\t\t\t\t\t BackendId backendId);\n+\t\t\t\t\t\t int\tpgprocno);\n \n extern uint64 EmitProcSignalBarrier(ProcSignalBarrierType type);\n extern void WaitForProcSignalBarrier(uint64 generation);\ndiff --git a/src/include/storage/smgr.h b/src/include/storage/smgr.h\nindex a6fbf7b6a6..7c1063f9f9 100644\n--- a/src/include/storage/smgr.h\n+++ b/src/include/storage/smgr.h\n@@ -78,7 +78,7 @@ typedef SMgrRelationData *SMgrRelation;\n \tRelFileNodeBackendIsTemp((smgr)->smgr_rnode)\n \n extern void smgrinit(void);\n-extern SMgrRelation smgropen(RelFileNode rnode, BackendId backend);\n+extern SMgrRelation smgropen(RelFileNode rnode, int backend);\n extern bool smgrexists(SMgrRelation reln, ForkNumber forknum);\n extern void smgrsetowner(SMgrRelation *owner, SMgrRelation reln);\n extern void 
smgrclearowner(SMgrRelation *owner, SMgrRelation reln);\ndiff --git a/src/include/utils/rel.h b/src/include/utils/rel.h\nindex b4faa1c123..1ef0571201 100644\n--- a/src/include/utils/rel.h\n+++ b/src/include/utils/rel.h\n@@ -56,7 +56,7 @@ typedef struct RelationData\n \tRelFileNode rd_node;\t\t/* relation physical identifier */\n \tSMgrRelation rd_smgr;\t\t/* cached file handle, or NULL */\n \tint\t\t\trd_refcnt;\t\t/* reference count */\n-\tBackendId\trd_backend;\t\t/* owning backend id, if temporary relation */\n+\tint\t\t\trd_backend;\t\t/* owning backend id, if temporary relation */\n \tbool\t\trd_islocaltemp; /* rel is a temp rel of this session */\n \tbool\t\trd_isnailed;\t/* rel is nailed in cache */\n \tbool\t\trd_isvalid;\t\t/* relcache entry is valid */\n-- \n2.27.0\n\n\n From 905ec70637191636e1e997e9a42e867a3521f768 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 15 Oct 2021 13:15:13 +0900\nSubject: [PATCH v2 3/5] Remove O(n) behavior of SIInsertDataEntries\n\nSIInsertDataEntries ran a loop over the active range of the procState\narray at every insertion. Remove that behavior so that insertions do\nnot slow down when the procState array becomes sparse. This commit\nremoves hasMessages from ProcState. 
Instead the\nfunction uses maxMsgNum to check for new messages.\n---\n src/backend/storage/ipc/sinvaladt.c | 63 +++++++++--------------------\n 1 file changed, 20 insertions(+), 43 deletions(-)\n\ndiff --git a/src/backend/storage/ipc/sinvaladt.c b/src/backend/storage/ipc/sinvaladt.c\nindex a90e9920e7..ee3a3accfd 100644\n--- a/src/backend/storage/ipc/sinvaladt.c\n+++ b/src/backend/storage/ipc/sinvaladt.c\n@@ -143,7 +143,6 @@ typedef struct ProcState\n \tint\t\t\tnextMsgNum;\t\t/* next message number to read */\n \tbool\t\tresetState;\t\t/* backend needs to reset its state */\n \tbool\t\tsignaled;\t\t/* backend has been sent catchup signal */\n-\tbool\t\thasMessages;\t/* backend has unread messages */\n \n \t/*\n \t * Backend only sends invalidations, never receives them. This only makes\n@@ -244,7 +243,6 @@ CreateSharedInvalidationState(void)\n \t\tshmInvalBuffer->procState[i].nextMsgNum = 0;\t/* meaningless */\n \t\tshmInvalBuffer->procState[i].resetState = false;\n \t\tshmInvalBuffer->procState[i].signaled = false;\n-\t\tshmInvalBuffer->procState[i].hasMessages = false;\n \t\tshmInvalBuffer->procState[i].nextLXID = InvalidLocalTransactionId;\n \t}\n }\n@@ -309,7 +307,6 @@ SharedInvalBackendInit(bool sendOnly)\n \tstateP->nextMsgNum = segP->maxMsgNum;\n \tstateP->resetState = false;\n \tstateP->signaled = false;\n-\tstateP->hasMessages = false;\n \tstateP->sendOnly = sendOnly;\n \n \tLWLockRelease(SInvalWriteLock);\n@@ -417,7 +414,6 @@ SIInsertDataEntries(const SharedInvalidationMessage *data, int n)\n \t\tint\t\t\tnthistime = Min(n, WRITE_QUANTUM);\n \t\tint\t\t\tnumMsgs;\n \t\tint\t\t\tmax;\n-\t\tint\t\t\ti;\n \n \t\tn -= nthistime;\n \n@@ -450,24 +446,13 @@ SIInsertDataEntries(const SharedInvalidationMessage *data, int n)\n \t\t\tmax++;\n \t\t}\n \n-\t\t/* Update current value of maxMsgNum using spinlock */\n-\t\tSpinLockAcquire(&segP->msgnumLock);\n+\t\t/*\n+\t\t * Update current value of maxMsgNum without taking locks. 
Make sure\n+\t\t * the inserted messages habve been flushed to main memory before the\n+\t\t * update of the shared variable,\n+\t\t */\n+\t\tpg_memory_barrier();\n \t\tsegP->maxMsgNum = max;\n-\t\tSpinLockRelease(&segP->msgnumLock);\n-\n-\t\t/*\n-\t\t * Now that the maxMsgNum change is globally visible, we give everyone\n-\t\t * a swift kick to make sure they read the newly added messages.\n-\t\t * Releasing SInvalWriteLock will enforce a full memory barrier, so\n-\t\t * these (unlocked) changes will be committed to memory before we exit\n-\t\t * the function.\n-\t\t */\n-\t\tfor (i = 0; i < segP->lastBackend; i++)\n-\t\t{\n-\t\t\tProcState *stateP = &segP->procState[i];\n-\n-\t\t\tstateP->hasMessages = true;\n-\t\t}\n \n \t\tLWLockRelease(SInvalWriteLock);\n \t}\n@@ -523,27 +508,24 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)\n \t * invalidations, any such occurrence is not much different than if the\n \t * invalidation had arrived slightly later in the first place.\n \t */\n-\tif (!stateP->hasMessages)\n+\tmax = segP->maxMsgNum;\n+\n+\tif (stateP->nextMsgNum >= max)\n \t\treturn 0;\n \n \tLWLockAcquire(SInvalReadLock, LW_SHARED);\n \n-\t/*\n-\t * We must reset hasMessages before determining how many messages we're\n-\t * going to read. That way, if new messages arrive after we have\n-\t * determined how many we're reading, the flag will get reset and we'll\n-\t * notice those messages part-way through.\n-\t *\n-\t * Note that, if we don't end up reading all of the messages, we had\n-\t * better be certain to reset this flag before exiting!\n-\t */\n-\tstateP->hasMessages = false;\n-\n-\t/* Fetch current value of maxMsgNum using spinlock */\n-\tSpinLockAcquire(&segP->msgnumLock);\n-\tmax = segP->maxMsgNum;\n-\tSpinLockRelease(&segP->msgnumLock);\n-\n+\tif (stateP->nextMsgNum < max)\n+\t{\n+\t\t/*\n+\t\t * nextMsgNum has been rewinded before we acquired the lock. 
Recheck\n+\t\t * with maxMsgNum again.\n+\t\t */\n+\t\tmax = segP->maxMsgNum;\n+\t\tif (stateP->nextMsgNum >= max)\n+\t\t\treturn 0;\n+\t}\n+\t\t\n \tif (stateP->resetState)\n \t{\n \t\t/*\n@@ -576,14 +558,9 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)\n \t/*\n \t * If we have caught up completely, reset our \"signaled\" flag so that\n \t * we'll get another signal if we fall behind again.\n-\t *\n-\t * If we haven't caught up completely, reset the hasMessages flag so that\n-\t * we see the remaining messages next time.\n \t */\n \tif (stateP->nextMsgNum >= max)\n \t\tstateP->signaled = false;\n-\telse\n-\t\tstateP->hasMessages = true;\n \n \tLWLockRelease(SInvalReadLock);\n \treturn n;\n-- \n2.27.0\n\n\n From 6338a5397e9103609ba9e842e2390ffab4b9aa5c Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 15 Oct 2021 14:09:30 +0900\nSubject: [PATCH v2 4/5] Get rid of two O(N) behaviors in procarray.\n\nGetVirtualXIDsDelayingChkpt had O(N^2) behavior and\nSignalVirtualTransaction had O(N) behavior. 
Since the old BackendId\nhas gone they can be reduced to O(N) and O(1) respectively.\n---\n src/backend/storage/ipc/procarray.c | 74 ++++++++++-------------------\n 1 file changed, 26 insertions(+), 48 deletions(-)\n\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex 100c0dae8c..70ce58b3fc 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -3079,41 +3079,24 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n *\n * This is used with the results of GetVirtualXIDsDelayingChkpt to see if any\n * of the specified VXIDs are still in critical sections of code.\n- *\n- * Note: this is O(N^2) in the number of vxacts that are/were delaying, but\n- * those numbers should be small enough for it not to be a problem.\n */\n bool\n HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n {\n \tbool\t\tresult = false;\n-\tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n \tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n-\tfor (index = 0; index < arrayP->numProcs; index++)\n+\tfor (index = 0; index < nvxids; index++)\n \t{\n-\t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n-\t\tPGPROC\t *proc = &allProcs[pgprocno];\n-\t\tVirtualTransactionId vxid;\n+\t\tVirtualTransactionId *vxid = &vxids[index];\n+\t\tPGPROC\t\t\t\t *proc = &allProcs[vxid->backendId];\n \n-\t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n-\n-\t\tif (proc->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif (proc->delayChkpt && vxid->localTransactionId == proc->lxid)\n \t\t{\n-\t\t\tint\t\t\ti;\n-\n-\t\t\tfor (i = 0; i < nvxids; i++)\n-\t\t\t{\n-\t\t\t\tif (VirtualTransactionIdEquals(vxid, vxids[i]))\n-\t\t\t\t{\n-\t\t\t\t\tresult = true;\n-\t\t\t\t\tbreak;\n-\t\t\t\t}\n-\t\t\t}\n-\t\t\tif (result)\n-\t\t\t\tbreak;\n+\t\t\tresult = true;\n+\t\t\tbreak;\n \t\t}\n \t}\n \n@@ -3429,35 +3412,30 @@ pid_t\n SignalVirtualTransaction(VirtualTransactionId vxid, ProcSignalReason sigmode,\n \t\t\t\t\t\t bool 
conflictPending)\n {\n-\tProcArrayStruct *arrayP = procArray;\n-\tint\t\t\tindex;\n-\tpid_t\t\tpid = 0;\n+\tPGPROC\t *proc;\n+\tpid_t\t\tpid;\n+\tVirtualTransactionId procvxid PG_USED_FOR_ASSERTS_ONLY;\n \n \tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n-\tfor (index = 0; index < arrayP->numProcs; index++)\n+\tproc = &allProcs[vxid.backendId];\n+\n+#ifdef USE_ASSERT_CHECKING\n+\tGET_VXID_FROM_PGPROC(procvxid, *proc);\n+\tAssert (procvxid.backendId == vxid.backendId &&\n+\t\t\tprocvxid.localTransactionId == vxid.localTransactionId);\n+#endif\n+\n+\tpid = proc->pid;\n+\tproc->recoveryConflictPending = conflictPending;\n+\n+\tif (pid != 0)\n \t{\n-\t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n-\t\tPGPROC\t *proc = &allProcs[pgprocno];\n-\t\tVirtualTransactionId procvxid;\n-\n-\t\tGET_VXID_FROM_PGPROC(procvxid, *proc);\n-\n-\t\tif (procvxid.backendId == vxid.backendId &&\n-\t\t\tprocvxid.localTransactionId == vxid.localTransactionId)\n-\t\t{\n-\t\t\tproc->recoveryConflictPending = conflictPending;\n-\t\t\tpid = proc->pid;\n-\t\t\tif (pid != 0)\n-\t\t\t{\n-\t\t\t\t/*\n-\t\t\t\t * Kill the pid if it's still here. If not, that's what we\n-\t\t\t\t * wanted so ignore any errors.\n-\t\t\t\t */\n-\t\t\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n-\t\t\t}\n-\t\t\tbreak;\n-\t\t}\n+\t\t/*\n+\t\t * Kill the pid if it's still here. If not, that's what we\n+\t\t * wanted so ignore any errors.\n+\t\t */\n+\t\t(void) SendProcSignal(pid, sigmode, proc->pgprocno);\n \t}\n \n \tLWLockRelease(ProcArrayLock);\n-- \n2.27.0\n\n\n From db5bdd8e1a350cf77b77884ec445b8f0f18f135a Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 15 Oct 2021 14:23:52 +0900\nSubject: [PATCH v2 5/5] Get rid of the old BackendId at all\n\nSince we have got rid of the annoying O(N) behavior in\nSIInsertDataEntries, sinvaladt.c can say good-bye to the old packed\nBackend ID. 
Still CleanupInvalidationState and SICleanupQueue has O(N)\nbehavior but they are executed rarely or ends in a short time in the\nmost cases.\n---\n src/backend/storage/ipc/sinvaladt.c | 67 ++++++-----------------------\n 1 file changed, 14 insertions(+), 53 deletions(-)\n\ndiff --git a/src/backend/storage/ipc/sinvaladt.c b/src/backend/storage/ipc/sinvaladt.c\nindex ee3a3accfd..d56c77400f 100644\n--- a/src/backend/storage/ipc/sinvaladt.c\n+++ b/src/backend/storage/ipc/sinvaladt.c\n@@ -193,8 +193,6 @@ static LocalTransactionId nextLocalTransactionId;\n \n static void CleanupInvalidationState(int status, Datum arg);\n \n-static int siindex;\n-\n /*\n * SInvalShmemSize --- return shared-memory space needed\n */\n@@ -254,7 +252,6 @@ CreateSharedInvalidationState(void)\n void\n SharedInvalBackendInit(bool sendOnly)\n {\n-\tint\t\t\tindex;\n \tProcState *stateP = NULL;\n \tSISeg\t *segP = shmInvalBuffer;\n \n@@ -265,38 +262,15 @@ SharedInvalBackendInit(bool sendOnly)\n \t */\n \tLWLockAcquire(SInvalWriteLock, LW_EXCLUSIVE);\n \n-\t/* Look for a free entry in the procState array */\n-\tfor (index = 0; index < segP->lastBackend; index++)\n-\t{\n-\t\tif (segP->procState[index].procPid == 0)\t/* inactive slot? 
*/\n-\t\t{\n-\t\t\tstateP = &segP->procState[index];\n-\t\t\tbreak;\n-\t\t}\n-\t}\n+\tAssert (MyBackendId < MaxBackends);\n+\tstateP = &segP->procState[MyBackendId];\n \n-\tif (stateP == NULL)\n-\t{\n-\t\tif (segP->lastBackend < segP->maxBackends)\n-\t\t{\n-\t\t\tstateP = &segP->procState[segP->lastBackend];\n-\t\t\tAssert(stateP->procPid == 0);\n-\t\t\tsegP->lastBackend++;\n-\t\t}\n-\t\telse\n-\t\t{\n-\t\t\t/*\n-\t\t\t * out of procState slots: MaxBackends exceeded -- report normally\n-\t\t\t */\n-\t\t\tsiindex = -1;\n-\t\t\tLWLockRelease(SInvalWriteLock);\n-\t\t\tereport(FATAL,\n-\t\t\t\t\t(errcode(ERRCODE_TOO_MANY_CONNECTIONS),\n-\t\t\t\t\t errmsg(\"sorry, too many clients already\")));\n-\t\t}\n-\t}\n+\t/* this entry should be free */\n+\tAssert (stateP->procPid == 0);\n \n-\tsiindex = (stateP - &segP->procState[0]);\n+\t/* adjust lastBackend if needed */\n+\tif (segP->lastBackend < MyBackendId)\n+\t\tsegP->lastBackend = MyBackendId;\n \n \t/* Fetch next local transaction ID into local memory */\n \tnextLocalTransactionId = stateP->nextLXID;\n@@ -313,8 +287,6 @@ SharedInvalBackendInit(bool sendOnly)\n \n \t/* register exit routine to mark my entry inactive at exit */\n \ton_shmem_exit(CleanupInvalidationState, PointerGetDatum(segP));\n-\n-\telog(DEBUG4, \"my SI slot index is %d\", siindex);\n }\n \n /*\n@@ -336,7 +308,7 @@ CleanupInvalidationState(int status, Datum arg)\n \n \tLWLockAcquire(SInvalWriteLock, LW_EXCLUSIVE);\n \n-\tstateP = &segP->procState[siindex];\n+\tstateP = &segP->procState[MyBackendId];\n \n \t/* Update next local transaction ID for next holder of this backendID */\n \tstateP->nextLXID = nextLocalTransactionId;\n@@ -349,7 +321,7 @@ CleanupInvalidationState(int status, Datum arg)\n \tstateP->signaled = false;\n \n \t/* Recompute index of last active backend */\n-\tfor (i = segP->lastBackend; i > 0; i--)\n+\tfor (i = segP->lastBackend; i >= 0; i--)\n \t{\n \t\tif (segP->procState[i - 1].procPid != 0)\n \t\t\tbreak;\n@@ -368,27 +340,16 @@ 
CleanupInvalidationState(int status, Datum arg)\n void\n BackendIdGetTransactionIds(int backendID, TransactionId *xid, TransactionId *xmin)\n {\n-\tSISeg\t *segP = shmInvalBuffer;\n-\n \t*xid = InvalidTransactionId;\n \t*xmin = InvalidTransactionId;\n \n-\t/* Need to lock out additions/removals of backends */\n-\tLWLockAcquire(SInvalWriteLock, LW_SHARED);\n-\n-\tif (backendID > 0 && backendID <= segP->lastBackend)\n+\tif (backendID >= 0 && backendID < MaxBackends)\n \t{\n-\t\tProcState *stateP = &segP->procState[backendID - 1];\n-\t\tPGPROC\t *proc = stateP->proc;\n+\t\tPGPROC\t *proc = &ProcGlobal->allProcs[backendID];\n \n-\t\tif (proc != NULL)\n-\t\t{\n-\t\t\t*xid = proc->xid;\n-\t\t\t*xmin = proc->xmin;\n-\t\t}\n+\t\t*xid = proc->xid;\n+\t\t*xmin = proc->xmin;\n \t}\n-\n-\tLWLockRelease(SInvalWriteLock);\n }\n \n /*\n@@ -495,7 +456,7 @@ SIGetDataEntries(SharedInvalidationMessage *data, int datasize)\n \tint\t\t\tn;\n \n \tsegP = shmInvalBuffer;\n-\tstateP = &segP->procState[siindex];\n+\tstateP = &segP->procState[MyBackendId];\n \n \t/*\n \t * Before starting to take locks, do a quick, unlocked test to see whether\n-- \n2.27.0", "msg_date": "Fri, 15 Oct 2021 15:00:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "(This branch may should leave from this thread..)\n\nAt Fri, 15 Oct 2021 15:00:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 14 Oct 2021 10:53:06 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > This'd get rid of the need of density *and* make SIInsertDataEntries()\n> > cheaper.\n> \n> Yes. So.. I tried that. 
The only part where memory-flush timing is\n> crucial seems to be between writing messages and setting maxMsgNum.\n> By placing memory barrier between them it seems *to me* we can read\n> maxMsgNum safely without locks.\n\nMaybe we need another memory barrier here and the patch was broken\nabout the rechecking on the members in SIGetDataEntries..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Oct 2021 16:24:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" }, { "msg_contents": "On Fri, Oct 15, 2021 at 11:31 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 14 Oct 2021 10:53:06 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > Hi,\n> >\n> > On 2021-10-14 17:28:34 +0900, Kyotaro Horiguchi wrote:\n> > > At Wed, 13 Oct 2021 19:52:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > Although needing a bit of care for the difference of invalid values\n> > > > for both though, BackendId can be easily replaced with pgprocno almost\n> > > > mechanically except sinvaladt. Therefore, we can confine the current\n> > > > backend ID within sinvaladt isolating from other part. The ids\n> > > > dedicated for sinvaladt can be packed to small range and perfomance\n> > > > won't be damaged.\n> >\n> > FWIW, I don't actually think there's necessarily that strong a need for\n> > density in sinvaladt. With a few relatively changes we can get rid of the O(n)\n> > work in the most crucial paths.\n>\n> Right. So I left it for the \"future:p\n>\n> > In https://www.postgresql.org/message-id/20210802171255.k4yv5cfqaqbuuy6f%40alap3.anarazel.de\n> > I wrote:\n> > > Another approach to deal with this could be to simply not do the O(n) work in\n> > > SIInsertDataEntries(). 
It's not obvious that ->hasMessages is actually\n> > > necessary - we could atomically read maxMsgNum without acquiring a lock\n> > > instead of needing the per-backend ->hasMessages. I don't the density would\n> > > be a relevant factor in SICleanupQueue().\n> >\n> > This'd get rid of the need of density *and* make SIInsertDataEntries()\n> > cheaper.\n>\n> Yes. So.. I tried that. The only part where memory-flush timing is\n> crucial seems to be between writing messages and setting maxMsgNum.\n> By placing memory barrier between them it seems *to me* we can read\n> maxMsgNum safely without locks.\n>\n> I reread that thread and found we can get rid of O(N) behavior from\n> two places, SIgnalVirtualTransaction and GetVirtualXIDsDelayingChkpt.\n>\n> Finally, I got rid of the siindex (the old BackendId) from sinvaladt.c\n> at all. Still CleanupInvalidationState and SICleanupQueue has O(N)\n> behavior but they are executed rarely or ends in a short time in the\n> most cases.\n>\n>\n> 0001: Reverses the proc freelist so that the backendid is assigned in\n> the sane order.\n>\n> 0002: Replaces the current BackendId - that is generated by\n> sinvaladt.c intending to pack the ids to a narrow range - with\n> pgprocno in most of the tree. The old BackendID is now used only in\n> sinvaladt.c\n>\n> 0003: Removes O(N) behavior from SIInsertDataEntries. I'm not sure it\n> is correctly revised, though..\n>\n> 0004: Gets rid of O(N), or reduce O(N^2) to O(N) of\n> HaveVirtualXIDsDelayingChkpt and SignalVirtualTransaction.\n>\n> 0005: Gets rid of the old BackendID at all from sinvaladt.c.\n\nHi,\n\nI'm not sure if the above approach and the patches are okay that I or\nsomeone can start reviewing them. 
Does anyone have comments on this\nplease?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 16:16:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in startup process's MyBackendId and procsignal\n array registration with ProcSignalInit()" } ]
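The barrier-based publication of maxMsgNum debated in the thread above can be modeled in a few lines: write the payload slots first, then advance the published bound, so that readers may poll the bound and consume everything beneath it without a spinlock. The sketch below is illustrative only and is not PostgreSQL code: the MiniSIQueue type and both functions are invented for the example, C11 release/acquire atomics stand in for pg_memory_barrier(), and a single writer with non-overflowing inserts is assumed.

```c
#include <stdatomic.h>
#include <string.h>

#define MINI_QUEUE_SIZE 64

/*
 * Toy single-writer invalidation queue: payload slots plus a published
 * bound.  The writer fills slots below the bound-to-be and only then
 * advances the bound with release semantics; a reader loads the bound
 * with acquire semantics and may then read every slot beneath it
 * without holding any lock.
 */
typedef struct
{
	int			buffer[MINI_QUEUE_SIZE];
	_Atomic int maxMsgNum;		/* first unused slot; published last */
} MiniSIQueue;

static void
mini_insert(MiniSIQueue *q, const int *msgs, int n)
{
	int			max = atomic_load_explicit(&q->maxMsgNum,
										   memory_order_relaxed);

	memcpy(&q->buffer[max], msgs, n * sizeof(int));

	/*
	 * The release store plays the role of the pg_memory_barrier() in the
	 * 0003 patch: slot contents must reach memory before the new bound.
	 */
	atomic_store_explicit(&q->maxMsgNum, max + n, memory_order_release);
}

static int
mini_read(MiniSIQueue *q, int *nextMsgNum, int *out, int cap)
{
	int			max = atomic_load_explicit(&q->maxMsgNum,
										   memory_order_acquire);
	int			n = 0;

	while (*nextMsgNum < max && n < cap)
		out[n++] = q->buffer[(*nextMsgNum)++];
	return n;
}
```

The cheap unlocked `nextMsgNum >= max` fast path in the patched SIGetDataEntries() corresponds to mini_read() returning zero here without ever touching a lock.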
[ { "msg_contents": "Hi,\n\nCurrently pg_log_backend_memory_contexts() doesn't log the memory\ncontexts of auxiliary processes such as bgwriter, checkpointer, wal\nwriter, archiver, startup process and wal receiver. It will be useful\nto look at the memory contexts of these processes too, for debugging\npurposes and better understanding of the memory usage pattern of these\nprocesses. Inside the code, we could use the AuxiliaryPidGetProc() to\nget the PGPROC of these processes. Note that neither\nAuxiliaryPidGetProc() nor BackendPidGetProc() can return PGPROC\nentries for the syslogger and stats collector processes (as those\nprocesses don't have PGPROC entries at all).\n\nOpen points:\n1) I'm not sure if it's a good idea to log postmaster memory usage\ntoo. Thoughts?\n2) Since with this change pg_log_backend_memory_contexts() will work\nfor auxiliary processes too, do we need to change the function name\nfrom pg_log_backend_memory_contexts() to\npg_log_backend_memory_contexts()/pg_log_memory_contexts()/some other\nname? Or is it a good idea to have a separate function for auxiliary\nprocesses alone, pg_log_auxilliary_process_memory_contexts()?\nThoughts?\n\nI will attach the patch, if possible with test cases, once we agree on\nthe above open points.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 9 Oct 2021 18:53:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "enhance pg_log_backend_memory_contexts() to log memory contexts of\n auxiliary processes" }, { "msg_contents": "Thanks for working on this!\n\nOn 2021-10-09 22:23, Bharath Rupireddy wrote:\n> Hi,\n> \n> Currently pg_log_backend_memory_contexts() doesn't log the memory\n> contexts of auxiliary processes such as bgwriter, checkpointer, wal\n> writer, archiver, startup process and wal receiver. 
It will be useful\n> to look at the memory contexts of these processes too, for debugging\n> purposes and better understanding of the memory usage pattern of these\n> processes.\n\nAs the discussion below, we thought logging memory contexts of other \nthan client backends is possible but were not sure how useful it is.\nAfter all, we have ended up restricting the target process to client \nbackends for now.\n\n \nhttps://www.postgresql.org/message-id/0b0657d5febd0e46565a6bc9c62ba3f6%40oss.nttdata.com\n\nIf we can use debuggers, it's possible to know the memory contexts e.g. \nusing MemoryContextStats().\nSo IMHO if it's necessary to know memory contexts without attaching gdb \nfor other than client backends(probably this means using under \nproduction environment), this enhancement would be pay.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 11 Oct 2021 11:51:18 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Mon, Oct 11, 2021 at 8:21 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Thanks for working on this!\n>\n> On 2021-10-09 22:23, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > Currently pg_log_backend_memory_contexts() doesn't log the memory\n> > contexts of auxiliary processes such as bgwriter, checkpointer, wal\n> > writer, archiver, startup process and wal receiver. 
It will be useful\n> > to look at the memory contexts of these processes too, for debugging\n> > purposes and better understanding of the memory usage pattern of these\n> > processes.\n>\n> As the discussion below, we thought logging memory contexts of other\n> than client backends is possible but were not sure how useful it is.\n> After all, we have ended up restricting the target process to client\n> backends for now.\n>\n>\n> https://www.postgresql.org/message-id/0b0657d5febd0e46565a6bc9c62ba3f6%40oss.nttdata.com\n>\n> If we can use debuggers, it's possible to know the memory contexts e.g.\n> using MemoryContextStats().\n> So IMHO if it's necessary to know memory contexts without attaching gdb\n> for other than client backends(probably this means using under\n> production environment), this enhancement would be pay.\n\nThanks for providing your thoughts. Knowing memory usage of auxiliary\nprocesses is as important as backends (user session processes) without\nattaching debugger in production environments.\n\nThere are some open points as mentioned in my first mail in this\nthread, I will start working on this patch once we agree on them.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Oct 2021 09:55:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Mon, Oct 11, 2021 at 9:55 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Oct 11, 2021 at 8:21 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> >\n> > Thanks for working on this!\n> >\n> > On 2021-10-09 22:23, Bharath Rupireddy wrote:\n> > > Hi,\n> > >\n> > > Currently pg_log_backend_memory_contexts() doesn't log the memory\n> > > contexts of auxiliary processes such as bgwriter, checkpointer, wal\n> > > writer, archiver, startup process and wal receiver. 
It will be useful\n> > > to look at the memory contexts of these processes too, for debugging\n> > > purposes and better understanding of the memory usage pattern of these\n> > > processes.\n> >\n> > As the discussion below, we thought logging memory contexts of other\n> > than client backends is possible but were not sure how useful it is.\n> > After all, we have ended up restricting the target process to client\n> > backends for now.\n> >\n> >\n> > https://www.postgresql.org/message-id/0b0657d5febd0e46565a6bc9c62ba3f6%40oss.nttdata.com\n> >\n> > If we can use debuggers, it's possible to know the memory contexts e.g.\n> > using MemoryContextStats().\n> > So IMHO if it's necessary to know memory contexts without attaching gdb\n> > for other than client backends(probably this means using under\n> > production environment), this enhancement would be pay.\n>\n> Thanks for providing your thoughts. Knowing memory usage of auxiliary\n> processes is as important as backends (user session processes) without\n> attaching debugger in production environments.\n>\n> There are some open points as mentioned in my first mail in this\n> thread, I will start working on this patch once we agree on them.\n\nI'm attaching the v1 patch that enables\npg_log_backend_memory_contexts() to log memory contexts of auxiliary\nprocesses. 
Please review it.\n\nHere's the CF entry - https://commitfest.postgresql.org/35/3385/\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 29 Oct 2021 22:25:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "At Fri, 29 Oct 2021 22:25:04 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Oct 11, 2021 at 9:55 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Oct 11, 2021 at 8:21 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > > If we can use debuggers, it's possible to know the memory contexts e.g.\n> > > using MemoryContextStats().\n> > > So IMHO if it's necessary to know memory contexts without attaching gdb\n> > > for other than client backends(probably this means using under\n> > > production environment), this enhancement would be pay.\n> >\n> > Thanks for providing your thoughts. Knowing memory usage of auxiliary\n> > processes is as important as backends (user session processes) without\n> > attaching debugger in production environments.\n> >\n> > There are some open points as mentioned in my first mail in this\n> > thread, I will start working on this patch once we agree on them.\n> \n> I'm attaching the v1 patch that enables\n> pg_log_backend_memory_contexts() to log memory contexts of auxiliary\n> processes. Please review it.\n> \n> Here's the CF entry - https://commitfest.postgresql.org/35/3385/\n\nAfter the patch is applied, the function looks like this\n\n proc = BackendPidGetProc(pid);\n if (proc == NULL)\n <try aux processes>\n\t<set is_aux_proc>\n if (proc == NULL)\n <error>\n if (!is_aux_proc)\n <set local backend id>\n SendProcSignal(.., the backend id);\n\nis_aux_proc looks like it makes the code complex. 
I think we can remove\nit.\n\n\n+\t/* Only regular backends will have valid backend id, auxiliary processes don't. */\n+\tif (!is_aux_proc)\n+\t\tbackendId = proc->backendId;\n\nI think the reason we need to do this is not that aux processes have\nthe invalid backend id (=InvalidBackendId) but that \"some\" auxiliary\nprocesses may have a broken proc->backendId in regard to\nSendProcSignal (we know that's the startup for now.).\n\n\n+SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('autovacuum launcher'+SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('logical replication launcher'));\n...\n\nMaybe we can reduce (a quite bit of) run time of the test by\nloopingover the processes but since the test only checks if the\nfunction doesn't fail to send a signal, I'm not sure we need to\nperform the test for all of the processes here. On the other hand,\nthe test is missing the most significant target of the startup\nprocess.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 01 Nov 2021 10:12:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory\n contexts of auxiliary processes" }, { "msg_contents": "On Mon, Nov 1, 2021 at 6:42 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 29 Oct 2021 22:25:04 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Mon, Oct 11, 2021 at 9:55 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Mon, Oct 11, 2021 at 8:21 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > > > If we can use debuggers, it's possible to know the memory contexts e.g.\n> > > > using MemoryContextStats().\n> > > > So IMHO if it's necessary to know memory contexts without attaching gdb\n> > > > for other than client backends(probably this means using under\n> > > > production environment), this 
enhancement would be pay.\n> > >\n> > > Thanks for providing your thoughts. Knowing memory usage of auxiliary\n> > > processes is as important as backends (user session processes) without\n> > > attaching debugger in production environments.\n> > >\n> > > There are some open points as mentioned in my first mail in this\n> > > thread, I will start working on this patch once we agree on them.\n> >\n> > I'm attaching the v1 patch that enables\n> > pg_log_backend_memory_contexts() to log memory contexts of auxiliary\n> > processes. Please review it.\n> >\n> > Here's the CF entry - https://commitfest.postgresql.org/35/3385/\n>\n> After the patch applied the function looks like this\n>\n> proc = BackendPidGetProc(pid);\n> if (proc == NULL)\n> <try aux processes>\n> <set is_aux_proc>\n> if (proc == NULL)\n> <error>\n> if (!is_aux_proc)\n> <set local backend id>\n> SendProcSignal(.., the backend id);\n>\n> is_aux_proc lookslike making the code complex. I think we can remove\n> it.\n>\n>\n> + /* Only regular backends will have valid backend id, auxiliary processes don't. */\n> + if (!is_aux_proc)\n> + backendId = proc->backendId;\n>\n> I think the reason we need to do this is not that aux processes have\n> the invalid backend id (=InvalidBackendId) but that \"some\" auxiliary\n> processes may have a broken proc->backendId in regard to\n> SendProcSignal (we know that's the startup for now.).\n\nI wanted to not have any problems signalling the startup process with\nthe current code. Yes, the startup process is the only auxiliary\nprocess that has a valid backind id and we have other threads fixing\nit. Let's keep the way it is in the v1 patch. 
Based on whichever patch\ngets in we can modify the code.\n\n> +SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('autovacuum launcher'+SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('logical replication launcher'));\n> ...\n>\n> Maybe we can reduce (a quite bit of) run time of the test by\n> loopingover the processes but since the test only checks if the\n> function doesn't fail to send a signal, I'm not sure we need to\n> perform the test for all of the processes here.\n\nOkay, let me choose the checkpointer for this test, I will remove other tests.\n\n> On the other hand,\n> the test is missing the most significant target of the startup\n> process.\n\nIf we were to have tests for the startup process, then it needs to be\nin TAP tests as we have to start a hot standby where the startup\nprocess will be in continuous mode. Is there any other way that we can\nadd the test case in a .sql file? Do we need to get into this much\ncomplexity for the test case?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 4 Nov 2021 09:35:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Thu, Nov 4, 2021 at 9:35 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I think the reason we need to do this is not that aux processes have\n> > the invalid backend id (=InvalidBackendId) but that \"some\" auxiliary\n> > processes may have a broken proc->backendId in regard to\n> > SendProcSignal (we know that's the startup for now.).\n>\n> I wanted to not have any problems signalling the startup process with\n> the current code. Yes, the startup process is the only auxiliary\n> process that has a valid backind id and we have other threads fixing\n> it. Let's keep the way it is in the v1 patch. 
Based on whichever patch\n> gets in we can modify the code.\n\nI added a note there (with XXX) describing the fact that we explicitly\nneed to send invalid backend id to SendProcSignal.\n\n> > +SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('autovacuum launcher'+SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('logical replication launcher'));\n> > ...\n> >\n> > Maybe we can reduce (a quite bit of) run time of the test by\n> > loopingover the processes but since the test only checks if the\n> > function doesn't fail to send a signal, I'm not sure we need to\n> > perform the test for all of the processes here.\n>\n> Okay, let me choose the checkpointer for this test, I will remove other tests.\n\nI retained the test case just for the checkpointer.\n\n> > On the other hand,\n> > the test is missing the most significant target of the startup\n> > process.\n>\n> If we were to have tests for the startup process, then it needs to be\n> in TAP tests as we have to start a hot standby where the startup\n> process will be in continuous mode. Is there any other way that we can\n> add the test case in a .sql file? Do we need to get into this much\n> complexity for the test case?\n\nI've not added a TAP test case for the startup process, I see it as\nunnecessary. 
I've tested the startup process case manually here which\njust works.\n\nPSA v2 patch and review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 5 Nov 2021 11:12:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Fri, Nov 5, 2021 at 11:12 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> PSA v2 patch and review it.\n\nI've modified the docs part a bit, please consider v3 for review.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 15 Nov 2021 07:47:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Mon, Nov 15, 2021 at 7:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Nov 5, 2021 at 11:12 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > PSA v2 patch and review it.\n>\n> I've modified the docs part a bit, please consider v3 for review.\n\nThanks for the update patch, Few comments:\n1) Should we change \"CHECK_FOR_INTERRUPTS()\" to\n\"CHECK_FOR_INTERRUPTS() or process specific interrupt handlers\"\n/*\n* pg_log_backend_memory_contexts\n* Signal a backend process to log its memory contexts.\n*\n* By default, only superusers are allowed to signal to log the memory\n* contexts because allowing any users to issue this request at an unbounded\n* rate would cause lots of log messages and which can lead to denial of\n* service. 
Additional roles can be permitted with GRANT.\n*\n* On receipt of this signal, a backend sets the flag in the signal\n* handler, which causes the next CHECK_FOR_INTERRUPTS() to log the\n* memory contexts.\n*/\nDatum\npg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n\n2) Should we mention Postmaster process also along with logger and\nstatistics collector process\n+ <glossterm linkend=\"glossary-backend\">backend</glossterm> or the\n+ <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or the\n+ <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\nprocess</glossterm>\n+ with the specified process ID. All of the\n+ <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\nprocesses</glossterm>\n+ are supported except the <glossterm\nlinkend=\"glossary-logger\">logger</glossterm>\n+ and the <glossterm\nlinkend=\"glossary-stats-collector\">statistics collector</glossterm>\n+ as they are not connected to shared memory the function can\nnot make requests.\n+ The backtrace will be logged at <literal>LOG</literal> message level.\n+ They will appear in the server log based on the log configuration set\n+ (See <xref linkend=\"runtime-config-logging\"/> for more information),\n+ but will not be sent to the client regardless of\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 15 Nov 2021 22:03:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Mon, Nov 15, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 7:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Nov 5, 2021 at 11:12 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > PSA v2 patch and review it.\n> >\n> > I've modified the docs part a bit, please consider v3 for review.\n>\n> Thanks for the update patch, Few comments:\n> 1) Should 
we change \"CHECK_FOR_INTERRUPTS()\" to\n> \"CHECK_FOR_INTERRUPTS() or process specific interrupt handlers\"\n\nDone.\n\n> 2) Should we mention Postmaster process also along with logger and\n> statistics collector process\n> + <glossterm linkend=\"glossary-backend\">backend</glossterm> or the\n> + <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or the\n> + <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> process</glossterm>\n> + with the specified process ID. All of the\n> + <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> processes</glossterm>\n> + are supported except the <glossterm\n> linkend=\"glossary-logger\">logger</glossterm>\n> + and the <glossterm\n> linkend=\"glossary-stats-collector\">statistics collector</glossterm>\n> + as they are not connected to shared memory the function can\n> not make requests.\n> + The backtrace will be logged at <literal>LOG</literal> message level.\n> + They will appear in the server log based on the log configuration set\n> + (See <xref linkend=\"runtime-config-logging\"/> for more information),\n> + but will not be sent to the client regardless of\n\nDone.\n\nAttaching v4 patch, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 15 Nov 2021 22:27:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Mon, Nov 15, 2021 at 10:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 7:47 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, Nov 5, 2021 at 11:12 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > PSA v2 patch and review it.\n> > >\n> > > I've modified the docs 
part a bit, please consider v3 for review.\n> >\n> > Thanks for the update patch, Few comments:\n> > 1) Should we change \"CHECK_FOR_INTERRUPTS()\" to\n> > \"CHECK_FOR_INTERRUPTS() or process specific interrupt handlers\"\n>\n> Done.\n>\n> > 2) Should we mention Postmaster process also along with logger and\n> > statistics collector process\n> > + <glossterm linkend=\"glossary-backend\">backend</glossterm> or the\n> > + <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or the\n> > + <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> > process</glossterm>\n> > + with the specified process ID. All of the\n> > + <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> > processes</glossterm>\n> > + are supported except the <glossterm\n> > linkend=\"glossary-logger\">logger</glossterm>\n> > + and the <glossterm\n> > linkend=\"glossary-stats-collector\">statistics collector</glossterm>\n> > + as they are not connected to shared memory the function can\n> > not make requests.\n> > + The backtrace will be logged at <literal>LOG</literal> message level.\n> > + They will appear in the server log based on the log configuration set\n> > + (See <xref linkend=\"runtime-config-logging\"/> for more information),\n> > + but will not be sent to the client regardless of\n>\n> Done.\n>\n> Attaching v4 patch, please review it further.\n\nOne small comment:\n1) There should be a space in between \"<literal>LOG</literal>message level\"\n+ it can) for memory contexts. These memory contexts will be logged at\n+ <literal>LOG</literal>message level. 
They will appear in the server log\n+ based on the log configuration set (See <xref\nlinkend=\"runtime-config-logging\"/>\n+ for more information), but will not be sent to the client regardless of\n\nThe rest of the patch looks good to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 28 Nov 2021 12:22:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Sun, Nov 28, 2021 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Attaching v4 patch, please review it further.\n>\n> One small comment:\n> 1) There should be a space in between \"<literal>LOG</literal>message level\"\n> + it can) for memory contexts. These memory contexts will be logged at\n> + <literal>LOG</literal>message level. They will appear in the server log\n> + based on the log configuration set (See <xref\n> linkend=\"runtime-config-logging\"/>\n> + for more information), but will not be sent to the client regardless of\n\nDone.\n\n> The rest of the patch looks good to me.\n\nThanks for the review. Here's the v5 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 28 Nov 2021 12:25:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Sun, Nov 28, 2021 at 12:25 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Nov 28, 2021 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Attaching v4 patch, please review it further.\n> >\n> > One small comment:\n> > 1) There should be a space in between \"<literal>LOG</literal>message level\"\n> > + it can) for memory contexts. These memory contexts will be logged at\n> > + <literal>LOG</literal>message level. 
They will appear in the server log\n> > + based on the log configuration set (See <xref\n> > linkend=\"runtime-config-logging\"/>\n> > + for more information), but will not be sent to the client regardless of\n>\n> Done.\n>\n> > The rest of the patch looks good to me.\n>\n> Thanks for the review. Here's the v5 patch.\n\nThanks for the updated patch, one comment:\n1) The function can be indented similar to other functions in the same file:\n+CREATE FUNCTION memcxt_get_proc_pid(text)\n+RETURNS int\n+LANGUAGE SQL\n+AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n\nSomething like:\n+CREATE FUNCTION memcxt_get_proc_pid(text)\n+ RETURNS int\n+ LANGUAGE SQL\n+ AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 28 Nov 2021 17:21:35 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Sun, Nov 28, 2021 at 5:21 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the updated patch, one comment:\n> 1) The function can be indented similar to other functions in the same file:\n> +CREATE FUNCTION memcxt_get_proc_pid(text)\n> +RETURNS int\n> +LANGUAGE SQL\n> +AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n>\n> Something like:\n> +CREATE FUNCTION memcxt_get_proc_pid(text)\n> + RETURNS int\n> + LANGUAGE SQL\n> + AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n\nDone. 
PSA v6 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 28 Nov 2021 19:14:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Sun, Nov 28, 2021 at 7:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Nov 28, 2021 at 5:21 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the updated patch, one comment:\n> > 1) The function can be indented similar to other functions in the same file:\n> > +CREATE FUNCTION memcxt_get_proc_pid(text)\n> > +RETURNS int\n> > +LANGUAGE SQL\n> > +AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n> >\n> > Something like:\n> > +CREATE FUNCTION memcxt_get_proc_pid(text)\n> > + RETURNS int\n> > + LANGUAGE SQL\n> > + AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n>\n> Done. PSA v6 patch.\n\nThanks for the updated patch. The patch applies neatly, make\ncheck-world passes and the documentation looks good. I did not find\nany issues with the v6 patch, I'm marking the patch as Ready for\nCommitter.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Nov 2021 08:14:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "\n\nOn 2021/11/29 11:44, vignesh C wrote:\n> Thanks for the updated patch. The patch applies neatly, make\n> check-world passes and the documentation looks good. 
I did not find\n> any issues with the v6 patch, I'm marking the patch as Ready for\n> Committer.\n\nI started reading the patch.\n\n+CREATE FUNCTION memcxt_get_proc_pid(text)\n+ RETURNS int\n+ LANGUAGE SQL\n+ AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n+\n+SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('checkpointer'));\n+\n+DROP FUNCTION memcxt_get_proc_pid(text);\n\nWhy is memcxt_get_proc_pid() still necessary? ISTM that we can just replace the above with the following query, instead.\n\n SELECT pg_log_backend_memory_contexts(pid) FROM pg_stat_activity WHERE backend_type = 'checkpointer'\n\n- Requests to log the memory contexts of the backend with the\n- specified process ID. These memory contexts will be logged at\n- <literal>LOG</literal> message level. They will appear in\n- the server log based on the log configuration set\n- (See <xref linkend=\"runtime-config-logging\"/> for more information),\n- but will not be sent to the client regardless of\n+ Requests to log memory contexts of the <glossterm linkend=\"glossary-backend\">backend</glossterm>\n+ or the <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n+ the <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary process</glossterm>\n+ with the specified process ID. This function cannot request\n+ <glossterm linkend=\"glossary-postmaster\">postmaster process</glossterm> or\n+ <glossterm linkend=\"glossary-logger\">logger</glossterm> or\n+ <glossterm linkend=\"glossary-stats-collector\">statistics collector</glossterm>\n+ (all other <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary processes</glossterm>\n\nISTM that you're trying to list all possible processes that pg_log_backend_memory_contexts() can handle. But why didn't you list autovacuum worker (while other special backend, WAL sender, is picked up) and background worker like logical replication launcher? Because the term \"backend\" implicitly includes those processes? 
If so, why did you pick up WAL sender separately?\n\nI'm tempted to replace these descriptions as follows. Because the following looks simpler and easier to read and understand, to me.\n\n----------------------\nRequests to log the memory contexts of the process with the specified process ID. Possible processes that this function can send the request to are: backend, WAL sender, autovacuum worker, auxiliary processes except logger and stats collector, and background workers.\n----------------------\n\nor\n\n----------------------\nRequests to log the memory contexts of the backend with the specified process ID. This function can send the request to also auxiliary processes except logger and stats collector.\n----------------------\n\n+\t/* See if the process with given pid is an auxiliary process. */\n+\tif (proc == NULL)\n+\t{\n+\t\tproc = AuxiliaryPidGetProc(pid);\n+\t\tis_aux_proc = true;\n+\t}\n\nAs Horiguchi-san told upthread, IMO it's simpler not to use is_aux_proc flag. For example, you can replace this code with\n\n------------------------\nproc = BackendPidGetProc(pid);\n\nif (proc != NULL)\n backendId = proc->backendId;\nelse\n proc = AuxiliaryPidGetProc(pid);\n------------------------\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 8 Jan 2022 00:49:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "On Fri, Jan 7, 2022 at 9:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/11/29 11:44, vignesh C wrote:\n> > Thanks for the updated patch. The patch applies neatly, make\n> > check-world passes and the documentation looks good. 
I did not find\n> > any issues with the v6 patch, I'm marking the patch as Ready for\n> > Committer.\n>\n> I started reading the patch.\n\nThanks.\n\n> +CREATE FUNCTION memcxt_get_proc_pid(text)\n> + RETURNS int\n> + LANGUAGE SQL\n> + AS 'SELECT pid FROM pg_stat_activity WHERE backend_type = $1';\n> +\n> +SELECT pg_log_backend_memory_contexts(memcxt_get_proc_pid('checkpointer'));\n> +\n> +DROP FUNCTION memcxt_get_proc_pid(text);\n>\n> Why is memcxt_get_proc_pid() still necessary? ISTM that we can just replace the above with the following query, instead.\n>\n> SELECT pg_log_backend_memory_contexts(pid) FROM pg_stat_activity WHERE backend_type = 'checkpointer'\n\nChanged.\n\n> I'm tempted to replace these descriptions as follows. Because the following looks simpler and easier to read and understand, to me.\n> ----------------------\n> Requests to log the memory contexts of the backend with the specified process ID. This function can send the request to also auxiliary processes except logger and stats collector.\n> ----------------------\n\nChanged.\n\n> As Horiguchi-san told upthread, IMO it's simpler not to use is_aux_proc flag. For example, you can replace this code with\n>\n> ------------------------\n> proc = BackendPidGetProc(pid);\n>\n> if (proc != NULL)\n> backendId = proc->backendId;\n> else\n> proc = AuxiliaryPidGetProc(pid);\n> ------------------------\n\nChanged.\n\nPSA v7 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 7 Jan 2022 22:20:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" }, { "msg_contents": "\n\nOn 2022/01/08 1:50, Bharath Rupireddy wrote:\n> PSA v7 patch.\n\nThanks for updating the patch!\nI applied some cosmetic changes and pushed the patch. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 Jan 2022 23:28:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: enhance pg_log_backend_memory_contexts() to log memory contexts\n of auxiliary processes" } ]
[ { "msg_contents": "When starts_with() and the equivalent ^@ operator were added, they\nwere plugged into the planner in only a rather half-baked way.\nSelectivity estimation got taught about the operator, but the\nother infrastructure associated with LIKE/regex matching wasn't\nupdated. This causes these operators to be planned more stupidly\nthan a functionally-equivalent LIKE/regex pattern [1].\n\nWith the (admittedly later) introduction of planner support functions,\nit's really quite easy to do better. The attached patch adds a planner\nsupport function for starts_with(), with these benefits:\n\n* A condition such as \"textcol ^@ constant\" can now use a regular\nbtree index, not only an SP-GiST index, so long as the index's\ncollation is C. (This works just like \"textcol LIKE 'foo%'\".)\n\n* \"starts_with(textcol, constant)\" can be optimized the same as\n\"textcol ^@ constant\".\n\nI also rejiggered match_pattern_prefix() a bit, with the effect\nthat fixed-prefix LIKE and regex patterns are now more like\nstarts_with() in another way: if you apply one to an SPGiST-indexed\ncolumn, you'll get an index condition using ^@ rather than two\nindex conditions with >= and <. That should be more efficient\nat runtime, though I didn't try to do any performance testing.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CADT4RqB13KQHOJqqQ%2BWXmYtJrukS2UiFdtfTvT-XA3qYLyB6Cw%40mail.gmail.com", "msg_date": "Sat, 09 Oct 2021 13:23:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Add planner support function for starts_with()" }, { "msg_contents": "On 10/9/21, 10:24 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> With the (admittedly later) introduction of planner support functions,\r\n> it's really quite easy to do better. 
The attached patch adds a planner\r\n> support function for starts_with(), with these benefits:\r\n\r\nThe patch looks reasonable to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 17 Nov 2021 19:00:13 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add planner support function for starts_with()" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 10/9/21, 10:24 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> With the (admittedly later) introduction of planner support functions,\n>> it's really quite easy to do better. The attached patch adds a planner\n>> support function for starts_with(), with these benefits:\n\n> The patch looks reasonable to me.\n\nPushed, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 16:54:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Add planner support function for starts_with()" } ]
[ { "msg_contents": "Hi all,\n\nIn building off of prior art regarding the 'pg_read_all_data' and\n'pg_write_all_data' roles, I would like to propose an extension to roles\nthat would allow for database-specific role memberships (for the purpose of\ngranting database-specific privileges) as an additional layer of\nabstraction.\n\n= Problem =\n\nThere is currently no mechanism to grant the privileges afforded by the\ndefault roles on a per-database basis. This makes it difficult to cleanly\naccomplish permissions such as 'db_datareader' and 'db_datawriter' (which\nare database-level roles in SQL Server that respectively grant read and\nwrite access within a specific database).\n\nThe recently-added 'pg_read_all_data' and 'pg_write_all_data' work\nsimilarly to 'db_datareader' and 'db_datawriter', but work cluster-wide.\n\n= Proposal =\n\nI propose an extension to the GRANT / REVOKE syntax as well as an\nadditional column within pg_auth_members in order to track role memberships\nthat are only effective within the specified database.\n\nRole membership (and subsequent privileges) would be calculated using the\nfollowing algorithm:\n - Check for regular (cluster-wide) role membership (the way it works today)\n - Check for database-specific role membership based on the\ncurrently-connected database\n\nAttached is a proof of concept patch that implements this.\n\n= Implementation Notes =\n\n- A new column (pg_auth_members.dbid) in the system catalog that is set to\nInvalidOid for regular role memberships, or the oid of the given database\nfor database-specific role memberships.\n\n- GRANT / REVOKE syntax has been extended to include the ability to specify\na database-specific role membership:\n - \"IN DATABASE database_name\" would cause the GRANT to be applicable only\nwithin the specified database.\n - \"IN CURRENT DATABASE\" would cause the GRANT to be applicable only\nwithin the currently-connected database.\n - Omission of the clause would create a regular 
(cluster-wide) role\nmembership (the way it works today).\n\nThe proposed syntax (applies to REVOKE as well):\n\nGRANT role_name [, ...] TO role_specification [, ...]\n [ IN DATABASE database_name | IN CURRENT DATABASE ]\n [ WITH ADMIN OPTION ]\n [ GRANTED BY role_specification ]\n\n- DROP DATABASE has been updated to clean up any database-specific role\nmemberships that are associated with the database being dropped.\n\n- pg_dump_all will dump database-specific role memberships using the \"IN\nCURRENT DATABASE\" syntax. (pg_dump has not been modified)\n\n- is_admin_of_role()'s signature has been updated to include the oid of the\ndatabase being checked as a third argument. This now returns true if the\nmember has WITH ADMIN OPTION either globally or for the database given.\n\n- roles_is_member_of() will additionally include any database-specific role\nmemberships for the database being checked in its result set.\n\n= Example =\n\nCREATE DATABASE accounting;\nCREATE DATABASE sales;\n\nCREATE ROLE alice;\nCREATE ROLE bob;\n\n-- Alice is granted read-all privileges cluster-wide (nothing new here)\nGRANT pg_read_all_data TO alice;\n\n-- Bob is granted read-all privileges to just the accounting database\nGRANT pg_read_all_data TO bob IN DATABASE accounting;\n\n= Final Thoughts =\n\nThis is my first attempt at contributing code to the project, and I would\nnot self-identify as a C programmer. 
I wanted to get a sense for how\nreceptive the contributors and community would be to this proposal and\nwhether there were any concerns or preferred alternatives before I further\nembark on a fool's errand.\n\nThoughts?\n\nThanks,\n\n-- Kenaniah", "msg_date": "Sat, 9 Oct 2021 16:13:49 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: allow database-specific role memberships" }, { "msg_contents": "On Sun, Oct 10, 2021 at 2:29 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> In building off of prior art regarding the 'pg_read_all_data' and\n> 'pg_write_all_data' roles, I would like to propose an extension to roles\n> that would allow for database-specific role memberships (for the purpose of\n> granting database-specific privileges) as an additional layer of\n> abstraction.\n>\n> = Problem =\n>\n> There is currently no mechanism to grant the privileges afforded by the\n> default roles on a per-database basis. This makes it difficult to cleanly\n> accomplish permissions such as 'db_datareader' and 'db_datawriter' (which\n> are database-level roles in SQL Server that respectively grant read and\n> write access within a specific database).\n>\n> The recently-added 'pg_read_all_data' and 'pg_write_all_data' work\n> similarly to 'db_datareader' and 'db_datawriter', but work cluster-wide.\n>\n\nMy first impression is that this is more complex than just restricting\nwhich databases users are allowed to connect to. 
The added flexibility\nthis would provide has some benefit but doesn't seem worth the added\ncomplexity.\n\nDavid J.", "msg_date": "Sun, 10 Oct 2021 16:45:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Greetings,\n\n* David G. 
Johnston (david.g.johnston@gmail.com) wrote:\n> On Sun, Oct 10, 2021 at 2:29 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n> \n> > In building off of prior art regarding the 'pg_read_all_data' and\n> > 'pg_write_all_data' roles, I would like to propose an extension to roles\n> > that would allow for database-specific role memberships (for the purpose of\n> > granting database-specific privileges) as an additional layer of\n> > abstraction.\n> >\n> > = Problem =\n> >\n> > There is currently no mechanism to grant the privileges afforded by the\n> > default roles on a per-database basis. This makes it difficult to cleanly\n> > accomplish permissions such as 'db_datareader' and 'db_datawriter' (which\n> > are database-level roles in SQL Server that respectively grant read and\n> > write access within a specific database).\n> >\n> > The recently-added 'pg_read_all_data' and 'pg_write_all_data' work\n> > similarly to 'db_datareader' and 'db_datawriter', but work cluster-wide.\n> \n> My first impression is that this is more complex than just restricting\n> which databases users are allowed to connect to. The added flexibility\n> this would provide has some benefit but doesn't seem worth the added\n> complexity.\n\nHaving an ability to GRANT predefined roles within a particular database\nis certainly something that I'd considered when adding the pg_read/write\ndata roles. I'm not super thrilled with the idea of adding a column to\npg_auth_members just for predefined roles though and I'm not sure that\nsuch role membership makes sense for non-predefined roles. Would\nwelcome input from others as to if that's something that would make\nsense or if folks have asked about that before. 
We'd need to carefully\nthink through what this means in terms of making sure we don't end up\nwith any loops too.\n\nDoes seem like we'd probably need to change more than just what's\nsuggested here so that you could, for example, ask \"is role X a member\nof role Y in database Z\" without actually being connected to database Z.\nThat's just a matter of adding some functions though- the existing\nfunctions would work with just the assumption that you're asking about\nwithin the current database.\n\nI don't think \"just don't grant access to those other databases\"\nis actually a proper answer- there is certainly a use-case for \"I want\nuser X to have read access to all tables in *this* database, and also\nallow them to connect to some other database but not have that same\nlevel of access there.\"\n\nThanks,\n\nStephen", "msg_date": "Mon, 11 Oct 2021 11:00:58 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "On Mon, 11 Oct 2021 at 11:01, Stephen Frost <sfrost@snowman.net> wrote:\n\n\n> Having an ability to GRANT predefined roles within a particular database\n> is certainly something that I'd considered when adding the pg_read/write\n> data roles. I'm not super thrilled with the idea of adding a column to\n> pg_auth_members just for predefined roles though and I'm not sure that\n> such role membership makes sense for non-predefined roles. Would\n> welcome input from others as to if that's something that would make\n> sense or if folks have asked about that before. We'd need to carefully\n> think through what this means in terms of making sure we don't end up\n> with any loops too.\n>\n\nI think the ability to grant a role within a particular database would be\nuseful. For example, imagine I have a dev/testing instance with multiple\ndatabases, each a copy of production modified in some way for different\ntesting purposes. 
For example, one might be scrambled data (to make the\ntesting data non- or less- confidential); another might be limited to data\nfrom the last year (to reduce the size of everything); another might be\nlimited to 1% of all the customers (to reduce the size in a different way);\nand of course these could be combined.\n\nIt’s easy to imagine that I might want to grant a user the ability to\nconnect to all of these databases, but to have different privileges. For\nexample, maybe they have read_confidential_data in the scrambled database\nbut not in the reduced-but-not-scrambled databases. But maybe they have a\nlesser level of access to these databases, so just using the connect\nprivilege won't do the job.\n\nI’ve already found it a bit weird that I can set per-role, per-database\nsettings (e.g search_path), and of course privileges on individual objects,\nbut not which roles the role is a member of.\n\nI haven’t thought about implementation at all however. The thought occurs\nto me that the union of all the role memberships in all the database should\nform a directed acyclic graph. In other words, you could not have X a\nmember of Y (possibly indirectly) in one database while Y is a member of X\nin another database; the role memberships in each database would then be a\nsubset of the complete graph of memberships.", "msg_date": "Mon, 11 Oct 2021 11:15:10 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" },
{ "msg_contents": "On Monday, October 11, 2021, Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> I don't think \\\"just don't grant access to those other databases\\\"\n> is actually a proper answer- there is certainly a use-case for \\\"I want\n> user X to have read access to all tables in *this* database, and also\n> allow them to connect to some other database but not have that same\n> level of access there.\\\"\n>\n\nSure, that has a benefit. But creating a second user for the other\ndatabase and putting the onus on the user to use the correct credentials\nwhen logging into a particular database is a valid option - it is in fact\nthe status quo. Due to the complexity of adding a whole new grant\ndimension to the system the status quo is an appealing option. Annoyance\nfactor aside it technically solves the per-database permissions problem put\nforth.\n\nDavid J.", "msg_date": "Mon, 11 Oct 2021 08:44:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" },
{ "msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Monday, October 11, 2021, Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't think \\\"just don't grant access to those other databases\\\"\n> > is actually a proper answer- there is certainly a use-case for \\\"I want\n> > user X to have read access to all tables in *this* database, and also\n> > allow them to connect to some other database but not have that same\n> > level of access there.\\\"\n> \n> Sure, that has a benefit. But creating a second user for the other\n> database and putting the onus on the user to use the correct credentials\n> when logging into a particular database is a valid option - it is in\n> fact\n> the status quo. 
Sure, it'll work\nfor existing released versions of PG, just like there's a lot of things\nthat people can do to hack around our deficiencies, but that doesn't\nchange that these are areas which we are lacking and where we should be\ntrying to provide a proper solution.\n\nThanks,\n\nStephen", "msg_date": "Mon, 11 Oct 2021 12:05:04 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi all,\n\nThank you for the feedback so far!\n\nAttached is a completed implementation (including tests and documentation).\nBased on the feedback I have received so far, I will be submitting this\nimplementation to the commitfest.\n\nThanks again,\n\nKenaniah\n\nOn Mon, Oct 11, 2021 at 9:05 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > On Monday, October 11, 2021, Stephen Frost <sfrost@snowman.net> wrote:\n> > > I don't think \"just don't grant access to those other databases\"\n> > > is actually a proper answer- there is certainly a use-case for \"I want\n> > > user X to have read access to all tables in *this* database, and also\n> > > allow them to connect to some other database but not have that same\n> > > level of access there.\"\n> >\n> > Sure, that has a benefit. But creating a second user for the other\n> > database and putting the onus on the user to use the correct credentials\n> > when logging into a particular database is a valid option - it is in\n> fact\n> > the status quo. Due to the complexity of adding a whole new grant\n> > dimension to the system the status quo is an appealing option. Annoyance\n> > factor aside it technically solves the per-database permissions problem\n> put\n> > forth.\n>\n> I disagree entirely that forcing users to have multiple accounts and to\n> deal with \"using the correct one\" is at all reasonable. 
That's an utter\n> hack that results in a given user having multiple different accounts-\n> something that gets really ugly to deal with in enterprise deployments\n> which use any kind of centralized authentication system.\n>\n> No, that's not a solution. Perhaps there's another way to implement\n> this capability that is simpler than what's proposed here, but saying\n> \"just give each user two accounts\" isn't a solution. Sure, it'll work\n> for existing released versions of PG, just like there's a lot of things\n> that people can do to hack around our deficiencies, but that doesn't\n> change that these are areas which we are lacking and where we should be\n> trying to provide a proper solution.\n>\n> Thanks,\n>\n> Stephen\n>", "msg_date": "Sun, 24 Oct 2021 00:54:40 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch does not apply on HEAD anymore. Looks like it needs to be rebased.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Thu, 28 Oct 2021 18:03:36 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Thank you Asif. A rebased patch is attached.\n\nOn Thu, Oct 28, 2021 at 11:04 AM Asif Rehman <asifr.rehman@gmail.com> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> The patch does not apply on HEAD anymore. 
Looks like it needs to be\n> rebased.\n>\n> The new status of this patch is: Waiting on Author\n>", "msg_date": "Thu, 28 Oct 2021 12:39:07 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "> On 28 Oct 2021, at 21:39, Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> Thank you Asif. A rebased patch is attached.\n\nThis patch fails to apply yet again, this time due to a collision in\ncatversion.h. I think it's fine to omit the change in catversion.h as it's\nlikely to repeatedly cause conflicts, and instead just mention it on the\nthread. Any committer picking it up will know to perform the change anyways,\nso leaving it out can keep the patch from conflicting.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:16:46 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Thank you for the advice!\n\nAttached is a rebased version of the patch that omits catversion.h in order\nto avoid conflicts.\n\nOn Wed, Nov 17, 2021 at 6:17 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 28 Oct 2021, at 21:39, Kenaniah Cerny <kenaniah@gmail.com> wrote:\n>\n> > Thank you Asif. A rebased patch is attached.\n>\n> This patch fails to apply yet again, this time due to a collision in\n> catversion.h. I think it's fine to omit the change in catversion.h as it's\n> likely to repeatedly cause conflicts, and instead just mention it on the\n> thread. 
Any committer picking it up will know to perform the change\n> anyways,\n> so leaving it out can keep the patch from conflicting.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Wed, 1 Dec 2021 11:26:32 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi,\n\nOn Thu, Dec 2, 2021 at 2:26 AM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n>\n> Attached is a rebased version of the patch that omits catversion.h in order to avoid conflicts.\n\nUnfortunately even without that the patch doesn't apply anymore\naccording to the cfbot: http://cfbot.cputube.org/patch_36_3374.log\n\n1 out of 3 hunks FAILED -- saving rejects to file src/backend/parser/gram.y.rej\n[...]\n2 out of 8 hunks FAILED -- saving rejects to file\nsrc/bin/pg_dump/pg_dumpall.c.rej\n\nCould you send a rebased version?\n\nIn the meantime I'm switching the patch to Waiting on Author.\n\n\n", "msg_date": "Wed, 12 Jan 2022 15:00:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "The latest rebased version of the patch is attached.\n\nOn Tue, Jan 11, 2022 at 11:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Thu, Dec 2, 2021 at 2:26 AM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n> >\n> > Attached is a rebased version of the patch that omits catversion.h in\n> order to avoid conflicts.\n>\n> Unfortunately even without that the patch doesn't apply anymore\n> according to the cfbot: http://cfbot.cputube.org/patch_36_3374.log\n>\n> 1 out of 3 hunks FAILED -- saving rejects to file\n> src/backend/parser/gram.y.rej\n> [...]\n> 2 out of 8 hunks FAILED -- saving rejects to file\n> src/bin/pg_dump/pg_dumpall.c.rej\n>\n> Could you send a rebased version?\n>\n> In the meantime I'm switching the patch to Waiting on Author.\n>", "msg_date": 
"Fri, 21 Jan 2022 14:12:25 -0800", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "On Fri, Jan 21, 2022 at 3:12 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> The latest rebased version of the patch is attached.\n>\n\nAs I was just reminded, we tend to avoid specifying specific PostgreSQL\nversions in our documentation. We just say what the current version does.\nHere, the note sentences at lines 62 and 63 don't follow documentation\nnorms on that score and should just be removed. The last two sentences\nbelong in the main description body, not a note. Thus the whole note goes\naway.\n\nI don't think I really appreciated the value this feature would have when\ncombined with the predefined roles like pg_read_all_data and\npg_write_all_data.\n\nI suppose I don't really appreciate the warning about SUPERUSER, etc...or\nat least why this warning is somehow specific to the per-database version\nof role membership. If this warning is desirable it should be worded to\napply to role membership in general - and possibly proposed as a separate\npatch for consideration.\n\nI didn't dive deeply but I think we now have at three places in the acl.c\ncode where after setting memlist from the system cache we perform nearly\nidentical for loops to generate the final roles_list. Possibly this needs\na refactor first so that you can introduce the per-database stuff more\nsuccinctly. Basically, the vast majority of this commit is just adding\nInvalidOid and databaseOid all other the place - with a few minor code\nchanges to accommodate the new arguments. 
The acl.c code should try and be\nmade done the same after post-refactor.\n\nDavid J.", "msg_date": "Fri, 21 Jan 2022 16:04:25 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Thanks for the feedback.\n\nI have attached an alternate version of the v5 patch that incorporates the\nsuggested changes to the documentation and DRYs up some of the acl.c code\nfor comparison. As for the databaseOid / InvalidOid parameter, I'm open to\nany suggestions for how to make that even cleaner, but am currently at a\nloss as to how that would look.\n\nCI is showing a failure to run pg_dump on just the Linux - Debian Bullseye\njob (https://cirrus-ci.com/task/5265343722553344). Does anyone have any\nideas as to where I should look in order to debug that?\n\nKenaniah\n\nOn Fri, Jan 21, 2022 at 3:04 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Fri, Jan 21, 2022 at 3:12 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n>\n>> The latest rebased version of the patch is attached.\n>>\n>\n> As I was just reminded, we tend to avoid specifying specific PostgreSQL\n> versions in our documentation. We just say what the current version does.\n> Here, the note sentences at lines 62 and 63 don't follow documentation\n> norms on that score and should just be removed. The last two sentences\n> belong in the main description body, not a note. Thus the whole note goes\n> away.\n>\n> I don't think I really appreciated the value this feature would have when\n> combined with the predefined roles like pg_read_all_data and\n> pg_write_all_data.\n>\n> I suppose I don't really appreciate the warning about SUPERUSER, etc...or\n> at least why this warning is somehow specific to the per-database version\n> of role membership. 
If this warning is desirable it should be worded to\n> apply to role membership in general - and possibly proposed as a separate\n> patch for consideration.\n>\n> I didn't dive deeply but I think we now have at three places in the acl.c\n> code where after setting memlist from the system cache we perform nearly\n> identical for loops to generate the final roles_list. Possibly this needs\n> a refactor first so that you can introduce the per-database stuff more\n> succinctly. Basically, the vast majority of this commit is just adding\n> InvalidOid and databaseOid all other the place - with a few minor code\n> changes to accommodate the new arguments. The acl.c code should try and be\n> made done the same after post-refactor.\n>\n> David J.\n>\n>", "msg_date": "Fri, 21 Jan 2022 19:01:21 -0800", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 21, 2022 at 07:01:21PM -0800, Kenaniah Cerny wrote:\n> Thanks for the feedback.\n> \n> I have attached an alternate version of the v5 patch that incorporates the\n> suggested changes to the documentation and DRYs up some of the acl.c code\n> for comparison. As for the databaseOid / InvalidOid parameter, I'm open to\n> any suggestions for how to make that even cleaner, but am currently at a\n> loss as to how that would look.\n> \n> CI is showing a failure to run pg_dump on just the Linux - Debian Bullseye\n> job (https://cirrus-ci.com/task/5265343722553344). Does anyone have any\n> ideas as to where I should look in order to debug that?\n\nDid you try to reproduce it on some GNU/Linux system? 
FTR I had and I get a\nsegfault in pg_dumpall:\n\n(gdb) bt\n#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44\n#1 0x00007f329e7e40cf in __pthread_kill_internal (signo=6, threadid=<optimized out>) at pthread_kill.c:78\n#2 0x00007f329e7987a2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#3 0x00007f329e783449 in __GI_abort () at abort.c:79\n#4 0x00007f329e7d85d8 in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7f329e90b6aa \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n#5 0x00007f329e7edcfa in malloc_printerr (str=str@entry=0x7f329e9092c3 \"free(): invalid pointer\") at malloc.c:5536\n#6 0x00007f329e7ef504 in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4327\n#7 0x00007f329e7f1f81 in __GI___libc_free (mem=<optimized out>) at malloc.c:3279\n#8 0x00007f329e7dbec5 in __GI__IO_free_backup_area (fp=fp@entry=0x561775f126c0) at genops.c:190\n#9 0x00007f329e7db6af in _IO_new_file_overflow (f=0x561775f126c0, ch=-1) at fileops.c:758\n#10 0x00007f329e7da7be in _IO_new_file_xsputn (n=2, data=<optimized out>, f=<optimized out>) at /usr/src/debug/sys-libs/glibc-2.34-r4/glibc-2.34/libio/libioP.h:947\n#11 _IO_new_file_xsputn (f=0x561775f126c0, data=<optimized out>, n=2) at fileops.c:1197\n#12 0x00007f329e7cfd32 in __GI__IO_fwrite (buf=0x7ffc90bb0ac0, size=1, count=2, fp=0x561775f126c0) at /usr/src/debug/sys-libs/glibc-2.34-r4/glibc-2.34/libio/libioP.h:947\n#13 0x000056177483c758 in flushbuffer (target=0x7ffc90bb0a90) at snprintf.c:310\n#14 0x000056177483c4e8 in pg_vfprintf (stream=0x561775f126c0, fmt=0x561774840dec \"\\n\\n\", args=0x7ffc90bb0f00) at snprintf.c:259\n#15 0x000056177483c5ce in pg_fprintf (stream=0x561775f126c0, fmt=0x561774840dec \"\\n\\n\") at snprintf.c:270\n#16 0x0000561774831893 in dumpRoleMembership (conn=0x561775f09600, databaseId=0x561775f152d2 \"1\") at pg_dumpall.c:991\n#17 0x0000561774832426 in dumpDatabases 
(conn=0x561775f09600) at pg_dumpall.c:1332\n#18 0x000056177483049e in main (argc=3, argv=0x7ffc90bb1658) at pg_dumpall.c:596\n\nI didn't look in detail, but:\n\n@@ -1323,6 +1327,10 @@ dumpDatabases(PGconn *conn)\n exit_nicely(1);\n }\n\n+ /* Dump database-specific roles if server is running 15.0 or later */\n+ if (server_version >= 150000)\n+ dumpRoleMembership(conn, dbid);\n+\n\nIsn't that trying print to OPF after the possible fclose(OPF) a bit before and\nbefore it's reopened?\n\n\n", "msg_date": "Sat, 22 Jan 2022 17:10:18 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Thank you so much for the backtrace!\n\nThis latest patch should address by moving the dumpRoleMembership call to\nbefore the pointer is closed.\n\nOn Sat, Jan 22, 2022 at 1:11 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Fri, Jan 21, 2022 at 07:01:21PM -0800, Kenaniah Cerny wrote:\n> > Thanks for the feedback.\n> >\n> > I have attached an alternate version of the v5 patch that incorporates\n> the\n> > suggested changes to the documentation and DRYs up some of the acl.c code\n> > for comparison. As for the databaseOid / InvalidOid parameter, I'm open\n> to\n> > any suggestions for how to make that even cleaner, but am currently at a\n> > loss as to how that would look.\n> >\n> > CI is showing a failure to run pg_dump on just the Linux - Debian\n> Bullseye\n> > job (https://cirrus-ci.com/task/5265343722553344). Does anyone have any\n> > ideas as to where I should look in order to debug that?\n>\n> Did you try to reproduce it on some GNU/Linux system? 
FTR I had and I get\n> a\n> segfault in pg_dumpall:\n>\n> (gdb) bt\n> #0 __pthread_kill_implementation (threadid=<optimized out>,\n> signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44\n> #1 0x00007f329e7e40cf in __pthread_kill_internal (signo=6,\n> threadid=<optimized out>) at pthread_kill.c:78\n> #2 0x00007f329e7987a2 in __GI_raise (sig=sig@entry=6) at\n> ../sysdeps/posix/raise.c:26\n> #3 0x00007f329e783449 in __GI_abort () at abort.c:79\n> #4 0x00007f329e7d85d8 in __libc_message (action=action@entry=do_abort,\n> fmt=fmt@entry=0x7f329e90b6aa \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n> #5 0x00007f329e7edcfa in malloc_printerr (str=str@entry=0x7f329e9092c3\n> \"free(): invalid pointer\") at malloc.c:5536\n> #6 0x00007f329e7ef504 in _int_free (av=<optimized out>, p=<optimized\n> out>, have_lock=0) at malloc.c:4327\n> #7 0x00007f329e7f1f81 in __GI___libc_free (mem=<optimized out>) at\n> malloc.c:3279\n> #8 0x00007f329e7dbec5 in __GI__IO_free_backup_area (fp=fp@entry=0x561775f126c0)\n> at genops.c:190\n> #9 0x00007f329e7db6af in _IO_new_file_overflow (f=0x561775f126c0, ch=-1)\n> at fileops.c:758\n> #10 0x00007f329e7da7be in _IO_new_file_xsputn (n=2, data=<optimized out>,\n> f=<optimized out>) at\n> /usr/src/debug/sys-libs/glibc-2.34-r4/glibc-2.34/libio/libioP.h:947\n> #11 _IO_new_file_xsputn (f=0x561775f126c0, data=<optimized out>, n=2) at\n> fileops.c:1197\n> #12 0x00007f329e7cfd32 in __GI__IO_fwrite (buf=0x7ffc90bb0ac0, size=1,\n> count=2, fp=0x561775f126c0) at\n> /usr/src/debug/sys-libs/glibc-2.34-r4/glibc-2.34/libio/libioP.h:947\n> #13 0x000056177483c758 in flushbuffer (target=0x7ffc90bb0a90) at\n> snprintf.c:310\n> #14 0x000056177483c4e8 in pg_vfprintf (stream=0x561775f126c0,\n> fmt=0x561774840dec \"\\n\\n\", args=0x7ffc90bb0f00) at snprintf.c:259\n> #15 0x000056177483c5ce in pg_fprintf (stream=0x561775f126c0,\n> fmt=0x561774840dec \"\\n\\n\") at snprintf.c:270\n> #16 0x0000561774831893 in dumpRoleMembership (conn=0x561775f09600,\n> 
databaseId=0x561775f152d2 \"1\") at pg_dumpall.c:991\n> #17 0x0000561774832426 in dumpDatabases (conn=0x561775f09600) at\n> pg_dumpall.c:1332\n> #18 0x000056177483049e in main (argc=3, argv=0x7ffc90bb1658) at\n> pg_dumpall.c:596\n>\n> I didn't look in detail, but:\n>\n> @@ -1323,6 +1327,10 @@ dumpDatabases(PGconn *conn)\n> exit_nicely(1);\n> }\n>\n> + /* Dump database-specific roles if server is running 15.0 or later\n> */\n> + if (server_version >= 150000)\n> + dumpRoleMembership(conn, dbid);\n> +\n>\n> Isn't that trying print to OPF after the possible fclose(OPF) a bit before\n> and\n> before it's reopened?\n>", "msg_date": "Sat, 22 Jan 2022 05:28:05 -0800", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 22, 2022 at 05:28:05AM -0800, Kenaniah Cerny wrote:\n> Thank you so much for the backtrace!\n> \n> This latest patch should address by moving the dumpRoleMembership call to\n> before the pointer is closed.\n\nThanks! The cfbot turned green since:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3374\n\n\n", "msg_date": "Sat, 22 Jan 2022 22:56:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi,\n\nOn 2022-01-22 22:56:44 +0800, Julien Rouhaud wrote:\n> On Sat, Jan 22, 2022 at 05:28:05AM -0800, Kenaniah Cerny wrote:\n> > Thank you so much for the backtrace!\n> > \n> > This latest patch should address by moving the dumpRoleMembership call to\n> > before the pointer is closed.\n> \n> Thanks! 
The cfbot turned green since:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3374\n\nred again: https://cirrus-ci.com/task/5516269981007872?logs=test_world#L1480\n\nMarked as waiting-on-author.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:40:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Thanks Andres,\n\nAn updated patch is attached.\n\n- Kenaniah\n\nOn Mon, Mar 21, 2022 at 5:40 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-01-22 22:56:44 +0800, Julien Rouhaud wrote:\n> > On Sat, Jan 22, 2022 at 05:28:05AM -0800, Kenaniah Cerny wrote:\n> > > Thank you so much for the backtrace!\n> > >\n> > > This latest patch should address by moving the dumpRoleMembership call\n> to\n> > > before the pointer is closed.\n> >\n> > Thanks! The cfbot turned green since:\n> >\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3374\n>\n> red again:\n> https://cirrus-ci.com/task/5516269981007872?logs=test_world#L1480\n>\n> Marked as waiting-on-author.\n>\n> - Andres\n>", "msg_date": "Wed, 23 Mar 2022 14:26:20 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi all,\n\ncfbot is once again green as of the v7 patch:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3374\n\n- Kenaniah\n\nHi all,cfbot is once again green as of the v7 patch: https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3374- Kenaniah", "msg_date": "Wed, 23 Mar 2022 15:34:01 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Patch doesn't apply again...\n\n[image: 1jfj7m.jpg]", "msg_date": "Fri, 1 Apr 2022 10:32:05 -0400", 
"msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "I love that jpg! I'm saving it.\n\nAttached is a newly-rebased patch -- would love to get a review from\nsomeone whenever possible.\n\nThanks,\n\n- Kenaniah\n\nOn Fri, Apr 1, 2022 at 7:32 AM Greg Stark <stark@mit.edu> wrote:\n\n>\n> Patch doesn't apply again...\n>\n> [image: 1jfj7m.jpg]\n>", "msg_date": "Sat, 2 Apr 2022 15:08:23 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> Attached is a newly-rebased patch -- would love to get a review from someone whenever possible.\n\nI've picked this patch for a review. The patch currently does not apply to the\nmaster branch, so I could only read the diff. Following are my comments:\n\n* I think that roles_is_member_of() deserves a comment explaining why the code\n that you moved into append_role_memberships() needs to be called twice,\n i.e. once for global memberships and once for the database-specific ones.\n\n I think the reason is that if, for example, role \"A\" is a database-specific\n member of role \"B\" and \"B\" is a \"global\" member of role \"C\", then \"A\" should\n not be considered a member of \"C\", unless \"A\" is granted \"C\" explicitly. Is\n this behavior intended?\n\n Note that in this example, the \"C\" members are a superset of \"B\" members,\n and thus \"C\" should have weaker permissions on database objects than\n \"B\". What's then the reason to not consider \"A\" a member of \"C\"? If \"C\"\n gives its members some permissions of \"B\" (e.g. 
\"pg_write_all_data\"), then I\n think the roles hierarchy is poorly designed.\n\n A counter-example might help me to understand.\n\n* Why do you think that \"unsafe_tests\" is the appropriate name for the\n directory that contains regression tests?\n\nI can spend more time on the review if the patch gets rebased.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 29 Jun 2022 15:45:48 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi Antonin,\n\nFirst of all, thank you so much for taking the time to review my patch.\nI'll answer your questions in reverse order:\n\nThe \"unsafe_tests\" directory is where the pre-existing role tests were\nlocated. According to the readme of the \"unsafe_tests\" directory, the tests\ncontained within are not run during \"make installcheck\" because they could\nhave side-effects that seem undesirable for a production installation. This\nseemed like a reasonable location as the new tests that this patch\nintroduces also modifies the \"state\" of the database cluster by adding,\nmodifying, and removing roles & databases (including template1).\n\nRegarding roles_is_member_of(), the nuance is that role \"A\" in your example\nwould only be considered a member of role \"B\" (and by extension role \"C\")\nwhen connected to the database in which \"A\" was granted database-specific\nmembership to \"B\". Conversely, when connected to any other database, \"A\"\nwould not be considered to be a member of \"B\".\n\nThis patch is designed to solve the scenarios in which one may want to\ngrant constrained access to a broader set of privileges. For example,\nmembership in \"pg_read_all_data\" effectively grants SELECT and USAGE rights\non everything (implicitly cluster-wide in today's implementation). 
By\ngranting a role membership to \"pg_read_all_data\" within the context of a\nspecific database, the grantee's read-everything privilege is effectively\nconstrained to just that specific database (as membership within\n\"pg_read_all_data\" would not otherwise be held).\n\nA rebased version is attached.\n\nThanks again!\n\n- Kenaniah\n\nOn Wed, Jun 29, 2022 at 6:45 AM Antonin Houska <ah@cybertec.at> wrote:\n\n> Kenaniah Cerny <kenaniah@gmail.com> wrote:\n>\n> > Attached is a newly-rebased patch -- would love to get a review from\n> someone whenever possible.\n>\n> I've picked this patch for a review. The patch currently does not apply to\n> the\n> master branch, so I could only read the diff. Following are my comments:\n>\n> * I think that roles_is_member_of() deserves a comment explaining why the\n> code\n> that you moved into append_role_memberships() needs to be called twice,\n> i.e. once for global memberships and once for the database-specific ones.\n>\n> I think the reason is that if, for example, role \"A\" is a\n> database-specific\n> member of role \"B\" and \"B\" is a \"global\" member of role \"C\", then \"A\"\n> should\n> not be considered a member of \"C\", unless \"A\" is granted \"C\" explicitly.\n> Is\n> this behavior intended?\n>\n> Note that in this example, the \"C\" members are a superset of \"B\" members,\n> and thus \"C\" should have weaker permissions on database objects than\n> \"B\". What's then the reason to not consider \"A\" a member of \"C\"? If \"C\"\n> gives its members some permissions of \"B\" (e.g. 
\"pg_write_all_data\"),\n> then I\n> think the roles hierarchy is poorly designed.\n>\n> A counter-example might help me to understand.\n>\n> * Why do you think that \"unsafe_tests\" is the appropriate name for the\n> directory that contains regression tests?\n>\n> I can spend more time on the review if the patch gets rebased.\n>\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>", "msg_date": "Mon, 4 Jul 2022 13:17:00 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Rebased yet again...\n\nOn Mon, Jul 4, 2022 at 1:17 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> Hi Antonin,\n>\n> First of all, thank you so much for taking the time to review my patch.\n> I'll answer your questions in reverse order:\n>\n> The \"unsafe_tests\" directory is where the pre-existing role tests were\n> located. According to the readme of the \"unsafe_tests\" directory, the tests\n> contained within are not run during \"make installcheck\" because they could\n> have side-effects that seem undesirable for a production installation. This\n> seemed like a reasonable location as the new tests that this patch\n> introduces also modifies the \"state\" of the database cluster by adding,\n> modifying, and removing roles & databases (including template1).\n>\n> Regarding roles_is_member_of(), the nuance is that role \"A\" in your\n> example would only be considered a member of role \"B\" (and by extension\n> role \"C\") when connected to the database in which \"A\" was granted\n> database-specific membership to \"B\". Conversely, when connected to any\n> other database, \"A\" would not be considered to be a member of \"B\".\n>\n> This patch is designed to solve the scenarios in which one may want to\n> grant constrained access to a broader set of privileges. 
For example,\n> membership in \"pg_read_all_data\" effectively grants SELECT and USAGE rights\n> on everything (implicitly cluster-wide in today's implementation). By\n> granting a role membership to \"pg_read_all_data\" within the context of a\n> specific database, the grantee's read-everything privilege is effectively\n> constrained to just that specific database (as membership within\n> \"pg_read_all_data\" would not otherwise be held).\n>\n> A rebased version is attached.\n>\n> Thanks again!\n>\n> - Kenaniah\n>\n> On Wed, Jun 29, 2022 at 6:45 AM Antonin Houska <ah@cybertec.at> wrote:\n>\n>> Kenaniah Cerny <kenaniah@gmail.com> wrote:\n>>\n>> > Attached is a newly-rebased patch -- would love to get a review from\n>> someone whenever possible.\n>>\n>> I've picked this patch for a review. The patch currently does not apply\n>> to the\n>> master branch, so I could only read the diff. Following are my comments:\n>>\n>> * I think that roles_is_member_of() deserves a comment explaining why the\n>> code\n>> that you moved into append_role_memberships() needs to be called twice,\n>> i.e. once for global memberships and once for the database-specific\n>> ones.\n>>\n>> I think the reason is that if, for example, role \"A\" is a\n>> database-specific\n>> member of role \"B\" and \"B\" is a \"global\" member of role \"C\", then \"A\"\n>> should\n>> not be considered a member of \"C\", unless \"A\" is granted \"C\"\n>> explicitly. Is\n>> this behavior intended?\n>>\n>> Note that in this example, the \"C\" members are a superset of \"B\"\n>> members,\n>> and thus \"C\" should have weaker permissions on database objects than\n>> \"B\". What's then the reason to not consider \"A\" a member of \"C\"? If \"C\"\n>> gives its members some permissions of \"B\" (e.g. 
\"pg_write_all_data\"),\n>> then I\n>> think the roles hierarchy is poorly designed.\n>>\n>> A counter-example might help me to understand.\n>>\n>> * Why do you think that \"unsafe_tests\" is the appropriate name for the\n>> directory that contains regression tests?\n>>\n>> I can spend more time on the review if the patch gets rebased.\n>>\n>> --\n>> Antonin Houska\n>> Web: https://www.cybertec-postgresql.com\n>>\n>", "msg_date": "Sun, 17 Jul 2022 12:27:17 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> Rebased yet again...\n> \n> On Mon, Jul 4, 2022 at 1:17 PM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> The \"unsafe_tests\" directory is where the pre-existing role tests were\n> located. According to the readme of the \"unsafe_tests\" directory, the tests\n> contained within are not run during \"make installcheck\" because they could\n> have side-effects that seem undesirable for a production installation. This\n> seemed like a reasonable location as the new tests that this patch\n> introduces also modifies the \"state\" of the database cluster by adding,\n> modifying, and removing roles & databases (including template1).\n\nok, I missed the purpose of \"unsafe_tests\" so far, thanks.\n\n> Regarding roles_is_member_of(), the nuance is that role \"A\" in your example\n> would only be considered a member of role \"B\" (and by extension role \"C\")\n> when connected to the database in which \"A\" was granted database-specific\n> membership to \"B\".\n\n> Conversely, when connected to any other database, \"A\" would not be considered to be a member of \"B\". \n> \n> This patch is designed to solve the scenarios in which one may want to\n> grant constrained access to a broader set of privileges. 
For example,\n> membership in \"pg_read_all_data\" effectively grants SELECT and USAGE rights\n> on everything (implicitly cluster-wide in today's implementation). By\n> granting a role membership to \"pg_read_all_data\" within the context of a\n> specific database, the grantee's read-everything privilege is effectively\n> constrained to just that specific database (as membership within\n> \"pg_read_all_data\" would not otherwise be held).\n\nok, I tried to view the problem rather from general perspective. However, the\npermissions like \"pg_read_all_data\" are unusual in that they are rather strong\nand at the same time they are usually located at the top of the groups\nhierarchy. I've got no better idea how to solve the problem.\n\nA few more comments on the patch:\n\n* It's not clear from the explanation of the GRANT ... IN DATABASE ... / GRANT\n ... IN CURRENT DATABASE ... that, even if \"membership in ... will be\n effective only when the recipient is connected to the database ...\", the\n ADMIN option might not be \"fully effective\". 
I refer to the part of the\n regression tests starting with\n\n -- Ensure database-specific admin option can only grant within that database\n\n For example, \"role_read_34\" does have the ADMIN option for the\n \"pg_read_all_data\" role and for the \"db_4\" database:\n\n GRANT pg_read_all_data TO role_read_34 IN DATABASE db_4 WITH ADMIN OPTION;\n\n (in other words, \"role_read_34\" does have the database-specific membership\n in \"pg_read_all_data\"), but it cannot use the option (in other words, cannot\n use some ability resulting from that membership) unless the session to that\n database is active:\n\n \\connect db_3\n SET SESSION AUTHORIZATION role_read_34;\n ...\n GRANT pg_read_all_data TO role_granted IN CURRENT DATABASE; -- success\n GRANT pg_read_all_data TO role_granted IN DATABASE db_3; -- notice\n NOTICE: role \"role_granted\" is already a member of role \"pg_read_all_data\" in database \"db_3\"\n GRANT pg_read_all_data TO role_granted IN DATABASE db_4; -- error\n ERROR: must have admin option on role \"pg_read_all_data\"\n\n\nSpecifically on the regression tests:\n\n * The function check_memberships() has no parameters - is there a reason not to use a view?\n\n * I'm not sure if the pg_auth_members catalog can contain InvalidOid in\n other columns than dbid. 
Thus I think that the query in\n check_memberships() only needs an outer JOIN for the pg_database table,\n while the other joins can be inner.\n\n * In this part\n\n\tSET SESSION AUTHORIZATION role_read_12_noinherit;\n\tSELECT * FROM data; -- error\n\tSET ROLE role_read_12; -- error\n\tSELECT * FROM data; -- error\n\n I think you don't need to query the table again if the SET ROLE statement\n failed and the same query had been executed before the SET ROLE.\n\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 19 Jul 2022 13:33:16 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Hi Antonin,\n\nThank you again for the detailed review and questions. It was encouraging\nto see the increasing level of nuance in this latest round.\n\nIt's not clear from the explanation of the GRANT ... IN DATABASE ... / GRANT\n> ... IN CURRENT DATABASE ... that, even if \"membership in ... 
will be\n> effective only when the recipient is connected to the database ...\", the\n> ADMIN option might not be \"fully effective\".\n\n\nWhile I'm not entirely sure what you mean by fully effective, it sounds\nlike you may have expected a database-specific WITH ADMIN OPTION grant to\nbe able to take effect when connected to a different database (such as\nbeing able to use db_4's database-specific grants when connected to db_3).\nThe documentation updated in this patch specifies that membership (for\ndatabase-specific grants) would be effective only when the grantee is\nconnected to the same database that the grant was issued for.\n\nIn the case of attempting to make a role grant to db_4 from within db_3,\nthe user would need to have a cluster-wide admin option for the role being\ngranted, as the test case you referenced in your example aims to verify.\n\nI have added a couple of lines to the documentation included with this\npatch in order to clarify.\n\n\n> Specifically on the regression tests:\n>\n> * The function check_memberships() has no parameters - is there a\n> reason not to use a view?\n>\n\nI believe a view would work just as well -- this was an implementation\ndetail that was fashioned to match the pre-existing rolenames.sql file's\ntest format.\n\n\n> * I'm not sure if the pg_auth_members catalog can contain InvalidOid in\n> other columns than dbid. Thus I think that the query in\n> check_memberships() only needs an outer JOIN for the pg_database\n> table,\n> while the other joins can be inner.\n>\n\nThis is probably true. 
The tests run just as well using inner joins for\npg_roles, as this latest version of the patch reflects.\n\n\n> * In this part\n>\n> SET SESSION AUTHORIZATION role_read_12_noinherit;\n> SELECT * FROM data; -- error\n> SET ROLE role_read_12; -- error\n> SELECT * FROM data; -- error\n>\n> I think you don't need to query the table again if the SET ROLE\n> statement\n> failed and the same query had been executed before the SET ROLE.\n\n\nI left that last query in place as a sanity check to ensure that\nrole_read_12's privileges were indeed not in effect after the call to SET\nROLE.\n\nAs we appear to now be working through the minutiae, it is my hope that\nthis will soon be ready for merge.\n\n- Kenaniah", "msg_date": "Sun, 24 Jul 2022 16:03:17 -0700", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "On Mon, Jul 25, 2022 at 4:03 AM Kenaniah Cerny <kenaniah@gmail.com> wrote:\n\n> Hi Antonin,\n>\n> Thank you again for the detailed review and questions. It was encouraging\n> to see the increasing level of nuance in this latest round.\n>\n> It's not clear from the explanation of the GRANT ... IN DATABASE ... /\n>> GRANT\n>> ... IN CURRENT DATABASE ... that, even if \"membership in ... 
will be\n>> effective only when the recipient is connected to the database ...\", the\n>> ADMIN option might not be \"fully effective\".\n>\n>\n> While I'm not entirely sure what you mean by fully effective, it sounds\n> like you may have expected a database-specific WITH ADMIN OPTION grant to\n> be able to take effect when connected to a different database (such as\n> being able to use db_4's database-specific grants when connected to db_3).\n> The documentation updated in this patch specifies that membership (for\n> database-specific grants) would be effective only when the grantee is\n> connected to the same database that the grant was issued for.\n>\n> In the case of attempting to make a role grant to db_4 from within db_3,\n> the user would need to have a cluster-wide admin option for the role being\n> granted, as the test case you referenced in your example aims to verify.\n>\n> I have added a couple of lines to the documentation included with this\n> patch in order to clarify.\n>\n>\n>> Specifically on the regression tests:\n>>\n>> * The function check_memberships() has no parameters - is there a\n>> reason not to use a view?\n>>\n>\n> I believe a view would work just as well -- this was an implementation\n> detail that was fashioned to match the pre-existing rolenames.sql file's\n> test format.\n>\n>\n>> * I'm not sure if the pg_auth_members catalog can contain InvalidOid\n>> in\n>> other columns than dbid. Thus I think that the query in\n>> check_memberships() only needs an outer JOIN for the pg_database\n>> table,\n>> while the other joins can be inner.\n>>\n>\n> This is probably true. 
The tests run just as well using inner joins for\n> pg_roles, as this latest version of the patch reflects.\n>\n>\n>> * In this part\n>>\n>> SET SESSION AUTHORIZATION role_read_12_noinherit;\n>> SELECT * FROM data; -- error\n>> SET ROLE role_read_12; -- error\n>> SELECT * FROM data; -- error\n>>\n>> I think you don't need to query the table again if the SET ROLE\n>> statement\n>> failed and the same query had been executed before the SET ROLE.\n>\n>\n> I left that last query in place as a sanity check to ensure that\n> role_read_12's privileges were indeed not in effect after the call to SET\n> ROLE.\n>\n> As we appear to now be working through the minutiae, it is my hope that\n> this will soon be ready for merge.\n>\n> - Kenaniah\n>\n\nThe patch requires a rebase, please do that.\n\nHunk #5 succeeded at 454 (offset 28 lines). 1 out of 5 hunks FAILED\n-- saving rejects to file doc/src/sgml/ref/grant.sgml.rej\n...\n...\n\n\n-- \nIbrar Ahmed", "msg_date": "Wed, 7 Sep 2022 12:50:32 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "On Wed, Sep 07, 2022 at 12:50:32PM +0500, Ibrar Ahmed wrote:\n> The patch requires a rebase, please do that.\n> \n> Hunk #5 succeeded at 454 (offset 28 lines). 1 out of 5 hunks FAILED\n> -- saving rejects to file doc/src/sgml/ref/grant.sgml.rej\n\nThere has been no updates on this thread for one month, so this has\nbeen switched as RwF.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 16:42:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" }, { "msg_contents": "Michael Paquier wrote:\n> On Wed, Sep 07, 2022 at 12:50:32PM +0500, Ibrar Ahmed wrote:\n>> The patch requires a rebase, please do that.\n>>\n>> Hunk #5 succeeded at 454 (offset 28 lines). 1 out of 5 hunks FAILED\n>> -- saving rejects to file doc/src/sgml/ref/grant.sgml.rej\n> \n> There has been no updates on this thread for one month, so this has\n> been switched as RwF.\n\nI took the liberty to rebase this (old) patch, originally authored by \nKenaniah Cerny.\n\n\nThis is about adding a \"IN DATABASE <datname>\" clause to GRANT and \nREVOKE commands allowing to control role membership in a database scope, \nrather than cluster-wide. This could be interesting in combination with \npredefined roles, e.g.:\n\n GRANT pg_read_all_data TO bob IN DATABASE app;\n GRANT pg_maintain TO dba IN DATABASE metrics;\n\nwithout having to grant too many privileges when a user is supposed to \nonly operate on some databases.\n\n\nThe logic of the original patch (as of its version 11) is preserved. 
One \nnoticeable change concerns tests: they got moved to src/test/regress \n(they were in 'unsafe_tests'), with proper cleanup, and now avoid using \nsuperuser as well as modifying templates.\n\n\nIs this a feature that's still interesting? (Feedback in the thread, \nfrom 2022, was a bit mixed.)\n\nPersonally, I have a few concerns regarding the feature and its \nimplementation:\n\n- The IN DATABASE clause does not make much sense for some roles, like \npg_read_all_stats (the implementation does not guard against this).\n\n- An 'IN SCHEMA' clause might be a natural supplementary feature. \nHowever, the current implementation relying on a new 'dbid' column added \nin pg_auth_members catalog might not fit well in that case.\n\n\nThanks,\nDenis", "msg_date": "Tue, 24 Sep 2024 10:19:07 +0200", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: allow database-specific role memberships" } ]
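A note on the membership semantics discussed in this thread: a database-specific grant is only followed while connected to that database, so a role can reach further cluster-wide memberships through it, but only in that database. The following Python sketch models that closure for illustration only; it is not the patch's roles_is_member_of() C code, and the role, group, and database names are invented:

```python
from collections import deque

# Invented example grants: (member, granted_role, database).
# database=None stands for a cluster-wide grant; a string stands for a
# hypothetical "GRANT ... IN DATABASE" grant.
GRANTS = {
    ("alice", "pg_read_all_data", "app"),      # database-specific
    ("pg_read_all_data", "some_group", None),  # cluster-wide
}

def roles_is_member_of(role, current_db, grants=GRANTS):
    """Transitive closure of memberships visible from current_db.

    A database-specific grant is followed only while connected to that
    database; cluster-wide grants are always followed.
    """
    memberships = set()
    todo = deque([role])
    while todo:
        r = todo.popleft()
        for member, granted_role, dbname in grants:
            if member != r:
                continue
            if dbname is not None and dbname != current_db:
                continue  # grant is not effective in this database
            if granted_role not in memberships:
                memberships.add(granted_role)
                todo.append(granted_role)
    return memberships
```

Under this model, "alice" reaches "some_group" through "pg_read_all_data" only while connected to "app", matching the behavior Kenaniah describes: a database-specific membership, and everything reachable through it, is effective only in the database the grant was issued for.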
[ { "msg_contents": "Hi,\n\nThe freebsd image I use for CI runs just failed because the package name for\nopenldap changed (it's now either openldap{24,25}-{client,server}, instead of\nopenldap-..}. I naively resolved that conflict by choosing the openldap25-*\npackages. Which unfortunately turns out to break 001_auth.pl :(\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5061394509856768/tap/src/test/ldap/tmp_check/log/regress_log_001_auth\n\n# Running: ldapsearch -h localhost -p 51649 -s base -b dc=example,dc=net -D cn=Manager,dc=example,dc=net -y /tmp/cirrus-ci-build/src/test/ldap/tmp_check/ldappassword -n 'objectclass=*'\nldapsearch: unrecognized option -h\nusage: ldapsearch [options] [filter [attributes...]]\n\nSeems we need to replace -h & -p with a -H ldap://server:port/ style URI? I\nthink that's fine to do unconditionally, the -H schema is pretty old I think\n(I seem to recall using it in the mid 2000s, when I learned to not like ldap\nby experience).\n\nThe only reason I'm hesitating a bit is that f0e60ee4bc0, the commit adding\nthe ldap test suite, used an ldap:// uri for the server, but then 27cd521e6e7\n(adding the ldapsearch) didn't use that for the ldapsearch? 
Thomas?\n\nSo, does anybody see a reason not to go for the trivial\n\ndiff --git i/src/test/ldap/t/001_auth.pl w/src/test/ldap/t/001_auth.pl\nindex f670bc5e0d5..a025a641b02 100644\n--- i/src/test/ldap/t/001_auth.pl\n+++ w/src/test/ldap/t/001_auth.pl\n@@ -130,8 +130,8 @@ while (1)\n last\n if (\n system_log(\n- \"ldapsearch\", \"-h\", $ldap_server, \"-p\",\n- $ldap_port, \"-s\", \"base\", \"-b\",\n+ \"ldapsearch\", \"-H\", \"$ldap_url\", \"-s\",\n+ \"base\", \"-b\",\n $ldap_basedn, \"-D\", $ldap_rootdn, \"-y\",\n $ldap_pwfile, \"-n\", \"'objectclass=*'\") == 0);\n die \"cannot connect to slapd\" if ++$retries >= 300;\n\n\nAlthough I'm mildly tempted to rewrap the parameters, it's kinda odd how the\ntrailing parameter on one line, has its value on the next line.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Oct 2021 16:38:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ldap/t/001_auth.pl fails with openldap 2.5" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> So, does anybody see a reason not to go for the trivial\n> [ patch ]\n\nI'd be happy to rely on the buildfarm's opinion here.\n\n> Although I'm mildly tempted to rewrap the parameters, it's kinda odd how the\n> trailing parameter on one line, has its value on the next line.\n\nI'm betting that perltidy did that. If you want to fix it so it\nstays fixed, maybe reordering the parameters could help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Oct 2021 00:45:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ldap/t/001_auth.pl fails with openldap 2.5" }, { "msg_contents": "On Sun, Oct 10, 2021 at 12:39 PM Andres Freund <andres@anarazel.de> wrote:\n> Seems we need to replace -h & -p with a -H ldap://server:port/ style URI? 
I\n> think that's fine to do unconditionally, the -H schema is pretty old I think\n> (I seem to recall using it in the mid 2000s, when I learned to not like ldap\n> by experience).\n\n+1\n\n> The only reason I'm hesitating a bit is that f0e60ee4bc0, the commit adding\n> the ldap test suite, used an ldap:// uri for the server, but then 27cd521e6e7\n> (adding the ldapsearch) didn't use that for the ldapsearch? Thomas?\n\nMy mistake, I probably copied it from somewhere without reading the\ndeprecation notice in the man page. I found the discussion[1], which\nsays \"These options have been deprecated since 2000, so you've had\nliterally 20 years to migrate away from them\".\n\n[1] https://bugs.openldap.org/show_bug.cgi?id=8618\n\n\n", "msg_date": "Thu, 14 Oct 2021 11:41:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ldap/t/001_auth.pl fails with openldap 2.5" }, { "msg_contents": "Hi,\n\nOn 2021-10-10 00:45:41 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Although I'm mildly tempted to rewrap the parameters, it's kinda odd how the\n> > trailing parameter on one line, has its value on the next line.\n> \n> I'm betting that perltidy did that. If you want to fix it so it\n> stays fixed, maybe reordering the parameters could help.\n\nYou were right on that front. Since perltidy insisted on reflowing due to the\nreduction in number of parameters anyway, I did end up switching things around\nso that the parameters look a bit more reasonable.\n\n> > So, does anybody see a reason not to go for the trivial\n> > [ patch ]\n> \n> I'd be happy to rely on the buildfarm's opinion here.\n\nLet's see what it says...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Oct 2021 11:20:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ldap/t/001_auth.pl fails with openldap 2.5" } ]
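The substance of the fix in this thread is replacing the deprecated -h host / -p port options, which OpenLDAP 2.5's ldapsearch no longer accepts, with a single -H ldap://host:port URI. As an illustration outside the Perl test suite, here is a small Python helper that assembles such an invocation; the helper itself is invented for the example, and only the option layout follows the thread's patch:

```python
def ldapsearch_args(server, port, basedn, rootdn, pwfile):
    """Assemble an ldapsearch command line using the -H URI form
    rather than the deprecated -h/-p pair rejected by OpenLDAP 2.5.
    """
    ldap_url = f"ldap://{server}:{port}"
    return [
        "ldapsearch", "-H", ldap_url,
        "-s", "base",
        "-b", basedn,
        "-D", rootdn,
        "-y", pwfile,
        "-n", "objectclass=*",
    ]
```

Because -H carries both host and port in one URI, the resulting argument list contains no -h or -p at all, which is what makes it work on both old and new OpenLDAP client tools.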
[ { "msg_contents": "Hi\n\nI would like to fix an issue https://github.com/okbob/pspg/issues/188 where\nthe write to clipboard from pspg doesn`t work on macos. But it is hard to\nfix it without any macos. I am not a user of macos and I would not buy it\njust for this purpose.\n\nIs it possible to get some remote ssh access?\n\nRegards\n\nPavel", "msg_date": "Mon, 11 Oct 2021 06:31:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "is possible an remote access to some macos?" }, { "msg_contents": "On Mon, Oct 11, 2021 at 06:31:03AM +0200, Pavel Stehule wrote:\n> I would like to fix an issue https://github.com/okbob/pspg/issues/188 where\n> the write to clipboard from pspg doesn`t work on macos. But it is hard to\n> fix it without any macos. I am not a user of macos and I would not buy it\n> just for this purpose.\n> \n> Is it possible to get some remote ssh access?\n\nYou can request a GCC Compile Farm account\n(https://cfarm.tetaneutral.net/users/new/).\n\n\n", "msg_date": "Mon, 11 Oct 2021 18:49:26 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: is possible an remote access to some macos?" }, { "msg_contents": "On 10/11/2021 8:49 pm, Noah Misch wrote:\n> On Mon, Oct 11, 2021 at 06:31:03AM +0200, Pavel Stehule wrote:\n>> I would like to fix an issue https://github.com/okbob/pspg/issues/188 \n>> where\n>> the write to clipboard from pspg doesn`t work on macos. But it is \n>> hard to\n>> fix it without any macos. \n>> I am not a user of macos and I would not buy \n>> it\n>> just for this purpose.\n>> \n>> Is it possible to get some remote ssh access?\n> \n> You can request a GCC Compile Farm account\n> (https://cfarm.tetaneutral.net/users/new/).\n\nAWS also has macos instances:\nhttps://aws.amazon.com/pm/ec2-mac/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Mon, 11 Oct 2021 21:04:17 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: is possible an remote access to some macos?" } ]
[ { "msg_contents": "While creating an \"export snapshot\" I don't see any protection why the\nnumber of xids in the snapshot can not cross the\n\"GetMaxSnapshotXidCount()\"?.\n\nBasically, while converting the HISTORIC snapshot to the MVCC snapshot\nin \"SnapBuildInitialSnapshot()\", we add all the xids between\nsnap->xmin to snap->xmax to the MVCC snap->xip array (xids for which\ncommit were not recorded). The problem is that we add both topxids as\nwell as the subxids into the same array and expect that the \"xid\"\ncount does not cross the \"GetMaxSnapshotXidCount()\". So it seems like\nan issue but I am not sure what is the fix for this, some options are\na) Don't limit the xid count in the exported snapshot and dynamically\nresize the array b) Increase the limit to GetMaxSnapshotXidCount() +\nGetMaxSnapshotSubxidCount(). But in option b) there would still be a\nproblem that how do we handle the overflowed subtransaction?\n\nI have locally, reproduced the issue,\n\n1. Configuration\nmax_connections= 5\nautovacuum = off\nmax_worker_processes = 0\n\n2.Then from pgbench I have run the attached script (test.sql) from 5 clients.\n./pgbench -i postgres\n./pgbench -c4 -j4 -T 3000 -f test1.sql -P1 postgres\n\n3. 
Concurrently, create replication slot,\n[dilipkumar@localhost bin]$ ./psql \"dbname=postgres replication=database\"\npostgres[7367]=#\npostgres[6463]=# CREATE_REPLICATION_SLOT \"slot\" LOGICAL \"test_decoding\";\nERROR: 40001: initial slot snapshot too large\nLOCATION: SnapBuildInitialSnapshot, snapbuild.c:597\npostgres[6463]=# CREATE_REPLICATION_SLOT \"slot\" LOGICAL \"test_decoding\";\nERROR: XX000: clearing exported snapshot in wrong transaction state\nLOCATION: SnapBuildClearExportedSnapshot, snapbuild.c:690\n\nI could reproduce this issue, at least once in 8-10 attempts of\ncreating the replication slot.\n\nNote: After that issue, I have noticed one more issue \"clearing\nexported snapshot in wrong transaction state\", that is because the\n\"ExportInProgress\" is not cleared on the transaction abort, for this,\na simple fix is we can clear this state on the transaction abort,\nmaybe I will raise this as a separate issue?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Oct 2021 11:49:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Error \"initial slot snapshot too large\" in create replication slot" }, { "msg_contents": "At Mon, 11 Oct 2021 11:49:41 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> While creating an \"export snapshot\" I don't see any protection why the\n> number of xids in the snapshot can not cross the\n> \"GetMaxSnapshotXidCount()\"?.\n> \n> Basically, while converting the HISTORIC snapshot to the MVCC snapshot\n> in \"SnapBuildInitialSnapshot()\", we add all the xids between\n> snap->xmin to snap->xmax to the MVCC snap->xip array (xids for which\n> commit were not recorded). The problem is that we add both topxids as\n> well as the subxids into the same array and expect that the \"xid\"\n> count does not cross the \"GetMaxSnapshotXidCount()\". 
So it seems like\n> an issue but I am not sure what is the fix for this, some options are\n\nIt seems to me that it is a compromise between the restriction of the\nlegitimate snapshot and snapshots created by snapbuild. If the xids\noverflow, the resulting snapshot may lose a siginificant xid, i.e, a\ntop-level xid.\n\n> a) Don't limit the xid count in the exported snapshot and dynamically\n> resize the array b) Increase the limit to GetMaxSnapshotXidCount() +\n> GetMaxSnapshotSubxidCount(). But in option b) there would still be a\n> problem that how do we handle the overflowed subtransaction?\n\nI'm afraid that we shouldn't expand the size limits. If I understand\nit correctly, we only need the top-level xids in the exported snapshot\nand reorder buffer knows whether a xid is a top-level or not after\nestablishing a consistent snapshot.\n\nThe attached diff tries to make SnapBuildInitialSnapshot exclude\nsubtransaction from generated snapshots. It seems working fine for\nyou repro. (Of course, I'm not confident that it is the correct thing,\nthough..)\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 46e66608cf..4e452cce7c 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -3326,6 +3326,26 @@ ReorderBufferXidHasBaseSnapshot(ReorderBuffer *rb, TransactionId xid)\n }\n \n \n+/*\n+ * ReorderBufferXidIsKnownSubXact\n+ *\t\tReturns true if the xid is a known subtransaction.\n+ */\n+bool\n+ReorderBufferXidIsKnownSubXact(ReorderBuffer *rb, TransactionId xid)\n+{\n+\tReorderBufferTXN *txn;\n+\n+\ttxn = ReorderBufferTXNByXid(rb, xid, false,\n+\t\t\t\t\t\t\t\tNULL, InvalidXLogRecPtr, false);\n+\n+\t/* a known subtxn? 
*/\n+\tif (txn && rbtxn_is_known_subxact(txn))\n+\t\treturn true;\n+\n+\treturn false;\n+}\n+\n+\n /*\n * ---------------------------------------\n * Disk serialization support\ndiff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\nindex a5333349a8..12d283f4de 100644\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -591,12 +591,18 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n \n \t\tif (test == NULL)\n \t\t{\n-\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n-\t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n-\t\t\t\t\t\t errmsg(\"initial slot snapshot too large\")));\n+\t\t\tbool issubxact =\n+\t\t\t\tReorderBufferXidIsKnownSubXact(builder->reorder, xid);\n \n-\t\t\tnewxip[newxcnt++] = xid;\n+\t\t\tif (!issubxact)\n+\t\t\t{\t\t\t\t\n+\t\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n+\t\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t\t(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n+\t\t\t\t\t\t\t errmsg(\"initial slot snapshot too large\")));\n+\t\t\t\t\n+\t\t\t\tnewxip[newxcnt++] = xid;\n+\t\t\t}\n \t\t}\n \n \t\tTransactionIdAdvance(xid);\ndiff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h\nindex 5b40ff75f7..e5fa1051d7 100644\n--- a/src/include/replication/reorderbuffer.h\n+++ b/src/include/replication/reorderbuffer.h\n@@ -669,6 +669,7 @@ void\t\tReorderBufferXidSetCatalogChanges(ReorderBuffer *, TransactionId xid, XLog\n bool\t\tReorderBufferXidHasCatalogChanges(ReorderBuffer *, TransactionId xid);\n bool\t\tReorderBufferXidHasBaseSnapshot(ReorderBuffer *, TransactionId xid);\n \n+bool\t\tReorderBufferXidIsKnownSubXact(ReorderBuffer *rb, TransactionId xid);\n bool\t\tReorderBufferRememberPrepareInfo(ReorderBuffer *rb, TransactionId xid,\n \t\t\t\t\t\t\t\t\t\t\t XLogRecPtr prepare_lsn, XLogRecPtr end_lsn,\n \t\t\t\t\t\t\t\t\t\t\t TimestampTz prepare_time,", "msg_date": "Mon, 11 Oct 2021 
19:59:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Mon, Oct 11, 2021 at 4:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 11 Oct 2021 11:49:41 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > While creating an \"export snapshot\" I don't see any protection why the\n> > number of xids in the snapshot can not cross the\n> > \"GetMaxSnapshotXidCount()\"?.\n> >\n> > Basically, while converting the HISTORIC snapshot to the MVCC snapshot\n> > in \"SnapBuildInitialSnapshot()\", we add all the xids between\n> > snap->xmin to snap->xmax to the MVCC snap->xip array (xids for which\n> > commit were not recorded). The problem is that we add both topxids as\n> > well as the subxids into the same array and expect that the \"xid\"\n> > count does not cross the \"GetMaxSnapshotXidCount()\". So it seems like\n> > an issue but I am not sure what is the fix for this, some options are\n>\n> It seems to me that it is a compromise between the restriction of the\n> legitimate snapshot and snapshots created by snapbuild. If the xids\n> overflow, the resulting snapshot may lose a siginificant xid, i.e, a\n> top-level xid.\n>\n> > a) Don't limit the xid count in the exported snapshot and dynamically\n> > resize the array b) Increase the limit to GetMaxSnapshotXidCount() +\n> > GetMaxSnapshotSubxidCount(). But in option b) there would still be a\n> > problem that how do we handle the overflowed subtransaction?\n>\n> I'm afraid that we shouldn't expand the size limits. 
If I understand\n> it correctly, we only need the top-level xids in the exported snapshot\n\nBut since we are using this as an MVCC snapshot, if we don't have the\nsubxid and if we also don't mark the \"suboverflowed\" flag then IMHO\nthe sub-transaction visibility might be wrong. Am I missing something?\n\n> and reorder buffer knows whether a xid is a top-level or not after\n> establishing a consistent snapshot.\n>\n> The attached diff tries to make SnapBuildInitialSnapshot exclude\n> subtransaction from generated snapshots. It seems working fine for\n> you repro. (Of course, I'm not confident that it is the correct thing,\n> though..)\n>\n> What do you think about this?\n\nIf your statement that we only need top-xids in the exported snapshot\nis true, then this fix looks fine to me. If not then we might need to\nadd the sub-xids in the snapshot->subxip array and if it crosses the\nlimit of GetMaxSnapshotSubxidCount(), then we can mark \"suboverflowed\"\nflag.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:48:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Mon, 11 Oct 2021 16:48:10 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Mon, Oct 11, 2021 at 4:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 11 Oct 2021 11:49:41 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > While creating an \"export snapshot\" I don't see any protection why the\n> > > number of xids in the snapshot can not cross the\n> > > \"GetMaxSnapshotXidCount()\"?.\n> > >\n> > > Basically, while converting the HISTORIC snapshot to the MVCC snapshot\n> > > in \"SnapBuildInitialSnapshot()\", we add all the xids between\n> > > snap->xmin to snap->xmax to the MVCC snap->xip array (xids for which\n> > > commit were not recorded). 
The problem is that we add both topxids as\n> > > well as the subxids into the same array and expect that the \"xid\"\n> > > count does not cross the \"GetMaxSnapshotXidCount()\". So it seems like\n> > > an issue but I am not sure what is the fix for this, some options are\n> >\n> > It seems to me that it is a compromise between the restriction of the\n> > legitimate snapshot and snapshots created by snapbuild. If the xids\n> > overflow, the resulting snapshot may lose a siginificant xid, i.e, a\n> > top-level xid.\n> >\n> > > a) Don't limit the xid count in the exported snapshot and dynamically\n> > > resize the array b) Increase the limit to GetMaxSnapshotXidCount() +\n> > > GetMaxSnapshotSubxidCount(). But in option b) there would still be a\n> > > problem that how do we handle the overflowed subtransaction?\n> >\n> > I'm afraid that we shouldn't expand the size limits. If I understand\n> > it correctly, we only need the top-level xids in the exported snapshot\n> \n> But since we are using this as an MVCC snapshot, if we don't have the\n> subxid and if we also don't mark the \"suboverflowed\" flag then IMHO\n> the sub-transaction visibility might be wrong, Am I missing something?\n\nSorry, I should have set suboverflowed in the generated snapshot.\nHowever, we can store subxid list as well when the snapshot (or\nrunning_xact) is not overflown. These (should) work the same way.\n\nOn physical standby, snapshot is created just filling up only subxip\nwith all top and sub xids (procarray.c:2400). It would be better if we do\nthe same thing here.\n\n> > and reorder buffer knows whether a xid is a top-level or not after\n> > establishing a consistent snapshot.\n> >\n> > The attached diff tries to make SnapBuildInitialSnapshot exclude\n> > subtransaction from generated snapshots. It seems working fine for\n> > you repro. 
(Of course, I'm not confident that it is the correct thing,\n> > though..)\n> >\n> > What do you think about this?\n> \n> If your statement that we only need top-xids in the exported snapshot,\n> is true then this fix looks fine to me. If not then we might need to\n> add the sub-xids in the snapshot->subxip array and if it crosses the\n> limit of GetMaxSnapshotSubxidCount(), then we can mark \"suboverflowed\"\n> flag.\n\nSo I came up with the attached version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 12 Oct 2021 13:59:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 12 Oct 2021 13:59:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So I came up with the attached version.\n\nSorry, it was losing a piece of change.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 12 Oct 2021 14:05:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Oct 12, 2021 at 10:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 12 Oct 2021 13:59:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > So I came up with the attached version.\n>\n> Sorry, it was losing a piece of change.\n\nYeah, at a high level this is on the idea I have in mind, I will do a\ndetailed review in a day or two. 
Thanks for working on this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:30:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Oct 12, 2021 at 11:30 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 10:35 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 12 Oct 2021 13:59:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > So I came up with the attached version.\n> >\n> > Sorry, it was losing a piece of change.\n>\n> Yeah, at a high level this is on the idea I have in mind, I will do a\n> detailed review in a day or two. Thanks for working on this.\n\nWhile doing the detailed review, I think there are a couple of\nproblems with the patch. The main problem of storing all the xids in\nthe snap->subxip is that once we mark the snapshot overflown then\nXidInMVCCSnapshot will not search the subxip array, instead it will\nfetch the topXid and search in the snap->xip array. Another issue is\nthat the total xids could be GetMaxSnapshotSubxidCount()\n+GetMaxSnapshotXidCount().\n\nI think what we should be doing is that if the xid is a known subxid then\nadd it to the snap->subxip array, otherwise to the snap->xip array. So\nthe snap->xip array size will be GetMaxSnapshotXidCount() whereas the\nsnap->subxip array size will be GetMaxSnapshotSubxidCount(). 
And if\nthe array size is full then we can stop adding the subxids to the\narray.\n\nWhat is your thought on this?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 14:25:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Oct 19, 2021 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 11:30 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 12, 2021 at 10:35 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Tue, 12 Oct 2021 13:59:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > So I came up with the attached version.\n> > >\n> > > Sorry, it was losing a piece of change.\n> >\n> > Yeah, at a high level this is on the idea I have in mind, I will do a\n> > detailed review in a day or two. 
Thanks for working on this.\n>\n> While doing the detailed review, I think there are a couple of\n> problems with the patch, the main problem of storing all the xid in\n> the snap->subxip is that once we mark the snapshot overflown then the\n> XidInMVCCSnapshot, will not search the subxip array, instead it will\n> fetch the topXid and search in the snap->xip array.\n\nI missed that you are marking the snapshot as takenDuringRecovery so\nyour fix looks fine.\n\nAnother issue is\n> that the total xids could be GetMaxSnapshotSubxidCount()\n> +GetMaxSnapshotXidCount().\n>\n\nI have fixed this, the updated patch is attached.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 2 Nov 2021 16:40:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Hi,\n\nOn Tue, Nov 02, 2021 at 04:40:39PM +0530, Dilip Kumar wrote:\n> \n> I have fixed this, the updated patch is attached.\n\nThe cfbot reports that this patch doesn't compile:\nhttps://cirrus-ci.com/task/5642000073490432?logs=build\n\n[03:01:24.477] snapbuild.c: In function ‘SnapBuildInitialSnapshot’:\n[03:01:24.477] snapbuild.c:587:2: error: ‘newsubxcnt’ undeclared (first use in this function); did you mean ‘newsubxip’?\n[03:01:24.477] 587 | newsubxcnt = 0;\n[03:01:24.477] | ^~~~~~~~~~\n[03:01:24.477] | newsubxip\n[03:01:24.477] snapbuild.c:587:2: note: each undeclared identifier is reported only once for each function it appears in\n[03:01:24.477] snapbuild.c:535:8: warning: unused variable ‘maxxidcnt’ [-Wunused-variable]\n[03:01:24.477] 535 | int maxxidcnt;\n[03:01:24.477] | ^~~~~~~~~\n\nCould you send a new version? 
In the meantime I will switch the patch to\nWaiting on Author.\n\n\n", "msg_date": "Wed, 12 Jan 2022 18:38:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Wed, Jan 12, 2022 at 4:09 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Tue, Nov 02, 2021 at 04:40:39PM +0530, Dilip Kumar wrote:\n> >\n> > I have fixed this, the updated patch is attached.\n>\n> The cfbot reports that this patch doesn't compile:\n> https://cirrus-ci.com/task/5642000073490432?logs=build\n>\n> [03:01:24.477] snapbuild.c: In function ‘SnapBuildInitialSnapshot’:\n> [03:01:24.477] snapbuild.c:587:2: error: ‘newsubxcnt’ undeclared (first\n> use in this function); did you mean ‘newsubxip’?\n> [03:01:24.477] 587 | newsubxcnt = 0;\n> [03:01:24.477] | ^~~~~~~~~~\n> [03:01:24.477] | newsubxip\n> [03:01:24.477] snapbuild.c:587:2: note: each undeclared identifier is\n> reported only once for each function it appears in\n> [03:01:24.477] snapbuild.c:535:8: warning: unused variable ‘maxxidcnt’\n> [-Wunused-variable]\n> [03:01:24.477] 535 | int maxxidcnt;\n> [03:01:24.477] | ^~~~~~~~~\n>\n> Could you send a new version? 
In the meantime I will switch the patch to\n> Waiting on Author.\n>\n\nThanks for notifying, I will work on this and send the update patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 17 Jan 2022 09:27:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 2 Nov 2021 16:40:39 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Tue, Oct 19, 2021 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 12, 2021 at 11:30 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 12, 2021 at 10:35 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Tue, 12 Oct 2021 13:59:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > > So I came up with the attached version.\n> > > >\n> > > > Sorry, it was losing a piece of change.\n> > >\n> > > Yeah, at a high level this is on the idea I have in mind, I will do a\n> > > detailed review in a day or two. Thanks for working on this.\n> >\n> > While doing the detailed review, I think there are a couple of\n> > problems with the patch, the main problem of storing all the xid in\n> > the snap->subxip is that once we mark the snapshot overflown then the\n> > XidInMVCCSnapshot, will not search the subxip array, instead it will\n> > fetch the topXid and search in the snap->xip array.\n> \n> I missed that you are marking the snapshot as takenDuringRecovery so\n> your fix looks fine.\n> \n> Another issue is\n> > that the total xids could be GetMaxSnapshotSubxidCount()\n> > +GetMaxSnapshotXidCount().\n> >\n> \n> I have fixed this, the updated patch is attached.\n\nMmm. The size of the array cannot be larger than the numbers the\n*Count() functions return. Thus we cannot attach the oversized array\nto ->subxip. 
(I don't recall clearly but that would lead to assertion\nfailure somewhere..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 Jan 2022 14:31:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Mon, 17 Jan 2022 09:27:14 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \r\n> On Wed, Jan 12, 2022 at 4:09 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> > The cfbot reports that this patch doesn't compile:\r\n> > https://cirrus-ci.com/task/5642000073490432?logs=build\r\n> >\r\n> > [03:01:24.477] snapbuild.c: In function ‘SnapBuildInitialSnapshot’:\r\n> > [03:01:24.477] snapbuild.c:587:2: error: ‘newsubxcnt’ undeclared (first\r\n> > use in this function); did you mean ‘newsubxip’?\r\n> > [03:01:24.477] 587 | newsubxcnt = 0;\r\n> > [03:01:24.477] | ^~~~~~~~~~\r\n> > [03:01:24.477] | newsubxip\r\n> > [03:01:24.477] snapbuild.c:587:2: note: each undeclared identifier is\r\n> > reported only once for each function it appears in\r\n> > [03:01:24.477] snapbuild.c:535:8: warning: unused variable ‘maxxidcnt’\r\n> > [-Wunused-variable]\r\n> > [03:01:24.477] 535 | int maxxidcnt;\r\n> > [03:01:24.477] | ^~~~~~~~~\r\n> >\r\n> > Could you send a new version? In the meantime I will switch the patch to\r\n> > Waiting on Author.\r\n> >\r\n> \r\n> Thanks for notifying, I will work on this and send the update patch soon.\r\n\r\nme> Mmm. The size of the array cannot be larger than the numbers the\r\nme> *Connt() functions return. Thus we cannot attach the oversized array\r\nme> to ->subxip. 
(I don't recall clearly but that would lead to assertion\r\nme> failure somewhere..)\r\n\r\nThen, I fixed the v3 error and posted v4.\r\n\r\nTo recap:\r\n\r\nSnapBuildInitialSnapshot tries to store XIDs of both top and sub\r\ntransactions into snapshot->xip array but the array is easily\r\noverflowed and the CREATE_REPLICATION_SLOT command ends with an error.\r\n\r\nTo fix this, this patch is doing the following things.\r\n\r\n- Use subxip array instead of xip array to allow us to have a larger array\r\n for xids. 
So the snapshot is marked as takenDuringRecovery, which\r\n is a kind of abuse but largely reduces the chance of getting\r\n \"initial slot snapshot too large\" error.\r\n\r\n- Still if subxip is overflowed, retry with excluding subtransactions\r\n then set suboverflowed. This causes XidInMVCCSnapshot to (finally)\r\n scan over the subxip array for the targeted top-level xid.\r\n\r\nWe could take another way: make a !takenDuringRecovery snapshot by\r\nusing xip instead of subxip. It is cleaner but it has a far larger\r\nchance of needing to retry.\r\n\r\n(renamed the patch since it represented a part of the patch)\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Mon, 31 Jan 2022 15:20:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Mon, Jan 31, 2022 at 11:50 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> SnapBUildInitialSnapshot tries to store XIDS of both top and sub\n> transactions into snapshot->xip array but the array is easily\n> overflowed and CREATE_REPLICATOIN_SLOT command ends with an error.\n>\n> To fix this, this patch is doing the following things.\n>\n> - Use subxip array instead of xip array to allow us have larger array\n> for xids. 
it considers top transactions as\nwell so we don't need to add them separately.\n\n>\n> SnapBUildInitialSnapshot tries to store XIDS of both top and sub\n> transactions into snapshot->xip array but the array is easily\n> overflowed and CREATE_REPLICATOIN_SLOT command ends with an error.\n>\n> To fix this, this patch is doing the following things.\n>\n> - Use subxip array instead of xip array to allow us have larger array\n> for xids. So the snapshot is marked as takenDuringRecovery, which\n> is a kind of abuse but largely reduces the chance of getting\n> \"initial slot snapshot too large\" error.\n\nRight. I think the patch looks fine to me.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 13:52:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "В Пн, 07/02/2022 в 13:52 +0530, Dilip Kumar пишет:\n> On Mon, Jan 31, 2022 at 11:50 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Mon, 17 Jan 2022 09:27:14 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > \n> > me> Mmm. The size of the array cannot be larger than the numbers the\n> > me> *Connt() functions return. Thus we cannot attach the oversized array\n> > me> to ->subxip. (I don't recall clearly but that would lead to assertion\n> > me> failure somewhere..)\n> > \n> > Then, I fixed the v3 error and post v4.\n> \n> Yeah you are right, SetTransactionSnapshot() has that assertion.\n> Anyway after looking again it appears that\n> GetMaxSnapshotSubxidCount is the correct size because this is\n> PGPROC_MAX_CACHED_SUBXIDS +1, i.e. 
it considers top transactions as\n> well so we don't need to add them separately.\n> \n> > SnapBUildInitialSnapshot tries to store XIDS of both top and sub\n> > transactions into snapshot->xip array but the array is easily\n> > overflowed and CREATE_REPLICATOIN_SLOT command ends with an error.\n> > \n> > To fix this, this patch is doing the following things.\n> > \n> > - Use subxip array instead of xip array to allow us have larger array\n> > for xids. So the snapshot is marked as takenDuringRecovery, which\n> > is a kind of abuse but largely reduces the chance of getting\n> > \"initial slot snapshot too large\" error.\n> \n> Right. I think the patch looks fine to me.\n> \n\nGood day.\n\nI've looked to the patch. Personally I'd prefer dynamically resize\nxip array. But I think there is issue with upgrade if replica source\nis upgraded before destination, right?\n\nConcerning patch, I think more comments should be written about new\nusage case for `takenDuringRecovery`. May be this field should be renamed\nat all?\n\nAnd there are checks for `takenDuringRecovery` in `heapgetpage` and\n`heapam_scan_sample_next_tuple`. Are this checks affected by the change?\nNeither the preceding discussion nor commit message answer me.\n\n-------\n\nregards\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\n\n\n", "msg_date": "Sun, 13 Feb 2022 17:35:38 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Sun, 13 Feb 2022 17:35:38 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Пн, 07/02/2022 в 13:52 +0530, Dilip Kumar пишет:\n> > Right. I think the patch looks fine to me.\n> > \n> \n> Good day.\n> \n> I've looked to the patch. Personally I'd prefer dynamically resize\n> xip array. 
But I think there is issue with upgrade if replica source\n> is upgraded before destination, right?\n\nI don't think it is relevant. I think we don't convey snapshot via\nstreaming replication. But I think that expanding xip or subxip is\nwrong, since it is tied with ProcArray structure. (Even though we\nabuse the arrays in some situations, like this).\n\n> Concerning patch, I think more comments should be written about new\n> usage case for `takenDuringRecovery`. May be this field should be renamed\n> at all?\n\nI don't feel the need to rename it so much. It just signals that \"this\nsnapshot is in the form for recovery\". The more significant reason is\nthat I can't come up with a better name:p\n\nAnd the comment is slightly modified and gets a pointer to a detailed\ncomment.\n\n+\t * Although this snapshot is not acutally taken during recovery, all XIDs\n+\t * are stored in subxip. See GetSnapshotData for the details of that form\n+\t * of snapshot.\n\n\n> And there are checks for `takenDuringRecovery` in `heapgetpage` and\n> `heapam_scan_sample_next_tuple`. Are this checks affected by the change?\n> Neither the preceding discussion nor commit message answer me.\n\nThe snapshot works correctly, but for the heapgetpage case, it forces\nall_visible to be false. 
That unnecessarily prevents visibility check\nfrom skipping.\n\nAn annoying thing in SnapBuildInitialSnapshot is that we don't know\nthe number of xids before looping over the xid range, and we don't\nwant to bother sorting top-level xids and subxids unless we have to do\nso.\n\nIs it better that we hassle in SnapBuildInitialSnapshot to create a\n!takenDuringRecovery snapshot?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 01 Apr 2022 14:44:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Fri, 01 Apr 2022 14:44:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Sun, 13 Feb 2022 17:35:38 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > And there are checks for `takenDuringRecovery` in `heapgetpage` and\n> > `heapam_scan_sample_next_tuple`. Are this checks affected by the change?\n> > Neither the preceding discussion nor commit message answer me.\n> \n> The snapshot works correctly, but for the heapgetpage case, it foces\n> all_visible to be false. That unnecessarily prevents visibility check\n> from skipping.\n> \n> An annoying thing in SnapBuildInitialSnapshot is that we don't know\n> the number of xids before looping over the xid range, and we don't\n> want to bother sorting top-level xids and subxids unless we have to do\n> so.\n> \n> Is it better that we hassle in SnapBuildInitialSnapshot to create a\n> !takenDuringRecovery snapshot?\n\nSo this is that. 
v5 creates a regular snapshot.\n\nBy the way, is there any chance this could be committed to 15?\nOtherwise I'll immediately move this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 01 Apr 2022 15:53:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Thu, Mar 31, 2022 at 11:53 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> So this is that. v5 creates a regular snapshot.\n\nThis patch will need a quick rebase over 905c020bef9, which added\n`extern` to several missing locations.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 5 Jul 2022 11:32:42 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 5 Jul 2022 11:32:42 -0700, Jacob Champion <jchampion@timescale.com> wrote in \n> On Thu, Mar 31, 2022 at 11:53 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > So this is that. v5 creates a regular snapshot.\n> \n> This patch will need a quick rebase over 905c020bef9, which added\n> `extern` to several missing locations.\n\nThanks! Just rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 19 Jul 2022 11:55:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Mon, Jul 18, 2022 at 10:55 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Thanks! Just rebased.\n\nHi,\n\nI mentioned this patch to Andres in conversation, and he expressed a\nconcern that there might be no guarantee that we retain enough CLOG to\nlook up XIDs. 
Presumably this wouldn't be an issue when the snapshot\ndoesn't get marked suboverflowed, but what if it does?\n\nAdding Andres in the hopes that he may comment further.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 13:19:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Hi,\n\nThanks for working on this!\n\n\nI think this should include a test that fails without this change and succeeds\nwith it...\n\n\nOn 2022-07-19 11:55:06 +0900, Kyotaro Horiguchi wrote:\n> From abcf0a0e0b3e2de9927d8943a3e3c145ab189508 Mon Sep 17 00:00:00 2001\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Date: Tue, 19 Jul 2022 11:50:29 +0900\n> Subject: [PATCH v6] Create correct snapshot during CREATE_REPLICATION_SLOT\n\nThis sees a tad misleading - the previous snapshot wasn't borken, right?\n\n\n> +/*\n> + * ReorderBufferXidIsKnownSubXact\n> + *\t\tReturns true if the xid is a known subtransaction.\n> + */\n> +bool\n> +ReorderBufferXidIsKnownSubXact(ReorderBuffer *rb, TransactionId xid)\n> +{\n> +\tReorderBufferTXN *txn;\n> +\n> +\ttxn = ReorderBufferTXNByXid(rb, xid, false,\n> +\t\t\t\t\t\t\t\tNULL, InvalidXLogRecPtr, false);\n> +\n> +\t/* a known subtxn? */\n> +\tif (txn && rbtxn_is_known_subxact(txn))\n> +\t\treturn true;\n> +\n> +\treturn false;\n> +}\n\nThe comments here just seem to restate the code....\n\n\nIt's not obvious to me that it's the right design (or even correct) to ask\nreorderbuffer about an xid being a subxid. 
Maybe I'm missing something, but\nwhy would reorderbuffer even be guaranteed to know about all these subxids?\n\n\n> @@ -568,9 +571,17 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n>\n> \tMyProc->xmin = snap->xmin;\n>\n> -\t/* allocate in transaction context */\n> +\t/*\n> +\t * Allocate in transaction context.\n> +\t *\n> +\t * We could use only subxip to store all xids (takenduringrecovery\n> +\t * snapshot) but that causes useless visibility checks later so we hasle to\n> +\t * create a normal snapshot.\n> +\t */\n\nI can't really parse this comment at this point, and I seriously doubt I could\nlater on.\n\n\n> @@ -591,12 +605,24 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n>\n> \t\tif (test == NULL)\n> \t\t{\n> -\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n> -\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n> -\t\t\t\t\t\t errmsg(\"initial slot snapshot too large\")));\n> -\n> -\t\t\tnewxip[newxcnt++] = xid;\n> +\t\t\t/* Store the xid to the appropriate xid array */\n> +\t\t\tif (ReorderBufferXidIsKnownSubXact(builder->reorder, xid))\n> +\t\t\t{\n> +\t\t\t\tif (!overflowed)\n> +\t\t\t\t{\n> +\t\t\t\t\tif (newsubxcnt >= GetMaxSnapshotSubxidCount())\n> +\t\t\t\t\t\toverflowed = true;\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tnewsubxip[newsubxcnt++] = xid;\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n> +\t\t\t\t\telog(ERROR,\n> +\t\t\t\t\t\t \"too many transactions while creating snapshot\");\n> +\t\t\t\tnewxip[newxcnt++] = xid;\n> +\t\t\t}\n> \t\t}\n\nHm, this is starting to be pretty deeply nested...\n\n\nI wonder if a better fix here wouldn't be to allow importing a snapshot with a\nlarger ->xid array. 
Yes, we can't do that in CurrentSnapshotData, but IIRC we\nneed to be in a transactional snapshot anyway, which is copied anyway?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Sep 2022 14:51:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Hi,\n\nOn 2022-09-09 13:19:14 -0400, Robert Haas wrote:\n> I mentioned this patch to Andres in conversation, and he expressed a\n> concern that there might be no guarantee that we retain enough CLOG to\n> look up XIDs.\n\nI was concerned we wouldn't keep enough subtrans, rather than clog. But I\nthink we're ok, because we need to have an appropriate ->xmin for exporting /\nimporting the snapshot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Sep 2022 14:53:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Sep 13, 2022 at 3:22 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> It's not obvious to me that it's the right design (or even correct) to ask\n> reorderbuffer about an xid being a subxid. Maybe I'm missing something, but\n> why would reorderbuffer even be guaranteed to know about all these subxids?\n\nYeah, you are right, the reorderbuffer will only know about the\ntransaction for which changes got added to the reorder buffer. So\nthis seems not to be the right design idea.\n\n>\n> I wonder if a better fix here wouldn't be to allow importing a snapshot with a\n> larger ->xid array. Yes, we can't do that in CurrentSnapshotData, but IIRC we\n> need to be in a transactional snapshot anyway, which is copied anyway?\n\nYeah when I first found this issue, I thought that should be the\nsolution. 
But later it went in a different direction.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Sep 2022 07:00:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Thanks for raising this up, Robert and the comment, Andres.\n\nAt Tue, 13 Sep 2022 07:00:42 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Tue, Sep 13, 2022 at 3:22 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> >\n> > It's not obvious to me that it's the right design (or even correct) to ask\n> > reorderbuffer about an xid being a subxid. Maybe I'm missing something, but\n> > why would reorderbuffer even be guaranteed to know about all these subxids?\n> \n> Yeah, you are right, the reorderbuffer will only know about the\n> transaction for which changes got added to the reorder buffer. So\n> this seems not to be the right design idea.\n\nThat function is called after the SnapBuild reaches\nSNAPBUILD_CONSISTENT state ,or SnapBuildInitialSnapshot() rejects\nother than that state. That is, IIUC the top-sub relationship of all\nthe currently running transactions is fully known to reorder buffer.\nWe need a comment about that.\n\n> > I wonder if a better fix here wouldn't be to allow importing a snapshot with a\n> > larger ->xid array. Yes, we can't do that in CurrentSnapshotData, but IIRC we\n> > need to be in a transactional snapshot anyway, which is copied anyway?\n> \n> Yeah when I first found this issue, I thought that should be the\n> solution. 
But later it went in a different direction.\n\nI think that that is the best solution if rbtxn_is_known_subxact() is\nknown to be unreliable at the time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 15:22:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Sep 13, 2022 at 11:52 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks for raising this up, Robert and the comment, Andres.\n>\n> At Tue, 13 Sep 2022 07:00:42 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Tue, Sep 13, 2022 at 3:22 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > >\n> > > It's not obvious to me that it's the right design (or even correct) to ask\n> > > reorderbuffer about an xid being a subxid. Maybe I'm missing something, but\n> > > why would reorderbuffer even be guaranteed to know about all these subxids?\n> >\n> > Yeah, you are right, the reorderbuffer will only know about the\n> > transaction for which changes got added to the reorder buffer. So\n> > this seems not to be the right design idea.\n>\n> That function is called after the SnapBuild reaches\n> SNAPBUILD_CONSISTENT state ,or SnapBuildInitialSnapshot() rejects\n> other than that state. 
That is, IIUC the top-sub relationship of all\n> the currently running transactions is fully known to reorder buffer.\n> We need a comment about that.\n\nI don't think this assumption is true, any xid started after switching\nto the SNAPBUILD_FULL_SNAPSHOT and before switching to the\nSNAPBUILD_CONSISTENT, might still be in progress so we can not\nidentify whether they are subxact or not from reorder buffer.\n\nrefer to this comment:\n/*\n* c) transition from FULL_SNAPSHOT to CONSISTENT.\n*\n* In FULL_SNAPSHOT state (see d) ), and this xl_running_xacts'\n* oldestRunningXid is >= than nextXid from when we switched to\n* FULL_SNAPSHOT. This means all transactions that are currently in\n* progress have a catalog snapshot, and all their changes have been\n* collected. Switch to CONSISTENT.\n*/\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Sep 2022 12:08:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Mon, 12 Sep 2022 14:51:56 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> Thanks for working on this!\n> \n> \n> I think this should include a test that fails without this change and succeeds\n> with it...\n> \n> \n> On 2022-07-19 11:55:06 +0900, Kyotaro Horiguchi wrote:\n> > From abcf0a0e0b3e2de9927d8943a3e3c145ab189508 Mon Sep 17 00:00:00 2001\n> > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > Date: Tue, 19 Jul 2022 11:50:29 +0900\n> > Subject: [PATCH v6] Create correct snapshot during CREATE_REPLICATION_SLOT\n> \n> This sees a tad misleading - the previous snapshot wasn't borken, right?\n\nI saw it kind of broken that ->xip contains sub transactions. But I\ndidn't meant it's broken by \"correct\". 
Is \"proper\" suitable there?\n\n\n> > +/*\n> > + * ReorderBufferXidIsKnownSubXact\n> > + *\t\tReturns true if the xid is a known subtransaction.\n> > + */\n> > +bool\n> > +ReorderBufferXidIsKnownSubXact(ReorderBuffer *rb, TransactionId xid)\n> > +{\n> > +\tReorderBufferTXN *txn;\n> > +\n> > +\ttxn = ReorderBufferTXNByXid(rb, xid, false,\n> > +\t\t\t\t\t\t\t\tNULL, InvalidXLogRecPtr, false);\n> > +\n> > +\t/* a known subtxn? */\n> > +\tif (txn && rbtxn_is_known_subxact(txn))\n> > +\t\treturn true;\n> > +\n> > +\treturn false;\n> > +}\n> \n> The comments here just seem to restate the code....\n\nYeah, it is pulled from the existing code but result looks like so..\n\n> It's not obvious to me that it's the right design (or even correct) to ask\n> reorderbuffer about an xid being a subxid. Maybe I'm missing something, but\n> why would reorderbuffer even be guaranteed to know about all these subxids?\n\nI think you're missing that the code is visited only after the reorder\nbuffer's state becomes SNAPBUILD_CONSISTENT. I think\nrbtxn_is_known_subxact() is reliable at that stage.\n\n> > @@ -568,9 +571,17 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n> >\n> > \tMyProc->xmin = snap->xmin;\n> >\n> > -\t/* allocate in transaction context */\n> > +\t/*\n> > +\t * Allocate in transaction context.\n> > +\t *\n> > +\t * We could use only subxip to store all xids (takenduringrecovery\n> > +\t * snapshot) but that causes useless visibility checks later so we hasle to\n> > +\t * create a normal snapshot.\n> > +\t */\n> \n> I can't really parse this comment at this point, and I seriously doubt I could\n> later on.\n\nMmm. The \"takenduringrecovery\" is relly perplexing (it has been\nsomehow lower-cased..), but after removing the parenthesized part, it\nlooks like this. And it had a misspelling but I removed that word. 
Is\nthis still unreadable?\n\nWe could use only subxip to store all xids but that causes useless\nvisibility checks later so we create a normal snapshot.\n\n\n> > @@ -591,12 +605,24 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n> >\n> > \t\tif (test == NULL)\n> > \t\t{\n> > -\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n> > -\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n> > -\t\t\t\t\t\t errmsg(\"initial slot snapshot too large\")));\n> > -\n> > -\t\t\tnewxip[newxcnt++] = xid;\n> > +\t\t\t/* Store the xid to the appropriate xid array */\n> > +\t\t\tif (ReorderBufferXidIsKnownSubXact(builder->reorder, xid))\n> > +\t\t\t{\n> > +\t\t\t\tif (!overflowed)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tif (newsubxcnt >= GetMaxSnapshotSubxidCount())\n> > +\t\t\t\t\t\toverflowed = true;\n> > +\t\t\t\t\telse\n> > +\t\t\t\t\t\tnewsubxip[newsubxcnt++] = xid;\n> > +\t\t\t\t}\n> > +\t\t\t}\n> > +\t\t\telse\n> > +\t\t\t{\n> > +\t\t\t\tif (newxcnt >= GetMaxSnapshotXidCount())\n> > +\t\t\t\t\telog(ERROR,\n> > +\t\t\t\t\t\t \"too many transactions while creating snapshot\");\n> > +\t\t\t\tnewxip[newxcnt++] = xid;\n> > +\t\t\t}\n> > \t\t}\n> \n> Hm, this is starting to be pretty deeply nested...\n\nYeah, at least one if() is removable.\n\n> I wonder if a better fix here wouldn't be to allow importing a snapshot with a\n> larger ->xid array. Yes, we can't do that in CurrentSnapshotData, but IIRC we\n> need to be in a transactional snapshot anyway, which is copied anyway?\n\nThe other reason for oversized xip array is it causes visibility check\nwhen it is used. AFAICS XidInMVCCSnapshot has additional path for\ntakenDuringRecovery snapshots that contains a linear search (currently\nit is replaced by pg_lfind32()). 
This saves us from doing this for\nthat snapshot.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 15:45:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 13 Sep 2022 12:08:18 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Tue, Sep 13, 2022 at 11:52 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > That function is called after the SnapBuild reaches\n> > SNAPBUILD_CONSISTENT state ,or SnapBuildInitialSnapshot() rejects\n> > other than that state. That is, IIUC the top-sub relationship of all\n> > the currently running transactions is fully known to reorder buffer.\n> > We need a comment about that.\n> \n> I don't think this assumption is true, any xid started after switching\n> to the SNAPBUILD_FULL_SNAPSHOT and before switching to the\n> SNAPBUILD_CONSISTENT, might still be in progress so we can not\n> identify whether they are subxact or not from reorder buffer.\n\nYeah, I misunderstood that the relationship is recorded earlier\n(how?). Thus it is not reliable in the first place.\n\nI agree that the best way is oversized xip. 
\n\n\nBy the way, I feel that \"is >= than\" is redundant or plain wrong..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:10:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 13 Sep 2022 15:45:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 12 Sep 2022 14:51:56 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > This sees a tad misleading - the previous snapshot wasn't borken, right?\n> \n> I saw it kind of broken that ->xip contains sub transactions. But I\n> didn't meant it's broken by \"correct\". Is \"proper\" suitable there?\n\nNo. It's not broken if it is takenDuringRecovery. So this flag can be\nused to notify that xip can be oversized.\n\nI realized that rbtxn_is_known_subxact is not reliable. I'm\nredirecting to oversized xip. Pleas wait for a while.\n\nregareds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:15:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 13 Sep 2022 16:10:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 13 Sep 2022 12:08:18 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> > On Tue, Sep 13, 2022 at 11:52 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > That function is called after the SnapBuild reaches\n> > > SNAPBUILD_CONSISTENT state ,or SnapBuildInitialSnapshot() rejects\n> > > other than that state. 
That is, IIUC the top-sub relationship of all\n> > > the currently running transactions is fully known to reorder buffer.\n> > > We need a comment about that.\n> > \n> > I don't think this assumption is true, any xid started after switching\n> > to the SNAPBUILD_FULL_SNAPSHOT and before switching to the\n> > SNAPBUILD_CONSISTENT, might still be in progress so we can not\n> > identify whether they are subxact or not from reorder buffer.\n> \n> Yeah, I misunderstood that the relationship is recorded earlier\n> (how?). Thus it is not reliable in the first place.\n> \n> I agree that the best way is oversized xip. \n> \n> \n> By the way, I feel that \"is >= than\" is redundant or plain wrong..\n\nBy the way GetSnapshotData() does this:\n\n>\t\tsnapshot->subxip = (TransactionId *)\n>\t\t\tmalloc(GetMaxSnapshotSubxidCount() * sizeof(TransactionId));\n...\n>\tif (!snapshot->takenDuringRecovery)\n...\n>\telse\n>\t{\n>\t\tsubcount = KnownAssignedXidsGetAndSetXmin(snapshot->subxip, &xmin,\n>\t\t\t\t\t\t\t\t\t\t\t\t xmax);\n\nIt is possible that the subxip is overrun. We need to expand the array\nsomehow. Or assign the array of the size (GetMaxSnapshotXidCount() +\nGetMaxSnapshotSubxidCount()) for takenDuringRecovery snapshots.\n\n(I feel deja vu..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:30:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Sigh..\n\nAt Tue, 13 Sep 2022 16:30:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> It is possible that the subxip is overrun. We need to expand the array\n> somehow. Or assign the array of the size (GetMaxSnapshotXidCount() +\n> GetMaxSnapshotSubxidCount()) for takenDuringRecovery snapshots.\n\nAnd I found that this is already done. 
What we should do is the\nsame thing in snapbuild.\n\nSorry for the noise..\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:31:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Tue, 13 Sep 2022 16:15:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 13 Sep 2022 15:45:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Mon, 12 Sep 2022 14:51:56 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > > This sees a tad misleading - the previous snapshot wasn't borken, right?\n> > \n> > I saw it kind of broken that ->xip contains sub transactions. But I\n> > didn't meant it's broken by \"correct\". Is \"proper\" suitable there?\n> \n> No. It's not broken if it is takenDuringRecovery. So this flag can be\n> used to notify that xip can be oversized.\n> \n> I realized that rbtxn_is_known_subxact is not reliable. I'm\n> redirecting to oversized xip. Pleas wait for a while.\n\nHowever, the reader of saved snapshots (ImportSnapshot) has the\nrestriction that\n\n>\tif (xcnt < 0 || xcnt > GetMaxSnapshotXidCount())\n>\t\tereport(ERROR,\n\nand\n\n>\t\tif (xcnt < 0 || xcnt > GetMaxSnapshotSubxidCount())\n>\t\t\tereport(ERROR,\n (this xid is subxcnt)\n\nAnd ExportSnapshot replaces oversized subxip with overflowed.\n\nSo even when GetSnapshotData() returns a snapshot with oversized\nsubxip, it is saved as just \"overflowed\" on exporting. I don't think\nthis is the expected behavior since such (no xip and overflowed)\nsnapshot no longer works.\n\nOn the other hand, it seems to me that snapbuild doesn't like\ntakenDuringRecovery snapshots.\n\nSo snapshot needs additional flag signals that xip is oversized and\nall xid are held there. 
And also need to let takenDuringRecovery\nsuggest subxip is oversized.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 17:31:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Tue, Sep 13, 2022 at 05:31:05PM +0900, Kyotaro Horiguchi wrote:\n> And ExportSnapshot replaces oversized subxip with overflowed.\n> \n> So even when GetSnapshotData() returns a snapshot with oversized\n> subxip, it is saved as just \"overflowed\" on exporting. I don't think\n> this is the expected behavior since such (no xip and overflowed)\n> snapshot no longer works.\n> \n> On the other hand, it seems to me that snapbuild doesn't like\n> takenDuringRecovery snapshots.\n> \n> So snapshot needs additional flag signals that xip is oversized and\n> all xid are held there. And also need to let takenDuringRecovery\n> suggest subxip is oversized.\n\nThe discussion seems to have stopped here. As this is classified as a\nbug fix, I have moved this patch to next CF, waiting on author for the\nmoment.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 14:10:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Hi,\n\nOn 2022-10-12 14:10:15 +0900, Michael Paquier wrote:\n> The discussion seems to have stopped here. As this is classified as a\n> bug fix, I have moved this patch to next CF, waiting on author for the\n> moment.\n\nFWIW, I view this more as lifting a limitation. 
I wouldn't want to\nbackpatch the change.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 07:26:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Wed, 12 Oct 2022 at 01:10, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 13, 2022 at 05:31:05PM +0900, Kyotaro Horiguchi wrote:\n> > And ExportSnapshot repalces oversized subxip with overflowed.\n> >\n> > So even when GetSnapshotData() returns a snapshot with oversized\n> > subxip, it is saved as just \"overflowed\" on exporting. I don't think\n> > this is the expected behavior since such (no xip and overflowed)\n> > snapshot no longer works.\n> >\n> > On the other hand, it seems to me that snapbuild doesn't like\n> > takenDuringRecovery snapshots.\n> >\n> > So snapshot needs additional flag signals that xip is oversized and\n> > all xid are holded there. And also need to let takenDuringRecovery\n> > suggest subxip is oversized.\n>\n> The discussion seems to have stopped here. As this is classified as a\n> bug fix, I have moved this patch to next CF, waiting on author for the\n> moment.\n\nKyotoro Horiguchi, any chance you'll be able to work on this for this\ncommitfest? If so shout (or anyone else is planning to push it over\nthe line.... Andres?) otherwise I'll move it on to the next release.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 13:46:51 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Mon, 20 Mar 2023 13:46:51 -0400, \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> wrote in \n> On Wed, 12 Oct 2022 at 01:10, Michael Paquier <michael@paquier.xyz> wrote:\n> > The discussion seems to have stopped here. 
As this is classified as a\n> > bug fix, I have moved this patch to next CF, waiting on author for the\n> > moment.\n> \n> Kyotoro Horiguchi, any chance you'll be able to work on this for this\n> commitfest? If so shout (or anyone else is planning to push it over\n> the line.... Andres?) otherwise I'll move it on to the next release.\n\nUgg. sorry for being lazy. I have lost track of the conversation. I'm\ncurrently working on this and will come back soon with a new version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Mar 2023 14:27:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Wed, 22 Mar 2023 14:27:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 20 Mar 2023 13:46:51 -0400, \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> wrote in \n> > Kyotoro Horiguchi, any chance you'll be able to work on this for this\n> > commitfest? If so shout (or anyone else is planning to push it over\n> > the line.... Andres?) otherwise I'll move it on to the next release.\n> \n> Ugg. sorry for being lazy. I have lost track of the conversation. I'm\n> currently working on this and will come back soon with a new version.\n\nI relized that attempting to make SnapshotData.xip expansible was\nmaking procarray.c and snapmgr.c too complicated. The main reason is\nthat SnapShotData is allocated in various ways, like on the stack,\nusing palloc including xip/subxip arrays, with palloc then allocating\nxip/subxip arrays separately, or statically allocated and then having\nxip/subxip arrays malloc'ed later. 
This variety was making the\nexpansion logic a mess.\n\nSo I went back to square one and decided to use subxip as an extension\nfor the xip array instead.\n\nLike the comment added in the function SnapBuildInitialSnapshot\nmentions, I don't think we can reliably identify top-level XIDs. So,\nthis patch just increases the allowed number of XIDs by using the\nsubxip array.\n\n(The title of the patch was changed accordingly.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 23 Mar 2023 14:23:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Thu, Mar 23, 2023 at 10:53 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 22 Mar 2023 14:27:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Mon, 20 Mar 2023 13:46:51 -0400, \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> wrote in\n> > > Kyotoro Horiguchi, any chance you'll be able to work on this for this\n> > > commitfest? If so shout (or anyone else is planning to push it over\n> > > the line.... Andres?) otherwise I'll move it on to the next release.\n> >\n> > Ugg. sorry for being lazy. I have lost track of the conversation. I'm\n> > currently working on this and will come back soon with a new version.\n>\n> I relized that attempting to make SnapshotData.xip expansible was\n> making procarray.c and snapmgr.c too complicated. The main reason is\n> that SnapShotData is allocated in various ways, like on the stack,\n> using palloc including xip/subxip arrays, with palloc then allocating\n> xip/subxip arrays separately, or statically allocated and then having\n> xip/subxip arrays malloc'ed later. 
This variety was making the\n> expansion logic a mess.\n>\n> So I went back to square one and decided to use subxip as an extension\n> for the xip array instead.\n>\n> Like the comment added in the function SnapBuildInitialSnapshot\n> mentions, I don't think we can reliably identify top-level XIDs. So,\n> this patch just increases the allowed number of XIDs by using the\n> subxip array.\n\nThanks for working on this, your idea looks fine but my only worry is\nthat in the future if someone tries to change the logic of\nXidInMVCCSnapshot() then they must be aware that the snap->xip array\nand snap->subxip array no long distinguishes the top xids and subxids.\nI agree with the current logic if we are not marking sub-overflow then\nthere is no issue, so can we document this in the SnapshotData\nstructure?\n\nAlso, there are some typos in the patch\n/idetify/identify\n/carete/create\n/Aallocate/Allocate\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Mar 2023 14:15:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Thanks for looking this!\r\n\r\nAt Thu, 23 Mar 2023 14:15:12 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \r\n> On Thu, Mar 23, 2023 at 10:53 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> Thanks for working on this, your idea looks fine but my only worry is\r\n> that in the future if someone tries to change the logic of\r\n> XidInMVCCSnapshot() then they must be aware that the snap->xip array\r\n> and snap->subxip array no long distinguishes the top xids and subxids.\r\n\r\nYeah, I had the same thought when I was working on the posted version.\r\n\r\n> I agree with the current logic if we are not marking sub-overflow then\r\n> there is no issue, so can we document this in the SnapshotData\r\n> structure?\r\n\r\n(I found that it was alrady 
mentioned...)\r\n\r\nIn a unpublished version (what I referred to as \"a mess\"), I added a\r\nflag called \"topsub_mixed\" to SnapshotData, indicating that XIDs of\r\ntop and sub transactions are stored in xip and subxip arrays in a\r\nmixed manner. However, I eventually removed it since it could only be\r\nused for sanity checks related to suboverflowed.\r\n\r\nI inserted the following sentense in the middle of the comments for\r\nxip and subxip.\r\n\r\n> In the case of !suboverflowed, there's a situation where this\r\n> contains both top and sub-transaction IDs in a mixed manner.\r\n\r\nAnd added similar a similar sentense to a comment of\r\nXidInMVCCSnapshot.\r\n\r\nWhile doning this, I realized that we could simplify and optimize XID\r\nsearch code by combining the two XID arrays. If !suboverflowed, the\r\narray stored all active XIDs of both top and\r\nsub-transactions. Otherwise it only stores active top XIDs and maybe\r\nXIDs of some sub-transactions. If many subXIDs are stored when\r\noverflowed, there might lead to some degradation but I think the win\r\nwe gain from searching just one XID array in most cases outweighs\r\nthat. (I didn't do this (of course) in this version.)\r\n\r\n> Also, there are some typos in the patch\r\n> /idetify/identify\r\n> /carete/create\r\n> /Aallocate/Allocate\r\n\r\nOops! Thanks for pointing out them.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Fri, 24 Mar 2023 12:01:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Thu, Mar 23, 2023 at 11:02 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> [ new patch ]\n\nWell, I guess nobody is too excited about fixing this, because it's\nbeen another 10 months with no discussion. 
Andres doesn't even seem to\nthink this is as much a bug as it is a limitation, for all that it's\nfiled in the CF application under bug fixes. I kind of wonder if we\nshould just close this entry in the CF, but I'll hold off on that for\nnow.\n\n /*\n * For normal MVCC snapshot this contains the all xact IDs that are in\n * progress, unless the snapshot was taken during recovery in which case\n- * it's empty. For historic MVCC snapshots, the meaning is inverted, i.e.\n- * it contains *committed* transactions between xmin and xmax.\n+ * it's empty. In the case of !suboverflowed, there's a situation where\n+ * this contains both top and sub-transaction IDs in a mixed manner. For\n+ * historic MVCC snapshots, the meaning is inverted, i.e. it contains\n+ * *committed* transactions between xmin and xmax.\n *\n * note: all ids in xip[] satisfy xmin <= xip[i] < xmax\n */\n\nI have to say that I don't like this at all. It's bad enough that we\nalready use the xip/subxip arrays in two different ways depending on\nthe situation. Increasing that to three different ways seems painful.\nHow is anyone supposed to keep track of how the array is being used at\nwhich point in the code?\n\nIf we are going to do that, I suspect it needs comment updates in more\nplaces than what the patch does currently. For instance:\n\n+ if (newxcnt < xiplen)\n+ newxip[newxcnt++] = xid;\n+ else\n+ newsubxip[newsubxcnt++] = xid;\n\nJust imagine coming across this code in 5 or 10 years and finding that\nit had no comment explaining anything. Yikes!\n\nAside from the details of the patch, and perhaps more seriously, I'm\nnot really clear that we have consensus on an approach. 
A few\ndifferent proposals seem to have been floated, and it doesn't seem\nlike anybody hates anybody else's idea completely, but it doesn't\nreally seem like everyone agrees on what to do, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 15:17:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Sat, 6 Jan 2024 at 01:47, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 11:02 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > [ new patch ]\n>\n> Well, I guess nobody is too excited about fixing this, because it's\n> been another 10 months with no discussion. Andres doesn't even seem to\n> think this is as much a bug as it is a limitation, for all that it's\n> filed in the CF application under bug fixes. I kind of wonder if we\n> should just close this entry in the CF, but I'll hold off on that for\n> now.\n\nI have changed the status of the patch to \"Waiting on Author\" as we\ndon't have a concrete patch with an accepted design which is in a\nreviewable shape. We can think if we want to pursue this patch further\nor probably close this in the current commitfest and start it again\nwhen someone wants to work on this more actively.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 19:41:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "Thank you for the comments. 
This will help move the discussion\r\nforward.\r\n\r\nAt Fri, 5 Jan 2024 15:17:11 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \r\n> On Thu, Mar 23, 2023 at 11:02 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > [ new patch ]\r\n> \r\n> Well, I guess nobody is too excited about fixing this, because it's\r\n> been another 10 months with no discussion. Andres doesn't even seem to\r\n> think this is as much a bug as it is a limitation, for all that it's\r\n> filed in the CF application under bug fixes. I kind of wonder if we\r\n> should just close this entry in the CF, but I'll hold off on that for\r\n> now.\r\n\r\nPerhaps you are correct. Ultimately, this issue is largely\r\ntheoretical, and I don't believe anyone would be inconvenienced by\r\nimposing this contraint.\r\n\r\n> * note: all ids in xip[] satisfy xmin <= xip[i] < xmax\r\n> */\r\n> \r\n> I have to say that I don't like this at all. It's bad enough that we\r\n> already use the xip/subxip arrays in two different ways depending on\r\n> the situation. Increasing that to three different ways seems painful.\r\n> How is anyone supposed to keep track of how the array is being used at\r\n> which point in the code?\r\n\r\nI understand. So, summirizing the current state briefly, I believe it\r\nas follows:\r\n\r\na. snapbuild lacks a means to differentiate between top and sub xids\r\n during snapshot building.\r\n\r\nb. Abusing takenDuringRecovery could lead to potential issues, so it\r\n has been rejected.\r\n\r\nc. Dynamic sizing of xip is likely to have a significant impact on\r\n performance (as mentioned in the comments of GetSnapshotData).\r\n\r\nd. (new!) Adding a third storing method is not favored.\r\n\r\nAs a method to satisfy these prerequisites, I think one direction\r\ncould be to split takenDuringRecovery into flags indicating the\r\nstorage method and creation timing. I present this as a last-ditch\r\neffort. It's a rough proposal, and further validation should be\r\nnecessary. 
If this direction is also not acceptable, I'll proceed with\r\nremoving the CF entry.\r\n\r\n> If we are going to do that, I suspect it needs comment updates in more\r\n> places than what the patch does currently. For instance:\r\n> \r\n> + if (newxcnt < xiplen)\r\n> + newxip[newxcnt++] = xid;\r\n> + else\r\n> + newsubxip[newsubxcnt++] = xid;\r\n> \r\n> Just imagine coming across this code in 5 or 10 years and finding that\r\n> it had no comment explaining anything. Yikes!\r\n\r\n^^;\r\n\r\n> Aside from the details of the patch, and perhaps more seriously, I'm\r\n> not really clear that we have consensus on an approach. A few\r\n> different proposals seem to have been floated, and it doesn't seem\r\n> like anybody hates anybody else's idea completely, but it doesn't\r\n> really seem like everyone agrees on what to do, either.\r\n\r\nI don't fully agree with that. It's not so much that I dislike other\r\nproposals, but rather that we haven't been able to find a definitive\r\nsolution that stands out.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Fri, 12 Jan 2024 11:46:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Thu, Jan 11, 2024 at 9:47 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I understand. So, summirizing the current state briefly, I believe it\n> as follows:\n>\n> a. snapbuild lacks a means to differentiate between top and sub xids\n> during snapshot building.\n>\n> b. Abusing takenDuringRecovery could lead to potential issues, so it\n> has been rejected.\n>\n> c. Dynamic sizing of xip is likely to have a significant impact on\n> performance (as mentioned in the comments of GetSnapshotData).\n>\n> d. (new!) 
Adding a third storing method is not favored.\n>\n> As a method to satisfy these prerequisites, I think one direction\n> could be to split takenDuringRecovery into flags indicating the\n> storage method and creation timing. I present this as a last-ditch\n> effort. It's a rough proposal, and further validation should be\n> necessary. If this direction is also not acceptable, I'll proceed with\n> removing the CF entry.\n\nI think that the idea of evolving takenDuringRecovery could\npotentially work for this problem and maybe for some other things as\nwell. I remember studying that flag before and coming to the\nconclusion that it had some pretty specific and surprising semantics\nthat weren't obvious from the name -- I don't remember the details --\nand so I think improving it could be useful. Also, I think that\nmultiple storage methods could be more palatable if there were a clear\nway to distinguish which one was in use and good comments explaining\nthe distinction in relevant places.\n\nHowever, I wonder whether this whole area is in need of a bigger\nrethink. There seem to be a number of situations in which the split\ninto xip and subxip arrays is not very convenient, and also some\nsituations where it's quite important. Sometimes we want to record\nwhat's committed, and sometimes what isn't. It's all a bit messy and\ninconsistent. The necessity of limiting snapshot size is annoying,\ntoo. I have no real idea what can be done about all of this, but what\nstrikes me is that the current system has grown up incrementally: we\nstarted with a data structure designed for the original use case, and\nnow by gradually adding new use cases things have gotten complicated.\nIf we were designing things over from scratch, maybe we'd do it\ndifferently and end up with something less messy. And maybe someone\ncan imagine a redesign that is likewise less messy.\n\nBut on the other hand, maybe not. Perhaps we can't really do any\nbetter than what we have. 
Then the question becomes whether this case\nis important enough to justify additional code complexity. I don't\nthink I've personally seen users run into this problem so I have no\nspecial reason to think that it's important, but if it's causing\nissues for other people then maybe it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:28:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "At Fri, 12 Jan 2024 11:28:09 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> However, I wonder whether this whole area is in need of a bigger\n> rethink. There seem to be a number of situations in which the split\n> into xip and subxip arrays is not very convenient, and also some\n> situations where it's quite important. Sometimes we want to record\n> what's committed, and sometimes what isn't. It's all a bit messy and\n> inconsistent. The necessity of limiting snapshot size is annoying,\n> too. I have no real idea what can be done about all of this, but what\n> strikes me is that the current system has grown up incrementally: we\n> started with a data structure designed for the original use case, and\n> now by gradually adding new use cases things have gotten complicated.\n> If we were designing things over from scratch, maybe we'd do it\n> differently and end up with something less messy. And maybe someone\n> can imagine a redesign that is likewise less messy.\n> \n> But on the other hand, maybe not. Perhaps we can't really do any\n> better than what we have. Then the question becomes whether this case\n> is important enough to justify additional code complexity. 
I don't\n> think I've personally seen users run into this problem so I have no\n> special reason to think that it's important, but if it's causing\n> issues for other people then maybe it is.\n\nThank you for the deep insights. I have understood your points. As I\ncan't think of any further simple modifications on this line, I will\nwithdraw this CF entry. At the moment, I also lack a fundamental,\ncomprehensive solution, but should I or anyone else come up with\nsuch a solution in the future, I believe it would be worth a separate\ndiscussion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:18:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" }, { "msg_contents": "On Mon, Jan 15, 2024 at 9:18 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Thank you for the deep insights. I have understood your points. As I\n> can't think of any further simple modifications on this line, I will\n> withdraw this CF entry. At the moment, I also lack a fundamental,\n> comprehensive solution, but should I or anyone else come up with\n> such a solution in the future, I believe it would be worth a separate\n> discussion.\n\nI completely agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 09:15:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error \"initial slot snapshot too large\" in create replication\n slot" } ]
[ { "msg_contents": "Hi,\n\nWhen testing logical replication, I found a case which caused assert coredump on\nlatest HEAD. The reproduction steps are as follows:\n\n1)\n----publisher----\ncreate table test(i int);\ncreate publication pub for table test;\nbegin;\n\ninsert into test values(1);\n\n2)\n----subscriber----\ncreate table test(i int);\ncreate subscription sub connection 'dbname=postgres port=10000' publication pub;\n- wait for a second and Ctrl-C\n\n3)\n----publisher----\ncommit;\n\nI can see the walsender tried to release a not-quite-ready replication slot\nthat was created when creating a subscription. But the pgstat has been shutdown\nbefore invoking ReplicationSlotRelease().\n\nThe stack is as follows:\n\n#2 in ExceptionalCondition (pgstat_is_initialized && !pgstat_is_shutdown)\n#3 in pgstat_assert_is_up () at pgstat.c:4852\n#4 in pgstat_send (msg=msg@entry=0x7ffe716f7470, len=len@entry=144) at pgstat.c:3075\n#5 in pgstat_report_replslot_drop (slotname=slotname@entry=0x7fbcf57a3c98 \"sub\") at pgstat.c:1869\n#6 in ReplicationSlotDropPtr (slot=0x7fbcf57a3c80) at slot.c:696\n#7 in ReplicationSlotDropAcquired () at slot.c:585\n#8 in ReplicationSlotRelease () at slot.c:482\n#9 in ProcKill (code=<optimized out>, arg=<optimized out>) at proc.c:852\n#10 in shmem_exit (code=code@entry=0) at ipc.c:272\n#11 in proc_exit_prepare (code=code@entry=0) at ipc.c:194\n#12 in proc_exit (code=code@entry=0) at ipc.c:107\n#13 in ProcessRepliesIfAny () at walsender.c:1807\n#14 in WalSndWaitForWal (loc=loc@entry=22087632) at walsender.c:1417\n#15 in logical_read_xlog_page (state=0x2f8c600, targetPagePtr=22085632,\n reqLen=, targetRecPtr=, cur_page=0x2f6c1e0 \"\\016\\321\\005\") at walsender.c:821\n#16 in ReadPageInternal (state=state@entry=0x2f8c600,\n pageptr=pageptr@entry=22085632, reqLen=reqLen@entry=2000) at xlogreader.c:667\n#17 in XLogReadRecord (state=0x2f8c600,\n errormsg=errormsg@entry=0x7ffe716f7f98) at xlogreader.c:337\n#18 in DecodingContextFindStartpoint 
(ctx=ctx@entry=0x2f8c240)\n at logical.c:606\n#19 in CreateReplicationSlot (cmd=cmd@entry=0x2f1aef0)\n\nIs this behavior expected ?\n\nBest regards,\nHou zhijie\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 07:55:19 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Drop replslot after pgstat_shutdown cause assert coredump" }, { "msg_contents": "On Mon, Oct 11, 2021 at 6:55 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> I can see the walsender tried to release a not-quite-ready repliaction slot\n> that was created when create a subscription. But the pgstat has been shutdown\n> before invoking ReplicationSlotRelease().\n>\n> The stack is as follows:\n>\n> #2 in ExceptionalCondition (pgstat_is_initialized && !pgstat_is_shutdown)\n> #3 in pgstat_assert_is_up () at pgstat.c:4852\n> #4 in pgstat_send (msg=msg@entry=0x7ffe716f7470, len=len@entry=144) at pgstat.c:3075\n> #5 in pgstat_report_replslot_drop (slotname=slotname@entry=0x7fbcf57a3c98 \"sub\") at pgstat.c:1869\n> #6 in ReplicationSlotDropPtr (slot=0x7fbcf57a3c80) at slot.c:696\n> #7 in ReplicationSlotDropAcquired () at slot.c:585\n> #8 in ReplicationSlotRelease () at slot.c:482\n> #9 in ProcKill (code=<optimized out>, arg=<optimized out>) at proc.c:852\n> #10 in shmem_exit (code=code@entry=0) at ipc.c:272\n> #11 in proc_exit_prepare (code=code@entry=0) at ipc.c:194\n> #12 in proc_exit (code=code@entry=0) at ipc.c:107\n> #13 in ProcessRepliesIfAny () at walsender.c:1807\n> #14 in WalSndWaitForWal (loc=loc@entry=22087632) at walsender.c:1417\n> #15 in logical_read_xlog_page (state=0x2f8c600, targetPagePtr=22085632,\n> reqLen=, targetRecPtr=, cur_page=0x2f6c1e0 \"\\016\\321\\005\") at walsender.c:821\n> #16 in ReadPageInternal (state=state@entry=0x2f8c600,\n> pageptr=pageptr@entry=22085632, reqLen=reqLen@entry=2000) at xlogreader.c:667\n> #17 in XLogReadRecord (state=0x2f8c600,\n> errormsg=errormsg@entry=0x7ffe716f7f98) at xlogreader.c:337\n> 
#18 in DecodingContextFindStartpoint (ctx=ctx@entry=0x2f8c240)\n> at logical.c:606\n> #19 in CreateReplicationSlot (cmd=cmd@entry=0x2f1aef0)\n>\n> Is this behavior expected ?\n>\n\nI'd say it's not!\n\nJust looking at the stacktrace, I'm thinking that the following commit\nmay have had a bearing on this problem, by causing pgstat to be\nshutdown earlier:\n\ncommit fb2c5028e63589c01fbdf8b031be824766dc7a68\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Fri Aug 6 10:05:57 2021 -0700\n\n pgstat: Schedule per-backend pgstat shutdown via before_shmem_exit().\n\n\nCan you see if the problem can be reproduced prior to this commit?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 12 Oct 2021 00:15:40 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop replslot after pgstat_shutdown cause assert coredump" }, { "msg_contents": "\n\nOn 2021/10/11 22:15, Greg Nancarrow wrote:\n> On Mon, Oct 11, 2021 at 6:55 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>>\n>> I can see the walsender tried to release a not-quite-ready repliaction slot\n>> that was created when create a subscription. 
But the pgstat has been shutdown\n>> before invoking ReplicationSlotRelease().\n>>\n>> The stack is as follows:\n>>\n>> #2 in ExceptionalCondition (pgstat_is_initialized && !pgstat_is_shutdown)\n>> #3 in pgstat_assert_is_up () at pgstat.c:4852\n>> #4 in pgstat_send (msg=msg@entry=0x7ffe716f7470, len=len@entry=144) at pgstat.c:3075\n>> #5 in pgstat_report_replslot_drop (slotname=slotname@entry=0x7fbcf57a3c98 \"sub\") at pgstat.c:1869\n>> #6 in ReplicationSlotDropPtr (slot=0x7fbcf57a3c80) at slot.c:696\n>> #7 in ReplicationSlotDropAcquired () at slot.c:585\n>> #8 in ReplicationSlotRelease () at slot.c:482\n>> #9 in ProcKill (code=<optimized out>, arg=<optimized out>) at proc.c:852\n>> #10 in shmem_exit (code=code@entry=0) at ipc.c:272\n>> #11 in proc_exit_prepare (code=code@entry=0) at ipc.c:194\n>> #12 in proc_exit (code=code@entry=0) at ipc.c:107\n>> #13 in ProcessRepliesIfAny () at walsender.c:1807\n>> #14 in WalSndWaitForWal (loc=loc@entry=22087632) at walsender.c:1417\n>> #15 in logical_read_xlog_page (state=0x2f8c600, targetPagePtr=22085632,\n>> reqLen=, targetRecPtr=, cur_page=0x2f6c1e0 \"\\016\\321\\005\") at walsender.c:821\n>> #16 in ReadPageInternal (state=state@entry=0x2f8c600,\n>> pageptr=pageptr@entry=22085632, reqLen=reqLen@entry=2000) at xlogreader.c:667\n>> #17 in XLogReadRecord (state=0x2f8c600,\n>> errormsg=errormsg@entry=0x7ffe716f7f98) at xlogreader.c:337\n>> #18 in DecodingContextFindStartpoint (ctx=ctx@entry=0x2f8c240)\n>> at logical.c:606\n>> #19 in CreateReplicationSlot (cmd=cmd@entry=0x2f1aef0)\n>>\n>> Is this behavior expected ?\n>>\n> \n> I'd say it's not!\n\nYes. 
I think this is a bug.\n\n\n> Just looking at the stacktrace, I'm thinking that the following commit\n> may have had a bearing on this problem, by causing pgstat to be\n> shutdown earlier:\n> \n> commit fb2c5028e63589c01fbdf8b031be824766dc7a68\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Fri Aug 6 10:05:57 2021 -0700\n> \n> pgstat: Schedule per-backend pgstat shutdown via before_shmem_exit().\n> \n> \n> Can you see if the problem can be reproduced prior to this commit?\n\nEven in prior to the commit, pgstat_shutdown_hook() can be called\nbefore ProcKill() at the backend exit, so ISTM that the problem can\nbe reproduced.\n\nProbably we need to make sure that pgstat_shutdown_hook() is called\nafter ProcKill(), e.g., by registering pgstat_shutdown_hook() into\non_proc_exit_list (I'm not sure if this change is safe, though).\nOr maybe pgstat logic for replication slot drop needs to be overhauled.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 22 Oct 2021 02:10:21 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Drop replslot after pgstat_shutdown cause assert coredump" }, { "msg_contents": "At Fri, 22 Oct 2021 02:10:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Even in prior to the commit, pgstat_shutdown_hook() can be called\n> before ProcKill() at the backend exit, so ISTM that the problem can\n> be reproduced.\n> \n> Probably we need to make sure that pgstat_shutdown_hook() is called\n> after ProcKill(), e.g., by registering pgstat_shutdown_hook() into\n\nConsidering the coming shared-memory based stats collector, pgstat\nmust be shutdown before shared memory shutdown. Every operation that\nrequires stats collector also must be shut down before the pgstat\nshutdown. 
A naive solution would be having before-pgstat-shutdown hook\nbut I'm not sure it's the right direction.\n\n> on_proc_exit_list (I'm not sure if this change is safe, though).\n> Or maybe pgstat logic for replication slot drop needs to be\n> overhauled.\n\nI think we don't want to lose the stats numbers of the to-be-dropped\nslot. So the slot-drop must happen before pgstat shutdown.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 22 Oct 2021 09:45:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop replslot after pgstat_shutdown cause assert coredump" }, { "msg_contents": "I said:\n> Considering the coming shared-memory based stats collector, pgstat\n> must be shutdown before shared memory shutdown. Every operation that\n> requires stats collector also must be shut down before the pgstat\n> shutdown. A naive solution would be having before-pgstat-shutdown hook\n> but I'm not sure it's the right direction.\n\nFor this particular issue, we can add an explicit initialization phase\nof replication slot per backend, which simply registers before_shmem\ncallback. It would work fine unless we carelessly place the\ninitialization before pgstat_initialize() (not pgstat_init()) call.\n\n(Honestly, I haven't been able to reproduce the issue itself for\n myself yet..)\n\n> > on_proc_exit_list (I'm not sure if this change is safe, though).\n> > Or maybe pgstat logic for replication slot drop needs to be\n> > overhauled.\n> \n> I think we don't want to lose the stats numbers of the to-be-dropped\n> slot. So the slot-drop must happen before pgstat shutdown.\n\nI haven't sought other similar issues. 
I'm going to check it if they,\nif any, can be fixed the same way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex b7d0fbaefd..13762f82af 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -306,6 +306,8 @@ static bool pgstat_is_initialized = false;\n static bool pgstat_is_shutdown = false;\n #endif\n \n+/* per-backend variable for assertion */\n+bool pgstat_initialized PG_USED_FOR_ASSERTS_ONLY = false;\n \n /* ----------\n * Local function forward declarations\n@@ -3036,6 +3038,7 @@ pgstat_initialize(void)\n \n \t/* Set up a process-exit hook to clean up */\n \tbefore_shmem_exit(pgstat_shutdown_hook, 0);\n+\tpgstat_initialized = true;\n \n #ifdef USE_ASSERT_CHECKING\n \tpgstat_is_initialized = true;\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex 1c6c0c7ce2..e0430aefa9 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -46,6 +46,7 @@\n #include \"pgstat.h\"\n #include \"replication/slot.h\"\n #include \"storage/fd.h\"\n+#include \"storage/ipc.h\"\n #include \"storage/proc.h\"\n #include \"storage/procarray.h\"\n #include \"utils/builtins.h\"\n@@ -160,6 +161,33 @@ ReplicationSlotsShmemInit(void)\n \t}\n }\n \n+/*\n+ * Exit hook to cleanup replication slots.\n+ */\n+static void\n+ReplicationSlotShutdown(int code, Datum arg)\n+{\n+\t/* Make sure active replication slots are released */\n+\tif (MyReplicationSlot != NULL)\n+\t\tReplicationSlotRelease();\n+\n+\t/* Also cleanup all the temporary slots. 
*/\n+\tReplicationSlotCleanup();\n+}\n+\n+/*\n+ * Initialize of replication slot facility per backend.\n+ */\n+void\n+ReplicationSlotInit(void)\n+{\n+\tif (max_replication_slots > 0)\n+\t{\n+\t\tassert_pgstat_initialized();\n+\t\tbefore_shmem_exit(ReplicationSlotShutdown, (Datum) 0);\n+\t}\n+}\n+\n /*\n * Check whether the passed slot name is valid and report errors at elevel.\n *\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex b7d9da0aa9..b593ec8964 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -41,7 +41,6 @@\n #include \"miscadmin.h\"\n #include \"pgstat.h\"\n #include \"postmaster/autovacuum.h\"\n-#include \"replication/slot.h\"\n #include \"replication/syncrep.h\"\n #include \"replication/walsender.h\"\n #include \"storage/condition_variable.h\"\n@@ -847,13 +846,6 @@ ProcKill(int code, Datum arg)\n \t/* Cancel any pending condition variable sleep, too */\n \tConditionVariableCancelSleep();\n \n-\t/* Make sure active replication slots are released */\n-\tif (MyReplicationSlot != NULL)\n-\t\tReplicationSlotRelease();\n-\n-\t/* Also cleanup all the temporary slots. */\n-\tReplicationSlotCleanup();\n-\n \t/*\n \t * Detach from any lock group of which we are a member. If the leader\n \t * exist before all other group members, its PGPROC will remain allocated\ndiff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\nindex 78bc64671e..dd83864b54 100644\n--- a/src/backend/utils/init/postinit.c\n+++ b/src/backend/utils/init/postinit.c\n@@ -40,6 +40,7 @@\n #include \"pgstat.h\"\n #include \"postmaster/autovacuum.h\"\n #include \"postmaster/postmaster.h\"\n+#include \"replication/slot.h\"\n #include \"replication/walsender.h\"\n #include \"storage/bufmgr.h\"\n #include \"storage/fd.h\"\n@@ -531,6 +532,12 @@ BaseInit(void)\n \t */\n \tpgstat_initialize();\n \n+\t/*\n+\t * Initialize replication slot. 
This must be after pgstat_initialize() so\n+\t * that the cleanup happnes before the shutdown of pgstat facility.\n+\t */\n+\tReplicationSlotInit();\n+\n \t/* Do local initialization of storage and buffer managers */\n \tInitSync();\n \tsmgrinit();\ndiff --git a/src/include/pgstat.h b/src/include/pgstat.h\nindex bcd3588ea2..f06810c115 100644\n--- a/src/include/pgstat.h\n+++ b/src/include/pgstat.h\n@@ -992,6 +992,14 @@ extern PgStat_Counter pgStatTransactionIdleTime;\n */\n extern SessionEndType pgStatSessionEndCause;\n \n+/*\n+ * modules requires pgstat required to install their before-shmem hook after\n+ * pgstat. This variable is used to make sure that.\n+ */\n+extern bool pgstat_initialized;\n+#define assert_pgstat_initialized() Assert (pgstat_initialized);\n+\n+\n /* ----------\n * Functions called from postmaster\n * ----------\ndiff --git a/src/include/replication/slot.h b/src/include/replication/slot.h\nindex 53d773ccff..124d107662 100644\n--- a/src/include/replication/slot.h\n+++ b/src/include/replication/slot.h\n@@ -193,6 +193,9 @@ extern PGDLLIMPORT int max_replication_slots;\n extern Size ReplicationSlotsShmemSize(void);\n extern void ReplicationSlotsShmemInit(void);\n \n+/* per-backend initialization */\n+extern void ReplicationSlotInit(void);\n+\n /* management of individual slots */\n extern void ReplicationSlotCreate(const char *name, bool db_specific,\n \t\t\t\t\t\t\t\t ReplicationSlotPersistency p, bool two_phase);", "msg_date": "Fri, 22 Oct 2021 11:43:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop replslot after pgstat_shutdown cause assert coredump" }, { "msg_contents": "At Fri, 22 Oct 2021 11:43:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> (Honestly, I haven't been able to reproduce the issue itself for\n> myself yet..)\n\nI managed to reproduce it for me.\n\npsql \"dbname=postgres replication=database\"\npostgres=# CREATE_REPLICATION_SLOT 
\"ts1\" TEMPORARY LOGICAL \"pgoutput\";\npostgres=# C-d\n(crash)\n\nAnd confirmed that it doesn't happen with the fix.\n\n> I haven't sought other similar issues. I'm going to check it if they,\n> if any, can be fixed the same way.\n\nFileClose calls pgstat_report_tempfile() via\nBeforeShmemExit_Files. It is already registered after pgstat.\nI added a call to assert_pgstat_initialized() to \n\nAll other pgstat functions seem to be called outside shmem_exit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex b7d0fbaefd..13762f82af 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -306,6 +306,8 @@ static bool pgstat_is_initialized = false;\n static bool pgstat_is_shutdown = false;\n #endif\n \n+/* per-backend variable for assertion */\n+bool pgstat_initialized PG_USED_FOR_ASSERTS_ONLY = false;\n \n /* ----------\n * Local function forward declarations\n@@ -3036,6 +3038,7 @@ pgstat_initialize(void)\n \n \t/* Set up a process-exit hook to clean up */\n \tbefore_shmem_exit(pgstat_shutdown_hook, 0);\n+\tpgstat_initialized = true;\n \n #ifdef USE_ASSERT_CHECKING\n \tpgstat_is_initialized = true;\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex 1c6c0c7ce2..b2c719d31e 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -46,6 +46,7 @@\n #include \"pgstat.h\"\n #include \"replication/slot.h\"\n #include \"storage/fd.h\"\n+#include \"storage/ipc.h\"\n #include \"storage/proc.h\"\n #include \"storage/procarray.h\"\n #include \"utils/builtins.h\"\n@@ -160,6 +161,33 @@ ReplicationSlotsShmemInit(void)\n \t}\n }\n \n+/*\n+ * Exit hook to cleanup replication slots.\n+ */\n+static void\n+ReplicationSlotShutdown(int code, Datum arg)\n+{\n+\t/* Make sure active replication slots are released */\n+\tif (MyReplicationSlot != NULL)\n+\t\tReplicationSlotRelease();\n+\n+\t/* 
Also cleanup all the temporary slots. */\n+\tReplicationSlotCleanup();\n+}\n+\n+/*\n+ * Initialize replication slot facility per backend.\n+ */\n+void\n+ReplicationSlotInit(void)\n+{\n+\tif (max_replication_slots < 1)\n+\t\treturn;\n+\n+\tassert_pgstat_initialized();\t/* the callback requires pgstat */\n+\tbefore_shmem_exit(ReplicationSlotShutdown, (Datum) 0);\n+}\n+\n /*\n * Check whether the passed slot name is valid and report errors at elevel.\n *\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex f9cda6906d..8fbacdc86c 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -917,6 +917,7 @@ InitTemporaryFileAccess(void)\n \t * Register before-shmem-exit hook to ensure temp files are dropped while\n \t * we can still report stats.\n \t */\n+\tassert_pgstat_initialized();\t/* the callback requires pgstat */\n \tbefore_shmem_exit(BeforeShmemExit_Files, 0);\n \n #ifdef USE_ASSERT_CHECKING\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex b7d9da0aa9..b593ec8964 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -41,7 +41,6 @@\n #include \"miscadmin.h\"\n #include \"pgstat.h\"\n #include \"postmaster/autovacuum.h\"\n-#include \"replication/slot.h\"\n #include \"replication/syncrep.h\"\n #include \"replication/walsender.h\"\n #include \"storage/condition_variable.h\"\n@@ -847,13 +846,6 @@ ProcKill(int code, Datum arg)\n \t/* Cancel any pending condition variable sleep, too */\n \tConditionVariableCancelSleep();\n \n-\t/* Make sure active replication slots are released */\n-\tif (MyReplicationSlot != NULL)\n-\t\tReplicationSlotRelease();\n-\n-\t/* Also cleanup all the temporary slots. */\n-\tReplicationSlotCleanup();\n-\n \t/*\n \t * Detach from any lock group of which we are a member. 
If the leader\n \t * exist before all other group members, its PGPROC will remain allocated\ndiff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\nindex 78bc64671e..b7c1a400f5 100644\n--- a/src/backend/utils/init/postinit.c\n+++ b/src/backend/utils/init/postinit.c\n@@ -40,6 +40,7 @@\n #include \"pgstat.h\"\n #include \"postmaster/autovacuum.h\"\n #include \"postmaster/postmaster.h\"\n+#include \"replication/slot.h\"\n #include \"replication/walsender.h\"\n #include \"storage/bufmgr.h\"\n #include \"storage/fd.h\"\n@@ -531,6 +532,12 @@ BaseInit(void)\n \t */\n \tpgstat_initialize();\n \n+\t/*\n+\t * Initialize replication slot. This must be after pgstat_initialize() so\n+\t * that the cleanup happens before the shutdown of pgstat facility.\n+\t */\n+\tReplicationSlotInit();\n+\n \t/* Do local initialization of storage and buffer managers */\n \tInitSync();\n \tsmgrinit();\ndiff --git a/src/include/pgstat.h b/src/include/pgstat.h\nindex bcd3588ea2..3727e4cd53 100644\n--- a/src/include/pgstat.h\n+++ b/src/include/pgstat.h\n@@ -992,6 +992,15 @@ extern PgStat_Counter pgStatTransactionIdleTime;\n */\n extern SessionEndType pgStatSessionEndCause;\n \n+/*\n+ * Modules that require pgstat (at process exit) should install their\n+ * before-shmem hook after pgstat. 
This variable is used to make sure of that\n+ * prerequisite.\n+ */\n+extern bool pgstat_initialized;\n+#define assert_pgstat_initialized() Assert (pgstat_initialized);\n+\n+\n /* ----------\n * Functions called from postmaster\n * ----------\ndiff --git a/src/include/replication/slot.h b/src/include/replication/slot.h\nindex 53d773ccff..124d107662 100644\n--- a/src/include/replication/slot.h\n+++ b/src/include/replication/slot.h\n@@ -193,6 +193,9 @@ extern PGDLLIMPORT int max_replication_slots;\n extern Size ReplicationSlotsShmemSize(void);\n extern void ReplicationSlotsShmemInit(void);\n \n+/* per-backend initialization */\n+extern void ReplicationSlotInit(void);\n+\n /* management of individual slots */\n extern void ReplicationSlotCreate(const char *name, bool db_specific,\n \t\t\t\t\t\t\t\t ReplicationSlotPersistency p, bool two_phase);", "msg_date": "Fri, 22 Oct 2021 13:47:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop replslot after pgstat_shutdown cause assert coredump" } ]
[ { "msg_contents": "Hi,\n\nThe comments for pgfdw_get_cleanup_result() say this:\n\n * It's not a huge problem if we throw an ERROR here, but if we get into error\n * recursion trouble, we'll end up slamming the connection shut, which will\n * necessitate failing the entire toplevel transaction even if subtransactions\n * were used. Try to use WARNING where we can.\n\nBut we don’t use WARNING anywhere in that function. The right place\nfor this is pgfdw_exec_cleanup_query()?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 11 Oct 2021 17:05:58 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: misplaced? comments in connection.c" }, { "msg_contents": "On Mon, Oct 11, 2021 at 5:05 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> The comments for pgfdw_get_cleanup_result() say this:\n>\n> * It's not a huge problem if we throw an ERROR here, but if we get into error\n> * recursion trouble, we'll end up slamming the connection shut, which will\n> * necessitate failing the entire toplevel transaction even if subtransactions\n> * were used. Try to use WARNING where we can.\n>\n> But we don’t use WARNING anywhere in that function. The right place\n> for this is pgfdw_exec_cleanup_query()?\n\nI noticed that pgfdw_cancel_query(), which is called during (sub)abort\ncleanup if necessary, also uses WARNING, instead of ERROR, to avoid\nthe error-recursion-trouble issue. So I think it would be good to\nmove this to pgfdw_cancel_query() as well as\npgfdw_exec_cleanup_query(). Attached is a patch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 12 Oct 2021 13:33:39 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: misplaced? 
comments in connection.c" }, { "msg_contents": "On Tue, Oct 12, 2021 at 1:33 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Oct 11, 2021 at 5:05 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > The comments for pgfdw_get_cleanup_result() say this:\n> >\n> > * It's not a huge problem if we throw an ERROR here, but if we get into error\n> > * recursion trouble, we'll end up slamming the connection shut, which will\n> > * necessitate failing the entire toplevel transaction even if subtransactions\n> > * were used. Try to use WARNING where we can.\n> >\n> > But we don’t use WARNING anywhere in that function. The right place\n> > for this is pgfdw_exec_cleanup_query()?\n>\n> I noticed that pgfdw_cancel_query(), which is called during (sub)abort\n> cleanup if necessary, also uses WARNING, instead of ERROR, to avoid\n> the error-recursion-trouble issue. So I think it would be good to\n> move this to pgfdw_cancel_query() as well as\n> pgfdw_exec_cleanup_query(). Attached is a patch for that.\n\nThere seems to be no objections, so I have applied the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:15:40 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: misplaced? comments in connection.c" } ]
[ { "msg_contents": "Hi All,\n\nWhile using IMMUTABLE functions in index expression, we are getting below\ncorruption on HEAD.\n\npostgres=# CREATE TABLE tab1 (c1 numeric, c2 numeric);\nCREATE TABLE\n\npostgres=# INSERT INTO tab1 values (10, 100);\nINSERT 0 1\n\npostgres=# CREATE OR REPLACE FUNCTION func1(var1 numeric)\nRETURNS NUMERIC AS $$\nDECLARE\nresult numeric;\nBEGIN\n SELECT c2 into result FROM tab1 WHERE c1=var1;\n RETURN result;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE;\nCREATE FUNCTION\n\n-- When using the IMMUTABLE function in creating an index for the first\ntime, it is working fine.\npostgres=# CREATE INDEX idx1 ON tab1(func1(c1));\nCREATE INDEX\n\n-- Executing the similar query for 2nd time, We are getting the error\npostgres=# CREATE INDEX idx2 ON tab1(func1(c1));\nERROR: could not read block 0 in file \"base/13675/16391\": read only 0 of\n8192 bytes\nCONTEXT: SQL statement \"SELECT c2 FROM tab1 WHERE c1=var1\"\nPL/pgSQL function func1(numeric) line 5 at SQL statement\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi All,While using IMMUTABLE functions in index expression, we are getting below corruption on HEAD.postgres=# CREATE TABLE  tab1 (c1 numeric, c2 numeric);CREATE TABLEpostgres=# INSERT INTO  tab1 values (10, 100);INSERT 0 1postgres=# CREATE OR REPLACE FUNCTION func1(var1 numeric)RETURNS NUMERIC AS $$DECLAREresult numeric;BEGIN SELECT c2 into result FROM  tab1 WHERE c1=var1; RETURN result;END;$$ LANGUAGE plpgsql IMMUTABLE;CREATE FUNCTION-- When using the IMMUTABLE function in creating an index for the first time, it is working fine.postgres=# CREATE INDEX idx1 ON  tab1(func1(c1));CREATE INDEX-- Executing the similar query for 2nd time, We are getting the errorpostgres=# CREATE INDEX idx2 ON  tab1(func1(c1));ERROR:  could not read block 0 in file \"base/13675/16391\": read only 0 of 8192 bytesCONTEXT:  SQL statement \"SELECT c2             FROM  tab1 WHERE c1=var1\"PL/pgSQL function func1(numeric) line 
5 at SQL statement-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Oct 2021 20:14:59 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "On Monday, October 11, 2021, Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n>\n> While using IMMUTABLE functions in index expression, we are getting below\n> corruption on HEAD.\n>\n\nThat function is not actually immutable (the system doesn’t check whether\nyour claim of immutability and the function definition match, its up to you\nto know and specify the correct label for what the function does) so not\nour problem. Write a trigger instead.\n\nDavid J.\n\nOn Monday, October 11, 2021, Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:While using IMMUTABLE functions in index expression, we are getting below corruption on HEAD.That function is not actually immutable (the system doesn’t check whether your claim of immutability and the function definition match, its up to you to know and specify the correct label for what the function does) so not our problem.  Write a trigger instead.  David J.", "msg_date": "Mon, 11 Oct 2021 08:47:55 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "\n\n> 11 окт. 2021 г., в 20:47, David G. Johnston <david.g.johnston@gmail.com> написал(а):\n> \n> On Monday, October 11, 2021, Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:\n> While using IMMUTABLE functions in index expression, we are getting below corruption on HEAD.\n> \n> That function is not actually immutable (the system doesn’t check whether your claim of immutability and the function definition match, its up to you to know and specify the correct label for what the function does) so not our problem. 
Write a trigger instead. \n+1, but the error is strange. This might be a sign of some wrong assumption somewhere. My wild guess is that metapage is read before it was written.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 11 Oct 2021 21:08:18 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "\n\nOn 10/11/21 18:08, Andrey Borodin wrote:\n> \n> \n>> 11 окт. 2021 г., в 20:47, David G. Johnston <david.g.johnston@gmail.com> написал(а):\n>>\n>> On Monday, October 11, 2021, Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:\n>> While using IMMUTABLE functions in index expression, we are getting below corruption on HEAD.\n>>\n>> That function is not actually immutable (the system doesn’t check whether your claim of immutability and the function definition match, its up to you to know and specify the correct label for what the function does) so not our problem. Write a trigger instead.\n> +1, but the error is strange. This might be a sign of some wrong assumption somewhere. My wild guess is that metapage is read before it was written.\n> \n\nTrue, but I can't reproduce it. So either the build is broken in some \nway, or perhaps there's something else going on. What would be quite \nhelpful is a backtrace showing why the error was triggered. i.e. set a \nbreakpoint on the ereport in mdread().\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 Oct 2021 18:26:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Monday, October 11, 2021, Prabhat Sahu <prabhat.sahu@enterprisedb.com>\n> wrote:\n>> While using IMMUTABLE functions in index expression, we are getting below\n>> corruption on HEAD.\n\n> That function is not actually immutable (the system doesn’t check whether\n> your claim of immutability and the function definition match, its up to you\n> to know and specify the correct label for what the function does) so not\n> our problem. Write a trigger instead.\n\nYeah. What is happening is that the function's SELECT on the subject\ntable is trying to examine the not-yet-valid new index. While that could\nbe argued to be a bug, I share David's lack of interest in fixing it,\nbecause I do not believe that there are any cases where a function that\naccesses an index's subject table is really going to be immutable.\n\nTo prevent this access, we'd have to set pg_index.indisvalid false\ninitially and then update it to true after the index is built.\nWe do do that in CREATE INDEX CONCURRENTLY (so you can make this\nexample work by specifying CONCURRENTLY), but I'm unexcited about\ncreating bloat in pg_index for the standard case in order to support\na usage scenario that is going to cause you all sorts of other pain.\nTo paraphrase Henry Spencer: if you lie to the planner, it will get\nits revenge.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 12:27:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> True, but I can't reproduce it. So either the build is broken in some \n> way, or perhaps there's something else going on. What would be quite \n> helpful is a backtrace showing why the error was triggered. i.e. set a \n> breakpoint on the ereport in mdread().\n\nIt reproduced as-described for me. 
The planner sees the index as\nalready indisvalid, so it figures it can ask for the tree height:\n\n#0 errfinish (filename=0xa4c15c \"md.c\", lineno=686, \n funcname=0xac6a98 <__func__.13643> \"mdread\") at elog.c:515\n#1 0x00000000004dc8fb in mdread (reln=<optimized out>, \n forknum=<optimized out>, blocknum=0, buffer=0x7fad931b4f80 \"\") at md.c:682\n#2 0x00000000007fd15c in ReadBuffer_common (smgr=0x1d72140, \n relpersistence=<optimized out>, forkNum=MAIN_FORKNUM, blockNum=0, \n mode=RBM_NORMAL, strategy=<optimized out>, hit=0x7fff63a1d7af)\n at bufmgr.c:1003\n#3 0x00000000007fdb54 in ReadBufferExtended (reln=0x7fad9c4b69d8, \n forkNum=MAIN_FORKNUM, blockNum=0, mode=<optimized out>, \n strategy=<optimized out>) at ../../../../src/include/utils/rel.h:548\n#4 0x00000000005797f5 in _bt_getbuf (rel=0x7fad9c4b69d8, \n blkno=<optimized out>, access=1) at nbtpage.c:878\n#5 0x0000000000579bc7 in _bt_getrootheight (rel=rel@entry=0x7fad9c4b69d8)\n at nbtpage.c:680\n#6 0x000000000078106a in get_relation_info (root=root@entry=0x1d84b28, \n relationObjectId=59210, inhparent=false, rel=rel@entry=0x1d85290)\n at plancat.c:419\n#7 0x0000000000785451 in build_simple_rel (root=0x1d84b28, relid=1, \n parent=0x0) at relnode.c:308\n#8 0x000000000075792f in add_base_rels_to_query (root=root@entry=0x1d84b28, \n jtnode=<optimized out>) at initsplan.c:122\n#9 0x000000000075ac68 in query_planner (root=root@entry=0x1d84b28, \n--Type <RET> for more, q to quit, c to continue without paging--q\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 12:36:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "On Mon, Oct 11, 2021 at 9:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. What is happening is that the function's SELECT on the subject\n> table is trying to examine the not-yet-valid new index. 
While that could\n> be argued to be a bug, I share David's lack of interest in fixing it,\n> because I do not believe that there are any cases where a function that\n> accesses an index's subject table is really going to be immutable.\n\nRight. It might be different if this was something that users\nsometimes expect will work, based on some plausible-though-wrong\nunderstanding of expression indexes. But experience suggests that they\ndon't.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 Oct 2021 09:37:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "Hi,\n\nOn 2021-10-11 12:27:44 -0400, Tom Lane wrote:\n> While that could be argued to be a bug, I share David's lack of interest in\n> fixing it, because I do not believe that there are any cases where a\n> function that accesses an index's subject table is really going to be\n> immutable.\n\n> To prevent this access, we'd have to set pg_index.indisvalid false\n> initially and then update it to true after the index is built.\n> We do do that in CREATE INDEX CONCURRENTLY (so you can make this\n> example work by specifying CONCURRENTLY), but I'm unexcited about\n> creating bloat in pg_index for the standard case in order to support\n> a usage scenario that is going to cause you all sorts of other pain.\n> To paraphrase Henry Spencer: if you lie to the planner, it will get\n> its revenge.\n\nI agree that there's not much point in making this really \"work\", but perhaps\nwe could try to generate a more useful error message, without incurring undue\noverhead? I think there've been a few reports of this over the years,\nincluding some internally to postgres, e.g. 
during catalog table index\nrebuilds.\n\nPerhaps we could set pg_index.indisvalid to false initially, and if opening an\nindex where pg_index.indisvalid error out with a different error message if\nTransactionIdIsCurrentTransactionId(xmin). And then use an inplace update to\nset indisvalid to true, to avoid the bloat?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Oct 2021 11:25:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Perhaps we could set pg_index.indisvalid to false initially, and if opening an\n> index where pg_index.indisvalid error out with a different error message if\n> TransactionIdIsCurrentTransactionId(xmin). And then use an inplace update to\n> set indisvalid to true, to avoid the bloat?\n\nI still can't get excited about it ... but yeah, update-in-place would\nbe enough to remove the bloat objection. I doubt we need any code\nchanges beyond changing the indisvalid state.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 14:59:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "Hi,\n\nOn 2021-10-11 14:59:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Perhaps we could set pg_index.indisvalid to false initially, and if opening an\n> > index where pg_index.indisvalid error out with a different error message if\n> > TransactionIdIsCurrentTransactionId(xmin). 
And then use an inplace update to\n> > set indisvalid to true, to avoid the bloat?\n> \n> I still can't get excited about it ...\n\nUnderstandable, me neither...\n\n\n> I doubt we need any code changes beyond changing the indisvalid state.\n\nI was thinking we'd want to throw an error if an index that's being created is\naccessed during the index build, rather than just not include it in\nplanning...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Oct 2021 20:07:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-11 14:59:22 -0400, Tom Lane wrote:\n>> I doubt we need any code changes beyond changing the indisvalid state.\n\n> I was thinking we'd want to throw an error if an index that's being created is\n> accessed during the index build, rather than just not include it in\n> planning...\n\nAFAICT we *will* throw an error, just not a very intelligible one.\nBut until someone's shown another way to reach that error besides\nthe planner's path, I'm not thinking we need to expend effort on\nmaking the error nicer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 23:21:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption with IMMUTABLE functions in index expression." } ]
[ { "msg_contents": "As reported at [1], if the transaction is aborted during export\nsnapshot then ExportInProgress and SavedResourceOwnerDuringExport are\nnot getting reset and that is throwing an error\n\"clearing exported snapshot in wrong transaction state\" while\nexecuting the next command. The attached patch clears this state if\nthe transaction is aborted.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-tqopqpfS6HHug2nnOGieJJ_nm-Nvy0WBZ=Zpo-LqtSJA@mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Oct 2021 20:46:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Mon, Oct 11, 2021 at 08:46:32PM +0530, Dilip Kumar wrote:\n> As reported at [1], if the transaction is aborted during export\n> snapshot then ExportInProgress and SavedResourceOwnerDuringExport are\n> not getting reset and that is throwing an error\n> \"clearing exported snapshot in wrong transaction state\" while\n> executing the next command. The attached patch clears this state if\n> the transaction is aborted.\n\nInjecting an error is enough to reproduce the failure in a second\ncommand after the first one failed. This could happen on OOM for the\npalloc() done at the beginning of SnapBuildInitialSnapshot().\n\n@@ -2698,6 +2698,9 @@ AbortTransaction(void)\n /* Reset logical streaming state. */\n ResetLogicalStreamingState();\n\n+ /* Reset snapshot export state. 
*/\n+ ResetSnapBuildExportSnapshotState();\nShouldn't we care about the case of a sub-transaction abort as well?\nSee AbortSubTransaction().\n--\nMichael", "msg_date": "Wed, 13 Oct 2021 13:47:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Wed, Oct 13, 2021 at 10:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 11, 2021 at 08:46:32PM +0530, Dilip Kumar wrote:\n> > As reported at [1], if the transaction is aborted during export\n> > snapshot then ExportInProgress and SavedResourceOwnerDuringExport are\n> > not getting reset and that is throwing an error\n> > \"clearing exported snapshot in wrong transaction state\" while\n> > executing the next command. The attached patch clears this state if\n> > the transaction is aborted.\n\nCorrect.\n\n> Injecting an error is enough to reproduce the failure in a second\n> command after the first one failed. This could happen on OOM for the\n> palloc() done at the beginning of SnapBuildInitialSnapshot().\n>\n> @@ -2698,6 +2698,9 @@ AbortTransaction(void)\n> /* Reset logical streaming state. */\n> ResetLogicalStreamingState();\n>\n> + /* Reset snapshot export state. */\n> + ResetSnapBuildExportSnapshotState();\n> Shouldn't we care about the case of a sub-transaction abort as well?\n> See AbortSubTransaction().\n\n\nActually, it is not required because 1) Snapshot export can not be\nallowed within a transaction block, basically, it starts its own\ntransaction block and aborts that while executing any next replication\ncommand see SnapBuildClearExportedSnapshot(). So our problem is only\nif the transaction block internally started for exporting, gets\naborted before any next command arrives. 
So there is no possibility\nof starting any sub transaction.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 10:53:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Wed, Oct 13, 2021 at 10:53:24AM +0530, Dilip Kumar wrote:\n> Actually, it is not required because 1) Snapshot export can not be\n> allowed within a transaction block, basically, it starts its own\n> transaction block and aborts that while executing any next replication\n> command see SnapBuildClearExportedSnapshot(). So our problem is only\n> if the transaction block internally started for exporting, gets\n> aborted before any next command arrives. So there is no possibility\n> of starting any sub transaction.\n\nYes, you are right here. I did not remember the semantics this relies\non. I have played more with the patch, reviewed the whole, and the\nfields you are resetting as part of the snapshot builds seem correct\nto me. So let's fix this.\n--\nMichael", "msg_date": "Thu, 14 Oct 2021 15:54:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Thu, Oct 14, 2021 at 12:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 13, 2021 at 10:53:24AM +0530, Dilip Kumar wrote:\n> > Actually, it is not required because 1) Snapshot export can not be\n> > allowed within a transaction block, basically, it starts its own\n> > transaction block and aborts that while executing any next replication\n> > command see SnapBuildClearExportedSnapshot(). So our problem is only\n> > if the transaction block internally started for exporting, gets\n> > aborted before any next command arrives. So there is no possibility\n> > of starting any sub transaction.\n>\n> Yes, you are right here. 
I did not remember the semantics this relies\n> on. I have played more with the patch, reviewed the whole, and the\n> fields you are resetting as part of the snapshot builds seem correct\n> to me. So let's fix this.\n\nGreat, thanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:58:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Thu, Oct 14, 2021 at 02:58:55PM +0530, Dilip Kumar wrote:\n> On Thu, Oct 14, 2021 at 12:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Yes, you are right here. I did not remember the semantics this relies\n>> on. I have played more with the patch, reviewed the whole, and the\n>> fields you are resetting as part of the snapshot builds seem correct\n>> to me. So let's fix this.\n> \n> Great, thanks!\n\nWhile double-checking this stuff, I have noticed something that's\nwrong in the patch when a command that follows a\nCREATE_REPLICATION_SLOT query resets SnapBuildClearExportedSnapshot().\nOnce the slot is created, the WAL sender is in a TRANS_INPROGRESS\nstate, meaning that AbortCurrentTransaction() would call\nAbortTransaction(), hence calling ResetSnapBuildExportSnapshotState()\nand resetting SavedResourceOwnerDuringExport to NULL before we store a\nNULL into CurrentResourceOwner :)\n\nOne solution would be as simple as saving\nSavedResourceOwnerDuringExport into a temporary variable before\ncalling AbortCurrentTransaction(), and save it back into\nCurrentResourceOwner once we are done in\nSnapBuildClearExportedSnapshot() as we need to rely on\nAbortTransaction() to do the static state cleanup if an error happens\nuntil the command after the replslot creation command shows up.\n--\nMichael", "msg_date": "Sat, 16 Oct 2021 12:43:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 
Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Sat, Oct 16, 2021 at 9:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> While double-checking this stuff, I have noticed something that's\n> wrong in the patch when a command that follows a\n> CREATE_REPLICATION_SLOT query resets SnapBuildClearExportedSnapshot().\n> Once the slot is created, the WAL sender is in a TRANS_INPROGRESS\n> state, meaning that AbortCurrentTransaction() would call\n> AbortTransaction(), hence calling ResetSnapBuildExportSnapshotState()\n> and resetting SavedResourceOwnerDuringExport to NULL before we store a\n> NULL into CurrentResourceOwner :)\n\nRight, good catch!\n\n> One solution would be as simple as saving\n> SavedResourceOwnerDuringExport into a temporary variable before\n> calling AbortCurrentTransaction(), and save it back into\n> CurrentResourceOwner once we are done in\n> SnapBuildClearExportedSnapshot() as we need to rely on\n> AbortTransaction() to do the static state cleanup if an error happens\n> until the command after the replslot creation command shows up.\n\nYeah, this idea looks fine to me. I have modified the patch. 
In\naddition to that I have removed calling\nResetSnapBuildExportSnapshotState from the\nSnapBuildClearExportedSnapshot because that is anyway being called\nfrom the AbortTransaction.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 16 Oct 2021 15:39:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Sat, Oct 16, 2021 at 3:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Sat, Oct 16, 2021 at 9:13 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > While double-checking this stuff, I have noticed something that's\n> > wrong in the patch when a command that follows a\n> > CREATE_REPLICATION_SLOT query resets SnapBuildClearExportedSnapshot().\n> > Once the slot is created, the WAL sender is in a TRANS_INPROGRESS\n> > state, meaning that AbortCurrentTransaction() would call\n> > AbortTransaction(), hence calling ResetSnapBuildExportSnapshotState()\n> > and resetting SavedResourceOwnerDuringExport to NULL before we store a\n> > NULL into CurrentResourceOwner :)\n>\n> Right, good catch!\n>\n> > One solution would be as simple as saving\n> > SavedResourceOwnerDuringExport into a temporary variable before\n> > calling AbortCurrentTransaction(), and save it back into\n> > CurrentResourceOwner once we are done in\n> > SnapBuildClearExportedSnapshot() as we need to rely on\n> > AbortTransaction() to do the static state cleanup if an error happens\n> > until the command after the replslot creation command shows up.\n>\n> Yeah, this idea looks fine to me. I have modified the patch. In\n> addition to that I have removed calling\n> ResetSnapBuildExportSnapshotState from the\n> SnapBuildClearExportedSnapshot because that is anyway being called\n> from the AbortTransaction.\n>\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\nHi,\n\nbq. 
While exporting a snapshot we set a temporary states which get\n\n a temporary states -> temporary states\n\n+extern void ResetSnapBuildExportSnapshotState(void);\n\nResetSnapBuildExportSnapshotState() is only called inside snapbuild.c\nI wonder if the addition to snapbuild.h is needed.\n\nCheers", "msg_date": "Sat, 16 Oct 2021 08:31:36 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Sat, Oct 16, 2021 at 08:31:36AM -0700, Zhihong Yu wrote:\n> On Sat, Oct 16, 2021 at 3:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> On Sat, Oct 16, 2021 at 9:13 AM Michael Paquier <michael@paquier.xyz>\n>> wrote:\n>>> One solution would be as simple as saving\n>>> SavedResourceOwnerDuringExport into a temporary variable before\n>>> calling AbortCurrentTransaction(), and save it back into\n>>> CurrentResourceOwner once we are done in\n>>> SnapBuildClearExportedSnapshot() as we need to rely on\n>>> AbortTransaction() to do the static state cleanup if an error happens\n>>> until the command after the replslot creation command shows up.\n>>\n>> Yeah, this idea looks fine to me. I have modified the patch. In\n>> addition to that I have removed calling\n>> ResetSnapBuildExportSnapshotState from the\n>> SnapBuildClearExportedSnapshot because that is anyway being called\n>> from the AbortTransaction.\n\nThat seems logically fine. I'll check that tomorrow.\n\n> +extern void ResetSnapBuildExportSnapshotState(void);\n> \n> ResetSnapBuildExportSnapshotState() is only called inside snapbuild.c\n> I wonder if the addition to snapbuild.h is needed.\n\nAs of xact.c in v2 of the patch, we have that:\n@@ -2698,6 +2699,9 @@ AbortTransaction(void)\n /* Reset logical streaming state. */\n\t ResetLogicalStreamingState();\n\n+ /* Reset snapshot export state. 
*/\n+ ResetSnapBuildExportSnapshotState();\n--\nMichael", "msg_date": "Sun, 17 Oct 2021 09:33:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Sun, Oct 17, 2021 at 09:33:48AM +0900, Michael Paquier wrote:\n> That seems logically fine. I'll check that tomorrow.\n\nAnd that looks indeed fine. I have adjusted a couple of things, and\nbackpatched the fix.\n--\nMichael", "msg_date": "Mon, 18 Oct 2021 12:20:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Reset snapshot export state on the transaction abort" }, { "msg_contents": "On Mon, Oct 18, 2021 at 8:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Oct 17, 2021 at 09:33:48AM +0900, Michael Paquier wrote:\n> > That seems logically fine. I'll check that tomorrow.\n>\n> And that looks indeed fine. I have adjusted a couple of things, and\n> backpatched the fix.\n\nThanks!\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 11:12:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reset snapshot export state on the transaction abort" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17220\nLogged by: Matthijs van der Vleuten\nEmail address: postgresql@zr40.nl\nPostgreSQL version: 14.0\nOperating system: Debian sid\nDescription: \n\nUser 'musttu' on IRC reported the following bug: After running \"ALTER INDEX\nsome_idx ALTER COLUMN expr SET (n_distinct=100)\", the index and table become\nunusable. All further statements involving the table result in: \"ERROR: \noperator class text_ops has no options\".\r\n\r\nThey reported this on the RDS version of 13.3, but I've been able to\nreproduce this on Debian with 13.4 and 14.0. It does not reproduce on 12.8,\nall statements succeed on that version.\r\n\r\nAs a workaround, I've suggested the following catalog change in order to be\nable to drop the index:\r\nUPDATE pg_attribute SET attoptions = NULL WHERE attrelid =\n'tbl_col_idx'::regclass;\r\nHowever, they were not able to do this, since RDS does not expose a true\nsuperuser.\r\n\r\nReproduction:\r\nzr40@[local]:5432 ~=# select version();\r\n version \n \r\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n PostgreSQL 14.0 (Debian 14.0-1.pgdg+1) on x86_64-pc-linux-gnu, compiled by\ngcc (Debian 10.3.0-11) 10.3.0, 64-bit\r\n(1 row)\r\nzr40@[local]:5432 ~=# create table test (col text);\r\nCREATE TABLE\r\nzr40@[local]:5432 ~=# create index on test (col);\r\nCREATE INDEX\r\nzr40@[local]:5432 ~=# alter index test_col_idx alter column col set\n(n_distinct=100);\r\nALTER INDEX\r\nzr40@[local]:5432 ~=# alter index test_col_idx alter column col reset\n(n_distinct);\r\nERROR: 22023: operator class text_ops has no options\r\nLOCATION: index_opclass_options, indexam.c:971\r\nzr40@[local]:5432 ~=# drop index test_col_idx;\r\nERROR: 22023: operator class text_ops has no options\r\nLOCATION: index_opclass_options, indexam.c:971\r\nzr40@[local]:5432 ~=# drop table test;\r\nERROR: 22023: operator class text_ops 
has no options\r\nLOCATION: index_opclass_options, indexam.c:971", "msg_date": "Mon, 11 Oct 2021 15:25:49 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/11/21 5:25 PM, PG Bug reporting form wrote:\n> \n> User 'musttu' on IRC reported the following bug: After running \"ALTER INDEX\n> some_idx ALTER COLUMN expr SET (n_distinct=100)\", the index and table become\n> unusable. All further statements involving the table result in: \"ERROR: \n> operator class text_ops has no options\".\n> \n> They reported this on the RDS version of 13.3, but I've been able to\n> reproduce this on Debian with 13.4 and 14.0. It does not reproduce on 12.8,\n> all statements succeed on that version.\n\nThis was broken by 911e702077 (Implement operator class parameters).\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 11 Oct 2021 20:03:13 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/11/21, 11:03 AM, \"Vik Fearing\" <vik@postgresfriends.org> wrote:\r\n> On 10/11/21 5:25 PM, PG Bug reporting form wrote:\r\n>>\r\n>> User 'musttu' on IRC reported the following bug: After running \"ALTER INDEX\r\n>> some_idx ALTER COLUMN expr SET (n_distinct=100)\", the index and table become\r\n>> unusable. All further statements involving the table result in: \"ERROR:\r\n>> operator class text_ops has no options\".\r\n>>\r\n>> They reported this on the RDS version of 13.3, but I've been able to\r\n>> reproduce this on Debian with 13.4 and 14.0. 
It does not reproduce on 12.8,\r\n>> all statements succeed on that version.\r\n>\r\n> This was broken by 911e702077 (Implement operator class parameters).\r\n\r\nMoving to pgsql-hackers@.\r\n\r\nAt first glance, it looks like ALTER INDEX .. ALTER COLUMN ... SET\r\nuses the wrong validation function. I've attached a patch where I've\r\nattempted to fix that and added some tests.\r\n\r\nNathan", "msg_date": "Wed, 13 Oct 2021 00:06:32 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On 10/13/21 2:06 AM, Bossart, Nathan wrote:\n> On 10/11/21, 11:03 AM, \"Vik Fearing\" <vik@postgresfriends.org> wrote:\n>> On 10/11/21 5:25 PM, PG Bug reporting form wrote:\n>>>\n>>> User 'musttu' on IRC reported the following bug: After running \"ALTER INDEX\n>>> some_idx ALTER COLUMN expr SET (n_distinct=100)\", the index and table become\n>>> unusable. All further statements involving the table result in: \"ERROR:\n>>> operator class text_ops has no options\".\n>>>\n>>> They reported this on the RDS version of 13.3, but I've been able to\n>>> reproduce this on Debian with 13.4 and 14.0. It does not reproduce on 12.8,\n>>> all statements succeed on that version.\n>>\n>> This was broken by 911e702077 (Implement operator class parameters).\n> \n> Moving to pgsql-hackers@.\n> \n> At first glance, it looks like ALTER INDEX .. ALTER COLUMN ... SET\n> uses the wrong validation function. I've attached a patch where I've\n> attempted to fix that and added some tests.\n\nAh, thank you. I was in the (slow) process of writing basically this\nexact patch. So I'll stop now and endorse yours.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 13 Oct 2021 02:30:09 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) 
with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/12/21, 5:31 PM, \"Vik Fearing\" <vik@postgresfriends.org> wrote:\r\n> On 10/13/21 2:06 AM, Bossart, Nathan wrote:\r\n>> Moving to pgsql-hackers@.\r\n>>\r\n>> At first glance, it looks like ALTER INDEX .. ALTER COLUMN ... SET\r\n>> uses the wrong validation function. I've attached a patch where I've\r\n>> attempted to fix that and added some tests.\r\n>\r\n> Ah, thank you. I was in the (slow) process of writing basically this\r\n> exact patch. So I'll stop now and endorse yours.\r\n\r\nOops, sorry about that. Thanks for the endorsement.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 13 Oct 2021 02:20:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On Wed, Oct 13, 2021 at 12:06:32AM +0000, Bossart, Nathan wrote:\n> At first glance, it looks like ALTER INDEX .. ALTER COLUMN ... SET\n> uses the wrong validation function. I've attached a patch where I've\n> attempted to fix that and added some tests.\n\nThe gap is larger than that, because ALTER INDEX .. ALTER COLUMN\n.. SET is supported by the parser but we don't document it. The only\nthing we document now is SET STATISTICS that applies to a column\n*number*.\n\nAnyway, specifying a column name for an ALTER INDEX is not right, no?\nJust take for example the case of an expression which has a hardcoded\ncolumn name in pg_attribute. So these are not specific to indexes,\nwhich is why we apply column numbers for the statistics case. I think\nthat we'd better just reject those cases until there is a proper\ndesign done here. 
As far as I can see, I guess that we should do\nthings similarly to what we do for SET STATISTICS with column\nnumbers when it comes to indexes.\n--\nMichael", "msg_date": "Wed, 13 Oct 2021 12:42:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On Wed, Oct 13, 2021 at 9:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 13, 2021 at 12:06:32AM +0000, Bossart, Nathan wrote:\n> > At first glance, it looks like ALTER INDEX .. ALTER COLUMN ... SET\n> > uses the wrong validation function. I've attached a patch where I've\n> > attempted to fix that and added some tests.\n>\n> The gap is larger than that, because ALTER INDEX .. ALTER COLUMN\n> .. SET is supported by the parser but we don't document it. The only\n> thing we document now is SET STATISTICS that applies to a column\n> *number*.\n>\n> Anyway, specifying a column name for an ALTER INDEX is not right, no?\n> Just take for example the case of an expression which has a hardcoded\n> column name in pg_attribute. So these are not specific to indexes,\n> which is why we apply column numbers for the statistics case. I think\n> that we'd better just reject those cases until there is a proper\n> design done here. 
with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/13/21, 1:31 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\r\n> On Wed, Oct 13, 2021 at 9:12 AM Michael Paquier <michael@paquier.xyz> wrote:\r\n>> Anyway, specifying a column name for an ALTER INDEX is not right, no?\r\n>> Just take for example the case of an expression which has a hardcoded\r\n>> column name in pg_attribute. So these are not specific to indexes,\r\n>> which is why we apply column numbers for the statistics case. I think\r\n>> that we'd better just reject those cases until there is a proper\r\n>> design done here. As far as I can see, I guess that we should do\r\n>> things similarly to what we do for SET STATISTICS with column\r\n>> numbers when it comes to indexes.\r\n>\r\n> +1 it should behave similarly to SET STATISTICS for the index and if\r\n> someone tries to set with the column name then it should throw an\r\n> error.\r\n\r\nGood point. I agree that rejecting this case is probably the best\r\noption for now. In addition to the bug mentioned in this thread, this\r\nfunctionality doesn't even work for supported options currently.\r\n\r\n postgres=> create table test (a tsvector);\r\n CREATE TABLE\r\n postgres=> create index on test using gist (a);\r\n CREATE INDEX\r\n postgres=> alter index test_a_idx alter column a set (siglen = 100);\r\n ERROR: unrecognized parameter \"siglen\"\r\n\r\nAFAICT the fact that these commands can succeed at all seems to be\r\nunintentional, and I wonder if modifying these options requires extra\r\nsteps such as rebuilding the index.\r\n\r\nI've attached new patch that just adds an ERROR.\r\n\r\nNathan", "msg_date": "Wed, 13 Oct 2021 17:20:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) 
with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/13/21, 1:31 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\r\n> On Wed, Oct 13, 2021 at 9:12 AM Michael Paquier <michael@paquier.xyz> wrote:\r\n>> Anyway, specifying a column name for an ALTER INDEX is not right, no?\r\n>> Just take for example the case of an expression which has a hardcoded\r\n>> column name in pg_attribute. So these are not specific to indexes,\r\n>> which is why we apply column numbers for the statistics case. I think\r\n>> that we'd better just reject those cases until there is a proper\r\n>> design done here. As far as I can see, I guess that we should do\r\n>> things similarly to what we do for SET STATISTICS with column\r\n>> numbers when it comes to indexes.\r\n>\r\n> +1 it should behave similarly to SET STATISTICS for the index and if\r\n> someone tries to set with the column name then it should throw an\r\n> error.\r\n\r\nGood point. I agree that rejecting this case is probably the best\r\noption for now. In addition to the bug mentioned in this thread, this\r\nfunctionality doesn't even work for supported options currently.\r\n\r\n postgres=> create table test (a tsvector);\r\n CREATE TABLE\r\n postgres=> create index on test using gist (a);\r\n CREATE INDEX\r\n postgres=> alter index test_a_idx alter column a set (siglen = 100);\r\n ERROR: unrecognized parameter \"siglen\"\r\n\r\nAFAICT the fact that these commands can succeed at all seems to be\r\nunintentional, and I wonder if modifying these options requires extra\r\nsteps such as rebuilding the index.\r\n\r\nI've attached a new patch that just adds an ERROR.\r\n\r\nNathan", "msg_date": "Wed, 13 Oct 2021 17:20:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) 
Enforcing n_distinct for index\nattributes was discussed back when this code was introduced:\nhttps://www.postgresql.org/message-id/603c8f071001101127w3253899vb3f3e15073638774@mail.gmail.com\n\nThis means that we've lost the ability to enforce n_distinct for\nexpression indexes for two years. But, do we really care about this\ncase? My answer to that would be \"no\" as long as we don't have a\ndocumented grammar rather, and we don't dump them either. But I think\nthat we'd better do something with the code in analyze.c rather than\nletting it just dead, and my take is that we should remove the call to\nget_attribute_options() for this code path.\n\nAny opinions? @Robert: you were involved in 76a47c0, so I am adding\nyou in CC.\n--\nMichael", "msg_date": "Thu, 14 Oct 2021 11:07:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On Thu, Oct 14, 2021 at 11:07:21AM +0900, Michael Paquier wrote:\n> This means that we've lost the ability to enforce n_distinct for\n> expression indexes for two years. But, do we really care about this\n> case? My answer to that would be \"no\" as long as we don't have a\n> documented grammar rather, and we don't dump them either. But I think\n> that we'd better do something with the code in analyze.c rather than\n> letting it just dead, and my take is that we should remove the call to\n> get_attribute_options() for this code path.\n> \n> Any opinions? 
@Robert: you were involved in 76a47c0, so I am adding\n> you in CC.\n\nHearing nothing, and after pondering on this point, I think that\nremoving the get_attribute_options() is the right way to go for now\nif there is a point in the future to get n_distinct done for all index\nAMs.\n\nI have reviewed the last patch posted upthread, and while testing\npartitioned indexes I have noticed that we don't need to do a custom\ncheck as part of ATExecSetOptions(), because we have already that in\nATSimplePermissions() with details on the relkind failing. This makes\nthe patch simpler, with a better error message generated. I have\nadded a case for partitioned indexes while on it.\n\nWorth noting that I have spotted an extra weird call of\nget_attribute_options() in extended statistics. This is unrelated to\nthis thread and I am not done analyzing it yet, but a quick check\nshows that we call it with an InvalidOid for expression stats, which\nis surprising.. I'll start a thread once/if I find anything\ninteresting on this one.\n\nAttached is the patch I am finishing with, that should go down to\nv13 (this is going to conflict on REL_13_STABLE, for sure).\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 18 Oct 2021 16:46:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/18/21, 12:47 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> I have reviewed the last patch posted upthread, and while testing\r\n> partitioned indexes I have noticed that we don't need to do a custom\r\n> check as part of ATExecSetOptions(), because we have already that in\r\n> ATSimplePermissions() with details on the relkind failing. This makes\r\n> the patch simpler, with a better error message generated. 
I have\r\n> added a case for partitioned indexes while on it.\r\n\r\nAh, yes, that is much better.\r\n\r\n> Attached is the patch I am finishing with, that should go down to\r\n> v13 (this is going to conflict on REL_13_STABLE, for sure).\r\n\r\n+DROP INDEX btree_tall_tbl_idx2;\r\n+ERROR: index \"btree_tall_tbl_idx2\" does not exist\r\n\r\nI think this is supposed to be \"btree_tall_idx2\". Otherwise, the\r\npatch looks reasonable to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 18 Oct 2021 21:43:58 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On Mon, Oct 18, 2021 at 09:43:58PM +0000, Bossart, Nathan wrote:\n> +DROP INDEX btree_tall_tbl_idx2;\n> +ERROR: index \"btree_tall_tbl_idx2\" does not exist\n> \n> I think this is supposed to be \"btree_tall_idx2\". Otherwise, the\n> patch looks reasonable to me.\n\nThanks for double-checking. Applied and back-patched, with a small\nconflict regarding the error message in ~14.\n--\nMichael", "msg_date": "Tue, 19 Oct 2021 11:08:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" }, { "msg_contents": "On 10/18/21, 7:09 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Thanks for double-checking. Applied and back-patched, with a small\r\n> conflict regarding the error message in ~14.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 03:07:51 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) 
with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On Mon, Oct 18, 2021, at 09:46, Michael Paquier wrote:\n> On Thu, Oct 14, 2021 at 11:07:21AM +0900, Michael Paquier wrote:\n> Attached is the patch I am finishing with, that should go down to\n> v13 (this is going to conflict on REL_13_STABLE, for sure).\n>\n> Thoughts?\n\nThe test case doesn't seem entirely correct to me? The index being dropped (btree_tall_tbl_idx2) doesn't exist.\n\nAlso, I don't believe this tests the case of dropping the index when it previously has been altered in this way.\n\n\n", "msg_date": "Tue, 19 Oct 2021 08:48:05 +0200", "msg_from": "\"Matthijs van der Vleuten\" <postgresql@zr40.nl>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On 10/18/21, 11:49 PM, \"Matthijs van der Vleuten\" <postgresql@zr40.nl> wrote:\r\n> The test case doesn't seem entirely correct to me? The index being\r\n> dropped (btree_tall_tbl_idx2) doesn't exist.\r\n\r\nThis was fixed before it was committed [0].\r\n\r\n> Also, I don't believe this tests the case of dropping the index when\r\n> it previously has been altered in this way.\r\n\r\nThat can still fail with the \"has no options\" ERROR, and fixing it\r\nwill still require a manual catalog update. The ERROR is actually\r\ncoming from the call to index_open(), so bypassing it might require\r\nsome rather intrusive changes. Given that it took over a year for\r\nthis bug to be reported, I suspect it might be more trouble than it's\r\nworth.\r\n\r\nNathan\r\n\r\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fdd8857\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 14:40:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) 
with an optionless\n opclass\n makes index and table unusable" }, { "msg_contents": "On Tue, Oct 19, 2021 at 02:40:04PM +0000, Bossart, Nathan wrote:\n> On 10/18/21, 11:49 PM, \"Matthijs van der Vleuten\" <postgresql@zr40.nl> wrote:\n>> The test case doesn't seem entirely correct to me? The index being\n>> dropped (btree_tall_tbl_idx2) doesn't exist.\n> \n> This was fixed before it was committed [0].\n\nYes, my apologies about this brain fade. The committed code is\nhopefully fine :)\n\n>> Also, I don't believe this tests the case of dropping the index when\n>> it previously has been altered in this way.\n> \n> That can still fail with the \"has no options\" ERROR, and fixing it\n> will still require a manual catalog update. The ERROR is actually\n> coming from the call to index_open(), so bypassing it might require\n> some rather intrusive changes. Given that it took over a year for\n> this bug to be reported, I suspect it might be more trouble than it's\n> worth.\n\nThis may need a mention in the release notes, but the problem is, I\nguess, not widespread enough to worry about, or people would have complained\nmore since 13 was out. Logical dumps discard that automatically and\neven for the ANALYZE case, the pre-committed code would have just\nignored the reloptions retrieved by get_attribute_options().\n--\nMichael", "msg_date": "Wed, 20 Oct 2021 12:47:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #17220: ALTER INDEX ALTER COLUMN SET (..) with an optionless\n opclass makes index and table unusable" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], it is found that currently the ProcState array\ndoesn't have entries for auxiliary processes, it does have entries for\nMaxBackends. But the startup process is eating up one slot from\nMaxBackends. We need to increase the size of the ProcState array by 1\nat least for the startup process. The startup process uses ProcState\nslot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\nThe procState array size is initialized to MaxBackends in\nSInvalShmemSize.\n\nThe consequence of not fixing this issue is that the database may hit\nthe error \"sorry, too many clients already\" soon in\nSharedInvalBackendInit.\n\nAttaching a patch to fix this issue. Thoughts?\n\n[1] https://www.postgresql.org/message-id/2222ab6f-46b1-d5c0-603d-8f6680739db4%40oss.nttdata.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 12 Oct 2021 00:37:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "\n\nOn 2021/10/12 4:07, Bharath Rupireddy wrote:\n> Hi,\n> \n> While working on [1], it is found that currently the ProcState array\n> doesn't have entries for auxiliary processes, it does have entries for\n> MaxBackends. But the startup process is eating up one slot from\n> MaxBackends. We need to increase the size of the ProcState array by 1\n> at least for the startup process. The startup process uses ProcState\n> slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n> The procState array size is initialized to MaxBackends in\n> SInvalShmemSize.\n> \n> The consequence of not fixing this issue is that the database may hit\n> the error \"sorry, too many clients already\" soon in\n> SharedInvalBackendInit.\n> \n> Attaching a patch to fix this issue. Thoughts?\n\nThanks for making the patch! 
LGTM.\nBarring any objection, I will commit it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:07:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/10/12 4:07, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > While working on [1], it is found that currently the ProcState array\n> > doesn't have entries for auxiliary processes, it does have entries for\n> > MaxBackends. But the startup process is eating up one slot from\n> > MaxBackends. We need to increase the size of the ProcState array by 1\n> > at least for the startup process. The startup process uses ProcState\n> > slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n> > The procState array size is initialized to MaxBackends in\n> > SInvalShmemSize.\n> >\n> > The consequence of not fixing this issue is that the database may hit\n> > the error \"sorry, too many clients already\" soon in\n> > SharedInvalBackendInit.\n> >\n> > Attaching a patch to fix this issue. Thoughts?\n>\n> Thanks for making the patch! LGTM.\n> Barring any objection, I will commit it.\n\nThanks for reviewing. I've made a CF entry for this, just to ensure\nthe tests on different CF bot server passes(and yes no failures) -\nhttps://commitfest.postgresql.org/35/3355/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 12 Oct 2021 12:16:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." 
}, { "msg_contents": "\n\nOn 2021/10/12 15:46, Bharath Rupireddy wrote:\n> On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2021/10/12 4:07, Bharath Rupireddy wrote:\n>>> Hi,\n>>>\n>>> While working on [1], it is found that currently the ProcState array\n>>> doesn't have entries for auxiliary processes, it does have entries for\n>>> MaxBackends. But the startup process is eating up one slot from\n>>> MaxBackends. We need to increase the size of the ProcState array by 1\n>>> at least for the startup process. The startup process uses ProcState\n>>> slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n>>> The procState array size is initialized to MaxBackends in\n>>> SInvalShmemSize.\n>>>\n>>> The consequence of not fixing this issue is that the database may hit\n>>> the error \"sorry, too many clients already\" soon in\n>>> SharedInvalBackendInit.\n\nOn second thought, I wonder if this error could not happen in practice. No?\nBecause autovacuum doesn't work during recovery and the startup process\ncan safely use the ProcState entry for autovacuum worker process.\nAlso since the minimal allowed value of autovacuum_max_workers is one,\nthe ProcState array guarantees to have at least one entry for autovacuum worker.\n\nIf this understanding is right, we don't need to enlarge the array and\ncan just update the comment. I don't strongly oppose to enlarge\nthe array in the master, but I'm not sure it's worth doing that\nin back branches if the issue can cause no actual error.\n\n\n>>>\n>>> Attaching a patch to fix this issue. Thoughts?\n>>\n>> Thanks for making the patch! LGTM.\n>> Barring any objection, I will commit it.\n> \n> Thanks for reviewing. 
I've made a CF entry for this, just to ensure\n> the tests on different CF bot server passes(and yes no failures) -\n> https://commitfest.postgresql.org/35/3355/\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:26:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Thu, Oct 14, 2021 at 10:56 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/12 15:46, Bharath Rupireddy wrote:\n> > On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >> On 2021/10/12 4:07, Bharath Rupireddy wrote:\n> >>> Hi,\n> >>>\n> >>> While working on [1], it is found that currently the ProcState array\n> >>> doesn't have entries for auxiliary processes, it does have entries for\n> >>> MaxBackends. But the startup process is eating up one slot from\n> >>> MaxBackends. We need to increase the size of the ProcState array by 1\n> >>> at least for the startup process. The startup process uses ProcState\n> >>> slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n> >>> The procState array size is initialized to MaxBackends in\n> >>> SInvalShmemSize.\n> >>>\n> >>> The consequence of not fixing this issue is that the database may hit\n> >>> the error \"sorry, too many clients already\" soon in\n> >>> SharedInvalBackendInit.\n>\n> On second thought, I wonder if this error could not happen in practice. 
No?\n> Because autovacuum doesn't work during recovery and the startup process\n> can safely use the ProcState entry for autovacuum worker process.\n> Also since the minimal allowed value of autovacuum_max_workers is one,\n> the ProcState array guarantees to have at least one entry for autovacuum worker.\n>\n> If this understanding is right, we don't need to enlarge the array and\n> can just update the comment. I don't strongly oppose to enlarge\n> the array in the master, but I'm not sure it's worth doing that\n> in back branches if the issue can cause no actual error.\n\nYes, the issue can't happen. The comment in the SInvalShmemSize,\nmentioning about the startup process always having an extra slot\nbecause the autovacuum worker is not active during recovery, looks\nokay. But, is it safe to assume that always? Do we have a way to\nspecify that in the form an Assert(when_i_am_startup_proc &&\nautovacuum_not_running) (this looks a bit dirty though)? Instead, we\ncan just enlarge the array in the master and be confident about the\nfact that the startup process always has one dedicated slot.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 16 Oct 2021 16:37:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "В Сб, 16/10/2021 в 16:37 +0530, Bharath Rupireddy пишет:\n> On Thu, Oct 14, 2021 at 10:56 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > On 2021/10/12 15:46, Bharath Rupireddy wrote:\n> > > On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > > On 2021/10/12 4:07, Bharath Rupireddy wrote:\n> > > > > Hi,\n> > > > > \n> > > > > While working on [1], it is found that currently the ProcState array\n> > > > > doesn't have entries for auxiliary processes, it does have entries for\n> > > > > MaxBackends. 
But the startup process is eating up one slot from\n> > > > > MaxBackends. We need to increase the size of the ProcState array by 1\n> > > > > at least for the startup process. The startup process uses ProcState\n> > > > > slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n> > > > > The procState array size is initialized to MaxBackends in\n> > > > > SInvalShmemSize.\n> > > > > \n> > > > > The consequence of not fixing this issue is that the database may hit\n> > > > > the error \"sorry, too many clients already\" soon in\n> > > > > SharedInvalBackendInit.\n> > \n> > On second thought, I wonder if this error could not happen in practice. No?\n> > Because autovacuum doesn't work during recovery and the startup process\n> > can safely use the ProcState entry for autovacuum worker process.\n> > Also since the minimal allowed value of autovacuum_max_workers is one,\n> > the ProcState array guarantees to have at least one entry for autovacuum worker.\n> > \n> > If this understanding is right, we don't need to enlarge the array and\n> > can just update the comment. I don't strongly oppose to enlarge\n> > the array in the master, but I'm not sure it's worth doing that\n> > in back branches if the issue can cause no actual error.\n> \n> Yes, the issue can't happen. The comment in the SInvalShmemSize,\n> mentioning about the startup process always having an extra slot\n> because the autovacuum worker is not active during recovery, looks\n> okay. But, is it safe to assume that always? Do we have a way to\n> specify that in the form an Assert(when_i_am_startup_proc &&\n> autovacuum_not_running) (this looks a bit dirty though)? Instead, we\n> can just enlarge the array in the master and be confident about the\n> fact that the startup process always has one dedicated slot.\n\nBut this slot wont be used for most of cluster life. It will be just\nwaste.\n\nAnd `Assert(there_is_startup_proc && autovacuum_not_running)` has\nvalue on its own, hasn't it? 
So why not add it with a comment?\n\nregards,\nYura Sokolov\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 17:26:37 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Fri, Feb 11, 2022 at 7:56 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> В Сб, 16/10/2021 в 16:37 +0530, Bharath Rupireddy пишет:\n> > On Thu, Oct 14, 2021 at 10:56 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> > > On 2021/10/12 15:46, Bharath Rupireddy wrote:\n> > > > On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > > > On 2021/10/12 4:07, Bharath Rupireddy wrote:\n> > > > > > Hi,\n> > > > > >\n> > > > > > While working on [1], it is found that currently the ProcState array\n> > > > > > doesn't have entries for auxiliary processes, it does have entries for\n> > > > > > MaxBackends. 
No?\n> > > Because autovacuum doesn't work during recovery and the startup process\n> > > can safely use the ProcState entry for autovacuum worker process.\n> > > Also since the minimal allowed value of autovacuum_max_workers is one,\n> > > the ProcState array guarantees to have at least one entry for autovacuum worker.\n> > >\n> > > If this understanding is right, we don't need to enlarge the array and\n> > > can just update the comment. I don't strongly oppose to enlarge\n> > > the array in the master, but I'm not sure it's worth doing that\n> > > in back branches if the issue can cause no actual error.\n> >\n> > Yes, the issue can't happen. The comment in the SInvalShmemSize,\n> > mentioning about the startup process always having an extra slot\n> > because the autovacuum worker is not active during recovery, looks\n> > okay. But, is it safe to assume that always? Do we have a way to\n> > specify that in the form an Assert(when_i_am_startup_proc &&\n> > autovacuum_not_running) (this looks a bit dirty though)? Instead, we\n> > can just enlarge the array in the master and be confident about the\n> > fact that the startup process always has one dedicated slot.\n>\n> But this slot wont be used for most of cluster life. It will be just\n> waste.\n\nCorrect. In the standby, the autovacuum launcher and workers are not started,\nso the startup process will always have a slot free for it to use.\n\n> And `Assert(there_is_startup_proc && autovacuum_not_running)` has\n> value on its own, hasn't it? So why doesn't add it with comment.\n\nAssertion doesn't make sense to me now. 
Because the postmaster ensures\nthat the autovacuum launcher/workers will not get started in standby\nmode and we can't reliably know in InitRecoveryTransactionEnvironment\n(startup process) whether or not autovacuum launcher process has been\nstarted.\n\nFWIW, here's a patch just adding a comment on how the startup process\ncan get a free procState array slot even when SInvalShmemSize hasn't\naccounted for it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 12 Feb 2022 16:56:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Sat, 12/02/2022 at 16:56 +0530, Bharath Rupireddy wrote:\n> On Fri, Feb 11, 2022 at 7:56 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > On Sat, 16/10/2021 at 16:37 +0530, Bharath Rupireddy wrote:\n> > > On Thu, Oct 14, 2021 at 10:56 AM Fujii Masao\n> > > > <masao.fujii@oss.nttdata.com> wrote:\n> > > > On 2021/10/12 15:46, Bharath Rupireddy wrote:\n> > > > > On Tue, Oct 12, 2021 at 5:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > > > > On 2021/10/12 4:07, Bharath Rupireddy wrote:\n> > > > > > > Hi,\n> > > > > > > \n> > > > > > > While working on [1], it is found that currently the ProcState array\n> > > > > > > doesn't have entries for auxiliary processes, it does have entries for\n> > > > > > > MaxBackends. But the startup process is eating up one slot from\n> > > > > > > MaxBackends. We need to increase the size of the ProcState array by 1\n> > > > > > > at least for the startup process. 
The startup process uses ProcState\n> > > > > > > slot via InitRecoveryTransactionEnvironment->SharedInvalBackendInit.\n> > > > > > > The procState array size is initialized to MaxBackends in\n> > > > > > > SInvalShmemSize.\n> > > > > > > \n> > > > > > > The consequence of not fixing this issue is that the database may hit\n> > > > > > > the error \"sorry, too many clients already\" soon in\n> > > > > > > SharedInvalBackendInit.\n> > > > \n> > > > On second thought, I wonder if this error could not happen in practice. No?\n> > > > Because autovacuum doesn't work during recovery and the startup process\n> > > > can safely use the ProcState entry for autovacuum worker process.\n> > > > Also since the minimal allowed value of autovacuum_max_workers is one,\n> > > > the ProcState array guarantees to have at least one entry for autovacuum worker.\n> > > > \n> > > > If this understanding is right, we don't need to enlarge the array and\n> > > > can just update the comment. I don't strongly oppose to enlarge\n> > > > the array in the master, but I'm not sure it's worth doing that\n> > > > in back branches if the issue can cause no actual error.\n> > > \n> > > Yes, the issue can't happen. The comment in the SInvalShmemSize,\n> > > mentioning about the startup process always having an extra slot\n> > > because the autovacuum worker is not active during recovery, looks\n> > > okay. But, is it safe to assume that always? Do we have a way to\n> > > specify that in the form an Assert(when_i_am_startup_proc &&\n> > > autovacuum_not_running) (this looks a bit dirty though)? Instead, we\n> > > can just enlarge the array in the master and be confident about the\n> > > fact that the startup process always has one dedicated slot.\n> > \n> > But this slot wont be used for most of cluster life. It will be just\n> > waste.\n> \n> Correct. 
In the standby autovacuum launcher and worker are not started\n> so, the startup process will always have a slot free for it to use.\n> \n> > And `Assert(there_is_startup_proc && autovacuum_not_running)` has\n> > value on its own, hasn't it? So why doesn't add it with comment.\n> \n> Assertion doesn't make sense to me now. Because the postmaster ensures\n> that the autovacuum launcher/workers will not get started in standby\n> mode and we can't reliably know in InitRecoveryTransactionEnvironment\n> (startup process) whether or not autovacuum launcher process has been\n> started.\n> \n> FWIW, here's a patch just adding a comment on how the startup process\n> can get a free procState array slot even when SInvalShmemSize hasn't\n> accounted for it.\n\nI think the comment is a good thing.\nMarked as \"Ready for committer\".\n\n> \n> Regards,\n> Bharath Rupireddy.\n\n\n\n", "msg_date": "Mon, 21 Feb 2022 12:06:28 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Sat, Feb 12, 2022 at 6:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> FWIW, here's a patch just adding a comment on how the startup process\n> can get a free procState array slot even when SInvalShmemSize hasn't\n> accounted for it.\n\nI don't think the positioning of this code comment is very good,\nbecause it's commenting on 0 lines of code. Perhaps that problem could\nbe fixed by making it the second paragraph of the immediately\npreceding comment instead of a separate block, but I think the right\nplace to comment on this sort of thing is actually in the code that\nsizes the data structure - i.e. SInvalShmemSize. 
If someone looks at\nthat function and says \"hey, this uses GetMaxBackends(), that's off by\none!\" they are not ever going to find this comment explaining the\nreasoning.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 15:49:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Sat, Mar 26, 2022 at 1:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Feb 12, 2022 at 6:26 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > FWIW, here's a patch just adding a comment on how the startup process\n> > can get a free procState array slot even when SInvalShmemSize hasn't\n> > accounted for it.\n>\n> I don't think the positioning of this code comment is very good,\n> because it's commenting on 0 lines of code. Perhaps that problem could\n> be fixed by making it the second paragraph of the immediately\n> preceding comment instead of a separate block, but I think the right\n> place to comment on this sort of thing is actually in the code that\n> sizes the data structure - i.e. SInvalShmemSize. If someone looks at\n> that function and says \"hey, this uses GetMaxBackends(), that's off by\n> one!\" they are not ever going to find this comment explaining the\n> reasoning.\n\nThanks. It makes sense to put the comment in SInvalShmemSize.\nAttaching v2 patch. Please review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 26 Mar 2022 11:14:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Sat, Mar 26, 2022 at 2:23 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. 
It makes sense to put the comment in SInvalShmemSize.\n> Attaching v2 patch. Please review it.\n\nHow about this version, which I have edited lightly for grammar?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 15:16:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Tue, Mar 29, 2022 at 12:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 2:23 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks. It makes sense to put the comment in SInvalShmemSize.\n> > Attaching v2 patch. Please review it.\n>\n> How about this version, which I have edited lightly for grammar?\n\nThanks. LGTM.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 29 Mar 2022 12:51:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." }, { "msg_contents": "On Tue, Mar 29, 2022 at 3:21 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. LGTM.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 09:30:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Accommodate startup process in a separate ProcState array slot\n instead of in MaxBackends slots." } ]
[ { "msg_contents": "This commit broke psql \\d datname.nspname.relname\n\ncommit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Wed Feb 3 13:19:41 2021 -0500\n\n Factor pattern-construction logic out of processSQLNamePattern.\n...\n patternToSQLRegex is a little more general than what is required\n by processSQLNamePattern. That function is only interested in\n patterns that can have up to 2 parts, a schema and a relation;\n but patternToSQLRegex can limit the maximum number of parts to\n between 1 and 3, so that patterns can look like either\n \"database.schema.relation\", \"schema.relation\", or \"relation\"\n depending on how it's invoked and what the user specifies.\n\n processSQLNamePattern only passes two buffers, so works exactly\n the same as before, always interpreting the pattern as either\n a \"schema.relation\" pattern or a \"relation\" pattern. But,\n future callers can use this function in other ways.\n\n|$ LD_LIBRARY_PATH=tmp_install/usr/local/pgsql/lib/ src/bin/psql/psql -h /tmp regression\n|psql (15devel)\n|Type \"help\" for help.\n|regression=# \\d regresion.public.bit_defaults\n|Did not find any relation named \"regresion.public.bit_defaults\".\n|regression=# \\d public.bit_defaults\n| Table \"public.bit_defaults\"\n|...\n\nThis worked before v14 (even though the commit message says otherwise).\n\n|$ /usr/lib/postgresql/13/bin/psql -h /tmp regression\n|psql (13.2 (Debian 13.2-1.pgdg100+1), server 15devel)\n|...\n|regression=# \\d regresion.public.bit_defaults\n| Table \"public.bit_defaults\"\n|...\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:24:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 11, 2021, at 2:24 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> This commit broke psql \\d datname.nspname.relname\n> \n> commit 
2c8726c4b0a496608919d1f78a5abc8c9b6e0868\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: Wed Feb 3 13:19:41 2021 -0500\n> \n> Factor pattern-construction logic out of processSQLNamePattern.\n> ...\n> patternToSQLRegex is a little more general than what is required\n> by processSQLNamePattern. That function is only interested in\n> patterns that can have up to 2 parts, a schema and a relation;\n> but patternToSQLRegex can limit the maximum number of parts to\n> between 1 and 3, so that patterns can look like either\n> \"database.schema.relation\", \"schema.relation\", or \"relation\"\n> depending on how it's invoked and what the user specifies.\n> \n> processSQLNamePattern only passes two buffers, so works exactly\n> the same as before, always interpreting the pattern as either\n> a \"schema.relation\" pattern or a \"relation\" pattern. But,\n> future callers can use this function in other ways.\n> \n> |$ LD_LIBRARY_PATH=tmp_install/usr/local/pgsql/lib/ src/bin/psql/psql -h /tmp regression\n> |psql (15devel)\n> |Type \"help\" for help.\n> |regression=# \\d regresion.public.bit_defaults\n> |Did not find any relation named \"regresion.public.bit_defaults\".\n> |regression=# \\d public.bit_defaults\n> | Table \"public.bit_defaults\"\n> |...\n> \n> This worked before v14 (even though the commit message says otherwise).\n> \n> |$ /usr/lib/postgresql/13/bin/psql -h /tmp regression\n> |psql (13.2 (Debian 13.2-1.pgdg100+1), server 15devel)\n> |...\n> |regression=# \\d regresion.public.bit_defaults\n> | Table \"public.bit_defaults\"\n> |...\n\nI can only assume that you are intentionally misspelling \"regression\" as \"regresion\" (with only one \"s\") as part of the test. I have not checked if that worked before v14, but if it ignored the misspelled database name before v14, and it rejects it now, I'm not sure that counts as a bug. 
\n\nAm I misunderstanding your bug report?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 14:47:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I can only assume that you are intentionally misspelling \"regression\" as \"regresion\" (with only one \"s\") as part of the test. I have not checked if that worked before v14, but if it ignored the misspelled database name before v14, and it rejects it now, I'm not sure that counts as a bug. \n\nDoesn't work with the correct DB name, either:\n\nregression=# \\d public.bit_defaults\n Table \"public.bit_defaults\"\n Column | Type | Collation | Nullable | Default \n--------+----------------+-----------+----------+---------------------\n b1 | bit(4) | | | '1001'::\"bit\"\n b2 | bit(4) | | | '0101'::\"bit\"\n b3 | bit varying(5) | | | '1001'::bit varying\n b4 | bit varying(5) | | | '0101'::\"bit\"\n\nregression=# \\d regression.public.bit_defaults\nDid not find any relation named \"regression.public.bit_defaults\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 18:04:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 11, 2021, at 3:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Doesn't work with the correct DB name, either:\n> \n> regression=# \\d public.bit_defaults\n> Table \"public.bit_defaults\"\n> Column | Type | Collation | Nullable | Default \n> --------+----------------+-----------+----------+---------------------\n> b1 | bit(4) | | | '1001'::\"bit\"\n> b2 | bit(4) | | | '0101'::\"bit\"\n> b3 | bit varying(5) | | | '1001'::bit varying\n> b4 | bit varying(5) | | | '0101'::\"bit\"\n> \n> regression=# \\d 
regression.public.bit_defaults\n> Did not find any relation named \"regression.public.bit_defaults\".\n\nREL_13_STABLE appears to accept any amount of nonsense you like:\n\nfoo=# \\d nonesuch.foo.a.b.c.d.bar.baz\n Table \"bar.baz\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n i | integer | | | \n\n\nIs this something we're intentionally supporting? There is no regression test covering this, else we'd have seen breakage in the build-farm.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:25:43 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, Oct 11, 2021 at 02:47:59PM -0700, Mark Dilger wrote:\n> > |$ LD_LIBRARY_PATH=tmp_install/usr/local/pgsql/lib/ src/bin/psql/psql -h /tmp regression\n> > |psql (15devel)\n> > |Type \"help\" for help.\n> > |regression=# \\d regresion.public.bit_defaults\n> > |Did not find any relation named \"regresion.public.bit_defaults\".\n> > |regression=# \\d public.bit_defaults\n> > | Table \"public.bit_defaults\"\n> > |...\n> > \n> > This worked before v14 (even though the commit message says otherwise).\n> > \n> > |$ /usr/lib/postgresql/13/bin/psql -h /tmp regression\n> > |psql (13.2 (Debian 13.2-1.pgdg100+1), server 15devel)\n> > |...\n> > |regression=# \\d regresion.public.bit_defaults\n> > | Table \"public.bit_defaults\"\n> > |...\n> \n> I can only assume that you are intentionally misspelling \"regression\" as \"regresion\" (with only one \"s\") as part of the test. I have not checked if that worked before v14, but if it ignored the misspelled database name before v14, and it rejects it now, I'm not sure that counts as a bug. 
\n> \n> Am I misunderstanding your bug report?\n\nIt's not intentional but certainly confusing to put a typo there.\nSorry for that (and good eyes, BTW).\n\nIn v15/master:\n\tregression=# \\d regression.public.bit_defaults\n\tDid not find any relation named \"regression.public.bit_defaults\".\n\nAfter reverting that commit and recompiling psql:\n\tregression=# \\d regression.public.bit_defaults\n\t\t\t Table \"public.bit_defaults\"\n\t...\n\nIn v13 psql:\n\tregression=# \\d regression.public.bit_defaults\n\t\t\t Table \"public.bit_defaults\"\n\t...\n\nIt looks like before v13 any \"datname\" prefix was ignored.\n\nBut now it fails to show the table because it does:\n\nWHERE c.relname OPERATOR(pg_catalog.~) '^(public.bit_defaults)$' COLLATE pg_catalog.default\n AND n.nspname OPERATOR(pg_catalog.~) '^(regression)$' COLLATE pg_catalog.default\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 Oct 2021 17:26:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 11, 2021, at 3:26 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> It looks like before v13 any \"datname\" prefix was ignored.\n\nThe evidence so far suggests that something is broken in v14, but it is less clear to me what the appropriate behavior is. The v14 psql is rejecting even a correctly named database.schema.table, but v13 psql accepted lots.of.nonsense.schema.table, and neither of those seems at first glance to be correct. 
But perhaps there are good reasons for ignoring the nonsense prefixes?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:32:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 11, 2021, at 3:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Doesn't work with the correct DB name, either:\n>> regression=# \\d regression.public.bit_defaults\n>> Did not find any relation named \"regression.public.bit_defaults\".\n\n> REL_13_STABLE appears to accept any amount of nonsense you like:\n\nYeah, I'm pretty sure that the old rule was to just ignore whatever\nappeared in the database-name position. While we could tighten that\nup to insist that it match the current DB's name, I'm not sure that\nI see the point. There's no near-term prospect of doing anything\nuseful with some other DB's name there, so being more restrictive\nseems like it'll probably break peoples' scripts to little purpose.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 18:37:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 11, 2021, at 3:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> REL_13_STABLE appears to accept any amount of nonsense you like:\n> \n> Yeah, I'm pretty sure that the old rule was to just ignore whatever\n> appeared in the database-name position. While we could tighten that\n> up to insist that it match the current DB's name, I'm not sure that\n> I see the point. 
There's no near-term prospect of doing anything\n> useful with some other DB's name there, so being more restrictive\n> seems like it'll probably break peoples' scripts to little purpose.\n\nYou appear correct about the old behavior. It's unclear how intentional it was. There was a schema buffer and a name buffer, and while parsing the name, if a dot was encountered, the contents just parsed were copied into the schema buffer. If multiple dots were encountered, that had the consequence of blowing away the earlier ones.\n\nBut since we allow tables and schemas with dotted names in them, I'm uncertain what \\d foo.bar.baz is really asking. That could be \"foo.bar\".\"baz\", or \"foo\".\"bar\".\"baz\", or \"foo\".\"bar.baz\", or even \"public\".\"foo.bar.baz\". The old behavior seems a bit dangerous. There may be tables with all those names, and the user may not have meant the one that we gave them.\n\nThe v14 code is no better. It just assumes that is \"foo\".\"bar.baz\". So (with debugging statements included):\n\nfoo=# create table \"foo.bar.baz\" (i integer);\nCREATE TABLE\nfoo=# \\d public.foo.bar.baz\nConverting \"public.foo.bar.baz\"\nGOT \"^(public)$\" . 
\"^(foo.bar.baz)$\"\n Table \"public.foo.bar.baz\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n i | integer | | | \n\nI expect I'll have to submit a patch restoring the old behavior, but I wonder if that's the best direction to go.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:35:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, 11 Oct 2021 at 19:35, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n\n> But since we allow tables and schemas with dotted names in them, I'm\n> uncertain what \\d foo.bar.baz is really asking.\n>\n\nFWIW, it’s absolutely clear to me that \".\" is a special character which has\nto be quoted in order to be in an identifier. In other words, a.b.c is\nthree identifiers separated by two period punctuation marks; what exactly\nthose periods mean is another question. If somebody uses periods in their\nnames, they have to quote those names just as if they used capital letters\netc.\n\nBut that's just my impression. I comment at all because I remember looking\nat something to do with the grammar (I think I wanted to implement ALTER …\nRENAME TO newschema.newname) and noticed that a database name could be\ngiven in the syntax.\n\nOn Mon, 11 Oct 2021 at 19:35, Mark Dilger <mark.dilger@enterprisedb.com> wrote: \nBut since we allow tables and schemas with dotted names in them, I'm uncertain what  \\d foo.bar.baz is really asking.FWIW, it’s absolutely clear to me that \".\" is a special character which has to be quoted in order to be in an identifier. In other words, a.b.c is three identifiers separated by two period punctuation marks; what exactly those periods mean is another question. 
If somebody uses periods in their\nnames, they have to quote those names just as if they used capital letters\netc.\n\nBut that's just my impression. I comment at all because I remember looking\nat something to do with the grammar (I think I wanted to implement ALTER …\nRENAME TO newschema.newname) and noticed that a database name could be\ngiven in the syntax.", "msg_date": "Mon, 11 Oct 2021 19:41:08 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> But since we allow tables and schemas with dotted names in them, I'm uncertain what \\d foo.bar.baz is really asking. That could be \"foo.bar\".\"baz\", or \"foo\".\"bar\".\"baz\", or \"foo\".\"bar.baz\", or even \"public\".\"foo.bar.baz\". The old behavior seems a bit dangerous. There may be tables with all those names, and the user may not have meant the one that we gave them.\n\nYou are attacking a straw man here. To use a period in an identifier,\nyou have to double-quote it; that's the same in SQL or \\d.\n\nregression=# create table \"foo.bar\" (f1 int);\nCREATE TABLE\nregression=# \\d foo.bar\nDid not find any relation named \"foo.bar\".\nregression=# \\d \"foo.bar\"\n Table \"public.foo.bar\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n f1 | integer | | | \n\nAccording to a quick test, you did not manage to break that in v14.\n\n> I expect I'll have to submit a patch restoring the old behavior, but I wonder if that's the best direction to go.\n\nI do not understand why you're even questioning that. 
The old\nbehavior had stood for a decade or two without complaints.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 19:49:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 11, 2021, at 4:49 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> You are attacking a straw man here. To use a period in an identifier,\n> you have to double-quote it; that's the same in SQL or \\d.\n\nThat's a strange argument. If somebody gives an invalid identifier, we shouldn't assume they know the proper use of quotations. Somebody asking for a.b.c.d.e is clearly in the dark about something. Maybe it's the need to quote the \"a.b\" part separately from the \"c.d.e\" part, or maybe it's something else. There are lots of reasonable guesses about what they meant, and for backward compatibility reasons we define using the suffix d.e and ignoring the prefix a.b.c as the correct answer. That's a pretty arbitrary thing to do, but it has the advantage of being backwards compatible.\n\n>> I expect I'll have to submit a patch restoring the old behavior, but I wonder if that's the best direction to go.\n> \n> I do not understand why you're even questioning that. The old\n> behavior had stood for a decade or two without complaints.\n\nI find the backward compatibility argument appealing, but since we have clients that understand the full database.schema.relation format without ignoring the database portion, our client behavior is getting inconsistent. I'd like to leave the door open for someday supporting server.database.schema.relation format, too. 
I was just wondering when it might be time to stop being lenient in psql and instead reject malformed identifiers.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 19:09:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, Oct 11, 2021 at 7:09 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I was just wondering when it might be time to stop being lenient in psql and instead reject malformed identifiers.\n\nI suppose that I probably wouldn't have chosen this behavior in a\ngreen field situation. But Hyrum's law tells us that there are bound\nto be some number of users relying on it. I don't think that it's\nworth inconveniencing those people without getting a clear benefit in\nreturn.\n\nBeing lenient here just doesn't have much downside in practice, as\nevidenced by the total lack of complaints about that lenience.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 Oct 2021 19:33:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, Oct 11, 2021 at 10:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Oct 11, 2021 at 7:09 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > I was just wondering when it might be time to stop being lenient in psql and instead reject malformed identifiers.\n>\n> I suppose that I probably wouldn't have chosen this behavior in a\n> green field situation. But Hyrum's law tells us that there are bound\n> to be some number of users relying on it. 
I don't think that it's\n> worth inconveniencing those people without getting a clear benefit in\n> return.\n>\n> Being lenient here just doesn't have much downside in practice, as\n> evidenced by the total lack of complaints about that lenience.\n\nI find it kind of surprising to find everyone agreeing with this\nargument. I mean, PostgreSQL users are often quick to criticize MySQL\nfor accepting 0000-00-00 as a date, because it isn't, and you\nshouldn't accept garbage and do stuff with it as if it were valid\ndata. But by the same argument, accepting a database name that we know\nis not correct as a request to show data in the current database seems\nwrong to me.\n\nI completely agree that somebody might be relying on the fact that \\d\nthisdb.someschema.sometable does something sensible when logged into\nthisdb, but surely no user is relying on \\d\njgldslghksdghjsgkhsdgjhskg.someschema.sometable is going to just\nignore the leading gibberish. Nor do I understand why we'd want to\nignore the leading gibberish. Saying, as Tom did, that nobody has\ncomplained about that behavior is just another way of saying that\nnobody tested it. Surely if someone had, it wouldn't be like this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 10:23:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 11, 2021 at 10:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Being lenient here just doesn't have much downside in practice, as\n>> evidenced by the total lack of complaints about that lenience.\n\n> I find it kind of surprising to find everyone agreeing with this\n> argument.\n\nIf the behavior v14 had implemented were \"throw an error if the\nfirst word doesn't match the current database name\", perhaps nobody\nwould have questioned it. But that's not what we have. 
It's fairly\nclear that neither you nor Mark thought very much about this case,\nlet alone tested it. Given that, I am not very pleased that you\nare retroactively trying to justify breaking it by claiming that\nit was already broken. It's been that way since 7.3 implemented\nschemas, more or less, and nobody's complained about it. Therefore\nI see little argument for changing that behavior. Changing it in\nan already-released branch is especially suspect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Oct 2021 10:30:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 12, 2021, at 7:30 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> If the behavior v14 had implemented were \"throw an error if the\n> first word doesn't match the current database name\", perhaps nobody\n> would have questioned it. But that's not what we have. It's fairly\n> clear that neither you nor Mark thought very much about this case,\n> let alone tested it. Given that, I am not very pleased that you\n> are retroactively trying to justify breaking it by claiming that\n> it was already broken. It's been that way since 7.3 implemented\n> schemas, more or less, and nobody's complained about it. Therefore\n> I see little argument for changing that behavior. Changing it in\n> an already-released branch is especially suspect.\n\nI completely agree that we need to fix this. My question was only whether \"fix\" means to make it accept database.schema.table or whether it means to accept any.prefix.at.all.schema.table. 
It sounds like more people like the latter, so I'll go with that unless this debate rages on and a different conclusion is reached.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 07:37:58 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 10:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If the behavior v14 had implemented were \"throw an error if the\n> first word doesn't match the current database name\", perhaps nobody\n> would have questioned it. But that's not what we have. It's fairly\n> clear that neither you nor Mark thought very much about this case,\n> let alone tested it. Given that, I am not very pleased that you\n> are retroactively trying to justify breaking it by claiming that\n> it was already broken. It's been that way since 7.3 implemented\n> schemas, more or less, and nobody's complained about it. Therefore\n> I see little argument for changing that behavior. Changing it in\n> an already-released branch is especially suspect.\n\nOh, give me a break. The previous behavior obviously hasn't been\ntested either, and is broken on its face. If someone *had* complained\nabout it, I imagine you would have promptly fixed it and likely\nback-patched the fix, probably in under 24 hours from the time of the\nreport. I find it difficult to take seriously the contention that\nanyone is expecting \\d dlsgjdsghj.sdhg.l.dsg.jkhsdg.foo.bar to work\nlike \\d foo.bar, or that they would even prefer that behavior over an\nerror message. 
You're carefully avoiding addressing that question in\nfavor of having a discussion of backward compatibility, but a better\nterm for what we're talking about here would be bug-compatibility.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 10:40:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Oct 12, 2021 at 10:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If the behavior v14 had implemented were \"throw an error if the\n> > first word doesn't match the current database name\", perhaps nobody\n> > would have questioned it. But that's not what we have. It's fairly\n> > clear that neither you nor Mark thought very much about this case,\n> > let alone tested it. Given that, I am not very pleased that you\n> > are retroactively trying to justify breaking it by claiming that\n> > it was already broken. It's been that way since 7.3 implemented\n> > schemas, more or less, and nobody's complained about it. Therefore\n> > I see little argument for changing that behavior. Changing it in\n> > an already-released branch is especially suspect.\n> \n> Oh, give me a break. The previous behavior obviously hasn't been\n> tested either, and is broken on its face. If someone *had* complained\n> about it, I imagine you would have promptly fixed it and likely\n> back-patched the fix, probably in under 24 hours from the time of the\n> report. I find it difficult to take seriously the contention that\n> anyone is expecting \\d dlsgjdsghj.sdhg.l.dsg.jkhsdg.foo.bar to work\n> like \\d foo.bar, or that they would even prefer that behavior over an\n> error message. 
You're carefully avoiding addressing that question in\n> favor of having a discussion of backward compatibility, but a better\n> term for what we're talking about here would be bug-compatibility.\n\nI tend to agree with Robert on this particular case. Accepting random\nnonsense there isn't a feature or something which really needs to be\npreserved. For my 2c, I would hope that one day we will be able to\naccept other database names there and if that happens, what then? We'd\n\"break\" these cases anyway. Better to be clear that such nonsense isn't\nintended to be accepted and clean that up. I do think it'd be good to\naccept the current database name there as that's reasonable.\n\nThanks,\n\nStephen", "msg_date": "Tue, 12 Oct 2021 11:19:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 7:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Oh, give me a break. The previous behavior obviously hasn't been\n> tested either, and is broken on its face. If someone *had* complained\n> about it, I imagine you would have promptly fixed it and likely\n> back-patched the fix, probably in under 24 hours from the time of the\n> report.\n\nYou're asking us to imagine a counterfactual. But this counterfactual\nbug report would have to describe a real practical problem. The\ndetails would matter. It's reasonable to suppose that we haven't seen\nsuch a bug report for a reason.\n\nI can't speak for Tom. My position on this is that it's better to\nleave it alone at this time, given the history, and the lack of\ncomplaints from users.\n\n> I find it difficult to take seriously the contention that\n> anyone is expecting \\d dlsgjdsghj.sdhg.l.dsg.jkhsdg.foo.bar to work\n> like \\d foo.bar, or that they would even prefer that behavior over an\n> error message. 
You're carefully avoiding addressing that question in\n> favor of having a discussion of backward compatibility, but a better\n> term for what we're talking about here would be bug-compatibility.\n\nLet's assume that it is bug compatibility. Is that intrinsically a bad thing?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:44:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On 10/12/21 5:19 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Tue, Oct 12, 2021 at 10:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> If the behavior v14 had implemented were \"throw an error if the\n>>> first word doesn't match the current database name\", perhaps nobody\n>>> would have questioned it. But that's not what we have. It's fairly\n>>> clear that neither you nor Mark thought very much about this case,\n>>> let alone tested it. Given that, I am not very pleased that you\n>>> are retroactively trying to justify breaking it by claiming that\n>>> it was already broken. It's been that way since 7.3 implemented\n>>> schemas, more or less, and nobody's complained about it. Therefore\n>>> I see little argument for changing that behavior. Changing it in\n>>> an already-released branch is especially suspect.\n>>\n>> Oh, give me a break. The previous behavior obviously hasn't been\n>> tested either, and is broken on its face. If someone *had* complained\n>> about it, I imagine you would have promptly fixed it and likely\n>> back-patched the fix, probably in under 24 hours from the time of the\n>> report. I find it difficult to take seriously the contention that\n>> anyone is expecting \\d dlsgjdsghj.sdhg.l.dsg.jkhsdg.foo.bar to work\n>> like \\d foo.bar, or that they would even prefer that behavior over an\n>> error message. 
You're carefully avoiding addressing that question in\n>> favor of having a discussion of backward compatibility, but a better\n>> term for what we're talking about here would be bug-compatibility.\n> \n> I tend to agree with Robert on this particular case. Accepting random\n> nonsense there isn't a feature or something which really needs to be\n> preserved. For my 2c, I would hope that one day we will be able to\n> accept other database names there and if that happens, what then? We'd\n> \"break\" these cases anyway. Better to be clear that such nonsense isn't\n> intended to be accepted and clean that up. I do think it'd be good to\n> accept the current database name there as that's reasonable.\n\nI am going to throw my hat in with Robert and Stephen, too. At least\nfor 15 if we don't want to change this behavior in back branches.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 12 Oct 2021 18:52:30 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "I understand Tom's position to be that the behavior should be changed back,\nsince it was 1) unintentional; and 2) breaks legitimate use (when the datname\nmatches current_database).\n\nI think there's an easy answer here that would satisfy everyone; two patches:\n0001 to fix the unintentional behavior change;\n0002 to reject garbage input: anything with more than 3 dot-separated\n components, or with 3 components where the first doesn't match\n current_database.\n\n0001 would be backpatched to v14.\n\nIf it turns out there's no consensus on 0002, or if it were really hard for\nsome reason, or (more likely) nobody went to the bother to implement it this\nyear, then that's okay.\n\nI would prefer if it errored if the datname didn't match the current database.\nAfter all, it would've helped me to avoid making a confusing problem report.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:57:45 -0500", 
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 12:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You're asking us to imagine a counterfactual. But this counterfactual\n> bug report would have to describe a real practical problem.\n\nYes. And I think this one should be held to the same standard: \\d\nmydb.myschema.mytable not working is potentially a real, practical\nproblem. \\d sdlgkjdss.dsgkjsk.sdgskldjgds.myschema.mytable not working\nisn't.\n\n> Let's assume that it is bug compatibility. Is that intrinsically a bad thing?\n\nWell my view is that having the same bugs is better than having\ndifferent ones, but fixing the bugs is superior to either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:01:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 12:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think there's an easy answer here that would satisfy everyone; two patches:\n> 0001 to fix the unintentional behavior change;\n> 0002 to reject garbage input: anything with more than 3 dot-separated\n> components, or with 3 components where the first doesn't match\n> current_database.\n>\n> 0001 would be backpatched to v14.\n>\n> If it turns out there's no consensus on 0002, or if it were really hard for\n> some reason, or (more likely) nobody went to the bother to implement it this\n> year, then that's okay.\n\nThis might work, but I fear that 0001 would end up being substantially\nmore complicated than a combined patch that solves both problems\ntogether.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:03:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, 
"msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Oct 12, 2021, at 10:03 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Oct 12, 2021 at 12:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I think there's an easy answer here that would satisfy everyone; two patches:\n>> 0001 to fix the unintentional behavior change;\n>> 0002 to reject garbage input: anything with more than 3 dot-separated\n>> components, or with 3 components where the first doesn't match\n>> current_database.\n>> \n>> 0001 would be backpatched to v14.\n>> \n>> If it turns out there's no consensus on 0002, or if it were really hard for\n>> some reason, or (more likely) nobody went to the bother to implement it this\n>> year, then that's okay.\n> \n> This might work, but I fear that 0001 would end up being substantially\n> more complicated than a combined patch that solves both problems\n> together.\n\nHere is a WIP patch that restores the old behavior, just so you can eyeball how large it is. (It passes check-world and I've read it over once, but I'm not ready to stand by this as correct quite yet.) I need to add a regression test to make sure this behavior is not accidentally changed in the future, and will repost after doing so.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 12 Oct 2021 10:18:42 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 12, 2021, at 10:01 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Oct 12, 2021 at 12:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> You're asking us to imagine a counterfactual. But this counterfactual\n>> bug report would have to describe a real practical problem.\n> \n> Yes. 
And I think this one should be held to the same standard: \\d\n> mydb.myschema.mytable not working is potentially a real, practical\n> problem. \\d sdlgkjdss.dsgkjsk.sdgskldjgds.myschema.mytable not working\n> isn't.\n\nI favor restoring the v13 behavior, but I don't think \\d mydb.myschema.mytable was ever legitimate. You got exactly the same results with \\d nosuchdb.myschema.mytable, meaning the user was given a false sense of security that the database name was being used to fetch the definition from the database they specified.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 10:38:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 1:18 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Here is a WIP patch that restores the old behavior, just so you can eyeball how large it is.\n\nI guess that's not that bad. Why did we end up with the behavior that\nthe current comment describes this way?\n\n\"(Additional dots in the name portion are not treated as special.)\"\n\nI thought there was some reason why it needed to work that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:54:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 12, 2021, at 10:54 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Oct 12, 2021 at 1:18 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Here is a WIP patch that restores the old behavior, just so you can eyeball how large it is.\n> \n> I guess that's not that bad. 
Why did we end up with the behavior that\n> the current comment describes this way?\n> \n> \"(Additional dots in the name portion are not treated as special.)\"\n> \n> I thought there was some reason why it needed to work that way.\n\nWe're not talking about the parsing of string literals, but rather about the parsing of shell-style patterns. The primary caller of this logic is processSQLNamePattern(), which expects only a relname or a (schema,relname) pair, not database names nor anything else.\n\nThe pattern myschema.my.*table is not a three-part pattern, but a two part pattern, with a literal schema name and a relation name pattern. In v14 it can be seen to work as follows:\n\n\\d pg_toast.pg_.oast_2619\nTOAST table \"pg_toast.pg_toast_2619\"\n Column | Type\n------------+---------\n chunk_id | oid\n chunk_seq | integer\n chunk_data | bytea\nOwning table: \"pg_catalog.pg_statistic\"\nIndexes:\n \"pg_toast_2619_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n\n\\d pg_toast.pg_.*_2619\nTOAST table \"pg_toast.pg_toast_2619\"\n Column | Type\n------------+---------\n chunk_id | oid\n chunk_seq | integer\n chunk_data | bytea\nOwning table: \"pg_catalog.pg_statistic\"\nIndexes:\n \"pg_toast_2619_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n\nIn v13, neither of those matched anything (which is defensible, I guess) but the following did match, which is really nuts:\n\n+CREATE SCHEMA g_;\n+CREATE TABLE g_.oast_2619 (i integer);\n+\\d pg_toast..g_.oast_2619\n+ Table \"g_.oast_2619\"\n+ Column | Type | Collation | Nullable | Default \n+--------+---------+-----------+----------+---------\n+ i | integer | | | \n\n\nThe behavior Justin reported in the original complaint was \\d regresion.public.bit_defaults, which gets handled as schema =~ /^(regresion)$/ and relname =~ /^(public.bit_defaults)$/. 
That gives no results for him, but I tend to think no results is defensible.\n\nApparently, this behavior breaks an old bug, and we need to restore the old bug and then debate this behavioral change for v15. I'd rather people had engaged in the discussion about this feature during the v14 cycle, since this patch was posted and reviewed on -hackers, and I don't recall anybody complaining about it.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 12:26:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 12, 2021, at 10:18 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Here is a WIP patch that restores the old behavior, just so you can eyeball how large it is. (It passes check-world and I've read it over once, but I'm not ready to stand by this as correct quite yet.) I need to add a regression test to make sure this behavior is not accidentally changed in the future, and will repost after doing so.\n\nI wasn't thinking critically enough about how psql handles \\d when I accepted Justin's initial characterization of the bug. The psql client has never thought about the stuff to the left of the schema name as a database name, even if some users thought about it that way. It also doesn't think about the pattern as a literal string.\n\nThe psql client's interpretation of the pattern is a bit of a chimera, following shell glob patterns for some things and POSIX regex rules for others. The reason for that is shell glob stuff gets transliterated into the corresponding POSIX syntax, but non-shell-glob stuff is left intact, with the one outlier being dots, which have a very special interpretation.
The interpretation of a dot as meaning \"match one character\" is not a shell glob rule but a regex one, and one that psql never supported because it split the pattern on all dots and threw away stuff to the left. There was therefore never an opportunity for an unquoted dot to make it through to the POSIX regular expression for processing. For other regex type stuff, it happily passed it through to the POSIX regex, so that the following examples work even though they contain non-shell-glob regex stuff:\n\nv13=# create table ababab (i integer);\nCREATE TABLE\n\nv13=# \\dt (ab){3}\n List of relations\n Schema | Name | Type | Owner \n--------+--------+-------+-------------\n public | ababab | table | mark.dilger\n(1 row)\n\nv13=# \\dt pg_catalog.pg_clas{1,2}\n List of relations\n Schema | Name | Type | Owner \n------------+----------+-------+-------------\n pg_catalog | pg_class | table | mark.dilger\n\nv13=# \\dt pg_catalog.pg_[am]{1,3}\n List of relations\n Schema | Name | Type | Owner \n------------+-------+-------+-------------\n pg_catalog | pg_am | table | mark.dilger\n(1 row)\n\nSplitting the pattern on all the dots and throwing away any additional leftmost fields is a bug, and when you stop doing that, passing additional dots through to the POSIX regular expression for processing is the most natural thing to do. This is, in fact, how v14 works. It is a bit debatable whether treating the first dot as a separator and the additional dots as stuff to be passed through is the right thing, so we could call the v14 behavior a mis-feature, but it's not as clearcut as the discussion upthread suggested. 
Reverting to v13 behavior seems wrong, but I'm now uncertain how to proceed.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:21:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 5:21 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I wasn't thinking critically enough about how psql handles \\d when I accepted Justin's initial characterization of the bug. The psql client has never thought about the stuff to the left of the schema name as a database name, even if some users thought about it that way. It also doesn't think about the pattern as a literal string.\n\nI agree.\n\n> The psql client's interpretation of the pattern is a bit of a chimera, following shell glob patterns for some things and POSIX regex rules for others.\n\nYes. And that's pretty weird, but it's long-established precedent so\nwe have to deal with it.\n\n> Splitting the pattern on all the dots and throwing away any additional leftmost fields is a bug, ...\n\nI also agree with you right up to here.\n\n> and when you stop doing that, passing additional dots through to the POSIX regular expression for processing is the most natural thing to do. This is, in fact, how v14 works. It is a bit debatable whether treating the first dot as a separator and the additional dots as stuff to be passed through is the right thing, so we could call the v14 behavior a mis-feature, but it's not as clearcut as the discussion upthread suggested. 
Reverting to v13 behavior seems wrong, but I'm now uncertain how to proceed.\n\nBut not this part, or at least not entirely.\n\nIf we pass the dots through to the POSIX regular expression, we can\nonly do that either for the table name or the schema name, not both -\neither the first or last dot must mark the boundary between the two.\nThat means that you can't use all the same regexy things for one as\nyou can for the other, which is a strange system. I knew that your\npatch made it do that, and I committed it that way because I didn't\nthink it really mattered, and also because the whole system is already\npretty strange, so what's one more bit of strangeness?\n\nI think there are at least 3 defensible behaviors here:\n\n1. Leave it like it is. If there is more than one dot, the extra ones\nare part of one of the regex-glob thingies.\n\n2. If there is more than one dot, error! Tell the user they messed up.\n\n3. If there are exactly two dots, treat it as db-schema-table. Accept\nit if the dbname matches the current db, and otherwise say we can't\naccess the named db. If there are more than two dots, then (a) it's an\nerror as in (2) or (b) the extra ones become part of the regex-glob\nthingies as in (1).\n\nThe thing that's unprincipled about (3) is that we can't support a\nregexp-glob thingy there -- we can only test for a literal string\nmatch. And I already said what I thought was wrong with (1). But none\nof these are horrible, and I don't think it really matters which one\nwe adopt. I don't even know if I can really rank the choices I just\nlisted against each other.
Before I was arguing for (3a) but I'm not\nsure I actually like that one particularly better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 09:24:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 13, 2021, at 6:24 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> and when you stop doing that, passing additional dots through to the POSIX regular expression for processing is the most natural thing to do. This is, in fact, how v14 works. It is a bit debatable whether treating the first dot as a separator and the additional dots as stuff to be passed through is the right thing, so we could call the v14 behavior a mis-feature, but it's not as clearcut as the discussion upthread suggested. Reverting to v13 behavior seems wrong, but I'm now uncertain how to proceed.\n> \n> But not this part, or at least not entirely.\n> \n> If we pass the dots through to the POSIX regular expression, we can\n> only do that either for the table name or the schema name, not both -\n\nAgreed.\n\n> either the first or last dot must mark the boundary between the two.\n> That means that you can't use all the same regexy things for one as\n> you can for the other, which is a strange system.\n\nThe closest analogy is how regular expressions consider \\1 \\2 .. \\9 as backreferences, but \\10 \\11 ... are dependent on context: \"A multi-digit sequence not starting with a zero is taken as a back reference if it comes after a suitable subexpression (i.e., the number is in the legal range for a back reference), and otherwise is taken as octal.\" Taking a dot as a separator if it can be taken that way, and as a regex character otherwise, is not totally out of line with existing precedent. On the other hand, the backreference vs. 
octal precedent is not one I particularly like.\n\n> I knew that your\n> patch made it do that, and I committed it that way because I didn't\n> think it really mattered, and also because the whole system is already\n> pretty strange, so what's one more bit of strangeness?\n> \n> I think there are at least 3 defensible behaviors here:\n> \n> 1. Leave it like it is. If there is more than one dot, the extra ones\n> are part of one of the regex-glob thingies.\n> \n> 2. If there is more than one dot, error! Tell the user they messed up.\n\nI don't like the backward compatibility issues with this one. Justin's use of database.schema.relname will work up until v14 (by throwing away the database part), then draw an error in v14, then (assuming we support the database portion in v15 onward) start working again.\n\n> 3. If there are exactly two dots, treat it as db-schema-table. Accept\n> it if the dbname matches the current db, and otherwise say we can't\n> access the named db. If there are more than two dots, then (a) it's an\n> error as in (2) or (b) the extra ones become part of the regex-glob\n> thingies as in (1).\n\n3a is a bit strange, when considered in the context of patterns. If db1, db2, and db3 all exist and each have a table foo.bar, and psql is connected to db1, how should the command \\d db?.foo.bar behave? We have no problem with db1.foo.bar, but we do have problems with the other two. If the answer is to complain about the databases that are unconnected, consider what happens if the user writes this in a script when only db1 exists, and later the script stops working because somebody created database db2. Maybe that's not completely horrible, but surely it is less than ideal.\n\n3b is what pg_amcheck does.
It accepts database.schema.relname, and it will complain if no matching database/schema/relation can be found (unless --no-strict-names was given.)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 07:40:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 10:40 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> 3a is a bit strange, when considered in the context of patterns. If db1, db2, and db3 all exist and each have a table foo.bar, and psql is connected to db1, how should the command \\d db?.foo.bar behave? We have no problem with db1.foo.bar, but we do have problems with the other two. If the answer is to complain about the databases that are unconnected, consider what happens if the user writes this in a script when only db1 exists, and later the script stops working because somebody created database db2. Maybe that's not completely horrible, but surely it is less than ideal.\n>\n> 3b is what pg_amcheck does. It accepts database.schema.relname, and it will complain if no matching database/schema/relation can be found (unless --no-strict-names was given.)\n\nWell, like I said, we can't treat a part that's purportedly a DB name\nas a pattern, so when connected to db1, I presume the command \\d\ndb?.foo.bar would have to behave just like \\d\ndskjlglsghdksgdjkshg.foo.bar. I suppose technically I'm wrong: db?\ncould be matched against the list of database names as a pattern, and\nthen we could complain only if it doesn't match exactly and only the\ncurrent DB. 
But I don't like adding a bunch of extra code to\naccomplish nothing useful, so if we're going to match it at all I think\nit should just strcmp().\n\nBut I'm still not sure what the best thing to do overall is here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:43:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Oct 12, 2021 at 12:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I would prefer if it errored if the datname didn't match the current database.\n> After all, it would've helped me to avoid making a confusing problem report.\n\nHow would you have felt if it had said something like:\n\nerror: argument to \\d should be of the form\n[schema-name-pattern.]relation-name-pattern\n\nWould that have been better or worse for you than accepting a third\npart of the pattern as a database name if and only if it matched the\ncurrent database name exactly?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:46:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 12:46:27PM -0400, Robert Haas wrote:\n> On Tue, Oct 12, 2021 at 12:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I would prefer if it errored if the datname didn't match the current database.\n> > After all, it would've helped me to avoid making a confusing problem report.\n> \n> How would you have felt if it had said something like:\n> \n> error: argument to \\d should be of the form\n> [schema-name-pattern.]relation-name-pattern\n> \n> Would that have been better or worse for you than accepting a third\n> part of the pattern as a database name if and only if it matched the\n> current database name exactly?\n\nI don't normally type \\d a.b.c.
I think I copied it out of a log message and\npasted it, and didn't even really know or expect it to work without removing\nthe datname prefix. After it worked, I noticed a short while later when using\nthe pg14 client that it had stopped working.\n\nIt seems unfortunate if names from log messages qualified with datname were now\nrejected. Like this one:\n\n| automatic analyze of table \"ts.child.cdrs_2021_10_12\"...\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:54:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 12:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It seems unfortunate if names from log messages qualified with datname were now\n> rejected. Like this one:\n>\n> | automatic analyze of table \"ts.child.cdrs_2021_10_12\"...\n\nThat's a good argument, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 13:05:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Oct 13, 2021 at 12:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > It seems unfortunate if names from log messages qualified with datname were now\n> > rejected. 
Like this one:\n> >\n> > | automatic analyze of table \"ts.child.cdrs_2021_10_12\"...\n> \n> That's a good argument, IMHO.\n\nAgreed.\n\nThanks,\n\nStephen", "msg_date": "Wed, 13 Oct 2021 14:55:23 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Oct 13, 2021, at 8:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Oct 13, 2021 at 10:40 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> 3a is a bit strange, when considered in the context of patterns. If db1, db2, and db3 all exist and each have a table foo.bar, and psql is connected to db1, how should the command \\d db?.foo.bar behave? We have no problem with db1.foo.bar, but we do have problems with the other two. If the answer is to complain about the databases that are unconnected, consider what happens if the user writes this in a script when only db1 exists, and later the script stops working because somebody created database db2. Maybe that's not completely horrible, but surely it is less than ideal.\n>> \n>> 3b is what pg_amcheck does. It accepts database.schema.relname, and it will complain if no matching database/schema/relation can be found (unless --no-strict-names was given.)\n> \n> Well, like I said, we can't treat a part that's purportedly a DB name\n> as a pattern, so when connected to db1, I presume the command \\d\n> db?.foo.bar would have to behave just like \\d\n> dskjlglsghdksgdjkshg.foo.bar. I suppose technically I'm wrong: db?\n> could be matched against the list of database names as a pattern, and\n> then we could complain only if it doesn't match exactly and only the\n> current DB. 
But I don't like adding a bunch of extra code to\n> accomplish nothing useful, so if we're going to match it all I think\n> it should just strcmp().\n> \n> But I'm still not sure what the best thing to do overall is here.\n\nThe issue of name parsing impacts pg_dump and pg_dumpall, also. Consider what happens with:\n\npg_dump -t production.critical.secrets > secrets.dump\ndropdb production\n\nIn v13, if your default database is \"testing\", and database \"testing\" has the same schemas and tables (but not data) as production, you are unhappy. You just dumped a copy of your test data and blew away the production data.\n\nYou could end up unhappy in v14, if database \"testing\" has a schema named \"production\" and a table that matches the pattern /^critical.secrets$/, but otherwise, you'll get an error from pg_dump, \"pg_dump: error: no matching tables were found\". Neither behavior seems correct.\n\nThe function where the processing occurs is processSQLNamePattern, which is called by pg_dump, pg_dumpall, and psql. All three callers expect processSQLNamePattern to append where-clauses to a buffer, not to execute any sql of its own. I propose that processSQLNamePattern return an error code if the pattern contains more than three parts, but otherwise insert the database portion into the buffer as a \"pg_catalog.current_database() OPERATOR(pg_catalog.=) <database>\", where <database> is a properly escaped representation of the database portion. Maybe someday we can change that to OPERATOR(pg_catalog.~), but for now we lack the sufficient logic for handling multiple matching database names. 
(The situation is different for pg_dumpall, as it's using the normal logic for matching a relation name, not for matching a database, and we'd still be fine matching that against a pattern.)\n\nFor psql and pg_dump, I'm tempted to restrict the database portion (if not quoted) to neither contain shell glob characters nor POSIX regex characters, and return an error code if any are found, so that the clients can raise an appropriate error to the user.\n\nIn psql, this proposal would result in no tables matching \\d wrongdb.schema.table, which would differ from v13's behavior. You wouldn't get an error about having specified the wrong database. You'd just get no matching relations. \\d ??db??.schema.table would complain about the db portion being a pattern. \\d \"??db??\".schema.table would work, assuming you're connected to a database literally named ??db??\n\nIn pg_dumpall, --exclude-database=more.than.one.part would give an error about too many dotted parts rather than simply trying to exclude the last \"part\" and silently ignoring the prefix, which I think is what v13's pg_dumpall would do. --exclude-database=db?? would work to exclude four character database names beginning in \"db\".\n\nIn pg_dump, the -t wrongdb.schema.table would match nothing and give the familiar error \"pg_dump: error: no matching tables were found\". pg_dump -t too.many.dotted.names would give a different error about too many parts. pg_dump -t db??.foo.bar would give an error about the database needing to be a literal name rather than a pattern.\n\nI don't like your proposal to use a strcmp() rather than a pg_catalog.= match, because it diverges from how the rest of the pattern is treated, including in how encoding settings might interact with the name, needing to be executed on the client side rather than in the server where the rest of the name resolution is happening. 
\n\nDoes this sound like a workable proposal?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 13:43:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 4:43 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The function where the processing occurs is processSQLNamePattern, which is called by pg_dump, pg_dumpall, and psql. All three callers expect processSQLNamePattern to append where-clauses to a buffer, not to execute any sql of its own. I propose that processSQLNamePattern return an error code if the pattern contains more than three parts, but otherwise insert the database portion into the buffer as a \"pg_catalog.current_database() OPERATOR(pg_catalog.=) <database>\", where <database> is a properly escaped representation of the database portion. Maybe someday we can change that to OPERATOR(pg_catalog.~), but for now we lack the sufficient logic for handling multiple matching database names. (The situation is different for pg_dumpall, as it's using the normal logic for matching a relation name, not for matching a database, and we'd still be fine matching that against a pattern.)\n\nI agree with matching using OPERATOR(pg_catalog.=) but I think it\nshould be an error, not a silently-return-nothing case.\n\n> In pg_dumpall, --exclude-database=more.than.one.part would give an error about too many dotted parts rather than simply trying to exclude the last \"part\" and silently ignoring the prefix, which I think is what v13's pg_dumpall would do. --exclude-database=db?? 
would work to exclude four character database names beginning in \"db\".\n\nThose things sound good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 08:54:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Oct 13, 2021, at 1:43 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> The issue of name parsing impacts pg_dump and pg_dumpall, also. Consider what happens with:\n> \n> pg_dump -t production.critical.secrets > secrets.dump\n> dropdb production\n> \n> In v13, if your default database is \"testing\", and database \"testing\" has the same schemas and tables (but not data) as production, you are unhappy. You just dumped a copy of your test data and blew away the production data.\n> \n> You could end up unhappy in v14, if database \"testing\" has a schema named \"production\" and a table that matches the pattern /^critical.secrets$/, but otherwise, you'll get an error from pg_dump, \"pg_dump: error: no matching tables were found\". Neither behavior seems correct.\n\nWith the attached patch, this scenario results in a \"cross-database references are not implemented\" error.\n\n> The function where the processing occurs is processSQLNamePattern, which is called by pg_dump, pg_dumpall, and psql. All three callers expect processSQLNamePattern to append where-clauses to a buffer, not to execute any sql of its own. I propose that processSQLNamePattern return an error code if the pattern contains more than three parts, but otherwise insert the database portion into the buffer as a \"pg_catalog.current_database() OPERATOR(pg_catalog.=) <database>\", where <database> is a properly escaped representation of the database portion. Maybe someday we can change that to OPERATOR(pg_catalog.~), but for now we lack the sufficient logic for handling multiple matching database names. 
(The situation is different for pg_dumpall, as it's using the normal logic for matching a relation name, not for matching a database, and we'd still be fine matching that against a pattern.)\n\nI ultimately went with your strcmp idea rather than OPERATOR(pg_catalog.=), as rejecting the database name as part of the query complicates the calling convention for no apparent benefit. I had been concerned about database names that were collation-wise equal but byte-wise unequal, but it seems we already treat those as distinct database names, so my concern was unnecessary. We already use strcmp on database names from frontend clients (fe_utils/parallel_slots.c, psql/prompt.c, pg_amcheck.c, pg_dump.c, pg_upgrade/relfilenode.c), from libpq (libpq/hba.c) and from the backend (commands/dbcommands.c, init/postinit.c). \n\nI tried testing how this plays out by handing `createdb` the name é (U+00E9 \"LATIN SMALL LETTER E WITH ACUTE\") and then again the name é (U+0065 \"LATIN SMALL LETTER E\" followed by U+0301 \"COMBINING ACUTE ACCENT\".) That results in two distinct databases, not an error about a duplicate database name:\n\n# select oid, datname, datdba, encoding, datcollate, datctype from pg_catalog.pg_database where datname IN ('é', 'é');\n oid | datname | datdba | encoding | datcollate | datctype \n-------+---------+--------+----------+-------------+-------------\n 37852 | é | 10 | 6 | en_US.UTF-8 | en_US.UTF-8\n 37855 | é | 10 | 6 | en_US.UTF-8 | en_US.UTF-8\n(2 rows)\n\nBut that doesn't seem to prove much, as other tools in my locale don't treat those as equal either. (Testing with perl's \"eq\" operator, they compare as distinct.) I expected to find regression tests providing better coverage for this somewhere, but did not. 
Anybody know more about it?\n\n> For psql and pg_dump, I'm tempted to restrict the database portion (if not quoted) to neither contain shell glob characters nor POSIX regex characters, and return an error code if any are found, so that the clients can raise an appropriate error to the user.\n\nWith the patch, using pattern characters in an unquoted database portion results in a \"database name must be literal\" error. Using them in a quoted database name is allowed, but unless you are connected to a database that literally equals that name, you will get a \"cross-database references are not implemented\" error.\n\n> In psql, this proposal would result in no tables matching \\d wrongdb.schema.table, which would differ from v13's behavior. You wouldn't get an error about having specified the wrong database. You'd just get no matching relations. \\d ??db??.schema.table would complain about the db portion being a pattern. \\d \"??db??\".schema.table would work, assuming you're connected to a database literally named ??db??\n\nWith the patch, psql will treat \\d wrongdb.schema.table as a \"cross-database references are not implemented\" error.\n\n> In pg_dumpall, --exclude-database=more.than.one.part would give an error about too many dotted parts rather than simply trying to exclude the last \"part\" and silently ignoring the prefix, which I think is what v13's pg_dumpall would do. --exclude-database=db?? 
would work to exclude four character database names beginning in \"db\".\n\nThe patch implements this.\n\n> In pg_dump, the -t wrongdb.schema.table would match nothing and give the familiar error \"pg_dump: error: no matching tables were found\".\n\nWith the patch, pg_dump instead gives a \"cross-database references are not implemented\" error.\n\n> pg_dump -t too.many.dotted.names would give a different error about too many parts.\n\nWith the patch, pg_dump instead gives a \"improper qualified name (too many dotted names)\" error.\n\n> pg_dump -t db??.foo.bar would give an error about the database needing to be a literal name rather than a pattern.\n\nWith the patch, pg_dump gives a \"database name must be literal\" error. This is the only new error message in the patch, which puts a burden on translators, but I didn't see any existing message that would serve. Suggestions welcome.\n\n> I don't like your proposal to use a strcmp() rather than a pg_catalog.= match, because it diverges from how the rest of the pattern is treated, including in how encoding settings might interact with the name, needing to be executed on the client side rather than in the server where the rest of the name resolution is happening. \n\nRecanted, as discussed above.\n\n\nThe patch only changes the behavior of pg_amcheck in that it now rejects patterns with too many parts. Using database patterns was and remains legal for this tool.\n\nThe patch changes nothing about reindexdb. That's a debatable design choice, but reindexdb doesn't use string_utils's processSQLNamePattern() function as the other tools do, nor does its documentation reference psql's #APP-PSQL-PATTERNS documentation. 
Its --schema option only takes literal names.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Oct 2021 07:15:25 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> [ v1-0001-Reject-patterns-with-too-many-parts-or-wrong-db.patch ]\n\nThis needs a rebase over the recent renaming of our Perl test modules.\n(Per the cfbot, so do several of your other pending patches.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Nov 2021 15:07:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Nov 3, 2021, at 12:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> [ v1-0001-Reject-patterns-with-too-many-parts-or-wrong-db.patch ]\n> \n> This needs a rebase over the recent renaming of our Perl test modules.\n> (Per the cfbot, so do several of your other pending patches.)\n> \n> \t\t\tregards, tom lane\n\nThanks for calling my attention to it.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 3 Nov 2021 14:52:12 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 09:24:53AM -0400, Robert Haas wrote:\n> > Splitting the pattern on all the dots and throwing away any additional\n> > leftmost fields is a bug, ...\n> \n> I also agree with you right up to here.\n> \n> > and when you stop doing that, passing additional dots through to the POSIX\n> > regular expression for processing is the most natural thing to do. 
This\n> > is, in fact, how v14 works. It is a bit debatable whether treating the\n> > first dot as a separator and the additional dots as stuff to be passed\n> > through is the right thing, so we could call the v14 behavior a\n> > mis-feature, but it's not as clearcut as the discussion upthread suggested.\n> > Reverting to v13 behavior seems wrong, but I'm now uncertain how to\n> > proceed.\n> \n> But not this part, or at least not entirely.\n> \n> If we pass the dots through to the POSIX regular expression, we can\n> only do that either for the table name or the schema name, not both -\n> either the first or last dot must mark the boundary between the two.\n> That means that you can't use all the same regexy things for one as\n> you can for the other, which is a strange system. I knew that your\n> patch made it do that, and I committed it that way because I didn't\n> think it really mattered, and also because the whole system is already\n> pretty strange, so what's one more bit of strangeness?\n\nRather than trying to guess at the meaning of each '.' based on the total\nstring. I wonder, if we could for v15 require '.' to be spelled in longer way\nif it needs to be treated as part of the regex.\n\nPerhaps requiring something like '(.)' be used rather than a bare '.' \nmight be good enough and documenting otherwise it's really a separator? \nI suppose we could also invent a non-standard class as a stand in like\n'[::any::]', but that seems kinda weird.\n\nI think it might be possible to give better error messages long term\nif we knew what '.' 
should mean without looking at the whole thing.\n\nGarick\n\n", "msg_date": "Thu, 4 Nov 2021 13:37:08 +0000", "msg_from": "\"Hamlin, Garick L\" <ghamlin@isc.upenn.edu>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Nov 4, 2021, at 6:37 AM, Hamlin, Garick L <ghamlin@isc.upenn.edu> wrote:\n> \n>> If we pass the dots through to the POSIX regular expression, we can\n>> only do that either for the table name or the schema name, not both -\n>> either the first or last dot must mark the boundary between the two.\n>> That means that you can't use all the same regexy things for one as\n>> you can for the other, which is a strange system. I knew that your\n>> patch made it do that, and I committed it that way because I didn't\n>> think it really mattered, and also because the whole system is already\n>> pretty strange, so what's one more bit of strangeness?\n> \n> Rather than trying to guess at the meaning of each '.' based on the total\n> string. I wonder, if we could for v15 require '.' to be spelled in longer way\n> if it needs to be treated as part of the regex.\n\nWe're trying to fix an edge case, not change how the basic case works. Most users are accustomed to using patterns from within psql like:\n\n \\dt myschema.mytable\n\nWhatever patch we accept must not break these totally normal and currently working cases.\n\n> Perhaps requiring something like '(.)' be used rather than a bare '.' \n> might be good enough and documenting otherwise it's really a separator? \n> I suppose we could also invent a non-standard class as a stand in like\n> '[::any::]', but that seems kinda weird.\n\nIf I understand you, that would require the above example to be written as:\n\n \\dt myschema(.)mytable\n\nwhich nobody expects to have to do, and which would be a very significant breaking change in v15. 
I can't see anything like that being accepted.\n\n> I think it might be possible to give better error messages long term\n> if we knew what '.' should mean without looking at the whole thing.\n\nYou quote a portion of an email from Robert. After that email, there were several more, and a new patch. The commit message of the new patch explains what it does. I wonder if you'd review that message, quoted here, or even better, review the entire patch. Does this seem like an ok fix to you?\n\nSubject: [PATCH v2] Reject patterns with too many parts or wrong db\n\nObject name patterns used by pg_dump and psql potentially contain\nmultiple parts (dotted names), and nothing prevents users from\nspecifying a name with too many parts, nor specifying a\ndatabase-qualified name for a database other than the currently\nconnected database. Prior to PostgreSQL version 14, pg_dump,\npg_dumpall and psql quietly discarded extra parts of the name on the\nleft. For example, `pg_dump -t` only expected a possibly schema\nqualified table name, not a database name, and the following command\n\n pg_dump -t production.marketing.customers\n\nquietly ignored the \"production\" database name with neither warning\nnor error. Commit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868 changed\nthe behavior of name parsing. 
Where names contain more than the\nmaximum expected number of dots, the extra dots on the right were\ninterpreted as part of the name, such that the above example was\ninterpreted as schema=production, relation=marketing.customers.\nThis turns out to be highly unintuitive to users.\n\nWe've had reports that users sometimes copy-and-paste database- and\nschema-qualified relation names from the logs.\nhttps://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com\n\nThere is no support for cross database references, but allowing a\ndatabase qualified pattern when the database portion matches the\ncurrent database, as in the above report, seems more friendly than\nrejecting it, so do that. We don't allow the database portion\nitself to be a pattern, because if it matched more than one database\n(including the current one), there would be confusion about which\ndatabase(s) were processed.\n\nConsistent with how we allow db.schemapat.relpat in pg_dump and psql,\nalso allow db.schemapat for specifying schemas, as:\n\n \\dn mydb.myschema\n\nin psql and\n\n pg_dump --schema=mydb.myschema\n\nFix the pre-v14 behavior of ignoring leading portions of patterns\ncontaining too many dotted names, and the v14.0 misfeature of\ncombining trailing portions of such patterns, and instead reject\nsuch patterns in all cases by raising an error.\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 09:08:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On 2021-Oct-20, Mark Dilger wrote:\n\n> I tried testing how this plays out by handing `createdb` the name é\n> (U+00E9 \"LATIN SMALL LETTER E WITH ACUTE\") and then again the name é\n> (U+0065 \"LATIN SMALL LETTER E\" followed by U+0301 \"COMBINING ACUTE\n> ACCENT\".) 
That results in two distinct databases, not an error about\n> a duplicate database name:\n> \n> # select oid, datname, datdba, encoding, datcollate, datctype from pg_catalog.pg_database where datname IN ('é', 'é');\n> oid | datname | datdba | encoding | datcollate | datctype \n> -------+---------+--------+----------+-------------+-------------\n> 37852 | é | 10 | 6 | en_US.UTF-8 | en_US.UTF-8\n> 37855 | é | 10 | 6 | en_US.UTF-8 | en_US.UTF-8\n> (2 rows)\n> \n> But that doesn't seem to prove much, as other tools in my locale don't\n> treat those as equal either. (Testing with perl's \"eq\" operator, they\n> compare as distinct.) I expected to find regression tests providing\n> better coverage for this somewhere, but did not. Anybody know more\n> about it?\n\nI think it would be appropriate to normalize identifiers that are going to\nbe stored in catalogs. As presented, this is a bit ridiculous and I see\nno reason to continue to support it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:33:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I think it would be appropriate to normalize identifiers that are going to\n> be stored in catalogs. As presented, this is a bit ridiculous and I see\n> no reason to continue to support it.\n\nIf we had any sort of convention about the encoding of identifiers stored\nin shared catalogs, maybe we could do something about that. But we don't,\nso any change is inevitably going to break someone's use-case.\n\nIn any case, that seems quite orthogonal to the question of how to treat\nnames with too many dots in them. 
Considering we are three days out from\nfreezing 14.1, I think it is time to stop the meandering discussion and\nfix it. And by \"fix\", I mean revert to the pre-14 behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Nov 2021 09:59:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Fri, Nov 5, 2021 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case, that seems quite orthogonal to the question of how to treat\n> names with too many dots in them. Considering we are three days out from\n> freezing 14.1, I think it is time to stop the meandering discussion and\n> fix it. And by \"fix\", I mean revert to the pre-14 behavior.\n\nI do not think that there is consensus on that proposal.\n\nAnd FWIW, I still oppose it. It's debatable whether this even\nqualifies as a bug in the first place, and even more debatable whether\naccepting and ignoring arbitrary garbage is the right solution.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:37:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Nov 5, 2021, at 6:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I think it would be appropriate to normalize identifiers that are going to\n>> be stored in catalogs. As presented, this is a bit ridiculous and I see\n>> no reason to continue to support it.\n> \n> If we had any sort of convention about the encoding of identifiers stored\n> in shared catalogs, maybe we could do something about that. 
But we don't,\n> so any change is inevitably going to break someone's use-case.\n\nI only started the discussion about normalization to demonstrate that existing behavior does not require it.\n\n> In any case, that seems quite orthogonal to the question of how to treat\n> names with too many dots in them.\n\nAgreed.\n\n> Considering we are three days out from\n> freezing 14.1, I think it is time to stop the meandering discussion and\n> fix it.\n\nAgreed.\n\n> And by \"fix\", I mean revert to the pre-14 behavior.\n\nThat's one solution. The patch I posted on October 20, and rebased two days ago, has not received any negative feedback. If you want to revert to pre-14 behavior for 14.1, do you oppose the patch going in for v15? (I'm not taking a position here, just asking what you'd prefer.)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 5 Nov 2021 07:58:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Rebased patch attached:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Dec 2021 10:58:39 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Hi,\n\nOn Tue, Dec 21, 2021 at 10:58:39AM -0800, Mark Dilger wrote:\n> \n> Rebased patch attached:\n\nThis version doesn't apply anymore:\nhttp://cfbot.cputube.org/patch_36_3367.log\n=== Applying patches on top of PostgreSQL commit ID 5513dc6a304d8bda114004a3b906cc6fde5d6274 ===\n=== applying patch ./v3-0001-Reject-patterns-with-too-many-parts-or-wrong-db.patch\n[...]\n1 out of 52 hunks FAILED -- saving rejects to file src/bin/psql/describe.c.rej\n\nCould you send a rebased version? 
In the meantime I will switch the cf entry\nto Waiting on Author.\n\n\n", "msg_date": "Sat, 15 Jan 2022 16:28:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Jan 15, 2022, at 12:28 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> Could you send a rebased version?\n\nYes. Here it is:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 17 Jan 2022 10:06:22 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, Jan 17, 2022 at 1:06 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 15, 2022, at 12:28 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Could you send a rebased version?\n> Yes. Here it is:\n\nThis is not a full review, but I just noticed that:\n\n+ * dotcnt: how many separators were parsed from the pattern, by reference.\n+ * Can be NULL.\n\nBut then:\n\n+ Assert(dotcnt != NULL);\n\nOn a related note, it's unclear why you've added three new arguments\nto processSQLNamePattern() but only one of them gets a mention in the\nfunction header comment.\n\nIt's also pretty clear that the behavior of patternToSQLRegex() is\nchanging, but the function header comments are not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 16:54:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Jan 17, 2022, at 1:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> + * dotcnt: how many separators were parsed from the pattern, by reference.\n> + * Can be NULL.\n> \n> But then:\n> \n> + Assert(dotcnt != NULL);\n\nRemoved the \"Can be NULL\" part, as that use case 
doesn't make sense. The caller should always care whether the number of dots was greater than they are prepared to handle.\n\n> On a related note, it's unclear why you've added three new arguments\n> to processSQLNamePattern() but only one of them gets a mention in the\n> function header comment.\n\nUpdated the header comments to include all parameters.\n\n> It's also pretty clear that the behavior of patternToSQLRegex() is\n> changing, but the function header comments are not.\n\nUpdated the header comments for this, too.\n\nAlso, rebased as necessary:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Jan 2022 09:04:15 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Continuing my pass through the \"bug fixes\" section of the CommitFest,\nI came upon this patch, which is contested. Here is my attempt to\nsummarize where things stand. 
As I understand it:\n\n- Tom wants to revert\nto the previous behavior of accepting arbitrary\ngarbage, so that \\d slkgjskld.jgdsjhgjklsdhg.saklasgh.foo.bar means \\d\nfoo.bar.\n- I want \\d mydb.foo.bar to mean \\d foo.bar if the dbname is mydb and\nreport an error otherwise; anything with dots>2 is also an error in my\nview.\n- Peter Geoghegan agrees with Tom.\n- Stephen Frost agrees with me.\n- Vik Fearing also agrees with me.\n- Justin Pryzby, who originally discovered the problem, prefers the\nsame behavior that I prefer long-term, but thinks Tom's behavior is\nbetter than doing nothing.\n- Mark Dilger, Isaac Moreland, Garick Hamlin, Alvaro Herrera, and\nJulien Rouhaud have commented on the thread but have not endorsed\neither of these dueling proposals.\n\nBy my count, that's probably a vote of 4-2 in favor of the preferred\nsolution, but it depends on whether you count Justin's vote as +1 for\nmy preferred solution or maybe +0.75 or +0.50 or something. At any\nrate, it's close.\n\nIf anyone else would like to take a position, please do so in the next\nfew days. 
If there are no more votes, I'm going to proceed with trying\nto fix up Mark's patch implementing my preferred solution and getting\nit committed.\n\nThanks,\n\n...Robert\n\n\n", "msg_date": "Tue, 15 Mar 2022 15:27:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Mar 15, 2022, at 12:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> - Justin Pryzby, who originally discovered the problem, prefers the\n> same behavior that I prefer long-term, but thinks Tom's behavior is\n> better than doing nothing.\n> - Mark Dilger, Isaac Moreland, Garick Hamlin, Alvaro Herrera, and\n> Julien Rouhaud have commented on the thread but have not endorsed\n> either of these dueling proposals.\n\nI vote in favor of committing the patch, though I'd also say it's not super important to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 15 Mar 2022 12:30:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Oct 13, 2021 at 11:54:26AM -0500, Justin Pryzby wrote:\n> It seems unfortunate if names from log messages qualified with datname were now\n> rejected. 
Like this one:\n> \n> | automatic analyze of table \"ts.child.cdrs_2021_10_12\"...\n\nMark mentioned this \"log message\" use case in his proposed commit message, but\nI wanted to mention what seems like a more important parallel:\n\npostgres=# SELECT 'postgres.public.postgres_log'::regclass;\nregclass | postgres_log\n\npostgres=# SELECT 'not.postgres.public.postgres_log'::regclass;\nERROR: improper relation name (too many dotted names): not.postgres.public.postgres_log\n ^\npostgres=# SELECT 'not.public.postgres_log'::regclass;\nERROR: cross-database references are not implemented: \"not.public.postgres_log\"\n\nI think Mark used this as the model behavior for \\d for this patch, which\nsounds right. Since the \"two dot\" case wasn't fixed in 14.1 nor 2, it seems\nbetter to implement the ultimate, intended behavior now, rather than trying to\nexactly match what old versions did. I'm of the understanding that's what\nMark's patch does, so +1 from me.\n\nI don't know how someone upgrading from an old version would know about the\nchange, though (rejecting junk prefixes rather than ignoring them). 
*If* it\nwere important, it seems like it'd need to be added to the 14.0 release notes.\n\n\n", "msg_date": "Tue, 15 Mar 2022 15:01:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Mar 15, 2022 at 12:31 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n> > On Mar 15, 2022, at 12:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > - Justin Pryzby, who originally discovered the problem, prefers the\n> > same behavior that I prefer long-term, but thinks Tom's behavior is\n> > better than doing nothing.\n> > - Mark Dilger, Isaac Moreland, Garick Hamlin, Alvaro Herrera, and\n> > Julien Rouhaud have commented on the thread but have not endorsed\n> > either of these dueling proposals.\n>\n> I vote in favor of committing the patch, though I'd also say it's not\n> super important to me.\n>\n>\nI'm on board with leaving the v14 change in place - fixing the bug so that\na matching database name is accepted (the whole copy-from-logs argument is\nquite compelling). 
I'm not too concerned about psql, since \\d is mainly\nused interactively, and since the change will result in errors in\npg_dump/pg_restore the usual due diligence for upgrading should handle the\nnecessary tweaks should the case arise where bogus/ignore stuff is present.\n\nDavid J.", "msg_date": "Tue, 15 Mar 2022 13:03:48 -0700", "msg_from": "\"David G. 
If something is a literal, that means we're not\ngoing to interpret the special characters that it contains. Here, we\nare interpreting the special characters just so we can complain that\nthey exist. It seems to me that a simpler solution would be to not\ninterpret them at all. I attach a patch showing what I mean by that.\nIt just rips out the dbname_is_literal stuff in favor of doing nothing\nat all. To put the whole thing another way, if the user types \"\\d\n}.public.ft\", your code wants to complain about the fact that the user\nis trying to use regular expression characters in a place where they\nare not allowed to do that. I argue that we should instead just be\ncomparing \"}\" against the database name and see whether it happens to\nmatch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 14:04:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Mar 22, 2022, at 11:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> This patch adds three new arguments to processSQLNamePattern() and\n> documents one of them. It adds three new parameters to\n> patternToSQLRegex() as well, and documents none of them.\n\nThis next patch adds the missing comments.\n\n> I think that\n> the text of the comment might need some updating too, in particular\n> the sentence \"Additional dots in the name portion are not treated as\n> special.\"\n\nChanged. \n\n> There are no comments explaining the left_is_literal stuff. It appears\n> that your intention here is that if the pattern string supplied by the\n> user contains any of *?|+()[]{}.^\\ not surrounded by double-quotes, we\n> signal the caller. Some callers then use this to issue a complaint\n> that the database name must be a literal. To me, this behavior doesn't\n> really make sense. 
If something is a literal, that means we're not\n> going to interpret the special characters that it contains. Here, we\n> are interpreting the special characters just so we can complain that\n> they exist. It seems to me that a simpler solution would be to not\n> interpret them at all. I attach a patch showing what I mean by that.\n> It just rips out the dbname_is_literal stuff in favor of doing nothing\n> at all. To put the whole thing another way, if the user types \"\\d\n> }.public.ft\", your code wants to complain about the fact that the user\n> is trying to use regular expression characters in a place where they\n> are not allowed to do that. I argue that we should instead just be\n> comparing \"}\" against the database name and see whether it happens to\n> match.\n\nI think your change is fine, so I've rolled it into this next patch.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 25 Mar 2022 12:42:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:42 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I think your change is fine, so I've rolled it into this next patch.\n\nOK, cool. Here are some more comments.\n\nIn describe.c, why are the various describeWhatever functions\nreturning true when validateSQLNamePattern returns false? It seems to\nme that they should return false. That would cause exec_command_d() to\nset status = PSQL_CMD_ERROR, which seems appropriate. I wondered\nwhether we should return PSQL_CMD_ERROR only for database errors, but\nthat doesn't seem to be the case. 
For example, exec_command_a() sets\nPSQL_CMD_ERROR for a failure in do_pset().\n\npg_dump's prohibit_crossdb_refs() has a special case for you are not\nconnected to a database, but psql's validateSQLNamePattern() treats it\nas an invalid cross-database reference. Maybe that should be\nconsistent, or just the other way around. After all, I would expect\npg_dump to just bail out if we lose the database connection, but psql\nmay continue, because we can reconnect. Putting more code into the\ntool where reconnecting doesn't really make sense seems odd.\n\nprocessSQLNamePattern() documents that dotcnt can be NULL, and then\nasserts that it isn't.\n\nprocessSQLNamePattern() introduces new local variables schema and\nname, which account for most of the notational churn in that function.\nI can't see a reason why those changes are needed. You do test whether\nthe new variables are NULL in a couple of places, but you could\nequally well test schemavar/namevar/altnamevar directly. Actually, I\ndon't really understand why this function needs any changes other than\npassing dbnamebuf and dotcnt through to patternToSQLRegex(). Is there\na reason?\n\npatternToSQLRegex() restructures the system of buffers as well, and I\ndon't understand the purpose of that either. It sort of looks like the\nidea might be to relax the rule against dbname.relname patterns, but\nwhy would we want to do that? If we don't want to do that, why remove\nthe assertion?\n\nIt is not very nice that patternToSQLRegex() ends up repeating the\nlocution \"if (left && want_literal_dbname)\nappendPQExpBufferChar(&left_literal, '\"')\" a whole bunch of times.\nSuppose we remove all that. Then, in the if (!inquotes && ch == '.')\ncase, if left = true, we copy \"cp - pattern\" bytes starting at\n\"pattern\" into the buffer. 
Wouldn't that accomplish the same thing\nwith less code?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 11:20:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Mar 29, 2022, at 8:20 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> In describe.c, why are the various describeWhatever functions\n> returning true when validateSQLNamePattern returns false? It seems to\n> me that they should return false. That would cause exec_command_d() to\n> set status = PSQL_CMD_ERROR, which seems appropriate. I wondered\n> whether we should return PSQL_CMD_ERROR only for database errors, but\n> that doesn't seem to be the case. For example, exec_command_a() sets\n> PSQL_CMD_ERROR for a failure in do_pset().\n\nYes, I believe you are right. For scripting, the following should echo, but doesn't under the version 7 patch. Fixed in version 8.\n\n% psql -c \"\\d a.b.c.d\" || echo 'error'\nimproper qualified name (too many dotted names): a.b.c.d\n\n> pg_dump's prohibit_crossdb_refs() has a special case for you are not\n> connected to a database, but psql's validateSQLNamePattern() treats it\n> as an invalid cross-database reference. Maybe that should be\n> consistent, or just the other way around. After all, I would expect\n> pg_dump to just bail out if we lose the database connection, but psql\n> may continue, because we can reconnect. Putting more code into the\n> tool where reconnecting doesn't really make sense seems odd.\n\nFixed psql in version 8 to issue the appropriate error message, either \"You are currently not connected to a database.\" or \"cross-database references are not implemented: %s\". That matches the output for pg_dump.\n\n> processSQLNamePattern() documents that dotcnt can be NULL, and then\n> asserts that it isn't.\n\nThat's ugly. 
Fixed the documentation in version 8.\n\n> processSQLNamePattern() introduces new local variables schema and\n> name, which account for most of the notational churn in that function.\n> I can't see a reason why those changes are needed. You do test whether\n> the new variables are NULL in a couple of places, but you could\n> equally well test schemavar/namevar/altnamevar directly. Actually, I\n> don't really understand why this function needs any changes other than\n> passing dbnamebuf and dotcnt through to patternToSQLRegex(). Is there\n> a reason?\n\nIt looks like overeager optimization to me, to avoid passing buffers to patternToSQLRegex that aren't really wanted, consequently asking that function to parse things that the caller doesn't care about. But I don't think the optimization is worth the git history churn. Removed in version 8.\n\n> patternToSQLRegex() restructures the system of buffers as well, and I\n> don't understand the purpose of that either. It sort of looks like the\n> idea might be to relax the rule against dbname.relname patterns, but\n> why would we want to do that? If we don't want to do that, why remove\n> the assertion?\n\nThis took a while to answer.\n\nI don't remember exactly what I was trying to do here, but it looks like I wanted callers who only want a (possibly database-qualified) schema name to pass that in the (dbnamebuf and) schemabuf, rather than using the (schemabuf and ) namebuf. I obviously didn't finish that conversion, because the clients never got the message. What remained was some rearrangement in patternToSQLRegex which worked but served no purpose.\n\nI've reverted the useless refactoring.\n\n> It is not very nice that patternToSQLRegex() ends up repeating the\n> locution \"if (left && want_literal_dbname)\n> appendPQExpBufferChar(&left_literal, '\"')\" a whole bunch of times.\n> Suppose we remove all that. 
Then, in the if (!inquotes && ch == '.')\n> case, if left = true, we copy \"cp - pattern\" bytes starting at\n> \"pattern\" into the buffer. Wouldn't that accomplish the same thing\n> with less code?\n\nWe don't *quite* want the literal left string. If it is quoted, we still want the quotes removed. For example:\n\n \\d \"robert.haas\".accounts.acme\n\nneeds to return robert.haas (without the quotes) as the database name. Likewise, for embedded quotes:\n\n \\d \"robert\"\"haas\".accounts.acme\n\nneeds to return robert\"haas, and so forth.\n\nI was able to clean up the \"if (left && want_literal_dbname)\" stuff, though.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 6 Apr 2022 09:07:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Apr 6, 2022 at 12:07 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I was able to clean up the \"if (left && want_literal_dbname)\" stuff, though.\n\nI still have a vague feeling that there's probably some way of doing\nthis better, but had more or less resolved to commit this patch as is\nanyway and had that all queued up. But then I had to go to a meeting\nand when I came out I discovered that Tom had done this:\n\n--- a/src/fe_utils/string_utils.c\n+++ b/src/fe_utils/string_utils.c\n@@ -918,8 +918,12 @@ processSQLNamePattern(PGconn *conn, PQExpBuffer\nbuf, const char *pattern,\n * Convert shell-style 'pattern' into the regular expression(s) we want to\n * execute. Quoting/escaping into SQL literal format will be done below\n * using appendStringLiteralConn().\n+ *\n+ * If the caller provided a schemavar, we want to split the pattern on\n+ * \".\", otherwise not.\n */\n- patternToSQLRegex(PQclientEncoding(conn), NULL, &schemabuf, &namebuf,\n+ patternToSQLRegex(PQclientEncoding(conn), NULL,\n+ (schemavar ? 
&schemabuf : NULL), &namebuf,\n pattern, force_escape);\n\n /*\n\nI don't know whether that's a bug fix for the existing code or some\nnew bit of functionality that \\dconfig requires and nothing else\nneeds. A related point that I had noticed during review is that these\nexisting tests look pretty bogus:\n\n if (namebuf.len > 2)\n\n if (schemabuf.len > 2)\n\nIn the v13 code, these tests occur at a point where we've definitely\nadded ^( to the buffer, but possibly nothing else. But starting in v14\nthat's no longer the case. So probably this test should be changed\nsomehow. The proposed patch changes these to something like this:\n\n+ if (schemavar && schemabuf.len > 2)\n\nBut that doesn't really seem like it's fixing the problem I'm talking about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Apr 2022 18:18:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I still have a vague feeling that there's probably some way of doing\n> this better, but had more or less resolved to commit this patch as is\n> anyway and had that all queued up. But then I had to go to a meeting\n> and when I came out I discovered that Tom had done this:\n\nSorry, it didn't occur to me that that would impinge on what you\nwere doing over here ... though in retrospect I should have thought\nof it.\n\n> I don't know whether that's a bug fix for the existing code or some\n> new bit of functionality that \\dconfig requires and nothing else\n> needs.\n\nWell, \\dconfig needs it because it would like foo.bar to get processed\nas just a name. 
But I think it's a bug fix because as things stood,\nif the caller doesn't provide a schemavar and the pattern contains a\ndot, the code just silently throws away the dot and all to the left.\nThat doesn't seem very sane, even if it is a longstanding behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 18:37:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "\n\n> On Apr 7, 2022, at 3:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> \n>> I don't know whether that's a bug fix for the existing code or some\n>> new bit of functionality that \\dconfig requires and nothing else\n>> needs.\n> \n> Well, \\dconfig needs it because it would like foo.bar to get processed\n> as just a name. But I think it's a bug fix because as things stood,\n> if the caller doesn't provide a schemavar and the pattern contains a\n> dot, the code just silently throws away the dot and all to the left.\n> That doesn't seem very sane, even if it is a longstanding behavior.\n\nThe patch submitted changes processSQLNamePattern() to return a dot count by reference. It's up to the caller to decide whether to raise an error. If you pass in no schemavar, and you get back dotcnt=2, you know it parsed it as a two part pattern, and you can pg_fatal(...) or ereport(ERROR, ...) or whatever.\n\nIt looks like I'll need to post a new version of the patch with an argument telling the function to ignore dots, but I'm not prepared to say that for sure. 
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 16:32:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The patch submitted changes processSQLNamePattern() to return a dot count by reference. It's up to the caller to decide whether to raise an error. If you pass in no schemavar, and you get back dotcnt=2, you know it parsed it as a two part pattern, and you can pg_fatal(...) or ereport(ERROR, ...) or whatever.\n\nWell, I'm not telling Robert what to do, but I wouldn't accept that\nAPI. It requires duplicative error-handling code at every call site\nand is an open invitation to omitting necessary error checks.\n\nPossibly a better idea is to add an enum argument telling the function\nwhat to do (parse the whole thing as one name regardless of dots,\nparse as two names if there's a dot, throw error if there's a dot,\netc etc as needed by existing call sites). Perhaps some of the\nexisting arguments could be merged into such an enum, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 19:41:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 7, 2022 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > The patch submitted changes processSQLNamePattern() to return a dot count by reference. It's up to the caller to decide whether to raise an error. If you pass in no schemavar, and you get back dotcnt=2, you know it parsed it as a two part pattern, and you can pg_fatal(...) or ereport(ERROR, ...) or whatever.\n>\n> Well, I'm not telling Robert what to do, but I wouldn't accept that\n> API. 
It requires duplicative error-handling code at every call site\n> and is an open invitation to omitting necessary error checks.\n>\n> Possibly a better idea is to add an enum argument telling the function\n> what to do (parse the whole thing as one name regardless of dots,\n> parse as two names if there's a dot, throw error if there's a dot,\n> etc etc as needed by existing call sites). Perhaps some of the\n> existing arguments could be merged into such an enum, too.\n\nI hadn't considered that approach, but I don't think it works very\nwell, because front-end error handling is so inconsistent. From the\npatch:\n\n+ pg_log_error(\"improper relation name (too many dotted\nnames): %s\", pattern);\n+ exit(2);\n\n+ fatal(\"improper qualified name (too many\ndotted names): %s\",\n+ cell->val);\n\n+ pg_log_error(\"improper qualified name (too\nmany dotted names): %s\",\n+ cell->val);\n+ PQfinish(conn);\n+ exit_nicely(1);\n\n+ pg_log_error(\"improper qualified name (too many dotted\nnames): %s\",\n+ pattern);\n+ termPQExpBuffer(&dbbuf);\n+ return false;\n\nCome to think of it, maybe the error text there could stand some\nbikeshedding, but AFAICS there's not much to be done about the fact\nthat one caller wants pg_log_error + exit(2), another wants fatal(), a\nthird PQfinish(conn) and exit_nicely(), and the last termPQExpBuffer()\nand return false.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Apr 2022 22:26:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 07, 2022 at 10:26:18PM -0400, Robert Haas wrote:\n> + pg_log_error(\"improper relation name (too many dotted names): %s\", pattern);\n> \n> Come to think of it, maybe the error text there could stand some\n> bikeshedding, but AFAICS\n\nAFAICT the error text deliberately matches this, which I mentioned seems to me\nthe strongest argument for supporting 
\\d datname.nspname.relname\n\nts=# SELECT 'a.a.a.a'::regclass;\nERROR: 42601: improper relation name (too many dotted names): a.a.a.a\nLINE 1: SELECT 'a.a.a.a'::regclass;\n ^\nLOCATION: makeRangeVarFromNameList, namespace.c:3129\n\n\n", "msg_date": "Thu, 7 Apr 2022 22:04:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, 7 Apr 2022 at 22:32, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 7, 2022 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Possibly a better idea is to add an enum argument telling the function\n> > what to do (parse the whole thing as one name regardless of dots,\n> > parse as two names if there's a dot, throw error if there's a dot,\n> > etc etc as needed by existing call sites). Perhaps some of the\n> > existing arguments could be merged into such an enum, too.\n>\n> AFAICS there's not much to be done about the fact\n> that one caller wants pg_log_error + exit(2), another wants fatal(), a\n> third PQfinish(conn) and exit_nicely(), and the last termPQExpBuffer()\n> and return false.\n\nThat doesn't seem to be entirely inconsistent with what Tom describes.\nInstead of \"throw an error\" the function would return an error and\npossibly some extra info which the caller would use to handle the\nerror appropriately.\n\nIt still has the nice property that the decision that it is in fact an\nerror would be made inside the parsing function based on the enum\ndeclaring what's intended. 
And it wouldn't return a possibly bogus\nparsing with information the caller might use to infer it isn't what\nwas desired (or fail to).\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 7 Apr 2022 23:39:47 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 7, 2022 at 11:40 PM Greg Stark <stark@mit.edu> wrote:\n> That doesn't seem to be entirely inconsistent with what Tom describes.\n> Instead of \"throw an error\" the function would return an error and\n> possibly some extra info which the caller would use to handle the\n> error appropriately.\n\nI don't personally see how we're going to come out ahead with that\napproach, but if you or Tom or someone else want to put something\ntogether, that's fine with me. I'm not stuck on this approach, I just\ndon't see how we come out ahead with the type of thing you're talking\nabout. I mean we could return the error text, but it's only to a\nhandful of places, so it just doesn't really seem like a win over what\nthe patch is already doing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 07:11:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Apr 8, 2022, at 4:11 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I don't personally see how we're going to come out ahead with that\n> approach, but if you or Tom or someone else want to put something\n> together, that's fine with me. I'm not stuck on this approach, I just\n> don't see how we come out ahead with the type of thing you're talking\n> about. 
I mean we could return the error text, but it's only to a\n> handful of places, so it just doesn't really seem like a win over what\n> the patch is already doing.\n\nSince there hasn't been any agreement on that point, I've just rebased the patch to apply cleanly against the current master:\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 18 Apr 2022 12:39:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Mon, Apr 18, 2022 at 3:39 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Since there hasn't been any agreement on that point, I've just rebased the patch to apply cleanly against the current master:\n\nThis looks OK to me. There may be better ways to do some of it, but\nthere's no rule against further improving the code later. Also, since\nthe issue was introduced in v14, we probably shouldn't wait forever to\ndo something about it. However, there is a procedural issue here now\nthat we are past feature freeze. I think someone could defensibly take\nany of the following positions:\n\n(A) This is a new feature. Wait for v16.\n(B) This is a bug fix. Commit it now and back-patch to v14.\n(C) This is a cleanup that is OK to put into v15 even after feature\nfreeze but since it is a behavior change we shouldn't back-patch it.\n\nI vote for (C). 
What do other people think?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Apr 2022 10:00:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Apr 19, 2022 at 7:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 18, 2022 at 3:39 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > Since there hasn't been any agreement on that point, I've just rebased\n> the patch to apply cleanly against the current master:\n>\n> This looks OK to me. There may be better ways to do some of it, but\n> there's no rule against further improving the code later. Also, since\n> the issue was introduced in v14, we probably shouldn't wait forever to\n> do something about it. However, there is a procedural issue here now\n> that we are past feature freeze. I think someone could defensibly take\n> any of the following positions:\n>\n> (A) This is a new feature. Wait for v16.\n> (B) This is a bug fix. Commit it now and back-patch to v14.\n> (C) This is a cleanup that is OK to put into v15 even after feature\n> freeze but since it is a behavior change we shouldn't back-patch it.\n>\n> I vote for (C). What do other people think?\n>\n>\nI vote for (B). The behavioral change for v14 turns working usage patterns\ninto errors where it should not have. 
It is a design bug and POLA\nviolation that should be corrected.\n\n\"\"\"\nsuch that the above example was\ninterpreted as schema=production, relation=marketing.customers.\nThis turns out to be highly unintuitive to users.\n\"\"\"\n\nMy concern here about a behavior affecting bug fix - which we allow - is\nreduced by the fact this feature is almost exclusively an interactive one.\nWhich supports not having only v14, and maybe v15, behave differently than\nv13 and v16 when it comes to using it for expected usage patterns:\n\n\"\"\"\nWe've had reports that users sometimes copy-and-paste database- and\nschema-qualified relation names from the logs.\n\"\"\"\n\nDavid J.", "msg_date": "Tue, 19 Apr 2022 07:20:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Apr 19, 2022 at 10:00:01AM -0400, Robert Haas wrote:\n> (A) This is a new feature. Wait for v16.\n> (B) This is a bug fix. Commit it now and back-patch to v14.\n> (C) This is a cleanup that is OK to put into v15 even after feature\n> freeze but since it is a behavior change we shouldn't back-patch it.\n> \n> I vote for (C). What do other people think?\n\nI thought the plan was to backpatch to v14.\n\nv14 psql had an unintentional behavior change, rejecting \\d\ndatname.nspname.relname.\n\nThis patch is meant to relax that change by allowing datname, but only if it\nmatches the name of the current database ... without returning to the v13\nbehavior, which allowed arbitrary leading junk.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 Apr 2022 09:27:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Apr 19, 2022 at 10:00:01AM -0400, Robert Haas wrote:\n>> (A) This is a new feature. Wait for v16.\n>> (B) This is a bug fix. 
Commit it now and back-patch to v14.\n>> (C) This is a cleanup that is OK to put into v15 even after feature\n>> freeze but since it is a behavior change we shouldn't back-patch it.\n>> I vote for (C). What do other people think?\n\n> I thought the plan was to backpatch to v14.\n\n> v14 psql had an unintentional behavior change, rejecting \\d\n> datname.nspname.relname.\n\nI agree that the v14 behavior is a bug, so ordinarily I'd vote\nfor back-patching.\n\nA possible objection to doing that is that the patch changes the\nAPIs of processSQLNamePattern and patternToSQLRegex. We would avoid\nmaking such a change in core-backend APIs in a minor release, but\nI'm not certain whether there are equivalent stability concerns\nfor src/fe_utils/.\n\nOn the whole I'd vote for (B), with (C) as second choice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Apr 2022 11:26:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On 4/19/22 16:00, Robert Haas wrote:\n> On Mon, Apr 18, 2022 at 3:39 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Since there hasn't been any agreement on that point, I've just rebased the patch to apply cleanly against the current master:\n> \n> This looks OK to me. There may be better ways to do some of it, but\n> there's no rule against further improving the code later. Also, since\n> the issue was introduced in v14, we probably shouldn't wait forever to\n> do something about it. However, there is a procedural issue here now\n> that we are past feature freeze. I think someone could defensibly take\n> any of the following positions:\n> \n> (A) This is a new feature. Wait for v16.\n> (B) This is a bug fix. Commit it now and back-patch to v14.\n> (C) This is a cleanup that is OK to put into v15 even after feature\n> freeze but since it is a behavior change we shouldn't back-patch it.\n> \n> I vote for (C). 
What do other people think?\n\n\nI vote for (B).\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 19 Apr 2022 20:34:32 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "> On Apr 19, 2022, at 7:00 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> (A) This is a new feature. Wait for v16.\n> (B) This is a bug fix. Commit it now and back-patch to v14.\n> (C) This is a cleanup that is OK to put into v15 even after feature\n> freeze but since it is a behavior change we shouldn't back-patch it.\n> \n> I vote for (C). What do other people think?\n\nLooks like most people voted for (B). In support of that option, here are patches for master and REL_14_STABLE. Note that I extended the tests compared to v9, which found a problem that is fixed for v10:\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 19 Apr 2022 19:20:19 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Tue, Apr 19, 2022 at 10:20 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Looks like most people voted for (B). In support of that option, here are patches for master and REL_14_STABLE. Note that I extended the tests compared to v9, which found a problem that is fixed for v10:\n\nOK, I committed these. 
I am not totally sure we've got all the\nproblems sorted here, but I don't think that continuing to not commit\nanything is going to be better, so here we go.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Apr 2022 11:54:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 21, 2022 at 3:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Apr 19, 2022 at 10:20 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > Looks like most people voted for (B). In support of that option, here are patches for master and REL_14_STABLE. Note that I extended the tests compared to v9, which found a problem that is fixed for v10:\n>\n> OK, I committed these. I am not totally sure we've got all the\n> problems sorted here, but I don't think that continuing to not commit\n> anything is going to be better, so here we go.\n\nLooks like this somehow broke on a Windows box:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-04-20%2016%3A34%3A19\n\n[14:05:49.729](0.001s) not ok 16 - pg_dumpall: option\n--exclude-database rejects multipart pattern \".*\": matches\n[14:05:49.730](0.000s)\n[14:05:49.730](0.000s) # Failed test 'pg_dumpall: option\n--exclude-database rejects multipart pattern \".*\": matches'\n# at t/002_pg_dump.pl line 3985.\n[14:05:49.730](0.000s) # 'pg_dumpall: error:\nimproper qualified name (too many dotted names): .gitignore\n# '\n# doesn't match '(?^:pg_dumpall: error: improper qualified name\n\\\\(too many dotted names\\\\): \\\\.\\\\*)'\n\n\n", "msg_date": "Thu, 21 Apr 2022 07:07:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Wed, Apr 20, 2022 at 3:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Looks like this somehow broke on a 
Windows box:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-04-20%2016%3A34%3A19\n\nSo the issue here is that we are running this command:\n\npg_dumpall --exclude-database .*\n\nAnd on that Windows machine, .* is being expanded to .gitignore, so\npg_dumpall prints:\n\npg_dumpall: error: improper qualified name (too many dotted names): .gitignore\n\nInstead of:\n\npg_dumpall: error: improper qualified name (too many dotted names): .*\n\nI don't know why that glob-expansion only happens on jacana, and I\ndon't know how to fix it, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Apr 2022 15:35:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 21, 2022 at 7:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Apr 20, 2022 at 3:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Looks like this somehow broke on a Windows box:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-04-20%2016%3A34%3A19\n>\n> So the issue here is that we are running this command:\n>\n> pg_dumpall --exclude-database .*\n>\n> And on that Windows machine, .* is being expanded to .gitignore, so\n> pg_dumpall prints:\n>\n> pg_dumpall: error: improper qualified name (too many dotted names): .gitignore\n>\n> Instead of:\n>\n> pg_dumpall: error: improper qualified name (too many dotted names): .*\n>\n> I don't know why that glob-expansion only happens on jacana, and I\n> don't know how to fix it, either.\n\nPerhaps bowerbird and jacana have different versions of IPC::Run? 
I\nsee some recent-ish changes to escaping logic in here from Noah:\n\nhttps://github.com/toddr/IPC-Run/commits/master/lib/IPC/Run/Win32Helper.pm\n\nLooks like the older version looks for meta characters not including\n'*', and the later one uses Win32::ShellQuote::quote_native.\n\n\n", "msg_date": "Thu, 21 Apr 2022 08:38:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" }, { "msg_contents": "On Thu, Apr 21, 2022 at 8:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Apr 21, 2022 at 7:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Apr 20, 2022 at 3:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Looks like this somehow broke on a Windows box:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-04-20%2016%3A34%3A19\n> >\n> > So the issue here is that we are running this command:\n> >\n> > pg_dumpall --exclude-database .*\n> >\n> > And on that Windows machine, .* is being expanded to .gitignore, so\n> > pg_dumpall prints:\n> >\n> > pg_dumpall: error: improper qualified name (too many dotted names): .gitignore\n> >\n> > Instead of:\n> >\n> > pg_dumpall: error: improper qualified name (too many dotted names): .*\n> >\n> > I don't know why that glob-expansion only happens on jacana, and I\n> > don't know how to fix it, either.\n>\n> Perhaps bowerbird and jacana have different versions of IPC::Run? 
I\n> see some recent-ish changes to escaping logic in here from Noah:\n>\n> https://github.com/toddr/IPC-Run/commits/master/lib/IPC/Run/Win32Helper.pm\n>\n> Looks like the older version looks for meta characters not including\n> '*', and the later one uses Win32::ShellQuote::quote_native.\n\nThis time with Andrew in CC.\n\n\n", "msg_date": "Thu, 21 Apr 2022 08:51:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14 psql broke \\d datname.nspname.relname" } ]
[ { "msg_contents": "Hi,\n\n> The previous patch was failing because of the recent test changes made\n> by commit 201a76183e2 which unified new and get_new_node, attached\n> patch has the changes to handle the changes accordingly.\n> Thanks for your update!\n> I have three comments.\n\n1.Do we need “set_backtrace(NULL, 0);” on “HandleMainLoopInterrupts()”?\nI could observe that it works correctly without this. It is written on \n“HandleAutoVacLauncherInterrupts” as well, but I think it is necessary \nto prevent delays as well as [1].\n\n2.The patch seems to forget to handle\n“ereport(LOG,(errmsg(\"logging backtrace of PID %d\", MyProcPid)));” on \n“HandleAutoVacLauncherInterrupts” and “HandleMainLoopInterrupts()”.\nI think it should be the same as the process on “ProcessInterrupts()”.\n\n3.How about creating a new function.\nSince the same process is on three functions( “ProcessInterrupts()”, \n“HandleAutoVacLauncherInterrupts”, “HandleMainLoopInterrupts()” ), I \nthink it’s good to create a new function.\n\n[1] https://commitfest.postgresql.org/35/3342/\n\nRegards,\nKoyu Tanigawa\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:17:03 +0900", "msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Printing backtrace of postgres processes" }, { "msg_contents": "On Tue, Oct 12, 2021 at 10:47 AM bt21tanigaway\n<bt21tanigaway@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> > The previous patch was failing because of the recent test changes made\n> > by commit 201a76183e2 which unified new and get_new_node, attached\n> > patch has the changes to handle the changes accordingly.\n> > Thanks for your update!\n> > I have three comments.\n>\n> 1.Do we need “set_backtrace(NULL, 0);” on “HandleMainLoopInterrupts()”?\n> I could observe that it works correctly without this. 
It is written on\n> “HandleAutoVacLauncherInterrupts” as well, but I think it is necessary\n> to prevent delays as well as [1].\n\nI have removed this from HandleMainLoopInterrupts.\n\n> 2.The patch seems to forget to handle\n> “ereport(LOG,(errmsg(\"logging backtrace of PID %d\", MyProcPid)));” on\n> “HandleAutoVacLauncherInterrupts” and “HandleMainLoopInterrupts()”.\n> I think it should be the same as the process on “ProcessInterrupts()”.\n\nI have created ProcessPrintBacktraceInterrupt, which has the\nimplementation and is called wherever required. It is handled now.\n\n> 3.How about creating a new function.\n> Since the same process is on three functions( “ProcessInterrupts()”,\n> “HandleAutoVacLauncherInterrupts”, “HandleMainLoopInterrupts()” ), I\n> think it’s good to create a new function.\n\nI have created ProcessPrintBacktraceInterrupt to handle it.\n\nThanks for the comments, v9 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm3MGVP_WK1Uuf%3DBiAJ9PeVOfciwLy0mrFA1JNbRp99VOQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 9 Nov 2021 19:11:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Printing backtrace of postgres processes" } ]
[ { "msg_contents": "Hi,\n\nQueryID is a good tool for query analysis. I want to improve core jumbling \nmachinery in two ways:\n1. QueryID value should survive dump/restore of a database (use fully \nqualified name of table instead of relid).\n2. QueryID could represent more general class of queries: for example, \nit can be independent from permutation of tables in a FROM clause.\n\nSee the patch in attachment as an POC. Main idea here is to break \nJumbleState down to a 'clocations' part that can be really interested in\na post parse hook and a 'context data', that needed to build query or \nsubquery signature (hash) and, I guess, isn't really needed in any \nextensions.\n\nI think, it adds not much complexity and overhead. It still doesn't \nguarantee equality of queryid on two instances with an equal schema, \nbut survives across an instance upgrade and allows doing some query \nanalysis on a replica node.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 12 Oct 2021 13:11:51 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Make query ID more portable" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 12, 2021 at 4:12 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> QueryID is a good tool for query analysis. I want to improve core jumbling\n> machinery in two ways:\n> 1. QueryID value should survive dump/restore of a database (use fully\n> qualified name of table instead of relid).\n> 2. QueryID could represent more general class of queries: for example,\n> it can be independent from permutation of tables in a FROM clause.\n>\n> See the patch in attachment as an POC. 
Main idea here is to break\n> JumbleState down to a 'clocations' part that can be really interested in\n> a post parse hook and a 'context data', that needed to build query or\n> subquery signature (hash) and, I guess, isn't really needed in any\n> extensions.\n\nThere have been quite a lot of threads about that in the past, and\nalmost every time people wanted to change how the hash was computed.\nSo it seems to me that extensions would actually be quite interested\nin that. This is even more the case now that an extension can be used\nto replace the queryid calculation only and keep the rest of the\nextension relying on it as is.\n\n> I think, it adds not much complexity and overhead.\n\nI think the biggest change in your patch is:\n\n case RTE_RELATION:\n- APP_JUMB(rte->relid);\n- JumbleExpr(jstate, (Node *) rte->tablesample);\n+ {\n+ char *relname = regclassout_ext(rte->relid, true);\n+\n+ APP_JUMB_STRING(relname);\n+ JumbleExpr(jstate, (Node *) rte->tablesample, ctx);\n APP_JUMB(rte->inh);\n break;\n\nHave you done any benchmark on OLTP workload? Adding catalog access\nthere is likely to add significant overhead.\n\nAlso, why only using the fully qualified relation name for stable\nhashes? At least operators and functions should also be treated the\nsame way. If you do that you will probably have way too much overhead\nto be usable in a busy production environment. Why not using the new\npossibility of 3rd party extension for the queryid calculation that\nexactly suits your need?\n\n\n", "msg_date": "Tue, 12 Oct 2021 16:35:39 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On 12/10/21 13:35, Julien Rouhaud wrote:\n> Hi,\n> \n> On Tue, Oct 12, 2021 at 4:12 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> See the patch in attachment as an POC. 
Main idea here is to break\n>> JumbleState down to a 'clocations' part that can be really interested in\n>> a post parse hook and a 'context data', that needed to build query or\n>> subquery signature (hash) and, I guess, isn't really needed in any\n>> extensions.\n> \n> There have been quite a lot of threads about that in the past, and\n> almost every time people wanted to change how the hash was computed.\n> So it seems to me that extensions would actually be quite interested\n> in that. This is even more the case now that an extension can be used\n> to replace the queryid calculation only and keep the rest of the\n> extension relying on it as is.\nYes, I know. I have been using such self-made queryID for four years. \nAnd I will use it further.\nBut core jumbling code is good, fast and much easier in support. The \npurpose of this work is to extend jumbling for use in a more flexible \nway, to avoid meaningless copying of this code to an extension.\n>> I think, it adds not much complexity and overhead.\n> \n> I think the biggest change in your patch is:\n> \n> case RTE_RELATION:\n> - APP_JUMB(rte->relid);\n> - JumbleExpr(jstate, (Node *) rte->tablesample);\n> + {\n> + char *relname = regclassout_ext(rte->relid, true);\n> +\n> + APP_JUMB_STRING(relname);\n> + JumbleExpr(jstate, (Node *) rte->tablesample, ctx);\n> APP_JUMB(rte->inh);\n> break;\n> \n> Have you done any benchmark on OLTP workload? Adding catalog access\n> there is likely to add significant overhead.\nYes, I should do benchmarking. But I guess the main goal of Query ID is \nmonitoring, which can be switched off if necessary.\nThis part was made for a demo. It can be replaced by a hook, for example.\n> \n> Also, why only using the fully qualified relation name for stable\n> hashes? At least operators and functions should also be treated the\n> same way. If you do that you will probably have way too much overhead\n> to be usable in a busy production environment. 
Why not using the new\n> possibility of 3rd party extension for the queryid calculation that\n> exactly suits your need?\n> \nI fully agree with these arguments. This code is POC. Main part here is \nbreaking down JumbleState, using a local context for subqueries and \nsorting of a range table entries hashes.\nI think, we can call one routine (APP_JUMB_OBJECT(), as an example) for \nall oids in this code. It would allow an extension to intercept this \ncall and replace oid with an arbitrary value.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:00:54 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> But core jumbling code is good, fast and much easier in support.\n\nIt won't be fast once you stick a bunch of catalog lookups into it.\nI think this is fine as an extension, but it has no chance of being\naccepted in core, just on performance grounds.\n\n(I'm also not sure that the query ID calculation code is always/only\ninvoked in contexts where it's safe to do catalog accesses.)\n\nA bigger issue is that query ID stability isn't something we are going\nto promise on a large scale --- for example, what if a new release adds\nsome new fields to struct Query? So I'm not sure that \"query IDs should\nsurvive dump/reload\" is a useful goal to consider. 
It's certainly not\nsomething that could be reached by anything even remotely like the\nexisting code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:40:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On Tue, Oct 12, 2021 at 09:40:47AM -0400, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> > But core jumbling code is good, fast and much easier in support.\n> \n> It won't be fast once you stick a bunch of catalog lookups into it.\n> I think this is fine as an extension, but it has no chance of being\n> accepted in core, just on performance grounds.\n> \n> (I'm also not sure that the query ID calculation code is always/only\n> invoked in contexts where it's safe to do catalog accesses.)\n> \n> A bigger issue is that query ID stability isn't something we are going\n> to promise on a large scale --- for example, what if a new release adds\n> some new fields to struct Query? So I'm not sure that \"query IDs should\n> survive dump/reload\" is a useful goal to consider. 
It's certainly not\nsomething that could be reached by anything even remotely like the\nexisting code.\n\nAlso, the current code handles renames of schemas and objects, but this\nwould not.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:45:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On 12/10/21 18:45, Bruce Momjian wrote:\n> On Tue, Oct 12, 2021 at 09:40:47AM -0400, Tom Lane wrote:\n>> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>>> But core jumbling code is good, fast and much easier in support.\n> Also, the current code handles renames of schemas and objects, but this\n> would not.\nYes, it is a good option if an extension works only in the context of one \nnode. But my efforts are directed at the cross-instance usage of \nmonitoring data. 
As an example, it may be useful for sharding.\nAlso, I guess it is essential for a user to understand that if he changed the \nname of any object he would also change queries and reset monitoring \ndata related to this object.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 14 Oct 2021 09:37:08 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On Thu, Oct 14, 2021 at 12:37 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 12/10/21 18:45, Bruce Momjian wrote:\n> > On Tue, Oct 12, 2021 at 09:40:47AM -0400, Tom Lane wrote:\n> >> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> >>> But core jumbling code is good, fast and much easier in support.\n> > Also, the current code handles renames of schemas and objects, but this\n> > would not.\n> Yes, it is a good option if an extension works only in the context of one\n> node. But my efforts are directed at the cross-instance usage of\n> monitoring data. As an example, it may be useful for sharding.\n> Also, I guess it is essential for a user to understand that if he changed the\n> name of any object he would also change queries and reset monitoring\n> data related to this object.\n\nWhat if someone wants to allow any form of partitioning without\nchanging the ID, or ignore the schema because it's a multi-tenant\ndb with dedicated roles?\n\nI think that there are just too many arbitrary decisions that could be\nmade on what exactly should be a query identifier to have a single\nin-core implementation. 
If you do sharding, you already have to\nproperly configure each node, so configuring your custom query id\nextension shouldn't be a big problem.\n\n\n", "msg_date": "Thu, 14 Oct 2021 13:40:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On 12/10/21 18:40, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> But core jumbling code is good, fast and much easier in support.\n> A bigger issue is that query ID stability isn't something we are going\n> to promise on a large scale --- for example, what if a new release adds\n> some new fields to struct Query? So I'm not sure that \"query IDs should\n> survive dump/reload\" is a useful goal to consider. It's certainly not\n> something that could be reached by anything even remotely like the\n> existing code.\nThank you for the explanation.\nI think the problem of queryId is that it encapsulates two different \nmeanings:\n1. It allows an extension to match a query on post parse and execution \nstages. In this sense, queryId should be as unique as possible for each \nquery.\n2. For pg_stat_statements purposes (and my project too) it represents a \nquery class and should be stable against permutations of range table \nentries, clauses, etc. For example:\n\"SELECT * FROM a,b;\" and \"SELECT * FROM b,a;\" should have the same queryId.\n\nThis issue may be solved in an extension with the following approach:\n1. Force as unique a value for queryId as the extension wants in a post parse hook\n2. 
Generalize the JumbleQuery routine code to generate a kind of query \nclass signature.\n\nThe attached patch is a first sketch of such a change.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 14 Oct 2021 10:49:32 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" }, { "msg_contents": "On 14/10/21 10:40, Julien Rouhaud wrote:\n> On Thu, Oct 14, 2021 at 12:37 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>> On 12/10/21 18:45, Bruce Momjian wrote:\n>>> On Tue, Oct 12, 2021 at 09:40:47AM -0400, Tom Lane wrote:\n>>>> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> I think that there are just too many arbitrary decisions that could be\n> made on what exactly should be a query identifier to have a single\n> in-core implementation.\nYes, and I use such a custom solution too. But core jumbling code \nimplements a good idea and can be generalized for reuse. The patch from \nthe previous letter and the breakdown of JumbleState can allow coders to \nimplement their code based on the queryjumble.c module with smaller changes.\n\n> If you do sharding, you already have to\n> properly configure each node, so configuring your custom query id\n> extension shouldn't be a big problem.\nMy project is about adaptive query optimization techniques. It is not \nobvious how to match (without a field in Query struct) the post parse and \nexecution phases because of nested queries.\nAlso, if we use queryId in an extension, we interfere with \npg_stat_statements.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 14 Oct 2021 17:02:15 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Make query ID more portable" } ]
[ { "msg_contents": "Hi,\n\nFor the last year or so I've on and off tinkered with $subject. I think\nit's in a state worth sharing now. First, let's look at a little\ncomparison.\n\nMy workstation:\n\nnon-cached configure:\ncurrent: 11.80s\nmeson: 6.67s\n\nnon-cached build (world-bin):\ncurrent: 40.46s\nninja: 7.31s\n\nno-change build:\ncurrent: 1.17s\nninja: 0.06s\n\ntest world:\ncurrent: 105s\nmeson: 63s\n\n\nWhat actually started to motivate me however were the long times windows\nbuilds took to come back with test results. On CI, with the same machine\nconfig:\n\nbuild:\ncurrent: 202s (doesn't include genbki etc)\nmeson+ninja: 140s\nmeson+msbuild: 206s\n\n\ntest:\ncurrent: 1323s (many commands)\nmeson: 903s (single command)\n\n(note that the test comparison isn't quite fair - there's a few tests\nmissing, but it's just small contrib ones afaik)\n\n\nThe biggest difference to me however is not the speed, but how readable\nthe output is.\n\nRunning the tests with meson in a terminal shows the number of tests\nthat completed out of how many total, how much time has passed, how long\nthe currently running tests already have been running.\n\nAt the end of a testrun a count of tests is shown:\n\n188/189 postgresql:tap+pg_basebackup / pg_basebackup/t/010_pg_basebackup.pl OK 39.51s 110 subtests passed\n189/189 postgresql:isolation+snapshot_too_old / snapshot_too_old/isolation OK 62.93s\n\n\nOk: 188\nExpected Fail: 0\nFail: 1\nUnexpected Pass: 0\nSkipped: 0\nTimeout: 0\n\nFull log written to /tmp/meson/meson-logs/testlog.txt\n\n\nThe log has the output of the tests and ends with:\n\nSummary of Failures:\n120/189 postgresql:tap+recovery / recovery/t/007_sync_rep.pl ERROR 7.16s (exit status 255 or signal 127 SIGinvalid)\n\n\nQuite the difference to make check-world -jnn output.\n\n\nSo, now that the teasing is done, let me explain a bit what led me down\nthis path:\n\nAutoconf + make is not being actively developed. 
Especially autoconf is\n*barely* in maintenance mode - despite many shortcomings and bugs. It's\nalso technology that very few want to use - autoconf m4 is scary, and\nit's scarier for people that started more recently than a lot of us\ncommitters for example.\n\nRecursive make as we use it is hard to get right. One reason the clean\nmake build is so slow compared to meson is that we had to resort to\n.NOTPARALLEL to handle dependencies in a bunch of places. And despite\nthat, I quite regularly see incremental build failures that can be\nresolved by retrying the build.\n\nWhile we have incremental builds via --enable-depend, they don't work\nthat reliably (i.e. they miss necessary rebuilds) and yet are often too\naggressive. More modern build systems can keep track of the precise\ncommand used to build a target and rebuild it when that command changes.\n\n\nWe also don't just have the autoconf / make buildsystem, there's also\nthe msvc project generator - something most of us unix-y folks do not\nlike to touch. I think that, combined with there being no easy way to\nrun all tests, and it being just different, really hurt our windows\ndeveloper appeal (and subsequently the quality of postgres on\nwindows). I'm not saying this to ding the project generator - that was\nwell before there were decent \"meta\" buildsystems out there (and in some\nways it is a small one itself).\n\n\nThe last big issue I have with the current situation is that there's no\ngood test integration. make check-world output is essentially unreadable\n/ not automatically parseable. Which led to the buildfarm having a\nseparate list of things it needs to test, so that failures can be\npinpointed and paired with appropriate logs. That approach unfortunately\ndoesn't scale well to multi-core CPUs, slowing down the buildfarm by a\nfair bit.\n\n\nThis all led me to experiment with improvements. 
I tried a few\nsomewhat crazy but incremental things like converting our buildsystem to\nnon-recursive make (I got it to build the backend, but it's too hard to\ndo manually I think), or to not run tests during the recursive make\ncheck-world, but to append commands to a list of tests, that then is run\nby a helper (can kinda be made to work). In the end I concluded that\nthe amount of time we'd need to invest to maintain our more-and-more\ncustom buildsystem going forward doesn't make sense.\n\n\nWhich led me to look around and analyze which other buildsystems there\nare that could make some sense for us. The halfway decent list includes,\nI think:\n1) cmake\n2) bazel\n3) meson\n\n\ncmake would be a decent choice, I think. However, I just can't fully\nwarm up to it. Something about it just doesn't quite sit right with\nme. That's not a good enough reason to prevent others from suggesting to\nuse it, but it's good enough to justify not investing a lot of time in\nit myself.\n\nBazel has some nice architectural properties. But it requires a JVM to\nrun - I think that basically makes it unsuitable for us. And the build\ninformation seems quite arduous to maintain too.\n\nWhich left me with meson. It is a meta-buildsystem that can do the\nactual work of building via ninja (the most common one, also targeted by\ncmake), msbuild (visual studio project files, important for GUI work)\nand xcode projects (I assume that's for a macos IDE, but I haven't tried\nto use it). Meson roughly does what autoconf+automake did, in a\npython-esque DSL, and outputs build-instructions for ninja / msbuild /\nxcode. One interesting bit is that meson itself is written in python (\nand fairly easy to contribute to - I got a few changes in now).\n\n\nI don't think meson is perfect architecturally - e.g. its insistence on\nnot having functions ends up making it a bit harder to not end up\nduplicating code. 
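To give a rough flavor of the DSL, a made-up snippet (purely illustrative - the project name, options and checks here are invented, this is not taken from my branch):

```meson
project('demo', 'c', version: '0.1')

cc = meson.get_compiler('c')

# configure-style checks become declarative lookups
zlib = dependency('zlib', required: get_option('zlib'))

conf = configuration_data()
conf.set('HAVE_STRLCPY', cc.has_function('strlcpy'))
configure_file(output: 'demo_config.h', configuration: conf)

executable('demo', 'main.c', dependencies: [zlib])
```

Targets, options and checks are all declared like that, and meson then emits the actual build instructions for ninja / msbuild / xcode.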
There's some user-interface oddities that are now hard\nto fix fully, due to the fairly wide usage. But all-in-all it's pretty\nnice to use.\n\n\nIt's worth calling out that a lot of large open source projects have been\n/ are migrating to meson. qemu/kvm, mesa (core part of graphics stack on\nlinux and also widely used in other platforms), a good chunk of GNOME,\nand quite a few more. Due to that it seems unlikely to be abandoned\nsoon.\n\n\nAs far as I can tell the only OS that postgres currently supports that\nmeson doesn't support is HPUX. It'd likely be fairly easy to add\ngcc-on-hpux support, a chunk more to add support for the proprietary\nones.\n\n\nThe attached patch (meson support is 0016, the rest is prerequisites\nthat aren't that interesting at this stage) converts most of postgres to\nmeson. There's a few missing contrib modules, only about half the\noptional library dependencies are implemented, and I've only built on\nx64. It builds on freebsd, linux, macos and windows (both ninja and\nmsbuild) and cross builds from linux to windows. Thomas helped make the\nfreebsd / macos pieces a reality, thanks!\n\nI took a number of shortcuts (although there used to be a *lot*\nmore). So this shouldn't be reviewed to the normal standard of the\ncommunity - it's a prototype. But I think it's in a complete enough\nshape that it allows to do a well-informed evaluation.\n\nWhat doesn't yet work/ build:\n\n- plenty of optional libraries, contrib, NLS, docs build\n\n- PGXS - and I don't yet know what to best do about it. One\n  backward-compatible way would be to continue to use makefiles for pgxs,\n  but do the necessary replacement of Makefile.global.in via meson (and\n  not use that for postgres' own build). But that doesn't really\n  provide a nicer path for building postgres extensions on windows, so\n  it'd definitely not be a long-term path.\n\n- JIT bitcode generation for anything but src/backend.\n\n- anything but modern-ish x86. 
That's probably a small amount of work,\n  but something that needs to be done.\n\n- exporting all symbols for extension modules on windows (the stuff for\n  postgres is implemented). Instead I marked the relevant symbols as\n  declspec(dllexport). I think we should do that regardless of the\n  buildsystem change. Restricting symbol visibility via gcc's\n  -fvisibility=hidden for extensions results in a substantially reduced\n  number of exported symbols, and even reduces object size (and I think\n  improves the code too). I'll send an email about that separately.\n\n\n\n\nThere's a lot more stuff to talk about, but I'll stop with a small bit\nof instructions below:\n\n\nDemo / instructions:\n# Get code\ngit remote add andres git@github.com:anarazel/postgres.git\ngit fetch andres\ngit checkout --track andres/meson\n\n# setup build directory\nmeson setup build --buildtype debug\ncd build\n\n# build (uses automatically as many cores as available)\nninja\n\n# change configuration, build again\nmeson configure -Dssl=openssl\nninja\n\n# run all tests\nmeson test\n\n# run just recovery tests\nmeson test --suite setup --suite recovery\n\n# list tests\nmeson test --list\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 12 Oct 2021 01:37:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "[RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 01:37:21 -0700, Andres Freund wrote:\n> non-cached build (world-bin):\n> current: 40.46s\n> ninja: 7.31s\n\nInterestingly this is pretty close to the minimum achievable on my\nmachine from the buildsystem perspective.\n\nA build with -fuse-ld=lld, which the above didn't use, takes 6.979s. The\ncritical path is\n\nbison gram.y -> gram.c 4.13s\ngcc gram.c -> gram.o 2.05s\ngcc postgres .... 
0.317\n\n\nA very helpful visualization is to transform ninja's build logs into a\ntracefile with https://github.com/nico/ninjatracing\n\nI attached an example - the trace.json.gz can be uploaded as-is to\nhttps://ui.perfetto.dev/\n\nIt's quite a bit of fun to look at imo.\n\nThere's a few other things quickly apparent:\n\n- genbki prevents build progress due to dependencies on the generated\n  headers.\n- the absolutely stupid way I implemented the python2->python3\n  regression test output conversion uses up a fair bit of resources\n- tablecmds.c, pg_dump.c, xlog.c and a few other files are starting to\n  get big enough to be problematic compile-time wise\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 12 Oct 2021 02:08:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 12.10.21 10:37, Andres Freund wrote:\n> For the last year or so I've on and off tinkered with $subject. I think\n> it's in a state worth sharing now. First, let's look at a little\n> comparison.\n\nI played with $subject a few years ago and liked it. I think, like you \nsaid, meson is the best way forward. I support this project.\n\nOne problem I noticed back then was that some choices that we currently \ndetermine ourselves in configure or the makefiles are hardcoded in \nmeson. For example, at the time, gcc on macOS was not supported. Meson \nthought, if you are on macOS, you are surely using the Apple compiler, \nand it supports these options. Fixing that required patches deep in the \nbowels of the meson source code (and, in practice, waiting for a new \nrelease etc.). I strongly suspect this isn't the only such problem. \nFor example, the shared library build behavior has been carefully tuned \nin opinionated ways. With the autotools chain, one can override \nanything with enough violence; so we have always felt free to do that. 
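To illustrate the kind of escape hatch I mean: I believe current meson can at least pin the toolchain from outside via a "machine file", roughly like the hypothetical sketch below (compiler names invented, details unverified by me) - but whether that reaches deep enough, e.g. into the shared library link behavior, is exactly the question.

```ini
; hypothetical native file, used as: meson setup build --native-file gcc.ini
[binaries]
c = 'gcc-11'
ar = 'gcc-ar-11'

[built-in options]
c_args = ['-O2', '-fno-common']
```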
\nI haven't followed it in a while, so I don't know what the situation is \nnow; but it is a concern, because we have always felt free to try new \nand unusual build tools (Sun compiler, Intel compiler, \nclang-when-it-was-new) early without waiting for anyone else.\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:30:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Oct 12, 2021 at 9:31 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> One problem I noticed back then was that some choices that we currently\n> determine ourselves in configure or the makefiles are hardcoded in\n> meson. For example, at the time, gcc on macOS was not supported. Meson\n> thought, if you are on macOS, you are surely using the Apple compiler,\n> and it supports these options. Fixing that required patches deep in the\n> bowels of the meson source code (and, in practice, waiting for a new\n> release etc.). I strongly suspect this isn't the only such problem.\n> For example, the shared library build behavior has been carefully tuned\n> in opinionated ways. With the autotools chain, one can override\n> anything with enough violence; so we have always felt free to do that.\n> I haven't followed it in a while, so I don't know what the situation is\n> now; but it is a concern, because we have always felt free to try new\n> and unusual build tools (Sun compiler, Intel compiler,\n> clang-when-it-was-new) early without waiting for anyone else.\n\nI think we're going to need some solution to this problem. We have too\nmany people here with strong opinions about questions like this for me\nto feel good about the idea that we're going to collectively be OK\nwith leaving these sorts of decisions up to some other project.\n\n From my point of view, the time it takes to run configure is annoying,\nbut the build time is pretty fine. 
On my system, configure takes about\n33 seconds, and a full rebuild with 'make -j8' takes 14.5 seconds (I\nam using ccache). Moreover, most of the time when I run make, I'm only\ndoing a partial rebuild, so it's near-instantaneous.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:00:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 4:37 AM, Andres Freund wrote:\n> git remote add andres git@github.com:anarazel/postgres.git\n\nITYM:\n\ngit remote add andres git://github.com/anarazel/postgres.git\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:08:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": ".\n\nOn Tue, Oct 12, 2021 at 10:37 Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> For the last year or so I've on and off tinkered with $subject. I think\n> it's in a state worth sharing now. First, let's look at a little\n> comparison.\n>\n> My workstation:\n>\n> non-cached configure:\n> current: 11.80s\n> meson: 6.67s\n>\n> non-cached build (world-bin):\n> current: 40.46s\n> ninja: 7.31s\n>\n> no-change build:\n> current: 1.17s\n> ninja: 0.06s\n>\n> test world:\n> current: 105s\n> meson: 63s\n>\n>\n> What actually started to motivate me however were the long times windows\n> builds took to come back with testsresults. 
On CI, with the same machine\n> config:\n>\n> build:\n> current: 202s (doesn't include genbki etc)\n> meson+ninja: 140s\n> meson+msbuild: 206s\n>\n>\n> test:\n> current: 1323s (many commands)\n> meson: 903s (single command)\n>\n> (note that the test comparison isn't quite fair - there's a few tests\n> missing, but it's just small contrib ones afaik)\n>\n>\n> The biggest difference to me however is not the speed, but how readable\n> the output is.\n>\n> Running the tests with meson in a terminal, shows the number of tests\n> that completed out of how many total, how much time has passed, how long\n> the currently running tests already have been running.\n>\n> At the end of a testrun a count of tests is shown:\n>\n> 188/189 postgresql:tap+pg_basebackup / pg_basebackup/t/010_pg_basebackup.pl OK 39.51s 110 subtests passed\n> 189/189 postgresql:isolation+snapshot_too_old / snapshot_too_old/isolation OK 62.93s\n>\n>\n> Ok: 188\n> Expected Fail: 0\n> Fail: 1\n> Unexpected Pass: 0\n> Skipped: 0\n> Timeout: 0\n>\n> Full log written to /tmp/meson/meson-logs/testlog.txt\n>\n>\n> The log has the output of the tests and ends with:\n>\n> Summary of Failures:\n> 120/189 postgresql:tap+recovery / recovery/t/007_sync_rep.pl ERROR 7.16s (exit status 255 or signal 127 SIGinvalid)\n>\n>\n> Quite the difference to make check-world -jnn output.\n>\n>\n> So, now that the teasing is done, let me explain a bit what lead me down\n> this path:\n>\n> Autoconf + make is not being actively developed. Especially autoconf is\n> *barely* in maintenance mode - despite many shortcomings and bugs. It's\n> also technology that very few want to use - autoconf m4 is scary, and\n> it's scarier for people that started more recently than a lot of us\n> committers for example.\n>\n> Recursive make as we use it is hard to get right. One reason the clean\n> make build is so slow compared to meson is that we had to resort to\n> .NOTPARALLEL to handle dependencies in a bunch of places. 
And despite\n> that, I quite regularly see incremental build failures that can be\n> resolved by retrying the build.\n>\n> While we have incremental build via --enable-depend, they don't work\n> that reliable (i.e. misses necessary rebuilds) and yet is often too\n> aggressive. More modern build system can keep track of the precise\n> command used to build a target and rebuild it when that command changes.\n>\n>\n> We also don't just have the autoconf / make buildsystem, there's also\n> the msvc project generator - something most of us unix-y folks do not\n> like to touch. I think that, combined with there being no easy way to\n> run all tests, and it being just different, really hurt our windows\n> developer appeal (and subsequently the quality of postgres on\n> windows). I'm not saying this to ding the project generator - that was\n> well before there were decent \"meta\" buildsystems out there (and in some\n> ways it is a small one itself).\n>\n>\n> The last big issue I have with the current situation is that there's no\n> good test integration. make check-world output is essentially unreadable\n> / not automatically parseable. Which led to the buildfarm having a\n> separate list of things it needs to test, so that failures can be\n> pinpointed and paired with appropriate logs. That approach unfortunately\n> doesn't scale well to multi-core CPUs, slowing down the buildfarm by a\n> fair bit.\n>\n>\n> This all led to me to experiment with improvements. I tried a few\n> somewhat crazy but incremental things like converting our buildsystem to\n> non-recursive make (I got it to build the backend, but it's too hard to\n> do manually I think), or to not run tests during the recursive make\n> check-world, but to append commands to a list of tests, that then is run\n> by a helper (can kinda be made to work). 
In the end I concluded that\n> the amount of time we'd need to invest to maintain our more-and-more\n> custom buildsystem going forward doesn't make sense.\n>\n>\n> Which lead me to look around and analyze which other buildsystems there\n> are that could make some sense for us. The halfway decent list includes,\n> I think:\n> 1) cmake\n> 2) bazel\n> 3) meson\n>\n>\n> cmake would be a decent choice, I think. However, I just can't fully\n> warm up to it. Something about it just doesn't quite sit right with\n> me. That's not a good enough reason to prevent others from suggesting to\n> use it, but it's good enough to justify not investing a lot of time in\n> it myself.\n>\n> Bazel has some nice architectural properties. But it requires a JVM to\n> run - I think that basically makes it insuitable for us. And the build\n> information seems quite arduous to maintain too.\n>\n> Which left me with meson. It is a meta-buildsystem that can do the\n> actual work of building via ninja (the most common one, also targeted by\n> cmake), msbuild (visual studio project files, important for GUI work)\n> and xcode projects (I assume that's for a macos IDE, but I haven't tried\n> to use it). Meson roughly does what autoconf+automake did, in a\n> python-esque DSL, and outputs build-instructions for ninja / msbuild /\n> xcode. One interesting bit is that meson itself is written in python (\n> and fairly easy to contribute too - I got a few changes in now).\n>\n>\n> I don't think meson is perfect architecturally - e.g. its insistence on\n> not having functions ends up making it a bit harder to not end up\n> duplicating code. There's some user-interface oddities that are now hard\n> to fix fully, due to the faily wide usage. But all-in-all it's pretty\n> nice to use.\n>\n>\n> Its worth calling out that a lot of large open source projects have been\n> / are migrating to meson. 
qemu/kvm, mesa (core part of graphics stack on\n> linux and also widely used in other platforms), a good chunk of GNOME,\n> and quite a few more. Due to that it seems unlikely to be abandoned\n> soon.\n>\n>\n> As far as I can tell the only OS that postgres currently supports that\n> meson doesn't support is HPUX. It'd likely be fairly easy to add\n> gcc-on-hpux support, a chunk more to add support for the proprietary\n> ones.\n>\n>\n> The attached patch (meson support is 0016, the rest is prerequisites\n> that aren't that interesting at this stage) converts most of postgres to\n> meson. There's a few missing contrib modules, only about half the\n> optional library dependencies are implemented, and I've only built on\n> x64. It builds on freebsd, linux, macos and windows (both ninja and\n> msbuild) and cross builds from linux to windows. Thomas helped make the\n> freebsd / macos pieces a reality, thanks!\n>\n> I took a number of shortcuts (although there used to be a *lot*\n> more). So this shouldn't be reviewed to the normal standard of the\n> community - it's a prototype. But I think it's in a complete enough\n> shape that it allows to do a well-informed evaluation.\n>\n> What doesn't yet work/ build:\n>\n> - plenty optional libraries, contrib, NLS, docs build\n>\n> - PGXS - and I don't yet know what to best do about it. One\n> backward-compatible way would be to continue use makefiles for pgxs,\n> but do the necessary replacement of Makefile.global.in via meson (and\n> not use that for postgres' own build). But that doesn't really\n> provide a nicer path for building postgres extensions on windows, so\n> it'd definitely not be a long-term path.\n>\n> - JIT bitcode generation for anything but src/backend.\n>\n> - anything but modern-ish x86. That's proably a small amount of work,\n> but something that needs to be done.\n>\n> - exporting all symbols for extension modules on windows (the stuff for\n> postgres is implemented). 
Instead I marked the relevant symbols als\n> declspec(dllexport). I think we should do that regardless of the\n> buildsystem change. Restricting symbol visibility via gcc's\n> -fvisibility=hidden for extensions results in a substantially reduced\n> number of exported symbols, and even reduces object size (and I think\n> improves the code too). I'll send an email about that separately.\n>\n >\n>\n>\n> There's a lot more stuff to talk about, but I'll stop with a small bit\n> of instructions below:\n>\n>\n> Demo / instructions:\n> # Get code\n> git remote add andres git@github.com:anarazel/postgres.git\n> git fetch andres\n> git checkout --track andres/meson\n>\n> # setup build directory\n> meson setup build --buildtype debug\n> cd build\n>\n> # build (uses automatically as many cores as available)\n> ninja\n\nI'm getting errors at this step. You can find my output at\nhttps://pastebin.com/Ar5VqfFG. Setup went well without errors. Is that\nexpected for now?\n\n> # change configuration, build again\n> meson configure -Dssl=openssl\n> ninja\n>\n> # run all tests\n> meson test\n>\n> # run just recovery tests\n> meson test --suite setup --suite recovery\n>\n> # list tests\n> meson test --list\n>\n>\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 17:21:50 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 4:37 AM, Andres Freund wrote:\n> # setup build directory\n> meson setup build --buildtype debug\n\nI took this for an outing on msys2 and it just seems to hang. 
If it's not hanging it's unbelievably slow.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:28:04 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think we're going to need some solution to this problem. We have too\n> many people here with strong opinions about questions like this for me\n> to feel good about the idea that we're going to collectively be OK\n> with leaving these sorts of decisions up to some other project.\n\nAgreed. I'm willing to put up with the costs of moving to some\nother build system, but not if it dictates choices we don't want to\nmake about the end products.\n\n> From my point of view, the time it takes to run configure is annoying,\n> but the build time is pretty fine. On my system, configure takes about\n> 33 seconds, and a full rebuild with 'make -j8' takes 14.5 seconds (I\n> am using ccache). Moreover, most of the time when I run make, I'm only\n> doing a partial rebuild, so it's near-instantaneous.\n\nRead about Autoconf's --cache-file option. That and ccache are\nabsolutely essential tools IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:47:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 11:28 AM, Andrew Dunstan wrote:\n> On 10/12/21 4:37 AM, Andres Freund wrote:\n>> # setup build directory\n>> meson setup build --buildtype debug\n> I took this for an outing on msys2 and it just seems to hang. If it's not hanging it's unbelievably slow.\n>\n>\n\nIt hung because it expected the compiler to be 'ccache cc'. Hanging in\nsuch a case is kinda unforgivable. I remedied that by setting 'CC=gcc'\nbut it then errored out looking for perl libs. 
I think msys2 is going to\nbe a bit difficult here :-(\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:50:03 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 15:30:57 +0200, Peter Eisentraut wrote:\n> I played with $subject a few years ago and liked it. I think, like you\n> said, meson is the best way forward. I support this project.\n\nCool.\n\n\n> One problem I noticed back then was that some choices that we currently\n> determine ourselves in configure or the makefiles are hardcoded in meson.\n\nYea, there's some of that. I think some degree of reduction in flexibility is\nneeded to realistically target multiple \"backend\" build-systems like visual\nstudio project files etc. but I wish there were a bit less of that\nnonetheless.\n\n\n> For example, at the time, gcc on macOS was not supported. Meson thought, if\n> you are on macOS, you are surely using the Apple compiler, and it supports\n> these options.\n\nI'm pretty sure this one now can just be overridden with CC=gcc. It can on\nlinux and windows, but I don't have ready interactive access with a mac\n(leaving cirrus aside, which now has a \"start a terminal\" option...).\n\n\n> For example, the shared library build behavior has been carefully tuned in\n> opinionated ways. With the autotools chain, one can override anything with\n> enough violence; so we have always felt free to do that. I haven't followed\n> it in a while, so I don't know what the situation is now; but it is a\n> concern, because we have always felt free to try new and unusual build tools\n> (Sun compiler, Intel compiler, clang-when-it-was-new) early without waiting\n> for anyone else.\n\nIt's possible to just take over building e.g. shared libraries ourselves with\ncustom targets. Although it'd be a bit annoying to do. 
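(Untested sketch of what I mean - real dependencies and link flags elided, and the target names are made up:

```meson
# compile the objects normally, then take over the link step ourselves
plpgsql_lib = static_library('plpgsql_objs', plpgsql_sources)

plpgsql = custom_target('plpgsql',
  input: plpgsql_lib.extract_all_objects(recursive: true),
  output: 'plpgsql.so',
  command: [cc.cmd_array(), '-shared', '-o', '@OUTPUT@', '@INPUT@'])
```

)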
The bigger problem is\nthat that e.g. wouldn't play that nicely with generating visual studio\nprojects, which require generating link steps in a certain way. It'd build,\nbut the GUI might lose some of its options. Etc.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 12 Oct 2021 09:15:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 11:50:03 -0400, Andrew Dunstan wrote:\n> It hung because it expected the compiler to be 'ccache cc'. Hanging in\n> such a case is kinda unforgivable. I remedied that by setting 'CC=gcc'\n> but it then errored out looking for perl libs. I think msys2 is going to\n> be a bit difficult here :-(\n\nHm. Yea, the perl thing is my fault - you should be able to get past it with\n-Dperl=disabled, and I'll take a look at fixing the perl detection. (*)\n\nI can't reproduce the hanging though. I needed to install bison, flex and\nninja and disable perl as described above, but then it built just fine.\n\nIt does seem to crash somewhere in the main regression tests though, I think\nI don't do the \"set stack depth\" dance correctly for msys.\n\n\nIf you repro the hanging, what's the last bit in meson-logs/meson-log.txt?\n\n\n(*) I've for now made most dependencies autodetected, unless you pass\n--auto-features disabled to collectively disable all the auto-detected\nfeatures. Initially I had mirrored the autoconf behaviour, but I got sick of\nforgetting to turn off readline or zlib on windows. And then it was useful to\ntest on multiple operating systems...\n\nFor working on windows meson's wraps are quite useful. I've not added that to\nthe git branch, but if you manually do\n  mkdir subprojects\n  meson wrap install lz4\n  meson wrap install zlib\nbuilding with -Dzlib=enabled -Dlz4=enabled will fall back to building lz4,\nzlib as-needed.\n\nI was wondering about adding a binary wrap for e.g. 
bison, flex on windows, so\nthat the process of getting a build going isn't as arduous.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:59:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 17:21:50 +0200, Josef Šimánek wrote:\n> > # build (uses automatically as many cores as available)\n> > ninja\n> \n> I'm getting errors at this step. You can find my output at\n> https://pastebin.com/Ar5VqfFG. Setup went well without errors. Is that\n> expected for now?\n\nThanks, that's helpful. And no, that's not expected (*), it should be fixed.\n\nWhat OS / distribution / version is this?\n\nCan you build postgres \"normally\" with --with-gss? Seems like we're ending up\nwith a version of gssapi that we're not compatible with.\n\nYou should be able to get past this by disabling gss using meson configure\n-Dgssapi=disabled.\n\nGreetings,\n\nAndres Freund\n\n* except kinda, in the sense that I'd expect it to be buggy, given that I've\n run it only on a few machines and it's very, uh, bleeding edge\n\n\n", "msg_date": "Tue, 12 Oct 2021 10:17:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 09:59:26 -0700, Andres Freund wrote:\n> On 2021-10-12 11:50:03 -0400, Andrew Dunstan wrote:\n> > It hung because it expected the compiler to be 'ccache cc'. Hanging in\n> > such a case is kinda unforgivable. I remedied that by setting 'CC=gcc'\n> > but it then errored out looking for perl libs. I think msys2 is going to\n> > be a bit difficult here :-(\n> \n> Hm. Yea, the perl thing is my fault - you should be able to get past it with\n> -Dperl=disabled, and I'll take a look at fixing the perl detection. (*)\n\nThis is a weird one. I don't know much about msys, so it's probably related to\nthat. 
Perl spits out /usr/lib/perl5/core_perl/ as its archlibexp. According to\nshell commands that exists, but not according to msys's own python\n\n$ /mingw64/bin/python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE'; print(f'does {p} exist:', os.path.exists(p))\"\ndoes /usr/lib/perl5/core_perl/CORE exist: False\n\n$ ls -ld /usr/lib/perl5/core_perl/CORE\ndrwxr-xr-x 1 anfreund anfreund 0 Oct 10 10:19 /usr/lib/perl5/core_perl/CORE\n\nSo it's not too surprising that that doesn't work out. It's easy enough to\nwork around, but still pretty weird.\n\nI pushed a workaround for the config-time error, but it doesn't yet recognize\nmsys perl correctly. But at least it's not alone in that - configure doesn't\nseem to either, so I'm probably doing something wrong :)\n\n\n> I can't reproduce the hanging though. I needed to install bison, flex and\n> ninja and disable perl as described above, but then it built just fine.\n> \n> It does seems to crash somewhere in the main regression tests though, I think\n> I don't do the \"set stack depth\" dance correctly for msys.\n\nThat was it - just hadn't ported setting -Wl,--stack=... for !msvc\nwindows. 
Pushed the fix for that out.\n\n\nI guess I should figure out how to commandline install msys and add it to CI.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:09:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 12:59 PM, Andres Freund wrote:\n>\n>\n> If you repro the hanging, what's the last bit in meson-logs/meson-log.txt?\n\n\n\nHere's the entire thing\n\n\n# cat \nC:/tools/msys64/home/Administrator/postgresql/build/meson-logs/meson-log.txt\nBuild started at 2021-10-12T18:08:34.387568\nMain binary: C:/tools/msys64/mingw64/bin/python.exe\nBuild Options: -Dbuildtype=debug\nPython system: Windows\nThe Meson build system\nVersion: 0.59.1\nSource dir: C:/tools/msys64/home/Administrator/postgresql\nBuild dir: C:/tools/msys64/home/Administrator/postgresql/build\nBuild type: native build\nProject name: postgresql\nProject version: 15devel\nSanity testing C compiler: ccache cc\nIs cross compiler: False.\nSanity check compiler command line: ccache cc sanitycheckc.c -o\nsanitycheckc.exe -D_FILE_OFFSET_BITS=64\nSanity check compile stdout:\n\n-----\nSanity check compile stderr:\n\n-----\n\nmeson.build:1:0: ERROR: Compiler ccache cc can not compile programs.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:11:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 14:11:39 -0400, Andrew Dunstan wrote:\n> On 10/12/21 12:59 PM, Andres Freund wrote:\n> > If you repro the hanging, what's the last bit in meson-logs/meson-log.txt?\n\n> Here's the entire thing\n\n> Sanity check compiler command line: ccache cc sanitycheckc.c -o\n> sanitycheckc.exe -D_FILE_OFFSET_BITS=64\n> Sanity check compile stdout:\n> \n> -----\n> Sanity check 
compile stderr:\n> \n> -----\n> \n> meson.build:1:0: ERROR: Compiler ccache cc can not compile programs.\n\nHuh, it's not a question of gcc vs cc, it's that meson automatically uses\nccache. And it looks like msys's ccache is broken at the moment (installed\nyesterday):\n\n$ ccache --version\nccache version 4.4.1\n...\n\n$ echo > test.c\n$ ccache cc -c test.c\nSegmentation fault (core dumped)\n..\n\nnot sure how that leads to hanging, but it's not too surprising that things\ndon't work out after that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 11:23:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 2:09 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-12 09:59:26 -0700, Andres Freund wrote:\n>> On 2021-10-12 11:50:03 -0400, Andrew Dunstan wrote:\n>>> It hung because it expected the compiler to be 'ccache cc'. Hanging in\n>>> such a case is kinda unforgivable. I remedied that by setting 'CC=gcc'\n>>> but it then errored out looking for perl libs. I think msys2 is going to\n>>> be a bit difficult here :-(\n>> Hm. Yea, the perl thing is my fault - you should be able to get past it with\n>> -Dperl=disabled, and I'll take a look at fixing the perl detection. (*)\n> This is a weird one. I don't know much about msys, so it's probably related to\n> that. Perl spits out /usr/lib/perl5/core_perl/ as its archlibexp. 
According to\n> shell commands that exists, but not according to msys's own python\n>\n> $ /mingw64/bin/python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE'; print(f'does {p} exist:', os.path.exists(p))\"\n> does /usr/lib/perl5/core_perl/CORE exist: False\n>\n> $ ls -ld /usr/lib/perl5/core_perl/CORE\n> drwxr-xr-x 1 anfreund anfreund 0 Oct 10 10:19 /usr/lib/perl5/core_perl/CORE\n\n\nLooks to me like a python issue:\n\n\n# perl -e 'my $p = \"/usr/lib/perl5/core_perl/CORE\"; print qq(does $p\nexist: ), -e $p, qq{\\n};'\ndoes /usr/lib/perl5/core_perl/CORE exist: 1\n\n# python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE';\nprint(f'does {p} exist:', os.path.exists(p))\"\ndoes /usr/lib/perl5/core_perl/CORE exist: False\n\n# cygpath -m /usr/lib/perl5/core_perl/CORE\nC:/tools/msys64/usr/lib/perl5/core_perl/CORE\n\n# python -c \"import os; p =\n'C:/tools/msys64/usr/lib/perl5/core_perl/CORE'; print(f'does {p}\nexist:', os.path.exists(p))\"\ndoes C:/tools/msys64/usr/lib/perl5/core_perl/CORE exist: True\n\n\nClearly python is not understanding msys virtualized paths.\n\n\n>\n>\n> I guess I should figure out how to commandline install msys and add it to CI.\n>\n\n\nhere's what I do:\n\n\n # msys2 outputs esc-[3J which clears the screen's scroll buffer. Nasty.\n # so we redirect the output\n # find the log in c:\\Windows\\System32 if needed\n choco install -y --no-progress --limit-output msys2 > msys2inst.log\n c:\\tools\\msys64\\usr\\bin\\bash -l\n '/c/vfiles/windows-uploads/msys2-packages.sh'\n\nHere's what's in msys-packages.sh:\n\n\n pacman -S --needed --noconfirm \\\n     base-devel \\\n     msys/git \\\n     msys/ccache \\\n     msys/vim  \\\n     msys/perl-Crypt-SSLeay \\\n     mingw-w64-clang-x86_64-toolchain \\\n     mingw-w64-x86_64-toolchain\n\n # could do: pacman -S --needed --noconfirm development\n # this is more economical. 
These should cover most of the things you\n might\n # want to configure with\n\n pacman -S --needed --noconfirm \\\n        msys/gettext-devel \\\n        msys/icu-devel \\\n        msys/libiconv-devel \\\n        msys/libreadline-devel \\\n        msys/libxml2-devel \\\n        msys/libxslt-devel \\\n        msys/openssl-devel \\\n        msys/zlib-devel\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:37:04 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 2:23 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-12 14:11:39 -0400, Andrew Dunstan wrote:\n>> On 10/12/21 12:59 PM, Andres Freund wrote:\n>>> If you repro the hanging, what's the last bit in meson-logs/meson-log.txt?\n>> Here's the entire thing\n>> Sanity check compiler command line: ccache cc sanitycheckc.c -o\n>> sanitycheckc.exe -D_FILE_OFFSET_BITS=64\n>> Sanity check compile stdout:\n>>\n>> -----\n>> Sanity check compile stderr:\n>>\n>> -----\n>>\n>> meson.build:1:0: ERROR: Compiler ccache cc can not compile programs.\n> Huh, it's not a question of gcc vs cc, it's that meson automatically uses\n> ccache. 
And it looks like msys's ccache is broken at the moment (installed\n> yesterday):\n>\n> $ ccache --version\n> ccache version 4.4.1\n> ...\n>\n> $ echo > test.c\n> $ ccache cc -c test.c\n> Segmentation fault (core dumped)\n> ..\n>\n> not sure how that leads to hanging, but it's not too surprising that things\n> don't work out after that...\n>\n\nYes, I've had to disable ccache on fairywren.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 14:42:27 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 09:15:41 -0700, Andres Freund wrote:\n> > For example, at the time, gcc on macOS was not supported. Meson thought, if\n> > you are on macOS, you are surely using the Apple compiler, and it supports\n> > these options.\n>\n> I'm pretty sure this one now can just be overridden with CC=gcc. It can on\n> linux and windows, but I don't have ready interactive access with a mac\n> (leaving cirrus asside, which now has a \"start a terminal\" option...).\n\nIt was a tad more complicated. But only because it took me a while to figure\nout how to make gcc on macos actually work, independent of meson. 
Initially\ngcc was always failing with errors about not finding the linker, and\ninstalling binutils was a dead end.\n\nTurns out just using a gcc at a specific path doesn't work, it ends up using\nwrong internal binaries or something like that.\n\nOnce I got to that, the meson part was easy:\n\n$ export PATH=\"/usr/local/opt/gcc/bin:$PATH\"\n$ CC=gcc-11 meson setup build-gcc\n ...\n C compiler for the host machine: gcc-11 (gcc 11.2.0 \"gcc-11 (Homebrew GCC 11.2.0) 11.2.0\")\n ...\n$ cd build-gcc\n$ ninja test\n...\n\n 181/181 postgresql:tap+subscription / subscription/t/100_bugs.pl OK 17.83s 5 subtests passed\n\n\n Ok: 180\n Expected Fail: 0\n Fail: 0\n Unexpected Pass: 0\n Skipped: 1\n Timeout: 0\n\n\n\nOne thing that is nice with meson's testrunner is that it can parse the output\nof tap tests and recognizes the number of completed / failed subtests. I\nwonder whether we could make pg_regress' output tap compliant without the\noutput quality suffering too much.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 12:01:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 2:37 PM, Andrew Dunstan wrote:\n> On 10/12/21 2:09 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-10-12 09:59:26 -0700, Andres Freund wrote:\n>>> On 2021-10-12 11:50:03 -0400, Andrew Dunstan wrote:\n>>>> It hung because it expected the compiler to be 'ccache cc'. Hanging in\n>>>> such a case is kinda unforgivable. I remedied that by setting 'CC=gcc'\n>>>> but it then errored out looking for perl libs. I think msys2 is going to\n>>>> be a bit difficult here :-(\n>>> Hm. Yea, the perl thing is my fault - you should be able to get past it with\n>>> -Dperl=disabled, and I'll take a look at fixing the perl detection. (*)\n>> This is a weird one. I don't know much about msys, so it's probably related to\n>> that. 
Perl spits out /usr/lib/perl5/core_perl/ as its archlibexp. According to\n>> shell commands that exists, but not according to msys's own python\n>>\n>> $ /mingw64/bin/python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE'; print(f'does {p} exist:', os.path.exists(p))\"\n>> does /usr/lib/perl5/core_perl/CORE exist: False\n>>\n>> $ ls -ld /usr/lib/perl5/core_perl/CORE\n>> drwxr-xr-x 1 anfreund anfreund 0 Oct 10 10:19 /usr/lib/perl5/core_perl/CORE\n>\n> Looks to me like a python issue:\n>\n>\n> # perl -e 'my $p = \"/usr/lib/perl5/core_perl/CORE\"; print qq(does $p\n> exist: ), -e $p, qq{\\n};'\n> does /usr/lib/perl5/core_perl/CORE exist: 1\n>\n> # python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE';\n> print(f'does {p} exist:', os.path.exists(p))\"\n> does /usr/lib/perl5/core_perl/CORE exist: False\n>\n> # cygpath -m /usr/lib/perl5/core_perl/CORE\n> C:/tools/msys64/usr/lib/perl5/core_perl/CORE\n>\n> # python -c \"import os; p =\n> 'C:/tools/msys64/usr/lib/perl5/core_perl/CORE'; print(f'does {p}\n> exist:', os.path.exists(p))\"\n> does C:/tools/msys64/usr/lib/perl5/core_perl/CORE exist: True\n>\n>\n> Clearly python is not understanding msys virtualized paths.\n\n\nIt's a matter of which python you use. The one that understands msys\npaths is msys/python. The mingw64 packages are normally pure native\nwindows and so don't understand msys paths. I know it's confusing :-(\n\n\n# /usr/bin/python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE';\nprint(f'does {p} exist:', os.path.exists(p))\"\ndoes /usr/lib/perl5/core_perl/CORE exist: True\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:16:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 14:37:04 -0400, Andrew Dunstan wrote:\n> On 10/12/21 2:09 PM, Andres Freund wrote:\n> >> Hm. 
Yea, the perl thing is my fault - you should be able to get past it with\n> >> -Dperl=disabled, and I'll take a look at fixing the perl detection. (*)\n> > This is a weird one. I don't know much about msys, so it's probably related to\n> > that. Perl spits out /usr/lib/perl5/core_perl/ as its archlibexp. According to\n> > shell commands that exists, but not according to msys's own python\n> >\n> > $ /mingw64/bin/python -c \"import os; p = '/usr/lib/perl5/core_perl/CORE'; print(f'does {p} exist:', os.path.exists(p))\"\n> > does /usr/lib/perl5/core_perl/CORE exist: False\n> >\n> > $ ls -ld /usr/lib/perl5/core_perl/CORE\n> > drwxr-xr-x 1 anfreund anfreund 0 Oct 10 10:19 /usr/lib/perl5/core_perl/CORE\n\n> Looks to me like a python issue:\n\n> Clearly python is not understanding msys virtualized paths.\n\nAh, it's a question of the *wrong* python being used :/. I somehow ended up\nwith both a mingw and an msys python, with the mingw python taking preference\nover the msys one. The latter one does understand such paths.\n\n\n\n> > I guess I should figure out how to commandline install msys and add it to CI.\n\n> here's what I do:\n\nThanks!\n\n\nDoes that recipe get you to a build where ./configure --with-perl succeeds?\n\nI see this here:\n\nchecking for Perl archlibexp... /usr/lib/perl5/core_perl\nchecking for Perl privlibexp... /usr/share/perl5/core_perl\nchecking for Perl useshrplib... true\nchecking for CFLAGS recommended by Perl... -DPERL_USE_SAFE_PUTENV -U__STRICT_ANSI__ -D_GNU_SOURCE -march=x86-64 -mtune=generic -O2 -pipe -fwrapv -fno-strict-aliasing -fstack-protector-strong\nchecking for CFLAGS to compile embedded Perl... -DPERL_USE_SAFE_PUTENV\nchecking for flags to link embedded Perl... 
no\nconfigure: error: could not determine flags for linking embedded Perl.\nThis probably means that ExtUtils::Embed or ExtUtils::MakeMaker is not\ninstalled.\n\nIf I just include perl.h from a test file with gcc using the above flags it\nfails to compile:\n$ echo '#include <perl.h>' > test.c\n$ gcc -DPERL_USE_SAFE_PUTENV -U__STRICT_ANSI__ -D_GNU_SOURCE -march=x86-64 -mtune=generic -O2 -pipe -fwrapv -fno-strict-aliasing -fstack-protector-strong test.c -c -I /c/dev/msys64/usr/lib/perl5/core_perl/CORE\nIn file included from test.c:1:\nC:/dev/msys64/usr/lib/perl5/core_perl/CORE/perl.h:1003:13: fatal error: sys/wait.h: No such file or directory\n 1003 | # include <sys/wait.h>\n\nand ldopts bleats\n\n$ perl -MExtUtils::Embed -e ldopts\nWarning (mostly harmless): No library found for -lpthread\nWarning (mostly harmless): No library found for -ldl\n -Wl,--enable-auto-import -Wl,--export-all-symbols -Wl,--enable-auto-image-base -fstack-protector-strong -L/usr/lib/perl5/core_perl/CORE -lperl -lcrypt\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 12:29:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Oct 12, 2021 at 4:37 AM Andres Freund <andres@anarazel.de> wrote:\n\n[Meson prototype]\n\nThe build code looks pretty approachable for someone with no prior\nexposure, and feels pretty nice when running it (I couldn't get a build\nworking but I'll leave that aside for now).\n\n> As far as I can tell the only OS that postgres currently supports that\n> meson doesn't support is HPUX. It'd likely be fairly easy to add\n> gcc-on-hpux support, a chunk more to add support for the proprietary\n> ones.\n\nThat would also have to work for all the dependencies, which were displayed\nto me as:\n\nninja, gdbm, ca-certificates, openssl@1.1, readline, sqlite and python@3.9\n\nAlso, could utility makefile targets be made to work? 
I'm thinking in\nparticular of update-unicode and reformat-dat-files, for example.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Tue, 12 Oct 2021 15:55:22 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 3:29 PM, Andres Freund wrote:\n>\n> Does that recipe get you to a build where ./configure --with-perl succeeds?\n>\n> I see this here:\n>\n> checking for Perl archlibexp... /usr/lib/perl5/core_perl\n> checking for Perl privlibexp... /usr/share/perl5/core_perl\n> checking for Perl useshrplib... true\n> checking for CFLAGS recommended by Perl... -DPERL_USE_SAFE_PUTENV -U__STRICT_ANSI__ -D_GNU_SOURCE -march=x86-64 -mtune=generic -O2 -pipe -fwrapv -fno-strict-aliasing -fstack-protector-strong\n> checking for CFLAGS to compile embedded Perl... -DPERL_USE_SAFE_PUTENV\n> checking for flags to link embedded Perl... 
no\n> configure: error: could not determine flags for linking embedded Perl.\n> This probably means that ExtUtils::Embed or ExtUtils::MakeMaker is not\n> installed.\n>\n> If I just include perl.h from a test file with gcc using the above flags it\n> fails to compile:\n\n\nYou need to build against a native perl, like Strawberry or ActiveState.\n(I have had mixed success with Strawberry) You do that by putting a path\nto it at the start of the PATH. The wrinkle in this is that you need\nprove to point to one that understands virtual paths. So you do\nsomething like this:\n\n\nPATH=\"/c/perl/bin:$PATH\" PROVE=/bin/core_perl/prove configure ...\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 16:02:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 16:02:14 -0400, Andrew Dunstan wrote:\n> You need to build against a native perl, like Strawberry or ActiveState.\n> (I have had mixed success with Strawberry)\n\nDo you understand why that is needed?\n\n\n> You do that by putting a path to it at the start of the PATH. The wrinkle in\n> this is that you need prove to point to one that understands virtual\n> paths. So you do something like this:\n> \n> \n> PATH=\"/c/perl/bin:$PATH\" PROVE=/bin/core_perl/prove configure ...\n\nOh my.\n\nI'll try that later... 
I wonder if we could make this easier from our side?\nThis is a lot of magic to know.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:42:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 15:55:22 -0400, John Naylor wrote:\n> On Tue, Oct 12, 2021 at 4:37 AM Andres Freund <andres@anarazel.de> wrote:\n> The build code looks pretty approachable for someone with no prior\n> exposure, and feels pretty nice when running it\n\nThat's part of what attracted me...\n\n\n> (I couldn't get a build working but I'll leave that aside for now).\n\nIf you want to do that separately, I'll try to fix it.\n\n\n> > As far as I can tell the only OS that postgres currently supports that\n> > meson doesn't support is HPUX. It'd likely be fairly easy to add\n> > gcc-on-hpux support, a chunk more to add support for the proprietary\n> > ones.\n> \n> That would also have to work for all the dependencies, which were displayed\n> to me as:\n> \n> ninja, gdbm, ca-certificates, openssl@1.1, readline, sqlite and python@3.9\n\nmeson does depend on ninja (to execute the build) and of course python. But\nthe rest should be optional dependencies. ninja builds without any\ndependencies as long as you don't change its parser sources. python builds on\naix, hpux etc.\n\nNot sure what way gdbm openssl@1.1 and sqlite are pulled in? I assume readline\nis for python...\n\n\n> Also, could utility makefile targets be made to work? I'm thinking in\n> particular of update-unicode and reformat-dat-files, for example.\n\nYes, that shouldn't be a problem. 
You can run arbitrary code in targets\n(there's plenty need for that already in what I have so far).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:59:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "út 12. 10. 2021 v 19:17 odesílatel Andres Freund <andres@anarazel.de> napsal:\n>\n> Hi,\n>\n> On 2021-10-12 17:21:50 +0200, Josef Šimánek wrote:\n> > > # build (uses automatically as many cores as available)\n> > > ninja\n> >\n> > I'm getting errors at this step. You can find my output at\n> > https://pastebin.com/Ar5VqfFG. Setup went well without errors. Is that\n> > expected for now?\n>\n> Thanks, that's helpful. And no, that's not expected (*), it should be fixed.\n>\n> What OS / distribution / version is this?\n\nFedora 34 (64 bit)\n\n> Can you build postgres \"normally\" with --with-gss? Seems like we're ending up\n> with a version of gssapi that we're not compatible with.\n\nYes, I can.\n\n> You should be able to get past this by disabling gss using meson configure\n> -Dgssapi=disabled.\n\nI tried to clean and start from scratch, but I'm getting different\nerror probably related to wrongly configured JIT (LLVM wasn't found\nduring meson setup). 
I'll debug on my side to provide more info.\n\nWhole build error could be found at https://pastebin.com/hCFqcPvZ.\nSetup log could be found at https://pastebin.com/wjbE1w56.\n\n> Greetings,\n>\n> Andres Freund\n>\n> * except kinda, in the sense that I'd expect it to be buggy, given that I've\n> run it only on a few machines and it's very, uh, bleeding edge\n\n\n", "msg_date": "Wed, 13 Oct 2021 01:19:27 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 01:19:27 +0200, Josef Šimánek wrote:\n> I tried to clean and start from scratch, but I'm getting different\n> error probably related to wrongly configured JIT (LLVM wasn't found\n> during meson setup). I'll debug on my side to provide more info.\n\n../src/backend/jit/jit.c:91:73: error: ‘DLSUFFIX’ undeclared (first use in this function)\n 91 | snprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path, jit_provider, DLSUFFIX);\n | ^~~~~~~~\n\nThis *very* likely is related to building in a source tree that also contains\na \"non-meson\" build \"in place\". The problem is that the meson build picks up\nthe pg_config.h generated by ./configure in the \"normal\" build, rather than\nthe one meson generated itself.\n\nYou'd need to execute make distclean or such, or use a separate git checkout.\n\nI forgot about this issue because I only ever build postgres from outside the\nsource-tree (by invoking configure from a separate directory), so there's\nnever build products in it. 
I think at least I need to make the build emit a\nwarning / error if there's a pg_config.h in the source tree...\n\n\nThis is the part of the jit code that's built regardless of llvm availability\n- you'd get the same error in a few other places unrelated to jit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 16:54:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 13:42:56 -0700, Andres Freund wrote:\n> On 2021-10-12 16:02:14 -0400, Andrew Dunstan wrote:\n> > You do that by putting a path to it at the start of the PATH. The wrinkle in\n> > this is that you need prove to point to one that understands virtual\n> > paths. So you do something like this:\n> > \n> > \n> > PATH=\"/c/perl/bin:$PATH\" PROVE=/bin/core_perl/prove configure ...\n> \n> Oh my.\n> \n> I'll try that later... I wonder if we could make this easier from our side?\n> This is a lot of magic to know.\n\nI managed to get this working. At first it failed because I don't have\npexports - it's not available inside msys as far as I could tell. And seems to\nbe unmaintained. But replacing pexports with gendef fixed that.\n\nThere's this comment in src/pl/plperl/GNUmakefile\n\n# Perl on win32 ships with import libraries only for Microsoft Visual C++,\n# which are not compatible with mingw gcc. 
Therefore we need to build a\n# new import library to link with.\n\nbut I seem to be able to link fine without going through that song-and-dance?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Oct 2021 18:03:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "> On 12 Oct 2021, at 21:01, Andres Freund <andres@anarazel.de> wrote:\n\n> One thing that is nice with meson's testrunner is that it can parse the output\n> of tap tests and recognizes the number of completed / failed subtests. I\n> wonder whether we could make pg_regress' output tap compliant without the\n> output quality suffering too much.\n\nI added a --tap option for TAP output to pg_regress together with Jinbao Chen\nfor giggles and killing some time a while back. It's not entirely done and\nsort of PoC, but most of it works. Might not be of interest here, but in case\nit is I've refreshed it slightly and rebased it. 
There might be better ways to\ndo it, but the aim was to make the diff against the guts of pg_regress small\nand instead extract output functions for the different formats.\n\nIt omits the test timings, but that could be added either as a diagnostic line\nfollowing each status or as a YAML block in TAP 13 (the attached is standard\nTAP, not version 13 but the change would be trivial).\n\nIf it's helpful and there's any interest for this I'm happy to finish it up now.\n\nOne thing that came out of this, is that we don't really handle the ignored\ntests in the way the code thinks it does for normal output, the attached treats\nignored tests as SKIP tests.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 13 Oct 2021 13:54:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/12/21 9:03 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-12 13:42:56 -0700, Andres Freund wrote:\n>> On 2021-10-12 16:02:14 -0400, Andrew Dunstan wrote:\n>>> You do that by putting a path to it at the start of the PATH. The wrinkle in\n>>> this is that you need prove to point to one that understands virtual\n>>> paths. So you do something like this:\n>>>\n>>>\n>>> PATH=\"/c/perl/bin:$PATH\" PROVE=/bin/core_perl/prove configure ...\n>> Oh my.\n>>\n>> I'll try that later... I wonder if we could make this easier from our side?\n>> This is a lot of magic to know.\n> I managed to get this working. At first it failed because I don't have\n> pexports - it's not available inside msys as far as I could tell. And seems to\n> be unmaintained. But replacing pexports with gendef fixed that.\n>\n> There's this comment in src/pl/plperl/GNUmakefile\n>\n> # Perl on win32 ships with import libraries only for Microsoft Visual C++,\n> # which are not compatible with mingw gcc. 
Therefore we need to build a\n> # new import library to link with.\n>\n> but I seem to be able to link fine without going through that song-and-dance?\n>\n\n\nIt looks like you're not building a native postgres, but rather one\ntargeted at msys. To build one that's native (i.e. runs without any\npresence of msys) you need to do these things before building:\n\n MSYSTEM=MINGW64\n MSYSTEM_CHOST=x86_64-w64-mingw32\n PATH=\"/mingw64/bin:$PATH\"\n\npexports will be in the resulting path, and the build will use the\nnative compiler.\n\nYou can use fairywren's config as a guide.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 08:55:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Oct 12, 2021 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-10-12 15:55:22 -0400, John Naylor wrote:\n> > (I couldn't get a build working but I'll leave that aside for now).\n>\n> If you want to do that separately, I'll try to fix it.\n\nOkay, I pulled the latest commits and tried again:\n\n[51/950] Compiling C object\nsrc/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o\nFAILED: src/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o\nccache cc -Isrc/interfaces/libpq/libpq.5.dylib.p -Isrc/interfaces/libpq\n-I../src/interfaces/libpq -Isrc/port -I../src/port -Isrc/include\n-I../src/include\n-I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers\n-I/usr/local/opt/readline/include -I/usr/local/opt/gettext/include\n-I/usr/local/opt/zlib/include -I/usr/local/opt/openssl/include\n-fcolor-diagnostics -Wall -Winvalid-pch -Wextra -O0 -g -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fno-strict-aliasing\n-fwrapv -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security 
-Wdeclaration-after-statement\n-Wno-unused-command-line-argument -Wno-missing-field-initializers\n-Wno-sign-compare -Wno-unused-parameter -msse4.2\n-F/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework\n-DFRONTEND -MD -MQ src/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o -MF\nsrc/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o.d -o\nsrc/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o -c\n../src/interfaces/libpq/fe-connect.c\nIn file included from ../src/interfaces/libpq/fe-connect.c:72:\nIn file included from\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers/ldap.h:1:\n\n[the last line is repeated a bunch of times, then...]\n\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers/ldap.h:1:10:\nerror: #include nested too deeply\n#include <ldap.h>\n ^\n\nThen the expected \"undeclared identifier\" errors that would arise from a\nmissing header. I tried compiling --with-ldap with the Make build, and only\ngot warnings about deprecated declarations -- that build completed.\n\nI tried disabling ldap with the Meson build but I'll spare the details of\nwhat went wrong there in case I did something wrong, so we can take things\none step at a time.\n\n> > That would also have to work for all the dependencies, which were\ndisplayed\n> > to me as:\n> >\n> > ninja, gdbm, ca-certificates, openssl@1.1, readline, sqlite and\npython@3.9\n>\n> meson does depend on ninja (to execute the build) and of course python.\nBut\n> the rest should be optional dependencies. ninja builds without any\n> dependencies as long as you don't change its parser sources. python\nbuilds on\n> aix, hpux etc.\n>\n> Not sure what way gdbm openssl@1.1 and sqlite are pulled in? 
I assume\nreadline\n> is for python...\n\nHmm, weird.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 13 Oct 2021 11:51:03 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 11:51:03 -0400, John Naylor wrote:\n> On Tue, Oct 12, 2021 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2021-10-12 15:55:22 -0400, John Naylor wrote:\n> > > (I couldn't get a build working but I'll leave that aside for now).\n> >\n> > If you want to do that separately, I'll try to fix it.\n> \n> Okay, I pulled the latest commits and tried again:\n> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers/ldap.h:1:\n> \n> [the last line is repeated a bunch of times, then...]\n\nOh. I actually saw that on CI at some point... That one is definitely\nodd. 
Currently CI for OSX builds like\n\n - brew install make coreutils ccache icu4c lz4 tcl-tk openldap\n - brew install meson ninja python@3.9\n..\n PKG_CONFIG_PATH=\"/usr/local/opt/openssl/lib/pkgconfig:$PKG_CONFIG_PATH\"\n PKG_CONFIG_PATH=\"/usr/local/opt/icu4c/lib/pkgconfig:$PKG_CONFIG_PATH\"\n PKG_CONFIG_PATH=\"/usr/local/opt/openldap/lib/pkgconfig:$PKG_CONFIG_PATH\"\n\n export PKG_CONFIG_PATH\n\n meson setup --buildtype debug -Dcassert=true -Dssl=openssl build\n\nbut I set that up knowing little about macos.\n\n\nFor the autoconf build CI currently does something similar via\n LIBS=\"/usr/local/lib:$LIBS\"\n INCLUDES=\"/usr/local/include:$INCLUDES\"\n...\n LIBS=\"/usr/local/opt/openldap/lib:$LIBS\"\n INCLUDES=\"/usr/local/opt/openldap/include:$INCLUDES\"\n ...\n --with-includes=\"$INCLUDES\" \\\n --with-libs=\"$LIBS\" \\\n\nare you doing something like that? Or does it work for you without? I vaguely\nrecall hitting a similar problem as you report when not passing\n/usr/local/... to configure.\n\n\n> i tried disabling ldap with the meson build but i'll spare the details of\n> what went wrong there in case i did something wrong, so we can take things\n> one step at a time.\n\nyou can change it for an existing builddir with\nmeson configure -dldap=disabled or when setting up a new builddir by passing\n-dldap=disabled at that time.\n\n\n> > > ninja, gdbm, ca-certificates, openssl@1.1, readline, sqlite and\n> python@3.9\n> >\n> > meson does depend on ninja (to execute the build) and of course python.\n> but\n> > the rest should be optional dependencies. ninja builds without any\n> > dependencies as long as you don't change its parser sources. python\n> builds on\n> > aix, hpux etc.\n> >\n> > not sure what way gdbm openssl@1.1 and sqlite are pulled in? 
i assume\n> readline\n> > is for python...\n> \n> Hmm, weird.\n\nThey're homebrew python deps: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/python@3.9.rb#L28\nwhich are optional things enabled explicitly:\nhttps://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/python@3.9.rb#L123\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 09:37:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Wed, Oct 13, 2021 at 12:37 PM Andres Freund <andres@anarazel.de> wrote:\n\n> For the autoconf build CI currently does something similar via\n> LIBS=\"/usr/local/lib:$LIBS\"\n> INCLUDES=\"/usr/local/include:$INCLUDES\"\n> ...\n> LIBS=\"/usr/local/opt/openldap/lib:$LIBS\"\n> INCLUDES=\"/usr/local/opt/openldap/include:$INCLUDES\"\n> ...\n> --with-includes=\"$INCLUDES\" \\\n> --with-libs=\"$LIBS\" \\\n>\n> are you doing something like that? Or does it work for you without? I\nvaguely\n> recall hitting a similar problem as you report when not passing\n> /usr/local/... to configure.\n\nI didn't do anything like that for the autoconf build. 
I have in the past\ndone things retail, like\n\n--with-icu ICU_CFLAGS='-I/usr/local/opt/icu4c/include/'\nICU_LIBS='-L/usr/local/opt/icu4c/lib/ -licui18n -licuuc -licudata'\n\n> > i tried disabling ldap with the meson build but i'll spare the details\nof\n> > what went wrong there in case i did something wrong, so we can take\nthings\n> > one step at a time.\n>\n> you can change it for an existing builddir with\n> meson configure -dldap=disabled or when setting up a new builddir by\npassing\n> -dldap=disabled at that time.\n\nSomehow our emails got lower-cased down here, but I tried it with capital D:\n\nmeson configure -Dldap=disabled\n\ninside the build dir and got this:\n\n../meson.build:278:2: ERROR: Tried to assign the invalid value \"None\" of\ntype NoneType to variable.\n\nLine 278 is\n\n  ldap_r = ldap = dependency('', required : false)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Oct 2021 13:19:36 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 08:55:38 -0400, Andrew Dunstan wrote:\n> On 10/12/21 9:03 PM, Andres Freund wrote:\n> > I managed to get this working. At first it failed because I don't have\n> > pexports - it's not available inside msys as far as I could tell. And seems to\n> > be unmaintained. But replacing pexports with gendef fixed that.\n> >\n> > There's this comment in src/pl/plperl/GNUmakefile\n> >\n> > # Perl on win32 ships with import libraries only for Microsoft Visual C++,\n> > # which are not compatible with mingw gcc. Therefore we need to build a\n> > # new import library to link with.\n> >\n> > but I seem to be able to link fine without going through that song-and-dance?\n> >\n>\n>\n> It looks like you're not building a native postgres, but rather one\n> targeted at msys. To build one that's native (i.e. 
runs without any\n> presence of msys) you need to do these things before building:\n>\n> MSYSTEM=MINGW64\n> MSYSTEM_CHOST=x86_64-w64-mingw32\n> PATH=\"/mingw64/bin:$PATH\"\n\nI had a config equivalent to this (slight difference in PATH, but the same gcc\nbeing picked), and I just verified that it still works if I set up PATH like\nthat. I get a working plperl out of it. Without msys on PATH or such.\n\nwhere perl526.dll\nC:\\perl\\strawberry-5.26.3.1-64bit\\perl\\bin\\perl526.dll\n\ndumpbin /imports 'C:/Users/anfreund/src/pg-meson/build-mingw/tmp_install/lib/plperl.dll'|grep dll\n\nDump of file C:\\Users\\anfreund\\src\\pg-meson\\build-mingw\\tmp_install\\lib\\plperl.dll\n KERNEL32.dll\n msvcrt.dll\n perl526.dll\n\ndumpbin /imports .\\build-mingw\\tmp_install\\bin\\postgres.exe|grep dll\n ADVAPI32.dll\n KERNEL32.dll\n msvcrt.dll\n Secur32.dll\n WLDAP32.dll\n WS2_32.dll\n\ndo $$elog(NOTICE, \"blob\");$$ language plperl;\nNOTICE: blob\nDO\n\nTo me this looks like it's a plperl built without the import file recreation,\nwithout being msys dependent?\n\n\n> pexports will be in the resulting path, and the build will use the\n> native compiler.\n\nI don't see pexports anywhere in the msys installation. 
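(For concreteness, a gendef-based equivalent of the dropped pexports step would look roughly like the following; perl532.dll is an assumed example name, only the filename bookkeeping is executed, and the tool invocations are left as comments.)

```shell
# Sketch: regenerate a MinGW import library for the Perl DLL using gendef
# (from MSYS2's mingw-w64-tools) instead of pexports.
dll=perl532.dll            # assumed example; use the DLL your Perl ships
def=${dll%.dll}.def
implib=lib${dll%.dll}.a
echo "$def $implib"        # -> perl532.def libperl532.a
# On an MSYS2 mingw shell one would then run:
#   gendef "$dll"                   # writes perl532.def
#   dlltool -d "$def" -l "$implib"  # builds the import library to link with
```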
I can see it available\non sourceforge, and I see a few others asking where to get it from in the\ncontext of msys, and being pointed to manually downloading it.\n\nSeems like we should consider using gendef instead of pexports, given it's\navailable in msys?\n\n$ pacman -Fy\n$ pacman -F gendef.exe\n...\nmingw64/mingw-w64-x86_64-tools-git 9.0.0.6316.acdc7adc9-1 (mingw-w64-x86_64-toolchain) [installed]\n mingw64/bin/gendef.exe\n..\n$ pacman -F pexports.exe\n$ pacman -Fx pexports\n<bunch of packages containing smtpexports.h>\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 10:26:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 13:19:36 -0400, John Naylor wrote:\n> On Wed, Oct 13, 2021 at 12:37 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > For the autoconf build CI currently does something similar via\n> > LIBS=\"/usr/local/lib:$LIBS\"\n> > INCLUDES=\"/usr/local/include:$INCLUDES\"\n> > ...\n> > LIBS=\"/usr/local/opt/openldap/lib:$LIBS\"\n> > INCLUDES=\"/usr/local/opt/openldap/include:$INCLUDES\"\n> > ...\n> > --with-includes=\"$INCLUDES\" \\\n> > --with-libs=\"$LIBS\" \\\n> >\n> > are you doing something like that? Or does it work for you without? I\n> vaguely\n> > recall hitting a similar problem as you report when not passing\n> > /usr/local/... to configure.\n> \n> I didn't do anything like that for the autoconf build. 
I have in the past\n> done things retail, like\n\nI'll try to see how this works / what causes the breakage.\n\n\n> Somehow our emails got lower-cased down here, but I tried it with capital D:\n\n:)\n\n\n> meson configure -Dldap=disabled\n>\n> inside the build dir and got this:\n>\n> ../meson.build:278:2: ERROR: Tried to assign the invalid value \"None\" of\n> type NoneType to variable.\n>\n> Line 278 is\n>\n> ldap_r = ldap = dependency('', required : false)\n\nOops, I broke that when trying to clean things up. I guess I write too much C\n;). It needs to be two lines.\n\nI pushed the fix for that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 10:42:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Wed, Oct 13, 2021 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n> I pushed the fix for that.\n\nOk great, it builds now! :-) Now something's off with dynamic loading.\nThere are libraries in ./tmp_install/usr/local/lib/ but apparently initdb\ndoesn't know to look for them there:\n\n$ cat /Users/john/pgdev/meson/build/testrun/main/pg_regress/log/initdb.log\ndyld: Library not loaded: /usr/local/lib/libpq.5.dylib\n Referenced from:\n/Users/john/pgdev/meson/build/tmp_install/usr/local/bin/initdb\n Reason: image not found\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Oct 2021 14:40:19 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/13/21 1:26 PM, Andres Freund wrote:\n>\n>> pexports will be in the resulting path, and the build will use the\n>> native compiler.\n> I don't see pexports anywhere in the msys installation. I can see it available\n> on sourceforge, and I see a few others asking where to get it from in the\n> context of msys, and being pointed to manually downloading it.\n\n\n\nWeird. fairywren has it, which means that it must have been removed from\nthe packages at some stage, fairly recently as fairywren isn't that old.\nI just confirmed the absence on a 100% fresh install.\n\n\nIt is in Strawberry's c/bin directory.\n\n\n>\n> Seems like we should consider using gendef instead of pexports, given it's\n> available in msys?\n\n\nYeah. It's missing on my ancient msys animal (frogmouth), but it doesn't\nbuild --with-perl.\n\n\njacana seems to have it.\n\n\nIf you prep a patch I'll test it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 16:06:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 16:06:32 -0400, Andrew Dunstan wrote:\n> If you prep a patch I'll test it.\n\nWell, right now I'm wondering if the better fix is to just remove the whole\nwin32 block. 
I don't know how far back, but afaict it's not needed. Seems to\nhave been needed for narwhal at some point, according to 02b61dd08f99. But\nnarwhal is long dead.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 14:46:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "st 13. 10. 2021 v 1:54 odesílatel Andres Freund <andres@anarazel.de> napsal:\n>\n> Hi,\n>\n> On 2021-10-13 01:19:27 +0200, Josef Šimánek wrote:\n> > I tried to clean and start from scratch, but I'm getting different\n> > error probably related to wrongly configured JIT (LLVM wasn't found\n> > during meson setup). I'll debug on my side to provide more info.\n>\n> ../src/backend/jit/jit.c:91:73: error: ‘DLSUFFIX’ undeclared (first use in this function)\n> 91 | snprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path, jit_provider, DLSUFFIX);\n> | ^~~~~~~~\n>\n> This *very* likely is related to building in a source tree that also contains\n> a \"non-meson\" build \"in place\". The problem is that the meson build picks up\n> the pg_config.h generated by ./configure in the \"normal\" build, rather than\n> the one meson generated itself.\n>\n> You'd need to execute make distclean or such, or use a separate git checkout.\n>\n> I forgot about this issue because I only ever build postgres from outside the\n> source-tree (by invoking configure from a separate directory), so there's\n> never build products in it. I think at least I need to make the build emit a\n> warning / error if there's a pg_config.h in the source tree...\n\nHello, thanks for the hint. I can finally build using meson and run\nregress tests.\n\nThe only problem I do have currently is auto-detection of perl. I'm\ngetting error related to missing \"Opcode.pm\". 
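(A quick way to see which of the needed modules a given installation is missing — a sketch along the lines of the explicit `perl -M…` check proposed elsewhere in this thread; the module list mirrors the ones named there.)

```shell
# Sketch: probe each Perl module the plperl build relies on and report
# the ones that fail to load.
missing=""
for mod in Opcode ExtUtils::Embed ExtUtils::ParseXS; do
    perl -M"$mod" -e 1 2>/dev/null || missing="$missing $mod"
done
echo "missing:${missing:- none}"
```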
PERL is autodetected and\nenabled (https://pastebin.com/xfRRrDcU).\n\nI do get the same error when I enforce perl for current master build\n(./configure --with-perl). Using ./configure with perl autodetection\nskips plperl extension on my system.\n\nDisabling perl manually for meson build (meson setup build\n--reconfigure --buildtype debug -Dperl=disabled) works for me.\n\n>\n> This is the part of the jit code that's built regardless of llvm availability\n> - you'd get the same error in a few other places unrelated to jit.\n>\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 23:58:12 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/13/21 5:46 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-13 16:06:32 -0400, Andrew Dunstan wrote:\n>> If you prep a patch I'll test it.\n> Well, right now I'm wondering if the better fix is to just remove the whole\n> win32 block. I don't know how far back, but afaict it's not needed. Seems to\n> have been needed for narwhal at some point, according to 02b61dd08f99. But\n> narwhal is long dead.\n>\nOk, I'll test it out.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:11:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Thu, Oct 14, 2021 at 4:51 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers/ldap.h:1:10: error: #include nested too deeply\n> #include <ldap.h>\n> ^\n\nI vaguely recall that PostgreSQL should build OK against Apple's copy\nof OpenLDAP. 
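(For orientation, the non-framework copy of such headers sits under the SDK's usr/include; a sketch of spelling that out — the SDK root below is just an example, and on a real box `xcrun --show-sdk-path` reports the actual one.)

```shell
# Sketch: the plain SDK include dir, as opposed to the
# LDAP.framework/Headers stub that recurses into itself.
sdk="/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk"   # example path
echo "$sdk/usr/include/ldap.h"
```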
That recursive include loop is coming from a \"framework\"\nheader that contains just a couple of lines like #include <ldap.h> to\ntry to include the real header, which should also be in the include\npath, somewhere like\n/Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk/usr/include/ldap.h.\nI think we'd need to figure out where that\n-I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/LDAP.framework/Headers\ndirective is coming from and get rid of it, so we can include the real\nheader directly.\n\n\n", "msg_date": "Thu, 14 Oct 2021 17:27:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Josef Šimánek <josef.simanek@gmail.com> writes:\n\n> The only problem I do have currently is auto-detection of perl. I'm\n> getting error related to missing \"Opcode.pm\". PERL is autodetected and\n> enabled (https://pastebin.com/xfRRrDcU).\n\nYour Perl (not PERL) installation seems to be incomplete. Opcode.pm is a\ncore module, and should be in /usr/lib64/perl5, judging by the paths in\nthe error message.\n\nWhich OS is this? Some Linux distributions have separate packages for\nthe interpreter itself and the included modules, and the packages can be\nnamed confusingly. E.g. on older Redhat/Fedora versions you have to\ninstall the 'perl-core' package to get all the modules, 'perl' is just\nthe interpreter and the bare minimum set of strictly necessary modules.\n\nThey've fixed this in recent versions (Fedora 34 and Redhat 8, IIRC), so\nthat 'perl' gives you the whole bundle, and 'perl-interpreter' is the\nminimal one.\n\n\n- ilmari\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:14:39 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "čt 14. 10. 
2021 v 15:14 odesílatel Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> napsal:\n>\n> Josef Šimánek <josef.simanek@gmail.com> writes:\n>\n> > The only problem I do have currently is auto-detection of perl. I'm\n> > getting error related to missing \"Opcode.pm\". PERL is autodetected and\n> > enabled (https://pastebin.com/xfRRrDcU).\n>\n> Your Perl (not PERL) installation seems to be incomplete. Opcode.pm is a\n> core module, and should be in /usr/lib64/perl5, judging by the paths in\n> the error message.\n>\n> Which OS is this? Some Linux distributions have separate packages for\n> the interpreter itself and the included modules, and the packages can be\n> named confusingly. E.g. on older Redhat/Fedora versions you have to\n> install the 'perl-core' package to get all the modules, 'perl' is just\n> the interpreter and the bare minimum set of strictily necessary modules.\n>\n> They've fixed this in recent versions (Fedora 34 and Redhat 8, IIRC), so\n> that 'perl' gives you the hole bundle, and 'perl-interpeter' is the\n> minimal one.\n\nI'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\npackage. Anyway it behaves differently with autoconf tools and the\nmeson build system. Is perl disabled by default in the current build\nsystem?\n\n>\n> - ilmari\n\n\n", "msg_date": "Thu, 14 Oct 2021 15:19:18 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 2021-Oct-14, Josef Šimánek wrote:\n\n> I'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\n> package. Anyway it behaves differently with autoconf tools and the\n> meson build system. 
Is perl disabled by default in the current build\n> system?\n\nYes, you have to use --with-perl in order to get it.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:29:42 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Josef Šimánek <josef.simanek@gmail.com> writes:\n\n> čt 14. 10. 2021 v 15:14 odesílatel Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> napsal:\n>>\n>> Josef Šimánek <josef.simanek@gmail.com> writes:\n>>\n>> > The only problem I do have currently is auto-detection of perl. I'm\n>> > getting error related to missing \"Opcode.pm\". PERL is autodetected and\n>> > enabled (https://pastebin.com/xfRRrDcU).\n>>\n>> Your Perl (not PERL) installation seems to be incomplete. Opcode.pm is a\n>> core module, and should be in /usr/lib64/perl5, judging by the paths in\n>> the error message.\n>>\n>> Which OS is this? Some Linux distributions have separate packages for\n>> the interpreter itself and the included modules, and the packages can be\n>> named confusingly. E.g. on older Redhat/Fedora versions you have to\n>> install the 'perl-core' package to get all the modules, 'perl' is just\n>> the interpreter and the bare minimum set of strictily necessary modules.\n>>\n>> They've fixed this in recent versions (Fedora 34 and Redhat 8, IIRC), so\n>> that 'perl' gives you the hole bundle, and 'perl-interpeter' is the\n>> minimal one.\n>\n> I'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\n> package.`\n\nYes, it's a separate package, but the 'perl' package depends on all the\ncore module packages, so installing that should fix things. You appear\nto only have 'perl-interpreter' installed.\n\n> Anyway it behaves differently with autoconf tools and the meson build\n> system. 
Is perl disabled by default in the current build system?\n\nconfigure doesn't auto-detect any optional features, they have to be\nexplicitly enabled using --with-foo switches.\n\n- ilmari\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:32:49 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 10:29:42 -0300, Alvaro Herrera wrote:\n> On 2021-Oct-14, Josef Šimánek wrote:\n>\n> > I'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\n> > package. Anyway it behaves differently with autoconf tools and the\n> > meson build system. Is perl disabled by default in the current build\n> > system?\n\nHm, so it seems we should make the test separately verify that perl -M{Opcode,\nExtUtils::Embed, ExtUtils::ParseXS} doesn't fail, so that we can fail perl\ndetection with a useful message?\n\n\n> Yes, you have to use --with-perl in order to get it.\n\nWith the meson prototype I set most optional features to \"auto\", except for\nLLVM, as that increases compile times noticeably.\n\nFor configure we didn't/don't want to do much auto-detection, because that\nmakes life harder for distributors. But meson has one switch controlling all\nfeatures set to 'auto' and not explicitly enabled/disabled:\n --auto-features {enabled,disabled,auto} Override value of all 'auto' features (default: auto).\nso the argument doesn't apply to the same degree there. 
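(Concretely, a distributor who wants configure-style explicitness could pin everything at setup time — a sketch; -Dauto_features is meson's generic switch, while perl and ldap are feature names this thread already shows the prototype using.)

```shell
# Sketch: compose a meson invocation that disables all 'auto' features,
# then re-enables a chosen few explicitly.
cmd="meson setup build -Dauto_features=disabled"
for feature in perl ldap; do
    cmd="$cmd -D${feature}=enabled"
done
echo "$cmd"
# -> meson setup build -Dauto_features=disabled -Dperl=enabled -Dldap=enabled
```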
We could default\nauto-features to something else too.\n\nThere were two other reasons:\n\n1) I got tired of needing to disable zlib, readline to be able to build on\n windows.\n2) Exercising all the dependency detection / checking seems important at this\n stage\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:24:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 23:58:12 +0200, Josef Šimánek wrote:\n> st 13. 10. 2021 v 1:54 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> > This *very* likely is related to building in a source tree that also contains\n> > a \"non-meson\" build \"in place\". The problem is that the meson build picks up\n> > the pg_config.h generated by ./configure in the \"normal\" build, rather than\n> > the one meson generated itself.\n> >\n> > You'd need to execute make distclean or such, or use a separate git checkout.\n> >\n> > I forgot about this issue because I only ever build postgres from outside the\n> > source-tree (by invoking configure from a separate directory), so there's\n> > never build products in it. I think at least I need to make the build emit a\n> > warning / error if there's a pg_config.h in the source tree...\n> \n> Hello, thanks for the hint. I can finally build using meson and run\n> regress tests.\n\nI yesterday pushed code that should detect this case (with an error). Should\nnow detect the situation both when you first run configure in tree, and then\nmeson, and the other way round (by the dirty hack of ./configure touch'ing\nmeson.build at the end for in-tree builds).\n\n\n> The only problem I do have currently is auto-detection of perl. I'm\n> getting error related to missing \"Opcode.pm\". 
PERL is autodetected and\n> enabled (https://pastebin.com/xfRRrDcU).\n> \n> I do get the same error when I enforce perl for current master build\n> (./configure --with-perl). Using ./configure with perl autodetection\n> skips plperl extension on my system.\n> \n> Disabling perl manually for meson build (meson setup build\n> --reconfigure --buildtype debug -Dperl=disabled) works for me.\n\nYay, thanks for testing!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:26:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "I wrote:\n\n> Ok great, it builds now! :-) Now something's off with dynamic loading.\nThere are libraries in ./tmp_install/usr/local/lib/ but apparently initdb\ndoesn't know to look for them there:\n>\n> $ cat /Users/john/pgdev/meson/build/testrun/main/pg_regress/log/initdb.log\n> dyld: Library not loaded: /usr/local/lib/libpq.5.dylib\n> Referenced from:\n/Users/john/pgdev/meson/build/tmp_install/usr/local/bin/initdb\n> Reason: image not found\n\nAfter poking a bit more, this only happens when trying to run the tests. If\nI specify a prefix, I can install, init, and start the server just fine, so\nthat much works.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Oct 2021 15:14:16 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "čt 14. 10. 2021 v 15:32 odesílatel Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> napsal:\n>\n> Josef Šimánek <josef.simanek@gmail.com> writes:\n>\n> > čt 14. 10. 2021 v 15:14 odesílatel Dagfinn Ilmari Mannsåker\n> > <ilmari@ilmari.org> napsal:\n> >>\n> >> Josef Šimánek <josef.simanek@gmail.com> writes:\n> >>\n> >> > The only problem I do have currently is auto-detection of perl. I'm\n> >> > getting error related to missing \"Opcode.pm\". PERL is autodetected and\n> >> > enabled (https://pastebin.com/xfRRrDcU).\n> >>\n> >> Your Perl (not PERL) installation seems to be incomplete. Opcode.pm is a\n> >> core module, and should be in /usr/lib64/perl5, judging by the paths in\n> >> the error message.\n> >>\n> >> Which OS is this? Some Linux distributions have separate packages for\n> >> the interpreter itself and the included modules, and the packages can be\n> >> named confusingly. E.g. on older Redhat/Fedora versions you have to\n> >> install the 'perl-core' package to get all the modules, 'perl' is just\n> >> the interpreter and the bare minimum set of strictily necessary modules.\n> >>\n> >> They've fixed this in recent versions (Fedora 34 and Redhat 8, IIRC), so\n> >> that 'perl' gives you the hole bundle, and 'perl-interpeter' is the\n> >> minimal one.\n> >\n> > I'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\n> > package.`\n>\n> Yes, it's a separate package, but the 'perl' package depends on all the\n> core module packages, so installing that should fix things. You appear\n> to only have 'perl-interpreter' installed.\n\nYou're right. Installing \"perl\" or \"perl-Opcode\" manually fixes this\nproblem. 
Currently I only have \"perl-interpreter\" installed.\n\n> > Anyway it behaves differently with autoconf tools and the meson build\n> > system. Is perl disabled by default in the current build system?\n>\n> configure doesn't auto-detect any optional features, they have to be\n> explicitly enabled using --with-foo switches.\n>\n> - ilmari\n\n\n", "msg_date": "Thu, 14 Oct 2021 21:29:38 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "čt 14. 10. 2021 v 19:24 odesílatel Andres Freund <andres@anarazel.de> napsal:\n>\n> Hi,\n>\n> On 2021-10-14 10:29:42 -0300, Alvaro Herrera wrote:\n> > On 2021-Oct-14, Josef Šimánek wrote:\n> >\n> > > I'm using Fedora 34 and I still see perl-Opcode.x86_64 as a separate\n> > > package. Anyway it behaves differently with autoconf tools and the\n> > > meson build system. Is perl disabled by default in the current build\n> > > system?\n>\n> Hm, so it seems we should make the test separately verify that perl -M{Opcode,\n> ExtUtils::Embed, ExtUtils::ParseXS} doesn't fail, so that we can fail perl\n> detection with a useful message?\n\nI can confirm \"perl -MOpcode\" fails. ExtUtils::Embed and\nExtUtils::ParseXS are present. Looking at the local system history of\nperl-interpreter package, it seems to be installed by default on\nFedora 34. Friendly error message would be welcomed.\n\n>\n> > Yes, you have to use --with-perl in order to get it.\n>\n> With the meson prototype I set most optional features to \"auto\", except for\n> LLVM, as that increases compile times noticeably.\n>\n> For configure we didn't/don't want to do much auto-detection, because that\n> makes life harder for distributors. 
But meson has one switch controlling all\n> features set to 'auto' and not explicitly enabled/disabled:\n> --auto-features {enabled,disabled,auto} Override value of all 'auto' features (default: auto).\n> so the argument doesn't apply to the same degree there. We could default\n> auto-features to something else too.\n>\n> There were two other reasons:\n>\n> 1) I got tired of needing to disable zlib, readline to be able to build on\n> windows.\n> 2) Exercising all the dependency detection / checking seems important at this\n> stage\n\nClear, thanks for the info.\n\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 21:36:52 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn October 14, 2021 12:14:16 PM PDT, John Naylor <john.naylor@enterprisedb.com> wrote:\n>I wrote:\n>\n>> Ok great, it builds now! :-) Now something's off with dynamic loading.\n>There are libraries in ./tmp_install/usr/local/lib/ but apparently initdb\n>doesn't know to look for them there:\n>>\n>> $ cat /Users/john/pgdev/meson/build/testrun/main/pg_regress/log/initdb.log\n>> dyld: Library not loaded: /usr/local/lib/libpq.5.dylib\n>> Referenced from:\n>/Users/john/pgdev/meson/build/tmp_install/usr/local/bin/initdb\n>> Reason: image not found\n>\n>After poking a bit more, this only happens when trying to run the tests. If\n>I specify a prefix, I can install, init, and start the server just fine, so\n>that much works.\n\nIs this a Mac with SIP enabled? The Mac CI presumably has that disabled, which is why I didn't see this issue there. Probably need to implement whatever Tom figured out to do about that for the current way of running tests.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 14 Oct 2021 13:34:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Thu, Oct 14, 2021 at 4:34 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Is this a Mac with SIP enabled? The Mac CI presumably has that disabled,\nwhich is why I didn't see this issue there. Probably need to implement\nwhatever Tom figured out to do about that for the current way of running\ntests.\n\nSystem Information says it's disabled. Running \"csrutil status\" complains\nof an unsupported configuration, which doesn't sound good, so I should\nprobably go fix that independent of anything else. :-/\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Oct 2021 16:54:34 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "I wrote:\n\n> > Is this a Mac with SIP enabled? The Mac CI presumably has that\ndisabled, which is why I didn't see this issue there. Probably need to\nimplement whatever Tom figured out to do about that for the current way of\nrunning tests.\n>\n> System Information says it's disabled. Running \"csrutil status\" complains\nof an unsupported configuration, which doesn't sound good, so I should\nprobably go fix that independent of anything else. 
:-/\n\nLooking online, I wonder if the \"unsupported\" message might be overly\ncautious. In any case, I do remember turning something off to allow a\ndebugger to run. Here are all the settings, in case it matters:\n\nApple Internal: disabled\nKext Signing: enabled\nFilesystem Protections: enabled\nDebugging Restrictions: disabled\nDTrace Restrictions: enabled\nNVRAM Protections: enabled\nBaseSystem Verification: enabled\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Oct 2021 17:16:15 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 14.10.2021 23:54, John Naylor wrote:\n> On Thu, Oct 14, 2021 at 4:34 PM Andres Freund <andres@anarazel.de \n> <mailto:andres@anarazel.de>> wrote:\n> \n> > Is this a Mac with SIP enabled? The Mac CI presumably has that \n> disabled, which is why I didn't see this issue there. 
Probably need to \n> implement whatever Tom figured out to do about that for the current way \n> of running tests.\n> \n> System Information says it's disabled. Running \"csrutil status\" \n> complains of an unsupported configuration, which doesn't sound good, so \n> I should probably go fix that independent of anything else. :-/\n\n\nMaybe you could check that DYLD_LIBRARY_PATH is working for you?\n\n% DYLD_FALLBACK_LIBRARY_PATH= \nDYLD_LIBRARY_PATH=./tmp_install/usr/local/lib \n./tmp_install/usr/local/bin/psql --version\npsql (PostgreSQL) 15devel\n\n\nWithout DYLD_LIBRARY_PATH I get the error, as expected:\n\n% DYLD_FALLBACK_LIBRARY_PATH= ./tmp_install/usr/local/bin/psql --version\ndyld: Library not loaded: /usr/local/lib/libpq.5.dylib\n Referenced from: \n/Users/shinderuk/src/postgres-meson/build/./tmp_install/usr/local/bin/psql\n Reason: image not found\n\n\nI add \"DYLD_FALLBACK_LIBRARY_PATH=\" because otherwise dyld falls back to \n/usr/lib/libpq.5.dylib provided by Apple (I am testing on Catalina).\n\n% DYLD_PRINT_LIBRARIES=1 ./tmp_install/usr/local/bin/psql --version 2>&1 \n| grep libpq\ndyld: loaded: <4EDF735E-2104-32AD-BE7B-B400ABFCF57C> /usr/lib/libpq.5.dylib\n\n\nRegards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n", "msg_date": "Fri, 15 Oct 2021 00:41:20 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n>> System Information says it's disabled. Running \"csrutil status\" complains\n>> of an unsupported configuration, which doesn't sound good, so I should\n>> probably go fix that independent of anything else. :-/\n\n> Looking online, I wonder if the \"unsupported\" message might be overly\n> cautious. In any case, I do remember turning something off to allow a\n> debugger to run. 
Here are all the settings, in case it matters:\n\n> Apple Internal: disabled\n> Kext Signing: enabled\n> Filesystem Protections: enabled\n> Debugging Restrictions: disabled\n> DTrace Restrictions: enabled\n> NVRAM Protections: enabled\n> BaseSystem Verification: enabled\n\nI remember having seen that report too, after some previous software\nupgrade that had started from a \"SIP disabled\" status. I'm mostly\nguessing here, but my guess is that\n\n(a) csrutil only considers the all-enabled and all-disabled states\nof these individual flags to be \"supported\" cases.\n\n(b) some one or more of these flags came along in a macOS update,\nand if you did the update starting from a \"disabled\" state, you\nnonetheless ended up with the new flags enabled, leading to the\nmixed state that csrutil complains about.\n\nI've lost count of the number of times I've seen macOS updates\nbe sloppy about preserving non-default settings, so I don't find\ntheory (b) to be even slightly surprising.\n\nWhether the mixed state is actually problematic in any way,\nI dunno. I don't recall having had any problems before noticing\nthat that was what I had.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 17:48:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is this a Mac with SIP enabled? The Mac CI presumably has that disabled, which is why I didn't see this issue there. Probably need to implement whatever Tom figured out to do about that for the current way of running tests.\n\nAFAIR the only cases we've made work are\n\n(1) disable SIP\n\n(2) avoid the need for (1) by always doing \"make install\" before\n\"make check\".\n\nPeter E. 
did some hacking towards another solution awhile ago,\nbut IIRC it involved changing the built binaries, and I think\nwe concluded that the benefits didn't justify that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 18:00:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm, so it seems we should make the test separately verify that perl -M{Opcode,\n> ExtUtils::Embed, ExtUtils::ParseXS} doesn't fail, so that we can fail perl\n> detection with a useful message?\n\nOur existing policy is that we should check this at configure time,\nnot later. Since plperl won't work at all without Opcode, it seems\nappropriate to add a check there if you say --with-perl. I wasn't\naware that Red Hat had unbundled that from the minimal perl\ninstallation :-(.\n\nOTOH, if they've not unbundled ExtUtils::Embed or ExtUtils::ParseXS,\nI doubt it's worth the configure cycles to check for those separately.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 18:08:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 10/13/21 7:11 PM, Andrew Dunstan wrote:\n> On 10/13/21 5:46 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-10-13 16:06:32 -0400, Andrew Dunstan wrote:\n>>> If you prep a patch I'll test it.\n>> Well, right now I'm wondering if the better fix is to just remove the whole\n>> win32 block. I don't know how far back, but afaict it's not needed. Seems to\n>> have been needed for narwhal at some point, according to 02b61dd08f99. But\n>> narwhal is long dead.\n>>\n> Ok, I'll test it out.\n>\n\nconfirmed that jacana doesn't need this code to build or test plperl\n(all I did was change the test from win32 to win32x). 
There would still\nbe work to do to fix the contrib bool_plperl, jsonb_plperl and\nhstore_plperl modules.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 18:19:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Fri, Oct 15, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter E. did some hacking towards another solution awhile ago,\n> but IIRC it involved changing the built binaries, and I think\n> we concluded that the benefits didn't justify that.\n\nYeah, by now there are lots of useful blogs from various projects\nfiguring out that you can use the install_name_tool to adjust the\npaths it uses to be absolute or relative to certain magic words, like\n@executable_path/../lib/blah.dylib, which is tempting, but...\nrealistically, for serious hacking on a Mac, SIP is so annoying that\nit isn't the only reason you'll want to turn it off: it stops\ndtrace/dtruss/... 
from working, and somehow prevents debuggers from\nworking when you've ssh'd in from a remote machine with a proper\nkeyboard, and probably more things that I'm forgetting.\n\nI wish I could find the Xnu source that shows exactly how and when the\nenvironment is suppressed in this way to understand better, but it\ndoesn't jump out of Apple's github; maybe it's hiding in closed source\nmachinery...\n\n\n", "msg_date": "Fri, 15 Oct 2021 11:23:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I wish I could find the Xnu source that shows exactly how and when the\n> environment is suppressed in this way to understand better, but it\n> doesn't jump out of Apple's github; maybe it's hiding in closed source\n> machinery...\n\nI recall that we figured out awhile ago that the environment gets trimmed\nwhen make (or whatever) executes some command via the shell; seemingly,\nApple has decided that /bin/sh is a security-critical program that mustn't\nbe run with a non-default DYLD_LIBRARY_PATH. Dunno if that helps you\nfind where the damage is done exactly.\n\n(The silliness of this policy, when you pair it with the fact that they\ndon't reset PATH at the same time, seems blindingly obvious to me. But\napparently not to Apple.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 18:40:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 18:00:49 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Is this a Mac with SIP enabled? The Mac CI presumably has that disabled, which is why I didn't see this issue there. 
Probably need to implement whatever Tom figured out to do about that for the current way of running tests.\n> \n> AFAIR the only cases we've made work are\n> \n> (1) disable SIP\n> \n> (2) avoid the need for (1) by always doing \"make install\" before\n> \"make check\".\n\nAh, I thought it was more than that. In that case, John, does meson's test\nsucceed after you did the \"proper\" install? Assuming it's in a path that's\nallowed to provide shared libraries?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 15:55:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "I wrote:\n> I recall that we figured out awhile ago that the environment gets trimmed\n> when make (or whatever) executes some command via the shell; seemingly,\n> Apple has decided that /bin/sh is a security-critical program that mustn't\n> be run with a non-default DYLD_LIBRARY_PATH. Dunno if that helps you\n> find where the damage is done exactly.\n\nBTW, here's the evidence for this theory:\n\n[tgl@pro ~]$ cat checkenv.c\n#include <stdio.h>\n#include <stdlib.h>\n\nint\nmain(int argc, char **argv)\n{\n char *pth = getenv(\"DYLD_LIBRARY_PATH\");\n\n if (pth)\n printf(\"DYLD_LIBRARY_PATH = %s\\n\", pth);\n else\n printf(\"DYLD_LIBRARY_PATH is unset\\n\");\n\n return 0;\n}\n[tgl@pro ~]$ gcc checkenv.c\n[tgl@pro ~]$ ./a.out\nDYLD_LIBRARY_PATH is unset\n[tgl@pro ~]$ export DYLD_LIBRARY_PATH=/Users/tgl/pginstall/lib\n[tgl@pro ~]$ ./a.out\nDYLD_LIBRARY_PATH = /Users/tgl/pginstall/lib\n[tgl@pro ~]$ sh -c ./a.out\nDYLD_LIBRARY_PATH is unset\n[tgl@pro ~]$ ./a.out\nDYLD_LIBRARY_PATH = /Users/tgl/pginstall/lib\n[tgl@pro ~]$ bash -c ./a.out\nDYLD_LIBRARY_PATH is unset\n\nYou have to check the environment using an \"unprivileged\" program.\nIf you try to examine the environment using, say, \"env\", you will get\nvery misleading results. 
AFAICT, /usr/bin/env is *also* considered\nsecurity-critical, because I cannot get it to ever report that\nDYLD_LIBRARY_PATH is set.\n\nHmm ... /usr/bin/perl seems to act the same way. It can see\nENV{'PATH'} but not ENV{'DYLD_LIBRARY_PATH'}.\n\nThis may indicate that they've applied this policy on a blanket\nbasis to everything in /bin and /usr/bin (and other system\ndirectories, maybe), rather than singling out the shell.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 19:04:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-15 11:23:00 +1300, Thomas Munro wrote:\n> On Fri, Oct 15, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Peter E. did some hacking towards another solution awhile ago,\n> > but IIRC it involved changing the built binaries, and I think\n> > we concluded that the benefits didn't justify that.\n> \n> Yeah, by now there are lots of useful blogs from various projects\n> figuring out that you can use the install_name_tool to adjust the\n> paths it uses to be absolute or relative to certain magic words, like\n> @executable_path/../lib/blah.dylib, which is tempting, but...\n> realistically, for serious hacking on a Mac, SIP is so annoying that\n> it isn't the only reason you'll want to turn it off: it stops\n> dtrace/dtruss/... 
from working, and somehow prevents debuggers from\n> working when you've ssh'd in from a remote machine with a proper\n> keyboard, and probably more things that I'm forgetting.\n\nMeson has support for using install_name_tool to remove \"build time\" rpaths\nand set \"install time\" rpaths during the installation process - which uses\ninstall_name_tool on mac.\n\nIf, and perhaps that's too big an if, relative rpaths actually work despite\nSIP, it might be worth setting a relative install_rpath, because afaict that\nshould then work both for a \"real\" installation and our temporary test one.\n\nIf absolute rpaths are required, it'd make the process a bit more expensive,\nbecause we'd probably need to change a configure time option during the temporary\ninstall. No actual rebuilds would be required, but still.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 16:15:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> If, and perhaps that's too big an if, relative rpaths actually work despite\n> SIP, it might be worth setting a relative install_rpath, because afaict that\n> should then work both for a \"real\" installation and our temporary test one.\n\n From what we know so far, it seems like SIP wouldn't interfere with\nthat (if it works at all). I think what SIP desires to prevent is\nmessing with a program's execution by setting DYLD_LIBRARY_PATH.\nAs long as the program executable itself is saying where to find\nthe library, I don't see why they should interfere with that.\n\n(Again, it seems blindingly stupid to forbid this while not blocking\nPATH or any of the other environment variables that have always affected\nexecution. 
But what do I know.)\n\n> If absolute rpaths are required, it'd make the process a bit more expensive,\n\nIt'd also put the kibosh on relocatable install trees, though I dunno how\nmuch people really care about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 19:23:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Thu, Oct 14, 2021 at 6:55 PM Andres Freund <andres@anarazel.de> wrote:\n> Ah, I thought it was more than that. In that case, John, does meson's test\n> succeed after you did the \"proper\" install? Assuming it's in a path that's\n> allowed to provide shared libraries?\n\nOh, it can act like installcheck? [checks] Yep, \"meson test\" ran fine (*).\nIt still ran the temp install first, but in any case it worked. The full\n\"configure step\" was\n\nmeson setup build --buildtype debug -Dldap=disabled -Dcassert=true\n-Dprefix=$(pwd)/inst\n\n* (all passed but skipped subscription/t/012_collation.pl)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Oct 2021 19:27:17 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 18:08:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm, so it seems we should make the test separately verify that perl -M{Opcode,\n> > ExtUtils::Embed, ExtUtils::ParseXS} doesn't fail, so that we can fail perl\n> > detection with a useful message?\n>\n> Our existing policy is that we should check this at configure time,\n> not later.\n\nYea, I was thinking of configure (and meson's equivalent) as well.\n\n\n> Since plperl won't work at all without Opcode, it seems\n> appropriate to add a check there if you say --with-perl. I wasn't\n> aware that Red Hat had unbundled that from the minimal perl\n> installation :-(.\n>\n> OTOH, if they've not unbundled ExtUtils::Embed or ExtUtils::ParseXS,\n> I doubt it's worth the configure cycles to check for those separately.\n\nOn debian the perl binary, with a sparse set of modules is in\nperl-base. ExtUtils::Embed and ExtUtils::ParseXS are in\nperl-modules-x.yy. Whereas Opcode is in libperlx.yy. 
But libperlx.yy depends\non perl-modules-x.yy so I guess an Opcode.pm check would suffice.\n\nSeems we can just check all of them at once with with something like\n\nperl -MOpcode -MExtUtils::Embed -MExtUtils::ParseXSNotAvailable -e ''\nCan't locate ExtUtils/ParseXSNotAvailable.pm in @INC (you may need to install the ExtUtils::ParseXS3 module) (@INC contains: /home/andres/bin/perl5/lib/perl5/x86_64-linux-gnu-thread-multi /home/andres/bin/perl5/lib/perl5 /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/x86_64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 16:38:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-14 18:08:58 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Hm, so it seems we should make the test separately verify that perl -M{Opcode,\n>>> ExtUtils::Embed, ExtUtils::ParseXS} doesn't fail, so that we can fail perl\n>>> detection with a useful message?\n\n>> Our existing policy is that we should check this at configure time,\n>> not later.\n\n> Yea, I was thinking of configure (and meson's equivalent) as well.\n\nAh, sorry, I misunderstood what you meant by \"test\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 19:51:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 19:27:17 -0400, John Naylor wrote:\n> On Thu, Oct 14, 2021 at 6:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > Ah, I thought it was more than that. In that case, John, does meson's test\n> > succeed after you did the \"proper\" install? 
Assuming it's in a path that's\n> > allowed to provide shared libraries?\n> \n> Oh, it can act like installcheck? [checks] Yep, \"meson test\" ran fine (*).\n> It still ran the temp install first, but in any case it worked.\n\nAs far as I understand Tom, our normal make check only works on OSX if\npreviously you ran make install. Which will have installed libpq into the\n\"proper\" install location. Because all our binaries will, by default, have an\nrpath to the library directory embedded, that then allows binaries in the\ntemporary install to work. But using the wrong libpq - which most of the time\nturns out to be harmless, because libpq doesn't change that rapidly.\n\n\n> * (all passed but skipped subscription/t/012_collation.pl)\n\nThat test requires ICU, so that's fine. I guess we could prevent the test from\nbeing executed in the first place, but I don't think we've done that for cases\nwhere it's one specific test in a t/ directory, where others in the same\ndirectory do not have such dependencies.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 17:02:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Fri, Oct 15, 2021 at 12:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [tgl@pro ~]$ cat checkenv.c\n> #include <stdio.h>\n> #include <stdlib.h>\n>\n> int\n> main(int argc, char **argv)\n> {\n> char *pth = getenv(\"DYLD_LIBRARY_PATH\");\n>\n> if (pth)\n> printf(\"DYLD_LIBRARY_PATH = %s\\n\", pth);\n> else\n> printf(\"DYLD_LIBRARY_PATH is unset\\n\");\n>\n> return 0;\n> }\n> [tgl@pro ~]$ gcc checkenv.c\n> [tgl@pro ~]$ ./a.out\n> DYLD_LIBRARY_PATH is unset\n> [tgl@pro ~]$ export DYLD_LIBRARY_PATH=/Users/tgl/pginstall/lib\n> [tgl@pro ~]$ ./a.out\n> DYLD_LIBRARY_PATH = /Users/tgl/pginstall/lib\n> [tgl@pro ~]$ sh -c ./a.out\n> DYLD_LIBRARY_PATH is unset\n> [tgl@pro ~]$ ./a.out\n> DYLD_LIBRARY_PATH = /Users/tgl/pginstall/lib\n> [tgl@pro 
~]$ bash -c ./a.out\n> DYLD_LIBRARY_PATH is unset\n>\n> You have to check the environment using an \"unprivileged\" program.\n> If you try to examine the environment using, say, \"env\", you will get\n> very misleading results. AFAICT, /usr/bin/env is *also* considered\n> security-critical, because I cannot get it to ever report that\n> DYLD_LIBRARY_PATH is set.\n>\n> Hmm ... /usr/bin/perl seems to act the same way. It can see\n> ENV{'PATH'} but not ENV{'DYLD_LIBRARY_PATH'}.\n>\n> This may indicate that they've applied this policy on a blanket\n> basis to everything in /bin and /usr/bin (and other system\n> directories, maybe), rather than singling out the shell.\n\nLooks like it. If I've found the right code here, it looks like where\nany common-or-garden Unix runtime linker would ignore LD_LIBRARY_PATH\nfor a setuid binary, they've trained theirs to whack DYLD_*, and also\nfor code-signed and __RESTRICT-marked executables.\n\nhttps://github.com/opensource-apple/dyld/blob/master/src/dyld.cpp#L1681\n\nI suppose you could point SHELL at an unsigned copy of sh (codesign\n--remove-signature, or something from brew/ports/x) so that GNU make\nshould respect, but I don't know how many other exec(\"/bin/sh\") calls\nmight be hiding around the place (I guess perl calls system()?) and\nmight require some kind of LD_PRELOAD hackery... not much fun.\n\n\n", "msg_date": "Fri, 15 Oct 2021 13:36:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Oct 15, 2021 at 12:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This may indicate that they've applied this policy on a blanket\n>> basis to everything in /bin and /usr/bin (and other system\n>> directories, maybe), rather than singling out the shell.\n\n> Looks like it. 
If I've found the right code here, it looks like where\n> any common-or-garden Unix runtime linker would ignore LD_LIBRARY_PATH\n> for a setuid binary, they've trained theirs to whack DYLD_*, and also\n> for code-signed and __RESTRICT-marked executables.\n> https://github.com/opensource-apple/dyld/blob/master/src/dyld.cpp#L1681\n\nUgh. That explains it, all right.\n\n> I suppose you could point SHELL at an unsigned copy of sh (codesign\n> --remove-signature, or something from brew/ports/x) so that GNU make\n> should respect, but I don't know how many other exec(\"/bin/sh\") calls\n> might be hiding around the place (I guess perl calls system()?) and\n> might require some kind of LD_PRELOAD hackery... not much fun.\n\nYeah. I thought about invoking everything via a small wrapper\nthat restores the correct setting of DYLD_LIBRARY_PATH. We could\nperhaps make that work for the invocations of test postmasters\nand psqls from \"make\" and TAP scripts, but hacking up our code's\nsundry uses of system(3) like that seems like it'd be very messy,\nif feasible at all.\n\nBTW, the POSIX spec explicitly discourages letting SHELL affect the\nbehavior of system(3), so I bet that wouldn't help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 22:46:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 22:46:07 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I suppose you could point SHELL at an unsigned copy of sh (codesign\n> > --remove-signature, or something from brew/ports/x) so that GNU make\n> > should respect, but I don't know how many other exec(\"/bin/sh\") calls\n> > might be hiding around the place (I guess perl calls system()?) and\n> > might require some kind of LD_PRELOAD hackery... not much fun.\n> \n> Yeah. 
I thought about invoking everything via a small wrapper\n> that restores the correct setting of DYLD_LIBRARY_PATH. We could\n> perhaps make that work for the invocations of test postmasters\n> and psqls from \"make\" and TAP scripts, but hacking up our code's\n> sundry uses of system(3) like that seems like it'd be very messy,\n> if feasible at all.\n\nIt does sound like using relative rpaths might be the thing we want - and like\nthey've been available since 10.5 or something.\n\nIs there a reason we're using absolute rpaths on a bunch of platforms, rather\nthan relative ones, which'd allow relocation?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Oct 2021 20:20:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-14 19:23:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > If, and perhaps that's too big an if, relative rpaths actually work despite\n> > SIP, it might be worth setting a relative install_rpath, because afaict that\n> > should then work both for a \"real\" installation and our temporary test one.\n> \n> From what we know so far, it seems like SIP wouldn't interfere with\n> that (if it works at all). I think what SIP desires to prevent is\n> messing with a program's execution by setting DYLD_LIBRARY_PATH.\n> As long as the program executable itself is saying where to find\n> the library, I don't see why they should interfere with that.\n\nWell, there's *some* danger with relative rpaths, because they might\naccidentally be pointing somewhere non-existing and user-creatable. Not a huge\nrisk, but as you say:\n\n> (Again, it seems blindingly stupid to forbid this while not blocking\n> PATH or any of the other environment variables that have always affected\n> execution. 
But what do I know.)\n\nthese aren't necessarily carefully weighed considerations :/\n\nBut it seems to work well from what I gather.\n\n\n> > If absolute rpaths are required, it'd make the process a bit more expensive,\n> \n> It'd also put the kibosh on relocatable install trees, though I dunno how\n> much people really care about that.\n\nWe currently use absolute rpaths, or something equivalent.\n\nThe reason that running tests on macos works is that we set the \"install_name\"\nof shared libraries to the intended installed location, using an absolute\npath:\n LINK.shared\t\t= $(COMPILER) -dynamiclib -install_name '$(libdir)/lib$(NAME).$(SO_MAJOR_VERSION)$(DLSUFFIX)' $(version_link) $(exported_symbols_list) -multiply_defined suppress\nwhich on macos means that all libraries linking to that dylib reference it\nunder that absolute path.\n\nOn most other platforms we set an absolute rpath to the installation\ndirectory, which has an equivalent effect:\nrpathdir = $(libdir)\n\n\nIt seems to work quite well to change our own references to libpq in binaries\n/ shared libs to be relative, but to leave the install_name of the libraries\nintact. In combination with adding an rpath of @loader_path/../lib/ to\nbinaries and @loader_path/ to shlibs, the install will be relocatable.\n\nIt doesn't work as well to actually have a non-absolute install_name for\nlibraries (e.g. @rpath/libpq.dylib), because then external code linking to\nlibpq needs to add an rpath to the installation to make it work.\n\nThe advantage of this approach over Peter's is that it's not temp-install\nspecific - due to the relative paths, it makes installations relocatable\nwithout relying on [DY]LD_LIBRARY_PATH.\n\nOn other unixoid systems this whole mess is simpler, because we can just add\n$ORIGIN to shared libraries and $ORIGIN/../lib/ to binaries. 
We don't need to\nleave some absolute path in the libraries themself intact.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Oct 2021 11:50:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-15 11:50:30 -0700, Andres Freund wrote:\n> It seems to work quite well to change our own references to libpq in binaries\n> / shared libs to be relative, but to leave the install_name of the libraries\n> intact. In combination with adding an rpath of @loader_path/../lib/ to\n> binaries and @loader_path/ to shlibs, the install will re relocatable.\n> \n> It doesn't work as well to actually have a non-absolute install_name for\n> libraries (e.g. @rpath/libpq.dylib), because then external code linking to\n> libpq needs to add an rpath to the installation to make it work.\n> \n> The advantage of this approach over Peter's is that it's not temp-install\n> specific - due to the relative paths, it makes installations relocatable\n> without relying [DY]LD_LIBRARY_PATH.\n> \n> On other unixoid systems this whole mess is simpler, because we can just add\n> $ORIGIN to shared libraries and $ORIGIN/../lib/ to binaries. We don't need to\n> leave some absolute path in the libraries themself intact.\n\nI implemented this for the meson build, and it seems to work nicely. The macos\npart was harder than I hoped due to the install_name stuff, which meson\ndoesn't solve.\n\nhttps://github.com/anarazel/postgres/commit/a35379c28989469cc4b701a8d7a22422e6302e09\n\nAfter that the build directory is relocatale.\n\n\nI don't immediately see a way to do this reasonably for the autoconf\nbuild. 
We'd need a list of our own shared libraries from somewhere, and then\nreplace the references after building the objects?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:36:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-15 15:36:16 -0700, Andres Freund wrote:\n> On 2021-10-15 11:50:30 -0700, Andres Freund wrote:\n> > It seems to work quite well to change our own references to libpq in binaries\n> > / shared libs to be relative, but to leave the install_name of the libraries\n> > intact. In combination with adding an rpath of @loader_path/../lib/ to\n> > binaries and @loader_path/ to shlibs, the install will re relocatable.\n> > \n> > It doesn't work as well to actually have a non-absolute install_name for\n> > libraries (e.g. @rpath/libpq.dylib), because then external code linking to\n> > libpq needs to add an rpath to the installation to make it work.\n> > \n> > The advantage of this approach over Peter's is that it's not temp-install\n> > specific - due to the relative paths, it makes installations relocatable\n> > without relying [DY]LD_LIBRARY_PATH.\n> > \n> > On other unixoid systems this whole mess is simpler, because we can just add\n> > $ORIGIN to shared libraries and $ORIGIN/../lib/ to binaries. We don't need to\n> > leave some absolute path in the libraries themself intact.\n> \n> I implemented this for the meson build, and it seems to work nicely. The macos\n> part was harder than I hoped due to the install_name stuff, which meson\n> doesn't solve.\n> \n> https://github.com/anarazel/postgres/commit/a35379c28989469cc4b701a8d7a22422e6302e09\n> \n> After that the build directory is relocatale.\n\nWell, now that I think about it, it's still only relocatable in the sense that\npostgres itself will continue to work. Outside code linking to e.g. 
libpq will\nget the wrong path after relocating the source tree, due to the absolute\ninstall_name.\n\nBut that doesn't seem solvable, unless we make the installed install_name\n'@rpath/libpq...dylib' and require code linking to libpq to pass\n-Wl,-rpath,/path/to/libpq when linking to libpq.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:47:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi Tom,\n\nOn 2021-10-12 01:37:21 -0700, Andres Freund wrote:\n> As far as I can tell the only OS that postgres currently supports that\n> meson doesn't support is HPUX. It'd likely be fairly easy to add\n> gcc-on-hpux support, a chunk more to add support for the proprietary\n> ones.\n\nTom, wrt HPUX on pa-risc, what are your thoughts there? IIRC we gave up\nsupporting HP's compiler on pa-risc a while ago.\n\nAs I said it'd probably not be too hard to add meson support for hpux on hppa,\nit's probably just a few branches. But that'd require access somewhere. The\ngcc compile farm does not have a hppa member anymore...\n\nI did notice that gcc will declare hppa-hpux obsolete in gcc 12 and will\nremove at some point:\n\"The hppa[12]*-*-hpux10* and hppa[12]*-*-hpux11* configurations targeting 32-bit PA-RISC with HP-UX have been obsoleted and will be removed in a future release.\"\nhttps://gcc.gnu.org/gcc-12/changes.html\n\nGreetings,\n\nAndres Freund\n\n\n
It'd likely be fairly easy to add\n>> gcc-on-hpux support, a chunk more to add support for the proprietary\n>> ones.\n\n> Tom, wrt HPUX on pa-risc, what are your thoughts there? IIRC we gave up\n> supporting HP's compiler on pa-risc a while ago.\n\nRight. I am still testing with gcc on HP-PA. I'd kind of like to\nkeep it running just as an edge case for our spinlock support, but\nI'm not sure that I want to do any huge amount of work on meson\nto keep that going.\n\nI do have a functioning OpenBSD installation on that machine, so\none alternative if the porting costs look too high is to replace\ngaur with an OpenBSD animal. However, last I checked, OpenBSD\nwas about half the speed of HPUX on that hardware, so I'm not\nreal eager to go that way. gaur's already about the slowest\nanimal in the farm :-(\n\n> As I said it'd probably not be too hard to add meson support for hpux on hppa,\n> it's probably just a few branches. But that'd require access somewhere. The\n> gcc compile farm does not have a hppa member anymore...\n\nIf you've got an idea where to look, I could add that to my\nto-do queue.\n\nIn any case, I don't think we need to consider HPUX as a blocker\nfor the meson approach. The value-add from keeping gaur going\nprobably isn't terribly much. I'm more concerned about the\neffort involved in getting meson going on some other old animals,\nsuch as prairiedog.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:22:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nI know this is still in the evaluation stage, but I did notice some\ndiscrepancies in the Flex flags. 
With the attached patch, the read-only\ndata segment seems to match up pretty well now.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Oct 2021 17:57:31 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 17:57:31 -0400, John Naylor wrote:\n> I know this is still in the evaluation stage, but I did notice some\n> discrepancies in the Flex flags. With the attached patch, the read-only\n> data segment seems to match up pretty well now.\n\nGood catch. I think I just copied them around...\n\nI wish we had a bit more consistency in the flags, so we could centralize\nthem. Seems there's no reason to not use -p -p and -b everywhere?\n\n\nI also need to make meson use our flex wrapper for the relevant versions... I\ncan see the warning that'd be fixed by it on macos CI. Will do that and push\nit out to my github repo together with your changes.\n\nThanks!\n\nAndres\n\n\n", "msg_date": "Tue, 19 Oct 2021 17:31:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 15:22:15 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-10-12 01:37:21 -0700, Andres Freund wrote:\n> >> As far as I can tell the only OS that postgres currently supports that\n> >> meson doesn't support is HPUX. It'd likely be fairly easy to add\n> >> gcc-on-hpux support, a chunk more to add support for the proprietary\n> >> ones.\n>\n> > Tom, wrt HPUX on pa-risc, what are your thoughts there? IIRC we gave up\n> > supporting HP's compiler on pa-risc a while ago.\n>\n> Right. I am still testing with gcc on HP-PA. 
I'd kind of like to\n> keep it running just as an edge case for our spinlock support, but\n> I'm not sure that I want to do any huge amount of work on meson\n> to keep that going.\n\nMakes sense. While that does test an odd special case for our spinlock\nimplementation, it's also the only supported platform with that edge case, and\nit seems extremely unlikely that there ever will be a new platform with such\nodd/limited atomic operations.\n\n\n> I do have a functioning OpenBSD installation on that machine, so\n> one alternative if the porting costs look too high is to replace\n> gaur with an OpenBSD animal. However, last I checked, OpenBSD\n> was about half the speed of HPUX on that hardware, so I'm not\n> real eager to go that way. gaur's already about the slowest\n> animal in the farm :-(\n\nYea, that doesn't sound enticing. Seems like we either should keep it running\non hp-ux or just drop parisc support?\n\n\n> > As I said it'd probably not be too hard to add meson support for hpux on hppa,\n> > it's probably just a few branches. But that'd require access somewhere. The\n> > gcc compile farm does not have a hppa member anymore...\n>\n> If you've got an idea where to look, I could add that to my to-do queue.\n\nIt might even just work. Looks like meson does have pa-risc detection. While\nit doesn't have any specifically for hpux, it just falls back to python's\nsys.platform in that case. python3 -c 'import sys;print(sys.platform)'\n\nmeson generates output for ninja to execute (basically a faster make that's\npartially faster by being much less flexible. Intended to be output by more\nuser-friendly buildsystems ). Ninja can be built by a minimal python script,\nor with cmake. The former doesn't seem to have hpux support, the latter does I\nthink.\nhttps://github.com/ninja-build/ninja\n\nSo it could be interesting to see if ninja builds.\n\n\nI've not taught the PG meson the necessary stuff for a 32 bit build. 
So\nthere's no point in trying whether meson works that much. I'll try to do that,\nand let you know.\n\n\n> I'm more concerned about the effort involved in getting meson going on some\n> other old animals, such as prairiedog.\n\nYea, that's an *old* OS version. One version too old to have support for\n@rpath, added in 10.5 :(. 
Is there a reason to run 10.4 specifically?\n> According to wikipedia 10.5 is the last version to support ppc.\n\nMy notes say\n\n Currently running OSX 10.4.11 (last release of Tiger); although 10.5 Leopard\n supports PPCs, it refuses to install if CPU speed < 867MHz, well beyond the\n Cube's ability. Wikipedia does suggest it's possible to run Leopard, but...\n https://en.wikipedia.org/wiki/Mac_OS_X_Leopard#Usage_on_unsupported_hardware\n\nI'm not sure that I have install media for 10.5 anymore, either --- ISTR\nsome machine's CD drive failing and not letting me get the CD back out.\nIf I did have it, I don't think there'd be a way to update past 10.5.0\n(surely Apple no longer has those updaters on-line?), so on the whole\nI think that path is a nonstarter.\n\nI do have 10.5 running on an old G4 PowerMac, but that machine is (a)\nnoisy (b) power-hungry and (c) getting flaky, so I'm uneager to spin up\na buildfarm animal on it.\n\nAs with the HPPA, a potential compromise is to spin up some newer\nBSD-ish system on it. I agree that OSX 10.4 is uninteresting as a\nsoftware platform, but I'd like to keep 32-bit PPC represented in\nthe farm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 21:26:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 17:31:22 -0700, Andres Freund wrote:\n> I also need to make meson use our flex wrapper for the relevant versions... I\n> can see the warning that'd be fixed by it on macos CI. Will do that and push\n> it out to my github repo together with your changes.\n\nThat turned out to be more work than I anticipated, so I pushed your changes\nout separately.\n\nThere's this bit in plflex.pl that talks about adjusting yywrap() for msvc. I\ndidn't implement that and didn't see any compilation problems. Looks like that\noriginally hails from 2011, in 08a0c2dabc3b9d59d72d7a79ed867b8e37d275a7\n\nHm. 
Seems not worth carrying forward unless it actually causes trouble?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Oct 2021 18:35:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 21:26:53 -0400, Tom Lane wrote:\n> My notes say\n> \n> Currently running OSX 10.4.11 (last release of Tiger); although 10.5 Leopard\n> supports PPCs, it refuses to install if CPU speed < 867MHz, well beyond the\n> Cube's ability. Wikipedia does suggest it's possible to run Leopard, but...\n> https://en.wikipedia.org/wiki/Mac_OS_X_Leopard#Usage_on_unsupported_hardware\n> \n> I'm not sure that I have install media for 10.5 anymore, either --- ISTR\n> some machine's CD drive failing and not letting me get the CD back out.\n> If I did have it, I don't think there'd be a way to update past 10.5.0\n> (surely Apple no longer has those updaters on-line?), so on the whole\n> I think that path is a nonstarter.\n\nThat does indeed sound like a nonstarter.\n\n\n> I do have 10.5 running on an old G4 PowerMac, but that machine is (a)\n> noisy (b) power-hungry and (c) getting flaky, so I'm uneager to spin up\n> a buildfarm animal on it.\n\nUnderstandable.\n\n\n> As with the HPPA, a potential compromise is to spin up some newer\n> BSD-ish system on it. 
I agree that OSX 10.4 is uninteresting as a\n> software platform, but I'd like to keep 32-bit PPC represented in\n> the farm.\n\nI assume the reason 32-bit PPC is interesting is that it's commonly run big\nendian?\n\nI wonder when it'll be faster to run 32bit ppc via qemu than natively :)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Oct 2021 18:49:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-19 21:26:53 -0400, Tom Lane wrote:\n>> As with the HPPA, a potential compromise is to spin up some newer\n>> BSD-ish system on it. I agree that OSX 10.4 is uninteresting as a\n>> software platform, but I'd like to keep 32-bit PPC represented in\n>> the farm.\n\n> I assume the reason 32-bit PPC is interesting is that it's commonly run big\n> endian?\n\nAside from bit width and endianness, I believe it's a somewhat smaller\ninstruction set than the newer CPUs.\n\n> I wonder when it'll be faster to run 32bit ppc via qemu than natively :)\n\nI think qemu would have a ways to go for that. More to the point,\nI've found that its emulation is not as precise as one might wish...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 22:04:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 18:49:43 -0700, Andres Freund wrote:\n> I wonder when it'll be faster to run 32bit ppc via qemu than natively :)\n\nFreebsd didn't seem to want to boot, but surprisingly a debian buster image\nstarted at least the installer without problems... 
Will probably take a while\nto see if it actually works.\n\nI assume to make it acceptable from a build-speed perspective one would have\nto use distcc with the compiler running outside.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Oct 2021 19:41:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 19:41:56 -0700, Andres Freund wrote:\n> On 2021-10-19 18:49:43 -0700, Andres Freund wrote:\n> > I wonder when it'll be faster to run 32bit ppc via qemu than natively :)\n>\n> Freebsd didn't seem to want to boot, but surprisingly a debian buster image\n> started at least the installer without problems... Will probably take a while\n> to see if it actually works.\n\nThe build was quite slow (cold ccache cache, only 1 cpu):\nreal\t106m33.418s\nuser\t86m36.363s\nsys\t17m33.830s\n\nBut the actual test time wasn't *too* bad, compared to the 32bit ppc animals\n\nreal\t12m14.944s\nuser\t0m51.622s\nsys\t0m44.743s\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Oct 2021 09:01:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 15:55:22 -0400, John Naylor wrote:\n> Also, could utility makefile targets be made to work? I'm thinking in\n> particular of update-unicode and reformat-dat-files, for example.\n\nImplementing reformat-dat-files was trivial:\nhttps://github.com/anarazel/postgres/commit/29c1ce1ad4731290714978da5ce81e99ef051bec\n\n\nHowever, update-unicode is a bit harder. 
Partially not directly because of\nmeson, but because update-unicode as-is afaict doesn't support VPATH builds,\nand meson enforces those.\n\nmake update-unicode\n...\nmake -C src/common/unicode update-unicode\n'/usr/bin/perl' generate-unicode_norm_table.pl\nCan't open perl script \"generate-unicode_norm_table.pl\": No such file or directory\n\nIt's not too hard to fix. See attached for the minimal stuff that I\nimmediately found to be needed. There's likely more,\ne.g. src/backend/utils/mb/Unicode - but I didn't immediately see where that's\ninvoked from.\n\n\nThe slightly bigger issue making update-unicode work with meson is that meson\ndoesn't provide support for invoking build targets in specific directories\n(because it doesn't map nicely to e.g. msbuild). But scripts like\nsrc/common/unicode/generate-unicode_norm_table.pl rely on CWD. It's not hard\nto work around that, but IMO it's better for such scripts to not rely on CWD.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 21 Oct 2021 14:48:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Thu, Oct 21, 2021 at 5:48 PM Andres Freund <andres@anarazel.de> wrote:\n\n> However, update-unicode is a bit harder. Partially not directly because\nof\n> meson, but because update-unicode as-is afaict doesn't support VPATH\nbuilds,\n> and meson enforces those.\n\n> make update-unicode\n> ...\n> make -C src/common/unicode update-unicode\n> '/usr/bin/perl' generate-unicode_norm_table.pl\n> Can't open perl script \"generate-unicode_norm_table.pl\": No such file or\ndirectory\n>\n> It's not too hard to fix. See attached for the minimal stuff that I\n> immediately found to be needed.\n\nThanks for doing that, it works well enough for demonstration. With your\npatch, and using an autoconf VPATH build, the unicode tables work fine, but\nit complains of a permission error in generate_unaccent_rules.py. 
That\nseems to be because the script is invoked directly rather than as an\nargument to the python interpreter.\n\n> The slightly bigger issue making update-unicode work with meson is that\nmeson\n> doesn't provide support for invoking build targets in specific directories\n> (because it doesn't map nicely to e.g. msbuild). But scripts like\n> src/common/unicode/generate-unicode_norm_table.pl rely on CWD. It's not\nhard\n> to work around that, but IMO it's better for such scripts to not rely on\nCWD.\n\nYeah. I encountered a further issue: With autoconf on HEAD, with a source\ntree build executed in contrib/unaccent:\n\n$ touch generate_unaccent_rules.py\n$ make update-unicode\ngenerate_unaccent_rules.py --unicode-data-file\n../../src/common/unicode/UnicodeData.txt --latin-ascii-file Latin-ASCII.xml\n>unaccent.rules\n/bin/sh: generate_unaccent_rules.py: command not found\nmake: *** [unaccent.rules] Error 127\nmake: *** Deleting file `unaccent.rules'\n\n...so in this case it seems not to know to use CWD here.\n\nAnyway, this can be put off until the very end, since it's not run often.\nYou've demonstrated how these targets would work, and that's good enough\nfor now.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Fri, 22 Oct 2021 11:55:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nAttached is an updated version of the meson patchset.\n\nChanges:\n\n- support for remaining binaries in src/bin, contrib modules\n\n- nearly all tests, including src/test/modules etc, are integrated.\n\n- quite a few more, but not yet all, optional dependencies (most are\n  exercised in the included CI)\n\n- runs tests on SIP enabled macos without needing a prior installation /\n  installation is relocatable\n\n- support for building docs.\n  I couldn't get dbtoepub work in a vpath style build, so I changed that\n  to also use pandoc. No idea if anybody uses the epub rules?\n\n- 32bit x86 [1], 64bit aarch64 builds\n\n- cross-building windows from linux works\n\n- error when building with meson against a source tree with an in-tree\n  autoconf build (leads to problems with pg_config.h etc)\n\n- update-unicode, reformat-dat-files, expand-dat-files\n\n\nBigger missing pieces:\n\n- pgxs (that's a *hard* one)\n\n- NLS\n\n- test / add support for platforms besides freebsd, linux, macos, windows\n\n- remaining hardcoded configure tests (e.g. ACCEPT_TYPE_ARG*)\n\n- win32 resource files only handled for two binaries, needs to be made\n  more compact\n\n- ecpg\n\n- fixing up flex output\n\n- truckloads of polishing\n\n- some tests (e.g. pg_upgrade, because of the upcoming tap conversion,\n  other tests that are shell scripts). Some tests are now run\n  unconditionally that previously were opt-in.\n\n- what exactly gets installed where\n\n- a \"dist\" target\n\n- fix \"ldap\" build on macos\n\n\nGreetings,\n\nAndres Freund\n\n[1] I had not defined SIZEOF_SIZE_T. 
Surprisingly that still results in\na successful 64bit build, but not a successful 32bit build.", "msg_date": "Sun, 31 Oct 2021 16:24:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "On 01.11.21 00:24, Andres Freund wrote:\n> - remaining hardcoded configure tests (e.g. ACCEPT_TYPE_ARG*)\n\nI think we can get rid of that one.\n\nThat test originally catered to some strange edge cases where the third \nargument was size_t that was not the same size as int. That is long \ngone, if it ever really existed. All systems currently of interest use \neither socklen_t or int, and socklen_t is always int. (A few build farm \nanimals report size_t, but they are all 32-bit.)\n\nI think we can change the code to use socklen_t and add a simple check \nto typedef socklen_t as int if not available. See attached patch.", "msg_date": "Thu, 4 Nov 2021 19:17:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "Hi,\n\nOn 2021-11-04 19:17:05 +0100, Peter Eisentraut wrote:\n> On 01.11.21 00:24, Andres Freund wrote:\n> > - remaining hardcoded configure tests (e.g. ACCEPT_TYPE_ARG*)\n> \n> I think we can get rid of that one.\n\nOh, nice!\n\nI was somewhat confused by \"unsigned int PASCAL\" as a type.\n\n\n> That test originally catered to some strange edge cases where the third\n> argument was size_t that was not the same size as int. That is long gone,\n> if it ever really existed. All systems currently of interest use either\n> socklen_t or int, and socklen_t is always int. 
(A few build farm animals\n> report size_t, but they are all 32-bit.)\n\n> diff --git a/src/include/c.h b/src/include/c.h\n> index c8ede08273..7c790f557e 100644\n> --- a/src/include/c.h\n> +++ b/src/include/c.h\n> @@ -408,6 +408,10 @@ typedef unsigned char bool;\n> * ----------------------------------------------------------------\n> */\n> \n> +#ifndef HAVE_SOCKLEN_T\n> +typedef int socklen_t;\n> +#endif\n\nI'd put this in port.h instead of c.h, or is there a reason not to do so?\n\n\nProbably worth putting this in fairly soon independent of whether anything\nhappens wrt meson?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Nov 2021 11:48:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "On 04.11.21 19:48, Andres Freund wrote:\n> Probably worth putting this in fairly soon independent of whether anything\n> happens wrt meson?\n\nOK, done. Let's see what happens. ;-)\n\n\n", "msg_date": "Tue, 9 Nov 2021 16:49:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "On 01.11.21 00:24, Andres Freund wrote:\n> Hi,\n> \n> Attached is an updated version of the meson patchset.\n\nNanoreview: I think the patch\n\nSubject: [PATCH v5 11/16] meson: prereq: Handle DLSUFFIX in msvc builds\n similar to other build envs.\n\nis good to go. 
It's not clear why it's needed in this context, but it \nseems good in general to make these things more consistent.\n\n\n", "msg_date": "Wed, 10 Nov 2021 11:07:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "Hi,\n\nOn 2021-11-10 11:07:02 +0100, Peter Eisentraut wrote:\n> On 01.11.21 00:24, Andres Freund wrote:\n> > Hi,\n> > \n> > Attached is an updated version of the meson patchset.\n> \n> Nanoreview: I think the patch\n\nThanks for looking!\n\n\n> Subject: [PATCH v5 11/16] meson: prereq: Handle DLSUFFIX in msvc builds\n> similar to other build envs.\n> \n> is good to go. It's not clear why it's needed in this context, but it seems\n> good in general to make these things more consistent.\n\nThe way it was set between msvc and other builds is currently inconsistent\nbetween msvc and other builds, by virtue of win32_port.h defining for msvc:\n\n/* Things that exist in MinGW headers, but need to be added to MSVC */\n#ifdef _MSC_VER\n...\n/* Pulled from Makefile.port in MinGW */\n#define DLSUFFIX \".dll\"\n\n\nit'd have needed unnecessarily contorted logic to continue setting DLSUFFIX\nvia commandline for !msvc, given that the the meson stuff is the same for msvc\nand !msvc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Nov 2021 19:21:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "Hi,\n\nFWIW, I tried building postgres on a few other operating systems using\nmeson, after I got access to the gcc compile farm. Here's the results:\n\n\n- openbsd: Compiled fine. Hit one issue running tests:\n\n openbsd has *completely* broken $ORIGIN support. It uses CWD as $ORIGIN\n rpaths, which obviously breaks for binaries invoked via PATH. So there\n goes the idea to only use $ORIGIN to run tests. 
Still seems worth to use\n on other platforms, particularly because it works with SIP on macos\n\n I understand not supporting $ORIGIN at all. But implementing it this way\n seems insane.\n\n\n I also ran into some problems with the semaphore limits. I had to switch to\n USE_NAMED_POSIX_SEMAPHORES to make the tests pass at all.\n\n\n- netbsd: Compiled fine after some minor fix. There's a bit more to fix around\n many libraries not being in the normal library directory, but in\n /usr/pkg/lib, which is not in the library search path (i.e. we need to add\n an rpath for that in a few more places).\n\n\n- AIX: Compiled and basic postgres runs fine after a few fixes (big endian\n test, converting exports.txt into the right format). Doesn't yet\n successfully run more than trivial tests, because I didn't implement the\n necessary generation of import files for postgres, but that's just a bit of\n work.\n\n This is hampered by the fact that the vanilla postgres crashes for me. I\n haven't quite figured out what's the problem. Might be a system issue -\n lots of other tools, e.g. perl, segfault frequently.\n\n\n One important thing to call out: Meson has support for the AIX linker, but\n *not* the xlc compiler. I.e. one has to use gcc (or clang, but I didn't\n try). I don't know if we'd require adding support for xlc to meson - xlc is\n pretty buggy and it doesn't seem particularly crucial to support such an old\n crufty compiler on a platform that's not used to a significant degree?\n\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:34:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One important thing to call out: Meson has support for the AIX linker, but\n> *not* the xlc compiler. I.e. one has to use gcc (or clang, but I didn't\n> try). 
I don't know if we'd require adding support for xlc to meson - xlc is\n> pretty buggy and it doesn't seem particularly crucial to support such an old\n> crufty compiler on a platform that's not used to a significant degree?\n\nWhile I have no particular interest in AIX or xlc specifically, I do\nworry about us becoming a builds-on-gcc-or-workalikes-only project.\nI suppose MSVC provides a little bit of a cross-check, but I don't\nreally like giving up on other compilers. Discounting gcc+clang+MSVC\nleaves just a few buildfarm animals, and the xlc ones are a significant\npart of that population. (In fact, unless somebody renews fossa/husky's\nicc license, the three xlc animals will be an outright majority of\nthem, because wrasse and anole are the only other active animals with\nnon-mainstream compilers.)\n\nHaving said that, I don't plan to be the one trying to get meson\nto add xlc support ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:11:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-11-15 14:11:25 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One important thing to call out: Meson has support for the AIX linker, but\n> > *not* the xlc compiler. I.e. one has to use gcc (or clang, but I didn't\n> > try). I don't know if we'd require adding support for xlc to meson - xlc is\n> > pretty buggy and it doesn't seem particularly crucial to support such an old\n> > crufty compiler on a platform that's not used to a significant degree?\n>\n> While I have no particular interest in AIX or xlc specifically, I do\n> worry about us becoming a builds-on-gcc-or-workalikes-only project.\n> I suppose MSVC provides a little bit of a cross-check, but I don't\n> really like giving up on other compilers. 
Discounting gcc+clang+MSVC\n> leaves just a few buildfarm animals, and the xlc ones are a significant\n> part of that population.\n\nYea, that's a reasonable concern. I wonder if there's some non-mainstream\ncompiler that actually works on, um, more easily available platforms that we\ncould utilize.\n\n\n> (In fact, unless somebody renews fossa/husky's\n> icc license, the three xlc animals will be an outright majority of\n> them, because wrasse and anole are the only other active animals with\n> non-mainstream compilers.)\n\nIt should probably be doable to get somebody to run another icc animal. Icc is\nsupported by meson, fwiw.\n\n\n> Having said that, I don't plan to be the one trying to get meson\n> to add xlc support ...\n\nIt'd probably not be too hard. But given that it's quite hard to get access to\nAIX + xlc, I'm not sure it's something I want to propose. There's no resources\nto run halfway regular tests on that I found...\n\n\nIt's good to make sure we're not growing too reliant on some compiler(s), but\nimo only really makes sense if the alternative compilers are meaningfully\navailable and maintained.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Nov 2021 11:23:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Mon, Nov 15, 2021 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> It's good to make sure we're not growing too reliant on some compiler(s), but\n> imo only really makes sense if the alternative compilers are meaningfully\n> available and maintained.\n\nThat's a sensible position. I do worry that with this proposed move\nwe're going to be giving up some of the flexibility that we have right\nnow. I'm not sure exactly what that means in practice. But make is\njust a way of running shell commands, and so you can run any shell\ncommands you want. 
The concept of some compiler not being supported\nisn't really a thing that even makes sense in a world that is powered\nby make. With a big enough hammer you can run any commands you like,\nincluding any compilation commands you like. The whole thing is likely\nto be a bit crufty which is a downside, and you might spend more time\nfiddling with it than you really want. But nothing is really ever\nblocked.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:48:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Nov 16, 2021 at 8:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-11-15 14:11:25 -0500, Tom Lane wrote:\n> > Having said that, I don't plan to be the one trying to get meson\n> > to add xlc support ...\n>\n> It'd probably not be too hard. But given that it's quite hard to get access to\n> AIX + xlc, I'm not sure it's something I want to propose. There's no resources\n> to run halfway regular tests on that I found...\n\nFWIW there's a free-as-in-beer edition of xlc for Linux (various\ndistros, POWER only) so you could use qemu, though of course there\nwill be differences WRT AIX especially around linking, and I suppose a\nbig part of that work would be precisely understanding stuff like\nlinker details.\n\nIt looks like we have two xlc 12.1 compilers in the farm, but those\ncompilers are EOL'd[1]. The current release is 16.1, and we have one\nof those. 
The interesting thing about 16.1 is that you can invoke it\nas xlclang to get the new clang frontend and, I think, possibly use\nmore clang/gcc-ish compiler switches[2].\n\n[1] https://www.ibm.com/support/pages/lifecycle/search?q=xl%20c%2Fc%2B%2B\n[2] https://www.ibm.com/docs/en/xl-c-and-cpp-aix/16.1?topic=new-clang-based-front-end\n\n\n", "msg_date": "Tue, 16 Nov 2021 11:08:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... The interesting thing about 16.1 is that you can invoke it\n> as xlclang to get the new clang frontend and, I think, possibly use\n> more clang/gcc-ish compiler switches[2].\n> [2] https://www.ibm.com/docs/en/xl-c-and-cpp-aix/16.1?topic=new-clang-based-front-end\n\nHo, that's an earful ;-). Though I wonder whether that frontend\nhides the AIX-specific linking issues you mentioned. (Also, although\nI see /opt/IBM/xlc/16.1.0/ on gcc119, there's no xlclang there.\nSo whether we have useful access to it right now is unclear.)\n\nThis plays into something that was nagging at me while I wrote my\nupthread screed about not giving up on non-gcc/clang compilers:\nare those compilers outcompeting all the proprietary ones, to the\nextent that the latter will be dead soon anyway? I think Microsoft\nis rich enough and stubborn enough to keep on developing MSVC no\nmatter what, but other compiler vendors may see the handwriting\non the wall. Writing C compilers can't be a growth industry these\ndays.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Nov 2021 17:34:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-11-15 17:34:33 -0500, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > ... 
The interesting thing about 16.1 is that you can invoke it\n> > as xlclang to get the new clang frontend and, I think, possibly use\n> > more clang/gcc-ish compiler switches[2].\n> > [2] https://www.ibm.com/docs/en/xl-c-and-cpp-aix/16.1?topic=new-clang-based-front-end\n>\n> Ho, that's an earful ;-). Though I wonder whether that frontend\n> hides the AIX-specific linking issues you mentioned. (Also, although\n> I see /opt/IBM/xlc/16.1.0/ on gcc119, there's no xlclang there.\n> So whether we have useful access to it right now is unclear.)\n\nIt's actually available there, but in /opt/IBM/xlC/16.1.0/bin/xlclang++ (note\nthe upper case C).\n\nIt doesn't really hide the linking issues afaict. I think they're basically an\nABI rather than a linker invocation issue. It's not that hard to address them\nthough, it's basically making mkldexport.sh a tiny bit more general and\nintegrating it into src/backend/postgres' build.\n\nWe don't have to generate export files for shared libraries anymore though,\nafaict, because there's 'expall', which suffices for our purposes. dlopen()\ndoesn't require an import file.\n\n\n> This plays into something that was nagging at me while I wrote my\n> upthread screed about not giving up on non-gcc/clang compilers:\n> are those compilers outcompeting all the proprietary ones, to the\n> extent that the latter will be dead soon anyway?\n\nI think that's a pretty clear trend. The ones that aren't dying seem to be\nincrementally rebasing onto llvm tooling more and more.\n\nIt doesn't help that most of those compilers are primarily for OSs that, uh,\naren't exactly growing. 
Which limits their potential usability significantly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:46:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Nov 16, 2021 at 11:08 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> FWIW there's a free-as-in-beer edition of xlc for Linux (various\n> distros, POWER only) so you could use qemu,\n\n(It's also known to be possible to run AIX 7.2 on qemu, but the\ninstall media is not made available to developers for testing/CI\nwithout a hardware serial number. Boo.)\n\n\n", "msg_date": "Tue, 16 Nov 2021 11:52:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, Nov 16, 2021 at 8:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-11-15 14:11:25 -0500, Tom Lane wrote:\n> > (In fact, unless somebody renews fossa/husky's\n> > icc license, the three xlc animals will be an outright majority of\n> > them, because wrasse and anole are the only other active animals with\n> > non-mainstream compilers.)\n>\n> It should probably be doable to get somebody to run another icc animal. Icc is\n> supported by meson, fwiw.\n\nFWIW, in case someone is interested in bringing ICC back to the farm,\nsome light googling tells me that newer editions of \"classic\" ICC (as\nopposed to \"data parallel\" ICC, parts of some kind of rebrand) no\nlonger require regular licence bureaucracy, and can be installed in\nmodern easier to maintain ways. 
For example, I see that some people\nadd Intel's APT repository and apt-get install the compiler inside CI\njobs, on Ubuntu.\n\n\n", "msg_date": "Fri, 19 Nov 2021 09:44:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 10/13/21 16:06, Andrew Dunstan wrote:\n> On 10/13/21 1:26 PM, Andres Freund wrote:\n>>> pexports will be in the resulting path, and the build will use the\n>>> native compiler.\n>> I don't see pexports anywhere in the msys installation. I can see it available\n>> on sourceforge, and I see a few others asking where to get it from in the\n>> context of msys, and being pointed to manually downloading it.\n>\n>\n> Weird. fairywren has it, which means that it must have been removed from\n> the packages at some stage, fairly recently as fairywren isn't that old.\n> I just confirmed the absence on a 100% fresh install.\n>\n>\n> It is in Strawberry's c/bin directory.\n>\n>\n>> Seems like we should consider using gendef instead of pexports, given it's\n>> available in msys?\n>\n> Yeah. It's missing on my ancient msys animal (frogmouth), but it doesn't\n> build --with-perl.\n>\n>\n> jacana seems to have it.\n>\n>\n> If you prep a patch I'll test it.\n>\n>\n\nHere's a patch. I've tested the perl piece on master and it works fine.\nIt applies cleanly down to 9.4, which is before we got transform modules\n(9.5) which fail if we just omit doing this platform-specific piece.\n\n\nBefore that only plpython uses pexports, and we're not committed to\nsupporting plpython at all on old branches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 6 Feb 2022 12:06:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-02-06 12:06:41 -0500, Andrew Dunstan wrote:\n> Here's a patch. 
I've tested the perl piece on master and it works fine.\n> It applies cleanly down to 9.4, which is before we got transform modules\n> (9.5) which fail if we just omit doing this platform-specific piece.\n\nGiven https://postgr.es/m/34e972bc-6e75-0754-9e6d-cde2518773a1%40dunslane.net\nwouldn't it make sense to simply remove the pexports/gendef logic instead of\nmoving to gendef?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 6 Feb 2022 10:39:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 2/6/22 13:39, Andres Freund wrote:\n> Hi,\n>\n> On 2022-02-06 12:06:41 -0500, Andrew Dunstan wrote:\n>> Here's a patch. I've tested the perl piece on master and it works fine.\n>> It applies cleanly down to 9.4, which is before we got transform modules\n>> (9.5) which fail if we just omit doing this platform-specific piece.\n> Given https://postgr.es/m/34e972bc-6e75-0754-9e6d-cde2518773a1%40dunslane.net\n> wouldn't it make sense to simply remove the pexports/gendef logic instead of\n> moving to gendef?\n>\n\nI haven't found a way to fix the transform builds if we do that. So\nlet's leave that as a separate exercise unless you have a solution for\nthat - this patch is really trivial.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 6 Feb 2022 15:57:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nI've been wondering whether we should try to have the generated pg_config.h\nlook as similar as possible to autoconf/autoheader's, or not. 
And whether the\nway autoconf/autoheader define symbols makes sense when not using either\nanymore.\n\nTo be honest, I do not really understand the logic behind when autoconf ends\nup with #defines that define a macro to 0/1 and when a macro ends defined/or\nnot and when we end up with a macro defined to 1 or not defined at all.\n\nSo far I've tried to mirror the logic, but not the description / comment\nformatting of the individual macros.\n\nThe \"defined to 1 or not defined at all\" behaviour is a mildly awkward to\nachieve with meson, because it doesn't match the behaviour for booleans\noptions meson has (there are two builtin behaviours, one to define/undefine a\nmacro, the other to set the macro to 0/1. But there's none that defines a\nmacro to 1 or undefines it).\n\nProbably best to initially have the macros defined as similar as reasonably\npossible, but subsequently clean things up a bit.\n\n\nA second aspect that I'm wondering about is whether we should try to split\npg_config.h output a bit:\n\nWith meson it's easy to change options like whether to build with some\ndependency in an existing build tree and then still get a reliable build\nresult (ninja rebuilds if the commandline changed from the last invocation).\n\nBut right now doing so often ends up with way bigger rebuilds than necessary,\nbecause for a lot of options we add #define USE_LDAP 1 etc to pg_config.h,\nwhich of course requires rebuilding a lot of files. Even though most of these\nsymbols are only checked in a handful of files, often only .c files.\n\nIt seems like it might make sense to separate out defines that depend on the\ncompiler / \"standard libraries\" (e.g. {SIZEOF,ALIGNOF}_*,\nHAVE_DECL_{STRNLEN,...}, HAVE_*_H) from feature defines (like\nUSE_{LDAP,ICU,...}). 
The header containing the latter could then be included in\nthe places needing it (or we could have one header for each of the places\nusing it).\n\nPerhaps we should also separate out configure-time settings like BLCKSZ,\nDEF_PGPORT, etc. Realistically most of them are going to require a \"full tree\"\nrecompile anyway, but it seems like it might make things easier to understand.\n\nI think a split into pg_config_{platform,features,settings}.h could make sense.\n\nSimilar to above, it's probably best to do this separately after merging meson\nsupport. But knowing what the split should eventually look like would be\nhelpful before, to ensure it's easy to do.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:24:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - autogenerated headers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've been wondering whether we should try to have the generated pg_config.h\n> look as similar as possible to autoconf/autoheader's, or not. And whether the\n> way autoconf/autoheader define symbols makes sense when not using either\n> anymore.\n\n> To be honest, I do not really understand the logic behind when autoconf ends\n> up with #defines that define a macro to 0/1 and when a macro ends defined/or\n> not and when we end up with a macro defined to 1 or not defined at all.\n\nAgreed, that always seemed entirely random to me too. I'd be content\nto end up with \"defined or not defined\" as the standard. I think\nwe have way more #ifdef tests than #if tests, so changing the latter\nseems more sensible than changing the former.\n\n> > A second aspect that I'm wondering about is whether we should try to split\n> > pg_config.h output a bit:\n\nTBH I can't get excited about that. I do not think that rebuilding\nwith different options is a critical path. 
ccache already does most\nof the heavy lifting when you do that sort of thing, anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 16:30:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - autogenerated headers" }, { "msg_contents": "Hi,\n\nI was trying to fix a few perl embedding oddities in the meson\npatchset.\n\nWhenever I have looked at the existing code, I've been a bit confused about\nthe following\n\ncode/comment in perl.m4:\n\n# PGAC_CHECK_PERL_EMBED_LDFLAGS\n# -----------------------------\n# We are after Embed's ldopts, but without the subset mentioned in\n# Config's ccdlflags; [...]\n\n\tpgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n\tpgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'`\n\tperl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\" -e [\"s/ -arch [-a-zA-Z0-9_]*//g\"]`\n\nWhat is the reason behind subtracting ccdlflags?\n\n\nThe comment originates in:\n\ncommit d69a419e682c2d39c2355105a7e5e2b90357c8f0\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2009-09-08 18:15:55 +0000\n\n Remove any -arch switches given in ExtUtils::Embed's ldopts from our\n perl_embed_ldflags setting. On OS X it seems that ExtUtils::Embed is\n trying to force a universal binary to be built, but you need to specify\n that a lot further upstream if you want Postgres built that way; the only\n result of including -arch in perl_embed_ldflags is some warnings at the\n plperl.so link step. 
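As a concrete illustration of what the sed pipeline in the perl.m4 snippet quoted above computes, here is a standalone shell sketch. The two input strings are invented sample values (in reality they come from the installed perl via ExtUtils::Embed and Config); only the substitution logic is taken from the snippet:

```shell
# Invented stand-ins for the two perl probes quoted above.
pgac_tmp1='-Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/lib/perl5/CORE -lperl'  # ldopts
pgac_tmp2='-Wl,-E'                                                                         # $Config{ccdlflags}

# Same substitutions as perl.m4: strip the X guard, subtract the ccdlflags
# subset, and drop any -arch switches.
perl_embed_ldflags=$(echo X"$pgac_tmp1" | sed -e "s/^X//" -e "s%$pgac_tmp2%%" -e "s/ -arch [-a-zA-Z0-9_]*//g")
echo "$perl_embed_ldflags"
```

With these sample inputs the ccdlflags portion (-Wl,-E) is removed and the remaining linker switches are kept unchanged.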
Per my complaint and Jan Otto's suggestion.\n\nbut the subtraction goes all the way back to\n\ncommit 7662419f1bc1a994193c319c9304dfc47e121c98\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: 2002-05-28 16:57:53 +0000\n\n Change PL/Perl and Pg interface build to use configured compiler and\n Makefile.shlib system, not MakeMaker.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Feb 2022 17:12:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "Hi,\n\nOn 2022-02-07 16:30:53 -0500, Tom Lane wrote:\n> > A second aspect that I'm wondering about is whether we should try to split\n> > pg_config.h output a bit:\n> \n> TBH I can't get excited about that. I do not think that rebuilding\n> with different options is a critical path. ccache already does most\n> of the heavy lifting when you do that sort of thing, anyway.\n\nI've found it to be pretty painful when building with msvc, which doesn't have\nccache (yet at least), and where the process startup overhead is bigger.\n\nEven on some other platforms it's useful - it takes a while on net/openbsd to\nrecompile postgres, even if everything is in ccache. 
If I test on some\nplatform I'll often install the most basic set, get the tests to run, and then\nincrementally figure out what other packages need to be installed etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Feb 2022 17:22:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - autogenerated headers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What is the reason behind subtracting ccdlflags?\n\nIt looks like the coding actually originated here:\n\ncommit f5d0c6cad5bb2706e0e63f3f8f32e431ea428100\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: Wed Jun 20 00:26:06 2001 +0000\n\n Apparently, on some systems, ExtUtils::Embed and MakeMaker are slightly\n broken, and its impossible to make a shared library when compiling with\n both CCDLFLAGS and LDDLFAGS, you have to pick one or the other.\n \n Alex Pilosov\n\nand Peter just copied the logic in 7662419f1. Considering that\nthe point of 7662419f1 was to get rid of MakeMaker, maybe we no\nlonger needed that at that point.\n\nOn my RHEL box, the output of ldopts is sufficiently redundant\nthat the subtraction doesn't actually accomplish much:\n\n$ perl -MExtUtils::Embed -e ldopts\n-Wl,--enable-new-dtags -Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -fstack-protector-strong -L/usr/local/lib -L/usr/lib64/perl5/CORE -lperl -lpthread -lresolv -ldl -lm -lcrypt -lutil -lc\n\n$ perl -MConfig -e 'print $Config{ccdlflags}'\n-Wl,--enable-new-dtags -Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld\n\nwhich leads to\n\n$ grep perl_embed_ldflags src/Makefile.global\nperl_embed_ldflags = -Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -fstack-protector-strong -L/usr/local/lib -L/usr/lib64/perl5/CORE -lperl -lpthread -lresolv -ldl -lm -lcrypt -lutil -lc\n\nso the only thing we 
actually got rid of was -Wl,--enable-new-dtags,\nwhich I think we'll put back anyway.\n\nThings might be different elsewhere of course, but I'm tempted\nto take out the ccdlflags subtraction and see what the buildfarm\nsays.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 20:42:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> What is the reason behind subtracting ccdlflags?\n\n> It looks like the coding actually originated here:\n> commit f5d0c6cad5bb2706e0e63f3f8f32e431ea428100\n\nAh, here's the thread leading up to that:\n\nhttps://www.postgresql.org/message-id/flat/200106191206.f5JC6R108371%40candle.pha.pa.us\n\nThe use of ldopts rather than hand-hacked link options seems to date to\n0ed7864d6, only a couple days before that. I don't think we had a\nbuildfarm then, but I'd bet against the problem being especially\nwidespread even then, or more people would've complained.\n\n\nBTW, the business with zapping arch options seems to not be necessary\nanymore either on recent macOS:\n\n$ perl -MExtUtils::Embed -e ldopts\n -fstack-protector-strong -L/System/Library/Perl/5.30/darwin-thread-multi-2level/CORE -lperl\n$ perl -MConfig -e 'print $Config{ccdlflags}'\n $\n\n(same results on either Intel or ARM Mac). 
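For illustration, the -arch zapping that perl.m4 still performs can be sketched as below. The ldopts string here is invented in the older multi-arch style, since recent macOS perls emit no -arch switches at all:

```shell
# Invented pre-universal-binary style ldopts string; only the sed pattern
# is taken from the perl.m4 snippet quoted earlier in the thread.
ldopts='-fstack-protector-strong -arch x86_64 -arch ppc -L/System/Library/Perl/5.30/darwin-thread-multi-2level/CORE -lperl'
stripped=$(echo "$ldopts" | sed -e 's/ -arch [-a-zA-Z0-9_]*//g')
echo "$stripped"
```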
However, it looks like it\nis still necessary to keep locust happy, and I have no idea just when\nApple stopped using arch switches here, so we'd better keep that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 21:40:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "Hi,\n\nOn 2022-02-07 20:42:09 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What is the reason behind subtracting ccdlflags?\n>\n> It looks like the coding actually originated here:\n>\n> commit f5d0c6cad5bb2706e0e63f3f8f32e431ea428100\n> Author: Bruce Momjian <bruce@momjian.us>\n> Date: Wed Jun 20 00:26:06 2001 +0000\n>\n> Apparently, on some systems, ExtUtils::Embed and MakeMaker are slightly\n> broken, and its impossible to make a shared library when compiling with\n> both CCDLFLAGS and LDDLFAGS, you have to pick one or the other.\n>\n> Alex Pilosov\n>\n> and Peter just copied the logic in 7662419f1. Considering that\n> the point of 7662419f1 was to get rid of MakeMaker, maybe we no\n> longer needed that at that point.\n\nYea. 
And maybe what was broken in 2001 isn't broken anymore either ;)\n\n\nLooking at a number of OSs:\n\ndebian sid:\nembed: -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\nldopts: -Wl,-E\n\nfedora:\nembed: -Wl,--enable-new-dtags -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -fstack-protector-strong -L/usr/local/lib -L/usr/lib64/perl5/CORE -lperl -lpthread -lresolv -ldl -lm -lcrypt -lutil -lc\nldopts: -Wl,--enable-new-dtags -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1\n\nsuse tumbleweed:\nembed: -Wl,-E -Wl,-rpath,/usr/lib/perl5/5.34.0/x86_64-linux-thread-multi/CORE -L/usr/local/lib64 -fstack-protector-strong -L/usr/lib/perl5/5.34.0/x86_64-linux-thread-multi/CORE -lperl -lm -ldl -lcrypt -lpthread\nldopts: -Wl,-E -Wl,-rpath,/usr/lib/perl5/5.34.0/x86_64-linux-thread-multi/CORE\n\nfreebsd:\nembed: -Wl,-R/usr/local/lib/perl5/5.30/mach/CORE -pthread -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/local/lib/perl5/5.30/mach/CORE -lperl -lpthread -lm -lcrypt -lutil\nldopts: -Wl,-R/usr/local/lib/perl5/5.30/mach/CORE\n\nnetbsd:\nembed: -Wl,-E -Wl,-R/usr/pkg/lib/perl5/5.34.0/x86_64-netbsd-thread-multi/CORE -pthread -L/usr/lib -Wl,-R/usr/lib -Wl,-R/usr/pkg/lib -L/usr/pkg/lib -L/usr/pkg/lib/perl5/5.34.0/x86_64-netbsd-thread-multi/CORE -lperl -lm -lcrypt -lpthread\nldopts: -Wl,-E -Wl,-R/usr/pkg/lib/perl5/5.34.0/x86_64-netbsd-thread-multi/CORE\n\nopenbsd:\nembed: -Wl,-R/usr/libdata/perl5/amd64-openbsd/CORE -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/libdata/perl5/amd64-openbsd/CORE -lperl -lm -lc\nldopts: -Wl,-R/usr/libdata/perl5/amd64-openbsd/CORE\n\naix:\nembed: 
-bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp -bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp -brtl -bdynamic -b64 -L/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE -lperl -lpthread -lbind -lnsl -ldl -lld -lm -lcrypt -lpthreads -lc\nldopts: -bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp -bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp\n\nmac m1 monterey:\nembed: -fstack-protector-strong -L/System/Library/Perl/5.30/darwin-thread-multi-2level/CORE -lperl\nldopts:\n\nwindows msys install ucrt perl:\nembed: -s -L\"C:\\dev\\msys64\\ucrt64\\lib\\perl5\\core_perl\\CORE\" -L\"C:\\dev\\msys64\\ucrt64\\lib\" \"C:\\dev\\msys64\\ucrt64\\lib\\perl5\\core_perl\\CORE\\libperl532.a\"\nldopts:\n\nwindows strawberrry perl:\nembed: -s -L\"C:\\STRAWB~1\\perl\\lib\\CORE\" -L\"C:\\STRAWB~1\\c\\lib\" \"C:\\STRAWB~1\\perl\\lib\\CORE\\libperl530.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libmoldname.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libkernel32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libuser32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libgdi32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libwinspool.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libcomdlg32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libadvapi32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libshell32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libole32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\liboleaut32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libnetapi32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libuuid.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libws2_32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libmpr.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libwinmm.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libversion.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libodbc32.a\" \"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libodbccp32.a\" 
\"C:\\STRAWB~1\\c\\x86_64-w64-mingw32\\lib\\libcomctl32.a\"\nldopts:\n\n\nSo on windows, macos it makes no difference because ldopts is empty.\n\nOn various linuxes, except red-hat and debian ones, as well as on the BSDs, it\nremoves rpath. Which we then add back in various places (pl and transform\nmodules). On debian the added rpath never will contain the library.\n\nAIX is the one exception. Specifying -bE... doesn't seem right for building\nplperl etc. So possibly the subtraction accidentally does work for us there...\n\n\n> Things might be different elsewhere of course, but I'm tempted\n> to take out the ccdlflags subtraction and see what the buildfarm\n> says.\n\nExcept for the AIX thing I agree :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Feb 2022 19:19:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "On 07.02.22 20:24, Andres Freund wrote:\n> To be honest, I do not really understand the logic behind when autoconf ends\n> up with #defines that define a macro to 0/1 and when a macro ends defined/or\n> not and when we end up with a macro defined to 1 or not defined at all.\n\nThe default is to define to 1 or not at all. The reason for this is \npresumably that originally, autoconf (or its predecessor practices) just \npopulated the command line with a few -DHAVE_THIS options. Creating a \nheader file came later. And -DFOO is equivalent to #define FOO 1. \nAlso, this behavior allows code to use both the #ifdef HAVE_THIS and the \n#if HAVE_THIS style.\n\nThe cases that deviate from this have a special reason for this. One \nissue to consider is that depending on how the configure script is set \nup or structured, a test might not run at all. 
But for example, if you \nhave a check for a declaration of a function, and the test doesn't run \nin a particular configuration, the fallback in your own code would \nnormally be to then manually declare the function yourself. But if you \ndidn't even run the test, then adding a declaration of a function you \ndidn't want in the first place might be bad. In that case, you can \ncheck with #ifdef whether the test was run, and then check the value of \nthe macro for the test outcome.\n\n\n", "msg_date": "Tue, 8 Feb 2022 14:19:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - autogenerated headers" }, { "msg_contents": "\nOn 2/7/22 21:40, Tom Lane wrote:\n> I wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> What is the reason behind subtracting ccdlflags?\n>> It looks like the coding actually originated here:\n>> commit f5d0c6cad5bb2706e0e63f3f8f32e431ea428100\n> Ah, here's the thread leading up to that:\n>\n> https://www.postgresql.org/message-id/flat/200106191206.f5JC6R108371%40candle.pha.pa.us\n>\n> The use of ldopts rather than hand-hacked link options seems to date to\n> 0ed7864d6, only a couple days before that. I don't think we had a\n> buildfarm then, but I'd bet against the problem being especially\n> widespread even then, or more people would've complained.\n\n\nThe buildfarm's first entry is from 22 Oct 2004.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 8 Feb 2022 09:10:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-07 20:42:09 -0500, Tom Lane wrote:\n>> ... Peter just copied the logic in 7662419f1. 
Considering that\n>> the point of 7662419f1 was to get rid of MakeMaker, maybe we no\n>> longer needed that at that point.\n\n> Yea. And maybe what was broken in 2001 isn't broken anymore either ;)\n\nYeah --- note that Bruce was complaining about a problem on\nPerl 5.005, which was already a bit over-the-hill in 2001.\n\n> AIX is the one exception. Specifying -bE... doesn't seem right for building\n> plperl etc. So possibly the subtraction accidentally does work for us there...\n\nI tried this on AIX 7.2 (using the gcc farm, same build options\nas hoverfly). The build still works and passes regression tests,\nbut you get a warning about each symbol exported by Perl itself:\n\n...\nld: 0711-415 WARNING: Symbol PL_veto_cleanup is already exported.\nld: 0711-415 WARNING: Symbol PL_warn_nl is already exported.\nld: 0711-415 WARNING: Symbol PL_warn_nosemi is already exported.\nld: 0711-415 WARNING: Symbol PL_warn_reserved is already exported.\nld: 0711-415 WARNING: Symbol PL_warn_uninit is already exported.\nld: 0711-415 WARNING: Symbol PL_WB_invlist is already exported.\nld: 0711-415 WARNING: Symbol PL_XPosix_ptrs is already exported.\nld: 0711-415 WARNING: Symbol PL_Yes is already exported.\nld: 0711-415 WARNING: Symbol PL_Zero is already exported.\n\nSo there's about 1200 such warnings for plperl, and then the same\nagain for each contrib foo_plperl module. Maybe that's annoying\nenough that we should keep the logic. OTOH, it seems entirely\naccidental that it has that effect. I'd be a little inclined to\nreplace it with some rule about stripping '-bE:' switches out of\nthe ldopts result.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Feb 2022 18:42:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "On Tue, Oct 12, 2021, at 10:37, Andres Freund wrote:\n> - PGXS - and I don't yet know what to best do about it. 
One\n> backward-compatible way would be to continue use makefiles for pgxs,\n> but do the necessary replacement of Makefile.global.in via meson (and\n> not use that for postgres' own build). But that doesn't really\n> provide a nicer path for building postgres extensions on windows, so\n> it'd definitely not be a long-term path.\n\nTo help evaluate meson, I've put together a list consisting of 6165 Github repos with (?m)^PGXS in the Makefile.\n\nIt's structured in the alphabetical order of each parent repo, with possible children repos underneath, using Markdown nested lists.\n\nhttps://github.com/joelonsql/postgresql-extension-repos\n\nPerhaps such a list could be useful also for other purposes as well,\nmaybe to create some new type of automated tests.\n\n/Joel", "msg_date": "Wed, 09 Feb 2022 11:06:13 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-02-08 18:42:33 -0500, Tom Lane wrote:\n> I'd be a little inclined to replace it with some rule about stripping '-bE:'\n> switches out of the ldopts result.\n\nSimilar. That's a lot easier to understand than -bE ending up stripped by\nwhat we're doing. Should I do so, or do you want to?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 17:47:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-08 18:42:33 -0500, Tom Lane wrote:\n>> I'd be a little inclined to replace it with some rule about stripping '-bE:'\n>> switches out of the ldopts result.\n\n> Similar. That's a lot easier to understand than -bE ending up stripped by\n> what we're doing.
Should I do so, or do you want to?\n\nI could look at it later, but if you want to do it, feel free.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 21:34:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - perl embedding" }, { "msg_contents": "\nOn 2/6/22 15:57, Andrew Dunstan wrote:\n> On 2/6/22 13:39, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-02-06 12:06:41 -0500, Andrew Dunstan wrote:\n>>> Here's a patch. I've tested the perl piece on master and it works fine.\n>>> It applies cleanly down to 9.4, which is before we got transform modules\n>>> (9.5) which fail if we just omit doing this platform-specific piece.\n>> Given https://postgr.es/m/34e972bc-6e75-0754-9e6d-cde2518773a1%40dunslane.net\n>> wouldn't it make sense to simply remove the pexports/gendef logic instead of\n>> moving to gendef?\n>>\n> I haven't found a way to fix the transform builds if we do that. So\n> let's leave that as a separate exercise unless you have a solution for\n> that - this patch is really trivial.\n>\n>\n\nAny objection to my moving ahead with this? My current workaround is this:\n\n\ncat > /usr/bin/pexports <<EOF\n#!/bin/sh\n/ucrt64/bin/gendef - \"$@\"\nEOF\nchmod +x /usr/bin/pexports\n\n\n(gendef is available in the ucrt64/mingw-w64-ucrt-x86_64-tools-git\npackage on msys2)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 12:00:16 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 12:00:16 -0500, Andrew Dunstan wrote:\n> Any objection to my moving ahead with this?\n\nNo.
I don't yet understand what the transforms issue is and whether it can be\navoided, but clearly it's an improvement to be able to build with builtin\nmsys tools vs not...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 09:52:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 2/10/22 12:52, Andres Freund wrote:\n> Hi,\n>\n> On 2022-02-10 12:00:16 -0500, Andrew Dunstan wrote:\n>> Any objection to my moving ahead with this?\n> No. I don't yet understand what the transforms issue is and whether it can be\n> avoided, but clearly it's an improvement to be able to build with builtin\n> msys tools vs not...\n\n\n\nOK, thanks, done.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:06:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 13:54:10 +0200, Daniel Gustafsson wrote:\n> I added a --tap option for TAP output to pg_regress together with Jinbao Chen\n> for giggles and killing some time a while back.\n\nSorry for not replying to this earlier. I somehow thought I had, but the\narchives disagree.\n\nI think this would be great.\n\n\n> If it's helpful and there's any interest for this I'm happy to finish it up now.\n\nYes!
Probably worth starting a new thread for...\n\n\n> One thing that came out of this, is that we don't really handle the ignored\n> tests in the way the code thinks it does for normal output, the attached treats\n> ignored tests as SKIP tests.\n\nI can't really parse the first sentence...\n\n\n> \tif (exit_status != 0)\n> \t\tlog_child_failure(exit_status);\n> @@ -2152,6 +2413,7 @@ regression_main(int argc, char *argv[],\n> \t\t{\"config-auth\", required_argument, NULL, 24},\n> \t\t{\"max-concurrent-tests\", required_argument, NULL, 25},\n> \t\t{\"make-testtablespace-dir\", no_argument, NULL, 26},\n> +\t\t{\"tap\", no_argument, NULL, 27},\n> \t\t{NULL, 0, NULL, 0}\n> \t};\n\nI'd make it a --format=(regress|tap) or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Feb 2022 08:52:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Is there a current patch set to review in this thread at the moment?\n\n\n", "msg_date": "Mon, 7 Mar 2022 14:56:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-03-07 14:56:24 +0100, Peter Eisentraut wrote:\n> Is there a current patch set to review in this thread at the moment?\n\nI've been regularly rebasing and improving the patchset, but didn't post to\nthe thread about it most of the time.\n\nI've just pushed another rebase, will work to squash it into a reasonable\nnumber of patches and then repost that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Mar 2022 09:58:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nAttached is v6 of the meson patchset. There are lots of changes since the\nlast version posted.
These include:\n- python2 removal is now committed, so not needed in here anymore\n- CI changed to be based on the CI now merged into postgres\n- CI also tests suse, rhel, fedora (Nazir Bilal Yavuz). Found several bugs. I\n don't think we'd merge all of those, but while working on the meson branch,\n it's really useful.\n- all dependencies, except for pl/tcl (should be done soon)\n- several missing options added (segsize, extra_{lib,include}_dirs, enable-tap-tests)\n- several portability fixes, builds on net/openbsd without changes now\n- improvements to a number of \"configure\" tests\n- lots of ongoing rebasing changes\n- ...\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 7 Mar 2022 18:56:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v6" }, { "msg_contents": "On 2022-03-07 09:58:41 -0800, Andres Freund wrote:\n> On 2022-03-07 14:56:24 +0100, Peter Eisentraut wrote:\n> > Is there a current patch set to review in this thread at the moment?\n> \n> I've been regularly rebasing and improving the patchset, but didn't post to\n> the thread about it most of the time.\n> \n> I've just pushed another rebase, will work to squash it into a reasonable\n> number of patches and then repost that.\n\nNow done, see https://postgr.es/m/20220308025629.3xh2yo4sau74oafo%40alap3.anarazel.de\n\n\n", "msg_date": "Mon, 7 Mar 2022 18:57:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 08.03.22 03:56, Andres Freund wrote:\n> Attached is v6 of the meson patchset. There are lots of changes since the\n> last version posted. These include:\n> - python2 removal is now committed, so not needed in here anymore\n> - CI changed to be based on the CI now merged into postgres\n> - CI also tests suse, rhel, fedora (Nazir Bilal Yavuz). Found several bugs.
I\n> don't think we'd merge all of those, but while working on the meson branch,\n> it's really useful.\n> - all dependencies, except for pl/tcl (should be done soon)\n> - several missing options added (segsize, extra_{lib,include}_dirs, enable-tap-tests)\n> - several portability fixes, builds on net/openbsd without changes now\n> - improvements to a number of \"configure\" tests\n> - lots of ongoing rebasing changes\n> - ...\n\nI looked at this today mainly to consider whether some of the prereq\nwork is ready for adoption now. A lot of the work has to do with\nmaking various scripts write the output to other directories. I\nsuspect this has something to do with how meson handles separate build\ndirectories and how we have so far handled files created in the\ndistribution tarball. But the whole picture isn't clear to me.\n\nMore generally, I don't see a distprep target in the meson build\nfiles. I wonder what your plan for that is, or whether that would\neven work under meson. In [0], I argued for getting rid of the\ndistprep step.
Perhaps it is time to reconsider that now.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/cf0bec33-d965-664d-e0ec-fb15290f2273%402ndquadrant.com\n\nFor the short term, I think the patches 0002, 0008, 0010, and 0011\ncould be adopted, if they are finished as described.\n\nPatch 0007 seems unrelated, or at least independently significant, and\nshould be discussed separately.\n\nThe rest is really all part of the same\nput-things-in-the-right-directory issue.\n\nFor the overall patch set, I did a quick test with\n\nmeson setup build\ncd build\nninja\n\nwhich failed with\n\nUndefined symbols for architecture x86_64:\n \"_bbsink_zstd_new\", referenced from:\n _SendBaseBackup in replication_basebackup.c.o\n\nSo maybe your patch set is not up to date with this new zstd build\noption.\n\n\nDetails:\n\nv6-0001-meson-prereq-output-and-depencency-tracking-work.patch.gz\n\nThis all looks kind of reasonable, but lacks explanation in some\ncases, so I can't fully judge it.\n\nv6-0002-meson-prereq-move-snowball_create.sql-creation-in.patch.gz\n\nLooks like a reasonable direction, would be good to deduplicate with\nInstall.pm.\n\nv6-0003-meson-prereq-add-output-path-arg-in-generate-lwlo.patch.gz\n\nOk. Similar to 0001. (But unlike 0001, nothing in this patch\nactually uses the new output dir option. That only comes in 0013.)\n\nv6-0004-meson-prereq-add-src-tools-gen_versioning_script..patch.gz\n\nThis isn't used until 0013, and there it is patched again, so I'm not\nsure if this is in the right position of this patch series.\n\nv6-0005-meson-prereq-generate-errcodes.pl-accept-output-f.patch.gz\n\nAlso similar to 0001.\n\nv6-0006-meson-prereq-remove-unhelpful-chattiness-in-snowb.patch.gz\n\nMight as well include this into 0002.\n\nv6-0007-meson-prereq-Can-we-get-away-with-not-export-all-.patch.gz\n\nThis is a separate discussion.
It's not clear to me why this is part\nof this patch series.\n\nv6-0008-meson-prereq-Handle-DLSUFFIX-in-msvc-builds-simil.patch.gz\n\nPart of this was already done in 0001, so check if these patches are\nsplit correctly.\n\nI think the right way here is actually to go the other way around:\nMove DLSUFFIX into header files for all platforms. Move the DLSUFFIX\nassignment from src/makefiles/ to src/templates/, have configure read\nit, and then substitute it into Makefile.global and pg_config.h.\n\nThen we also don't have to patch the Windows build code a bunch of\ntimes to add the DLSUFFIX define everywhere.\n\nThere is code in configure already that would benefit from this, which\ncurrently says\n\n# We don't know the platform DLSUFFIX here, so check 'em all.\n\nv6-0009-prereq-make-unicode-targets-work-in-vpath-builds.patch.gz\n\nAnother directory issue\n\nv6-0010-ldap-tests-Don-t-run-on-unsupported-operating-sys.patch.gz\n\nNot sure what this is supposed to do, but it looks independent of this\npatch series. Does it currently not work on \"unsupported\" operating\nsystems?\n\nv6-0011-ldap-tests-Add-paths-for-openbsd.patch.gz\n\nThe more the merrier, although I'm a little bit worried about pointing\nto a /usr/local/share/examples/ directory.\n\nv6-0012-wip-split-TESTDIR-into-two.patch.gz\nv6-0013-meson-Add-meson-based-buildsystem.patch.gz\nv6-0014-meson-ci-Build-both-with-meson-and-as-before.patch.gz\n\nI suggest in the interim to add a README.meson to show how to invoke\nthis.
Eventually, of course, we'd rewrite the installation\ninstructions.\n\n\n", "msg_date": "Wed, 9 Mar 2022 13:37:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v6" }, { "msg_contents": "Hi,\n\nOn 2022-03-09 13:37:23 +0100, Peter Eisentraut wrote:\n> I looked at this today mainly to consider whether some of the prereq\n> work is ready for adoption now.\n\nThanks!\n\n\n> A lot of the work has to do with\n> making various scripts write the output to other directories. I\n> suspect this has something to do with how meson handles separate build\n> directories and how we have so far handled files created in the\n> distribution tarball. But the whole picture isn't clear to me.\n\nA big part of it is that when building with ninja tools are invoked in the\ntop-level build directory, but right now a bunch of our scripts put their\noutput in CWD.\n\n\n> More generally, I don't see a distprep target in the meson build\n> files. I wonder what your plan for that is, or whether that would\n> even work under meson. In [0], I argued for getting rid of the\n> distprep step. Perhaps it is time to reconsider that now.\n> \n> [0]: https://www.postgresql.org/message-id/flat/cf0bec33-d965-664d-e0ec-fb15290f2273%402ndquadrant.com\n\nI think it should be doable to add something roughly like the current distprep. The\ncleanest way would be to use\nhttps://mesonbuild.com/Reference-manual_builtin_meson.html#mesonadd_dist_script\nto copy the files into the generated tarball.\n\nOf course not adding it would be even easier ;)\n\n\n> For the short term, I think the patches 0002, 0008, 0010, and 0011\n> could be adopted, if they are finished as described.\n\nCool.\n\n\n> Patch 0007 seems unrelated, or at least independently significant, and\n> should be discussed separately.\n\nIt's related - it saves us from doing a lot of extra complexity on\nwindows.
I've brought it up as a separate thread too:\nhttps://postgr.es/m/20211101020311.av6hphdl6xbjbuif%40alap3.anarazel.de\n\nI'm currently a bit stuck implementing this properly for the configure / make\nsystem, as outlined in:\nhttps://www.postgresql.org/message-id/20220111025328.iq5g6uck53j5qtin%40alap3.anarazel.de\n\n\n> The rest is really all part of the same put-things-in-the-right-directory\n> issue.\n> \n> For the overall patch set, I did a quick test with\n> \n> meson setup build\n> cd build\n> ninja\n> \n> which failed with\n> \n> Undefined symbols for architecture x86_64:\n> \"_bbsink_zstd_new\", referenced from:\n> _SendBaseBackup in replication_basebackup.c.o\n> \n> So maybe your patch set is not up to date with this new zstd build\n> option.\n\nYep, I posted it before \"7cf085f077d - Add support for zstd base backup\ncompression.\" went in, but after 6c417bbcc8f. So the meson build knew about\nthe zstd dependency, but didn't yet specify it for postgres /\npg_basebackup. So all that's needed was / is adding the dependency to those\ntwo places.\n\nUpdated patches attached. This just contains the fix for this issue, doesn't\nyet address review comments.\n\nFWIW, I'd already pushed those fixes out to the git tree. There's frequent\nenough small changes that reposting every time seems too noisy.\n\n\n> v6-0001-meson-prereq-output-and-depencency-tracking-work.patch.gz\n> \n> This all looks kind of reasonable, but lacks explanation in some\n> cases, so I can't fully judge it.\n\nI'll try to clean it up.\n\n\n> v6-0007-meson-prereq-Can-we-get-away-with-not-export-all-.patch.gz\n> \n> This is a separate discussion.
It's not clear to me why this is part\n> of this patch series.\n\nSee above.\n\n\n> v6-0008-meson-prereq-Handle-DLSUFFIX-in-msvc-builds-simil.patch.gz\n> \n> Part of this was already done in 0001, so check if these patches are\n> split correctly.\n> \n> I think the right way here is actually to go the other way around:\n> Move DLSUFFIX into header files for all platforms. Move the DLSUFFIX\n> assignment from src/makefiles/ to src/templates/, have configure read\n> it, and then substitute it into Makefile.global and pg_config.h.\n> \n> Then we also don't have to patch the Windows build code a bunch of\n> times to add the DLSUFFIX define everywhere.\n> \n> There is code in configure already that would benefit from this, which\n> currently says\n> \n> # We don't know the platform DLSUFFIX here, so check 'em all.\n\nI'll try it out.\n\n\n> v6-0009-prereq-make-unicode-targets-work-in-vpath-builds.patch.gz\n> \n> Another directory issue\n\nI think it's a tad different, in that it's fixing something that's currently\nbroken in VPATH builds.\n\n\n> v6-0010-ldap-tests-Don-t-run-on-unsupported-operating-sys.patch.gz\n> \n> Not sure what this is supposed to do, but it looks independent of this\n> patch series. Does it currently not work on \"unsupported\" operating\n> systems?\n\nRight now if you run the ldap tests on windows, openbsd, ... the tests\nfail. The only reason it doesn't cause trouble on the buildfarm is that we\ncurrently don't run those tests by default...\n\n\n> v6-0011-ldap-tests-Add-paths-for-openbsd.patch.gz\n> \n> The more the merrier, although I'm a little bit worried about pointing\n> to a /usr/local/share/examples/ directory.\n\nIt's where the files are in the package :/.\n\n\n> v6-0012-wip-split-TESTDIR-into-two.patch.gz\n> v6-0013-meson-Add-meson-based-buildsystem.patch.gz\n> v6-0014-meson-ci-Build-both-with-meson-and-as-before.patch.gz\n> \n> I suggest in the interim to add a README.meson to show how to invoke\n> this.
Eventually, of course, we'd rewrite the installation\n> instructions.\n\nGood idea.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 9 Mar 2022 08:44:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v7" }, { "msg_contents": "Hi,\n\nOne thing that's pretty cool with ninja based builds is that it contains a\ndependency log of \"discovered\" dependencies as well as information about\ndependencies \"encoded\" in the build specification. LLVM contains a script that\nuses that dependency information to see whether the build specification is\nmissing any dependencies - this helped me find several \"build bugs\".\n\nThe script is at:\nhttps://github.com/llvm/llvm-project/blob/main/llvm/utils/check_ninja_deps.py\n\nIt just needs to be invoked in the build directory after a build.\n\nIf I e.g. remove the dependency from libpgcommon_srv.a to the lwlocknames.h\ngeneration (*) it complains with:\nerror: src/common/libpgcommon_srv.a.p/cryptohash_openssl.c.o requires src/include/storage/lwlocknames.h but has no dependency on it\nerror: src/common/libpgcommon_srv.a.p/hmac_openssl.c.o requires src/include/storage/lwlocknames.h but has no dependency on it\n\n\nI wonder if it's worth having a build target invoking it? But how to get the\npath to the script? We could just copy it into src/tools?
It's permissively\nlicensed...\n\nGreetings,\n\nAndres Freund\n\n\n(*) It seems architecturally pretty darn ugly for pgcommon to acquire lwlocks.\n\n\n", "msg_date": "Wed, 9 Mar 2022 08:55:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 02:08:29 -0700, Andres Freund wrote:\n> A very helpful visualization is to transform ninja's build logs into a\n> tracefile with https://github.com/nico/ninjatracing\n> \n> I attached an example - the trace.json.gz can be uploaded as-is to\n> https://ui.perfetto.dev/\n\nThese days perfetto can load .ninja_log directly without conversion. Attached\nis an example of the output. Also attached a local .ninja_log to upload if\nsomebody wants to look at the interactive output without building.\n\n\n> It's quite a bit of fun to look at imo.\n> \n> There's a few other things quickly apparent:\n> \n> - genbki prevents build progress due to dependencies on the generated\n> headers.\n\nThat's still the biggest \"slowdown\". Only a small number of things can start\nbefore genbki is done. Parts of pgport, bison/flex and parts of the docs\nbuild.\n\n\n> - the absolutely stupid way I implemented the python2->python3\n> regression test output conversion uses up a fair bit of resources\n\nThat's gone now.\n\n\n> - tablecmds.c, pg_dump.c, xlog.c and a few other files are starting to be\n> big enough to be problematic compile-time wise\n\nBut these are still present.
When building just the backend, the build speed\nis limited by gram.y->gram.c, gram.c -> gram.o, linking postgres.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 9 Mar 2022 09:13:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2021-10-22 11:55:05 -0400, John Naylor wrote:\n> On Thu, Oct 21, 2021 at 5:48 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > However, update-unicode is a bit harder. Partially not directly because\n> of\n> > meson, but because update-unicode as-is afaict doesn't support VPATH\n> builds,\n> > and meson enforces those.\n> \n> > make update-unicode\n> > ...\n> > make -C src/common/unicode update-unicode\n> > '/usr/bin/perl' generate-unicode_norm_table.pl\n> > Can't open perl script \"generate-unicode_norm_table.pl\": No such file or\n> directory\n> >\n> > It's not too hard to fix. See attached for the minimal stuff that I\n> > immediately found to be needed.\n> \n> Thanks for doing that, it works well enough for demonstration. With your\n> patch, and using an autoconf VPATH build, the unicode tables work fine, but\n> it complains of a permission error in generate_unaccent_rules.py. That\n> seems to be because the script is invoked directly rather than as an\n> argument to the python interpreter.\n>\n\n> Yeah.
I encountered a further issue: With autoconf on HEAD, with a source\n> tree build executed in contrib/unaccent:\n\nThis seems to be the same issue as above?\n\n\n> $ touch generate_unaccent_rules.py\n> $ make update-unicode\n> generate_unaccent_rules.py --unicode-data-file\n> ../../src/common/unicode/UnicodeData.txt --latin-ascii-file Latin-ASCII.xml\n> >unaccent.rules\n> /bin/sh: generate_unaccent_rules.py: command not found\n> make: *** [unaccent.rules] Error 127\n> make: *** Deleting file `unaccent.rules'\n\nThis looks more like you're building without --with-python and you don't have\na 'python' binary (but a python3 binary)?\n\nIndependent of my changes the invocation of generate_unaccent_rules looks like\n\n# Allow running this even without --with-python\nPYTHON ?= python\n...\n\t$(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n\nso your failure should only happen if PYTHON somehow is empty, otherwise I'd\nexpect python in front of the failing line?\n\n\n> Anyway, this can be put off until the very end, since it's not run often.\n> You've demonstrated how these targets would work, and that's good enough\n> for now.\n\nI'd like to get this stuff out of the patch series, so I'm planning to get\nthis committable...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Mar 2022 10:47:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 09.03.22 17:44, Andres Freund wrote:\n>> v6-0009-prereq-make-unicode-targets-work-in-vpath-builds.patch.gz\n>>\n>> Another directory issue\n> I think it's a tad different, in that it's fixing something that's currently\n> broken in VPATH builds.\n\nOk, I took another look at this.\n\n-override CPPFLAGS := -DFRONTEND $(CPPFLAGS)\n+override CPPFLAGS := -DFRONTEND \n-I$(abs_top_builddir)/src/common/unicode $(CPPFLAGS)\n\nThis could just be\n\n-I.\n\n- $(PERL)
generate-unicode_norm_table.pl\n+ $(PERL) $< $(CURDIR)\n\nI didn't detect a need for the additional directory argument. (So the \nchanges in generate-unicode_norm_table.pl are also apparently not \nnecessary.) Maybe this is something that will become useful later, in \nwhich case it should be split out from this patch.\n\nThe rest of this patch looks ok.\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:31:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v7" }, { "msg_contents": "Hi,\n\nAttached is v8. It's just a rebase to resolve conflicts with recent changes.\n\n- Andres", "msg_date": "Mon, 21 Mar 2022 19:22:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 09.03.22 13:37, Peter Eisentraut wrote:\n> v6-0008-meson-prereq-Handle-DLSUFFIX-in-msvc-builds-simil.patch.gz\n> \n> I think the right way here is actually to go the other way around:\n> Move DLSUFFIX into header files for all platforms.
Move the DLSUFFIX\n> assignment from src/makefiles/ to src/templates/, have configure read\n> it, and then substitute it into Makefile.global and pg_config.h.\n> \n> Then we also don't have to patch the Windows build code a bunch of\n> times to add the DLSUFFIX define everywhere.\n\nThis patch should do it.", "msg_date": "Thu, 24 Mar 2022 16:16:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v6" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 16:16:15 +0100, Peter Eisentraut wrote:\n> On 09.03.22 13:37, Peter Eisentraut wrote:\n> > v6-0008-meson-prereq-Handle-DLSUFFIX-in-msvc-builds-simil.patch.gz\n> > \n> > I think the right way here is actually to go the other way around:\n> > Move DLSUFFIX into header files for all platforms. Move the DLSUFFIX\n> > assignment from src/makefiles/ to src/templates/, have configure read\n> > it, and then substitute it into Makefile.global and pg_config.h.\n> > \n> > Then we also don't have to patch the Windows build code a bunch of\n> > times to add the DLSUFFIX define everywhere.\n> \n> This patch should do it.\n\n> From 763943176a1e0a0c954414ba9da07742ad791656 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Thu, 24 Mar 2022 16:00:54 +0100\n> Subject: [PATCH] Refactor DLSUFFIX handling\n> \n> Move DLSUFFIX into header files for all platforms. Move the DLSUFFIX\n> assignment from src/makefiles/ to src/templates/, have configure read\n> it, and then substitute it into Makefile.global and pg_config.h. This\n> avoids the need of all users to locally set CPPFLAGS.\n\nReading through it, this looks good to me.
Thanks!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 08:40:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v6" }, { "msg_contents": "\nOn 22.03.22 03:22, Andres Freund wrote:\n> Attached is v8. It's just a rebase to resolve conflicts with recent changes.\n\nI have committed the DLSUFFIX refactoring, and also a stripped-down \nversion of the patch that makes update-unicode work with vpath.
This takes care of\n> patches 0007 and 0009.\n\nThanks!\n\n\n> The only other thing IMO that might be worth addressing in PG15 is the\n> snowball scripts refactoring (0002 and 0003), but that doesn't seem quite\n> ready yet. (At least, it would need to be integrated into the distprep\n> target, since it adds a dependency on perl.) Maybe it's not worth it right\n> now.\n\nNot sure myself.\n\n\n> With that, I suggest moving this patch set to CF 2022-07.\n\nDone.\n\n\nOne thing I'd like to discuss fairly soon is what kind of approach to take for\nintegrating meson support. Most other projects I looked at kept parallel\nbuildsystems for at least a release, so that there's one round of \"broad\" user\nfeedback.\n\nIn our context it could make sense to merge meson, after a few months of\nshakeup remove the current windows buildsystems, and then in release + 1\nremove the make based stuff.\n\nBut we can have that discussion before the next CF, but still after\ncode-freeze & immediate mopup.\n\n\n> A general comment on the remaining prereq patches: We appear to be\n> accumulating a mix of conventions for how \"generate\" scripts specify their\n> output file. Some have it built-in, some use the last argument, some use an\n> option, which might be -o or --output. Maybe we can gently work toward more\n> commonality there.\n\nFair point.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:59:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "\nOn 3/28/22 15:59, Andres Freund wrote:\n>\n> One thing I'd like to discuss fairly soon is what kind of approach to take for\n> integrating meson support.
Most other projects I looked kept parallel\n> buildsystems for at least a release, so that there's one round of \"broad\" user\n> feedback.\n\n\n\nWe did something similar when we moved from CVS to git.\n\n\n>\n> In our context it could make sense to merge meson, after a few months of\n> shakeup remove the current windows buildsystems, and then in release + 1\n> remove the make based stuff.\n>\n> But we can have that discussion that before the next CF, but still after\n> code-freeze & immediate mopup.\n>\n\n\nI'd like to get a stake in the ground and then start working on\nbuildfarm support. Up to now I think it's been a bit too much of a\nmoving target. Essentially that would mean an interim option for\nbuilding with meson.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 28 Mar 2022 18:44:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/28/22 15:59, Andres Freund wrote:\n>> In our context it could make sense to merge meson, after a few months of\n>> shakeup remove the current windows buildsystems, and then in release + 1\n>> remove the make based stuff.\n\nThat sounds like a decent plan.\n\n> I'd like to get a stake in the ground and then start working on\n> buildfarm support. Up to now I think it's been a bit too much of a\n> moving target. Essentially that would mean an interim option for\n> building with meson.\n\nIf we can commit meson build infrastructure without removing the\nexisting infrastructure, then the buildfarm can continue to work,\nand we can roll out support for the new way slowly. 
It'd be\nfairly impractical to expect all buildfarm animals to update\ninstantly anyhow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 18:58:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-03-28 18:58:19 -0400, Tom Lane wrote:\n> If we can commit meson build infrastructure without removing the\n> existing infrastructure, then the buildfarm can continue to work,\n> and we can roll out support for the new way slowly.\n\nI think it's not a huge issue to have both for a while. Of course it's\nannoying to have to update two files when adding a source file, but it's not\nthe end of the world for a limited time. IMO.\n\n\n> It'd be fairly impractical to expect all buildfarm animals to update\n> instantly anyhow.\n\nAnd development workflows. I expect some unforseen breakages somewhere, given\nthe variety of systems out there.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Mar 2022 16:11:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Some feedback and patches for your branch at \n3274198960c139328fef3c725cee1468bbfff469:\n\n0001-Install-a-few-more-files.patch\n\nThese are just some files that were apparently forgotten to be installed \nso far.\n\n0002-Adjust-some-header-file-installation-paths.patch\n\nThe installation of server headers is apparently still in progress. \nThis just adjusts the installation directory of those that are already \nbeing dealt with, so they match the existing installation layout.\n\n0003-Fix-warnings-about-deprecated-features.patch\n\nThis fixes some deprecation warnings and raises the requirement to 0.56. \n I'm not sure why the current cutoff at 0.54 was chosen. Perhaps that \ncould be documented. 
If we choose to stay with 0.54, is there a way to \nturn off deprecation warnings, so not everyone needs to see them?\n\n0004-Install-postmaster-symlink.patch\n\nThis needs 0.61, so maybe it's a bit too new. Or we could get rid of \nthe postmaster symlink altogether?\n\n0005-Workaround-for-Perl-detection.patch\n\nThis is needed on my system to get the Perl detection to pass. If I \nlook at the equivalent configure code, some more refinement appears to \nbe needed in this area.", "msg_date": "Wed, 13 Apr 2022 12:26:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 13.04.22 12:26, Peter Eisentraut wrote:\n> Some feedback and patches for your branch at \n> 3274198960c139328fef3c725cee1468bbfff469:\n\nHere is another patch. It adds support for building ecpg.", "msg_date": "Wed, 20 Apr 2022 15:09:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-13 12:26:05 +0200, Peter Eisentraut wrote:\n> Some feedback and patches for your branch at\n> 3274198960c139328fef3c725cee1468bbfff469:\n\nThanks! I just rebased the branch, will merge your changes once the fallout\nfrom that is fixed...\n\n\n> 0001-Install-a-few-more-files.patch\n> \n> These are just some files that were apparently forgotten to be installed so\n> far.\n\n> 0002-Adjust-some-header-file-installation-paths.patch\n> \n> The installation of server headers is apparently still in progress. This\n> just adjusts the installation directory of those that are already being\n> dealt with, so they match the existing installation layout.\n\n\nYea. 
I've not at all paid attention to that so far, besides getting tests to\npass.\n\n\n> 0003-Fix-warnings-about-deprecated-features.patch\n> \n> This fixes some deprecation warnings and raises the requirement to 0.56.\n\nI don't see any deprecation warnings - I see some notices about *future*\ndeprecated features being used:\n\nNOTICE: Future-deprecated features used:\n * 0.55.0: {'ExternalProgram.path'}\n * 0.56.0: {'meson.source_root', 'meson.build_root'}\n\n(i.e. once the minimum version is increased to > 0.54, those will trigger\ndeprecation warnings)\n\nWhat are you seeing with what version?\n\n\n> I'm not sure why the current cutoff at 0.54 was chosen. Perhaps that could\n> be documented.\n\nNot quite sure why I ended up with 0.54. We definitely should require at most\n0.56, as that's the last version supporting python 3.5.\n\n\n> 0004-Install-postmaster-symlink.patch\n> \n> This needs 0.61, so maybe it's a bit too new.\n\nYea, that's too new. I think we can just create the symlink using ln or such\nif we need it.\n\n\n> Or we could get rid of the postmaster symlink altogether?\n\nBut that seems like a better approach.\n\n\n> 0005-Workaround-for-Perl-detection.patch\n> \n> This is needed on my system to get the Perl detection to pass. 
If I look at\n> the equivalent configure code, some more refinement appears to be needed in\n> this area.\n\n> From 1f80e1ebb8efeb0eba7d57032282520fd6455b0d Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Wed, 13 Apr 2022 11:50:52 +0200\n> Subject: [PATCH 5/5] Workaround for Perl detection\n> \n> ---\n> meson.build | 6 +++---\n> 1 file changed, 3 insertions(+), 3 deletions(-)\n> \n> diff --git a/meson.build b/meson.build\n> index 1bf53ea24d..e33ed11b08 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -545,9 +545,9 @@ else\n> # file existence.\n> if perl_may_work\n> perl_ccflags += ['-I@0@'.format(perl_inc_dir)]\n> - if host_machine.system() == 'darwin'\n> - perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n> - endif\n> + #if host_machine.system() == 'darwin'\n> + # perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n> + #endif\n> endif\n\nWhat problem do you see without this? It did build on CI and on my m1 mini box\nas is...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Apr 2022 14:04:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 20.04.22 23:04, Andres Freund wrote:\n>> 0003-Fix-warnings-about-deprecated-features.patch\n>>\n>> This fixes some deprecation warnings and raises the requirement to 0.56.\n> \n> I don't see any deprecation warnings - I see some notices about *future*\n> deprecated features being used:\n> \n> NOTICE: Future-deprecated features used:\n> * 0.55.0: {'ExternalProgram.path'}\n> * 0.56.0: {'meson.source_root', 'meson.build_root'}\n> \n> (i.e. once the minimum version is increased to > 0.54, those will trigger\n> deprecation warnings)\n> \n> What are you seeing with what version?\n\nI see the same thing. 
Effectively, \"deprecation warning\" and \n\"future-deprecation notice\" are just different spellings of \"yelling at \nme unconditionally for using code that I can't do anything about\".\n\n>> I'm not sure why the current cutoff at 0.54 was chosen. Perhaps that could\n>> be documented.\n> \n> Not quite sure why I ended up with 0.54. We definitely should require at most\n> 0.56, as that's the last version supporting python 3.5.\n\nWhy is Python 3.5 relevant?\n\n>> From 1f80e1ebb8efeb0eba7d57032282520fd6455b0d Mon Sep 17 00:00:00 2001\n>> From: Peter Eisentraut <peter@eisentraut.org>\n>> Date: Wed, 13 Apr 2022 11:50:52 +0200\n>> Subject: [PATCH 5/5] Workaround for Perl detection\n>>\n>> ---\n>> meson.build | 6 +++---\n>> 1 file changed, 3 insertions(+), 3 deletions(-)\n>>\n>> diff --git a/meson.build b/meson.build\n>> index 1bf53ea24d..e33ed11b08 100644\n>> --- a/meson.build\n>> +++ b/meson.build\n>> @@ -545,9 +545,9 @@ else\n>> # file existence.\n>> if perl_may_work\n>> perl_ccflags += ['-I@0@'.format(perl_inc_dir)]\n>> - if host_machine.system() == 'darwin'\n>> - perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n>> - endif\n>> + #if host_machine.system() == 'darwin'\n>> + # perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n>> + #endif\n>> endif\n> \n> What problem do you see without this? It did build on CI and on my m1 mini box\n> as is...\n\nI'm using homebrew-installed gcc and homebrew-installed perl. gcc \ndoesn't understand the option -iwithsysroot, and apparently whatever it \npoints to is not needed.\n\nNote that in configure.ac the logic is like this:\n\n if test \\! 
-f \"$perl_archlibexp/CORE/perl.h\" ; then\n if test -f \"$PG_SYSROOT$perl_archlibexp/CORE/perl.h\" ; then\n perl_includespec=\"-iwithsysroot $perl_archlibexp/CORE\"\n fi\n fi\n\nSo it checks first if it can find the needed file without the sysroot \nbusiness.\n\n\n", "msg_date": "Thu, 21 Apr 2022 22:36:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-21 22:36:01 +0200, Peter Eisentraut wrote:\n> On 20.04.22 23:04, Andres Freund wrote:\n> > > 0003-Fix-warnings-about-deprecated-features.patch\n> > > \n> > > This fixes some deprecation warnings and raises the requirement to 0.56.\n> > \n> > I don't see any deprecation warnings - I see some notices about *future*\n> > deprecated features being used:\n> > \n> > NOTICE: Future-deprecated features used:\n> > * 0.55.0: {'ExternalProgram.path'}\n> > * 0.56.0: {'meson.source_root', 'meson.build_root'}\n> > \n> > (i.e. once the minimum version is increased to > 0.54, those will trigger\n> > deprecation warnings)\n> > \n> > What are you seeing with what version?\n> \n> I see the same thing. Effectively, \"deprecation warning\" and\n> \"future-deprecation notice\" are just different spellings of \"yelling at me\n> unconditionally for using code that I can't do anything about\".\n\nYea, I'm not happy that \"future-deprecation notice\" was enabled by\ndefault. It's still different from \"deprecation warning\" in prominence and\nbehaviour (e.g. --fatal-meson-warnings doesn't error out for notices but not\nfor errors), but ...\n\nMight be worth raising with the meson folks.\n\n\n> > > I'm not sure why the current cutoff at 0.54 was chosen. Perhaps that could\n> > > be documented.\n> > \n> > Not quite sure why I ended up with 0.54. 
We definitely should require at most\n> > 0.56, as that's the last version supporting python 3.5.\n> \n> Why is Python 3.5 relevant?\n\nIt's the latest available on some older platforms. It's pretty easy to install\na new meson, a heck of a lot more work to install a new python. IIRC solaris,\nAIX and some of Tom's dinosaurs.\n\n\n> > > From 1f80e1ebb8efeb0eba7d57032282520fd6455b0d Mon Sep 17 00:00:00 2001\n> > > From: Peter Eisentraut <peter@eisentraut.org>\n> > > Date: Wed, 13 Apr 2022 11:50:52 +0200\n> > > Subject: [PATCH 5/5] Workaround for Perl detection\n> > > \n> > > ---\n> > > meson.build | 6 +++---\n> > > 1 file changed, 3 insertions(+), 3 deletions(-)\n> > > \n> > > diff --git a/meson.build b/meson.build\n> > > index 1bf53ea24d..e33ed11b08 100644\n> > > --- a/meson.build\n> > > +++ b/meson.build\n> > > @@ -545,9 +545,9 @@ else\n> > > # file existence.\n> > > if perl_may_work\n> > > perl_ccflags += ['-I@0@'.format(perl_inc_dir)]\n> > > - if host_machine.system() == 'darwin'\n> > > - perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n> > > - endif\n> > > + #if host_machine.system() == 'darwin'\n> > > + # perl_ccflags += ['-iwithsysroot', perl_inc_dir]\n> > > + #endif\n> > > endif\n> > \n> > What problem do you see without this? It did build on CI and on my m1 mini box\n> > as is...\n> \n> I'm using homebrew-installed gcc and homebrew-installed perl. gcc doesn't\n> understand the option -iwithsysroot, and apparently whatever it points to is\n> not needed.\n\nAh, I only tested with system \"cc\".\n\n\n> Note that in configure.ac the logic is like this:\n> \n> if test \\! -f \"$perl_archlibexp/CORE/perl.h\" ; then\n> if test -f \"$PG_SYSROOT$perl_archlibexp/CORE/perl.h\" ; then\n> perl_includespec=\"-iwithsysroot $perl_archlibexp/CORE\"\n> fi\n> fi\n> \n> So it checks first if it can find the needed file without the sysroot\n> business.\n\nI guess we'll have to copy that. 
Although it doesn't seem quite the right\nbehaviour, because it might end up picking up a different perl.h that way...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Apr 2022 13:56:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-21 22:36:01 +0200, Peter Eisentraut wrote:\n>> Why is Python 3.5 relevant?\n\n> It's the latest available on some older platforms. It's pretty easy to install\n> a new meson, a heck of a lot more work to install a new python. IIRC solaris,\n> AIX and some of Tom's dinosaurs.\n\nFWIW, I don't think that either gaur or prairiedog need be factored into\nthis conversation. They cannot build ninja at all for lack of <spawn.h>,\nso whether they could run meson is pretty much beside the point.\n\n(I wonder if we should stick in a configure test for <spawn.h>,\njust to see if anything else doesn't have it?)\n\nWe should worry a little more about Solaris and AIX, but even there I\nthink it's largely up to the platform owner whether they've updated\npython to something modern. If it isn't, you need to move the goalposts\nback some more :-(. 
As of today I see the following pre-3.6 pythons\nin the buildfarm, exclusive of mine:\n\nskate\t\t3.2.3\nsnapper\t\t3.2.3\ntopminnow\t3.4.2\nhornet\t\t3.4.3\nsungazer\t3.4.3\nwrasse\t\t3.4.3\nshelduck\t3.4.10\ncurculio\t3.5.1\nhoverfly\t3.5.1\nbatfish\t\t3.5.2\nspurfowl\t3.5.2\ncuon\t\t3.5.2\nayu\t\t3.5.3\nchimaera\t3.5.3\nchipmunk\t3.5.3\ngrison\t\t3.5.3\nmussurana\t3.5.3\ntadarida\t3.5.3\nurocryon\t3.5.3\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Apr 2022 17:34:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "\nOn 2022-04-21 Th 17:34, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-04-21 22:36:01 +0200, Peter Eisentraut wrote:\n>>> Why is Python 3.5 relevant?\n>> It's the latest available on some older platforms. It's pretty easy to install\n>> a new meson, a heck of a lot more work to install a new python. IIRC solaris,\n>> AIX and some of Tom's dinosaurs.\n> FWIW, I don't think that either gaur or prairiedog need be factored into\n> this conversation. They cannot build ninja at all for lack of <spawn.h>,\n> so whether they could run meson is pretty much beside the point.\n>\n> (I wonder if we should stick in a configure test for <spawn.h>,\n> just to see if anything else doesn't have it?)\n>\n> We should worry a little more about Solaris and AIX, but even there I\n> think it's largely up to the platform owner whether they've updated\n> python to something modern. If it isn't, you need to move the goalposts\n> back some more :-(. 
As of today I see the following pre-3.6 pythons\n> in the buildfarm, exclusive of mine:\n>\n> skate\t\t3.2.3\n> snapper\t\t3.2.3\n> topminnow\t3.4.2\n> hornet\t\t3.4.3\n> sungazer\t3.4.3\n> wrasse\t\t3.4.3\n> shelduck\t3.4.10\n> curculio\t3.5.1\n> hoverfly\t3.5.1\n> batfish\t\t3.5.2\n> spurfowl\t3.5.2\n> cuon\t\t3.5.2\n> ayu\t\t3.5.3\n> chimaera\t3.5.3\n> chipmunk\t3.5.3\n> grison\t\t3.5.3\n> mussurana\t3.5.3\n> tadarida\t3.5.3\n> urocryon\t3.5.3\n>\n> \t\t\t\n\n\nPresumably that only tells you about the animals currently building with\npython.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 24 Apr 2022 15:44:03 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Here is a patch that adds in NLS.\n\nThere are some opportunities to improve this. For example, we could \nmove the list of languages from the meson.build files into separate \nLINGUAS files, which could be shared with the makefile-based build \nsystem. I need to research this a bit more.\n\nAlso, this only covers the build and install phases of the NLS process. \nThe xgettext and msgmerge aspects I haven't touched at all. There is \nmore to research there as well.\n\nThe annoying thing is that the i18n module doesn't appear to have a way \nto communicate with feature options or dependencies, so there isn't a \nway to tell it to only do its things when some option is enabled, or \nconversely to check whether the module found the things it needs and to \nenable or disable an option based on that. 
So right now for example if \nyou explicitly disable the 'nls' option, the binaries are built without \nNLS but the .mo files are still built and installed.\n\nIn any case, this works for the main use cases and gets us a step \nforward, so it's worth considering.", "msg_date": "Wed, 27 Apr 2022 21:56:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-20 15:09:31 +0200, Peter Eisentraut wrote:\n> On 13.04.22 12:26, Peter Eisentraut wrote:\n> > Some feedback and patches for your branch at\n> > 3274198960c139328fef3c725cee1468bbfff469:\n> \n> Here is another patch. It adds support for building ecpg.\n\nCool!\n\nI have merged this, with a few changes (split parse.pl change out, changed its\ninvocation in Solution.pm, indentation, explicitly using shared_library()\nrather than library(), indentation).\n\nBut there's need for some more - exports.txt handling is needed for windows\n(and everywhere else, but not as crucially) - hence CI currently being broken\non windows. I've done that in a VM, and it indeed fixes the issues. But it\nneeds to be generalized, I just copied and pasted stuff around...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Apr 2022 10:46:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-27 21:56:27 +0200, Peter Eisentraut wrote:\n> Here is a patch that adds in NLS.\n\nCool! I know very little about translations, so I was reticent tackling\nthis...\n\n\n> For example, we could move the list of languages from the meson.build files\n> into separate LINGUAS files, which could be shared with the makefile-based\n> build system. 
I need to research this a bit more.\n\nYea, that'd be nice.\n\n\n> The annoying thing is that the i18n module doesn't appear to have a way to\n> communicate with feature options or dependencies, so there isn't a way to\n> tell it to only do its things when some option is enabled, or conversely to\n> check whether the module found the things it needs and to enable or disable\n> an option based on that. So right now for example if you explicitly disable\n> the 'nls' option, the binaries are built without NLS but the .mo files are\n> still built and installed.\n\nOne partial way to deal with that, I think, would be to change all the\nsubdir('po') invocations to subdir('po', if_found: libintl). If we don't want\nthat for some reason, is there a reason a simple if libintl.found() wouldn't\nwork?\n\n\n> In any case, this works for the main use cases and gets us a step forward,\n> so it's worth considering.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Apr 2022 11:00:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-29 11:00:43 -0700, Andres Freund wrote:\n> On 2022-04-27 21:56:27 +0200, Peter Eisentraut wrote:\n> > Here is a patch that adds in NLS.\n> \n> Cool! I know very little about translations, so I was reticent tackling\n> this...\n\n> > The annoying thing is that the i18n module doesn't appear to have a way to\n> > communicate with feature options or dependencies, so there isn't a way to\n> > tell it to only do its things when some option is enabled, or conversely to\n> > check whether the module found the things it needs and to enable or disable\n> > an option based on that. 
So right now for example if you explicitly disable\n> > the 'nls' option, the binaries are built without NLS but the .mo files are\n> > still built and installed.\n> \n> One partial way to deal with that, I think, would be to change all the\n> subdir('po') invocations to subdir('po', if_found: libintl). If we don't want\n> that for some reason, is there a reason a simple if libintl.found() wouldn't\n> work?\n\nMerged into my tree now, using if_found. I've also made the intl check work\nwith older meson versions, since I didn't include your version requirement\nupgrades.\n\n\nFor now I \"fixed\" the ecpg issue on windows by just not building ecpg\nthere. Ecpg also needs tests ported...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Apr 2022 13:11:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 29.04.22 19:46, Andres Freund wrote:\n> explicitly using shared_library() rather than library()\n\nWhy is that? We do build static libraries right now, so using library() \nwould seem more suitable for that.\n\n\n", "msg_date": "Mon, 2 May 2022 16:47:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-02 16:47:43 +0200, Peter Eisentraut wrote:\n> On 29.04.22 19:46, Andres Freund wrote:\n> > explicitly using shared_library() rather than library()\n> \n> Why is that? We do build static libraries right now, so using library()\n> would seem more suitable for that.\n\nWhen I wrote this I hadn't realized that we build both shared and static\nlibraries. I've since changed the respective ecpg libraries to use\nboth_libraries(). 
Same with libpq (I really hadn't realized we build a static\nlibpq...).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 2 May 2022 09:36:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "More patches:\n\n0001-meson-Assorted-compiler-test-tweaks.patch\n\nI was going through a diff of pg_config.h between old and new build and \nfound a few omissions and small differences.\n\nSome of the\n\n blah ? 1 : false\n\nis of course annoying and can be removed eventually, but it's useful \nwhen analyzing the diff, and since it's already done in other places it \nseems reasonable to apply it consistently.\n\nOf course there is some more work left for some of the more complicated \ntests; this isn't meant to be complete.\n\n\n0002-meson-Add-pg_walinspect.patch\n\nThis was added more recently and was not ported yet. Nothing too \ninteresting here.\n\n\n0003-meson-Install-all-server-headers.patch\n\nWith this, all the server headers installed by a makefile-based build \nare installed. I tried to strike a balance between using \ninstall_subdir() with exclude list versus listing things explicitly. 
\nDifferent variations might be possible, but this looked pretty sensible \nto me.\n\n\nWith these patches, the list of files installed with make versus meson \nmatch up, except for known open items (postmaster symlink, some library \nnaming differences, pkgconfig, pgxs, test modules installed, documentation).", "msg_date": "Wed, 4 May 2022 13:53:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-04 13:53:54 +0200, Peter Eisentraut wrote:\n> 0001-meson-Assorted-compiler-test-tweaks.patch\n> \n> I was going through a diff of pg_config.h between old and new build and\n> found a few omissions and small differences.\n\nThanks, merged that.\n\n\n> is of course annoying and can be removed eventually, but it's useful when\n> analyzing the diff, and since it's already done in other places it seems\n> reasonable to apply it consistently.\n\nYea, I'd tried to minimize the difference at some point, but haven't done so\nin a while...\n\n\n> 0002-meson-Add-pg_walinspect.patch\n> \n> This was added more recently and was not ported yet. Nothing too\n> interesting here.\n\nMerged that.\n\n\n> 0003-meson-Install-all-server-headers.patch\n> \n> With this, all the server headers installed by a makefile-based build are\n> installed. I tried to strike a balance between using install_subdir() with\n> exclude list versus listing things explicitly. Different variations might be\n> possible, but this looked pretty sensible to me.\n\nI locally had something similar, but I'm worried that this approach will be\ntoo fragile. Leads to e.g. editor temp files getting installed. 
I've merged it\nfor now, but I think we need a different approach.\n\n\n> With these patches, the list of files installed with make versus meson match\n> up, except for known open items (postmaster symlink, some library naming\n> differences, pkgconfig, pgxs, test modules installed, documentation).\n\nI added pkgconfig since then. They're not exactly the same, but pretty close,\nexcept for one thing: Looks like some of the ecpg libraries really should link\nto some other ecpg libs? I think we're missing something there... That then\nleads to missing requirements in the .pc files.\n\nRe symlink: Do you have an opinion about dropping the symlink vs implementing it\n(likely via a small helper script?)?\n\nRe library naming: It'd obviously be easy to adjust the library names, but I\nwonder if it'd not be worth keeping the _static.a suffix, right now the unsuffixed\nlibrary name imo is quite confusing.\n\nRe test modules: Not sure what the best fix for that is yet. Except that we\ndon't have a search path for server libs, I'd just install them to a dedicated\npath or add the build dir to the search path. But we don't, so ...\n\nRe docs: I think the best approach here would be to have a new\nmeson_options.txt option defining whether the docs should be built. But not\nquite sure.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 May 2022 14:27:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-04-21 17:34:47 -0400, Tom Lane wrote:\n> FWIW, I don't think that either gaur or prairiedog need be factored into\n> this conversation. 
They cannot build ninja at all for lack of <spawn.h>,\n> so whether they could run meson is pretty much beside the point.\n\nYea.\n\n\n> (I wonder if we should stick in a configure test for <spawn.h>,\n> just to see if anything else doesn't have it?)\n\nMight be worth doing?\n\n\n> We should worry a little more about Solaris and AIX, but even there I\n> think it's largely up to the platform owner whether they've updated\n> python to something modern.\n\nLooks like \"AIX toolbox\" is at 3.7. Solaris 11.4 apparently has 3.5 (11.3 is\nEOL January 2024).\n\nI think it's worth caring about supporting 3.6 due to RHEL 7 for now.\n\n\n> If it isn't, you need to move the goalposts\n> back some more :-(. As of today I see the following pre-3.6 pythons\n> in the buildfarm, exclusive of mine:\n>\n> skate\t\t3.2.3\n> snapper\t\t3.2.3\n\nDebian wheezy, I feel ok with dropping that.\n\n\n> topminnow\t3.4.2\n\nDebian jessie, similar.\n\n\n> hornet\t\t3.4.3\n> sungazer\t3.4.3\n\nLooks like a newer python version is available for AIX, without manually\ncompiling.\n\n\n> wrasse\t\t3.4.3\n\nApparently solaris 11.4 has python 3.5 (still not great :/)\n\n\n> shelduck\t3.4.10\n\nThis animal seems to have retired.\n\n\n> curculio\t3.5.1\n\nSupported versions of openbsd have modern versions of python.\n\n\n> hoverfly\t3.5.1\n\nAIX\n\n\n> batfish\t\t3.5.2\n> spurfowl\t3.5.2\n> cuon\t\t3.5.2\n\nUbuntu 16.04 is EOL (since 2021-04), outside of paid extended support.\n\n\n> ayu\t\t3.5.3\n> chimaera\t3.5.3\n> chipmunk\t3.5.3\n> grison\t\t3.5.3\n> mussurana\t3.5.3\n> tadarida\t3.5.3\n> urocryon\t3.5.3\n\nThese are all [variants of] debian stretch. 
I think we should be ok dropping\nsupport for that; the extended \"LTS\" support for stretch ends June 30, 2022\n(with the last non-extended update at July 18, 2020).\n\nGreetings,\n\nAndres Freund\n\n[1] https://repology.org/project/python/versions\n\n\n", "msg_date": "Fri, 6 May 2022 15:05:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 06.05.22 23:27, Andres Freund wrote:\n> Re symlink: Do you have an opinion about dropping the symlink vs implementing it\n> (likely via a small helper script?)?\n\nI think the postmaster symlink could be dropped. 
The postmaster man \npage has been saying that it's deprecated since 2006.\n\n\n\n", "msg_date": "Wed, 11 May 2022 12:11:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "More patches:\n\nI fixed the Perl detection issue in my macOS environment that I had \nreported a while ago.\n\nThen I added in support for all configure options that had not been \nported over yet. Some of these are rather trivial.\n\nAfter that, these configure options don't have an equivalent yet:\n\n--disable-rpath\n--enable-profiling\n--disable-thread-safety\n--with-libedit-preferred\n\nThe first three overlap with meson built-in functionality, so we would \nneed to check whether the desired functionality is available somehow.\n\nThe last one we probably want to keep somehow; it would need a bit of \nfiddly work.", "msg_date": "Wed, 11 May 2022 12:18:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n> I fixed the Perl detection issue in my macOS environment that I had reported\n> a while ago.\n\nHm. I wonder if it's right to check with is_file() - perhaps there are\nplatforms that have split off the include directory?\n\n\n> Then I added in support for all configure options that had not been ported\n> over yet. Some of these are rather trivial.\n\nThanks!\n\nSome of these (extra version, krb srvname, ...) 
I just merged from a\ncolleague.\n\nWill look at merging the others.\n\n\n> After that, these configure options don't have an equivalent yet:\n> \n> --disable-rpath\n> --enable-profiling\n> --disable-thread-safety\n> --with-libedit-preferred\n> \n> The first three overlap with meson built-in functionality, so we would need\n> to check whether the desired functionality is available somehow.\n\nWhich builtin functionality does --enable-profiling overlap with? There's a\ncoverage option, perhaps you were thinking of that?\n\nI don't think we should add --disable-thread-safety, platforms without it also\naren't going to support ninja / meson... IIRC Thomas was planning to submit a\npatch getting rid of it independently...\n\n\n> The last one we probably want to keep somehow; it would need a bit of fiddly\n> work.\n\nA colleague just finished that bit. Probably can be improved further, but it\nworks now...\n\n\n> From 049b34b6a8dd949f0eb7987cad65f6682a6ec786 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Wed, 11 May 2022 09:06:13 +0200\n> Subject: [PATCH 3/9] meson: prereq: Refactor dtrace postprocessing make rules\n> \n> Move the dtrace postprocessing sed commands into a separate file so\n> that it can be shared by meson. Also split the rule into two for\n> proper dependency declaration.\n\nHm. 
Using sed may be problematic on windows...\n\n\n> From fad02f1fb71ec8c64e47e5031726ffbee4a1dd84 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Wed, 11 May 2022 09:53:01 +0200\n> Subject: [PATCH 7/9] meson: Add system-tzdata option\n> \n> ---\n> meson.build | 3 +++\n> meson_options.txt | 3 +++\n> 2 files changed, 6 insertions(+)\n> \n> diff --git a/meson.build b/meson.build\n> index 7c9c6e7f23..b33a51a35d 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -246,6 +246,9 @@ cdata.set('RELSEG_SIZE', get_option('segsize') * 131072)\n> cdata.set('DEF_PGPORT', get_option('pgport'))\n> cdata.set_quoted('DEF_PGPORT_STR', get_option('pgport'))\n> cdata.set_quoted('PG_KRB_SRVNAM', 'postgres')\n> +if get_option('system-tzdata') != ''\n> + cdata.set_quoted('SYSTEMTZDIR', get_option('system-tzdata'))\n> +endif\n\nThis doesn't quite seem sufficient - we also need to prevent building &\ninstalling our own timezone data...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 May 2022 12:30:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n> This currently only works on macOS. The dtrace -G calls needed on\n> other platforms are not implemented yet.\n\nI looked into that part. The make rule passes all the backend object files as\nan option, but it's not clear to me where / why that's needed. 
On linux it\ncertainly works to not pass in the object files...\n\nMaybe CI will show problems on freebsd or such...\n\n\n> Therefore, the option is not auto by default.\n\nIt probably shouldn't be auto either way, there's some overhead associated\nwith the probes...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 May 2022 16:17:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 12.05.22 21:30, Andres Freund wrote:\n> On 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n>> I fixed the Perl detection issue in my macOS environment that I had reported\n>> a while ago.\n> \n> Hm. I wonder if it's right to check with is_file() - perhaps there are\n> platforms that have split off the include directory?\n\nThe existing code uses \"test -f\", so using is_file() would keep it \nworking the same way.\n\n>> After that, these configure options don't have an equivalent yet:\n>>\n>> --disable-rpath\n>> --enable-profiling\n>> --disable-thread-safety\n>> --with-libedit-preferred\n>>\n>> The first three overlap with meson built-in functionality, so we would need\n>> to check whether the desired functionality is available somehow.\n> \n> Which builtin functionality does --enable-profiling overlap with? There's a\n> coverage option, perhaps you were thinking of that?\n\nI saw an option about \"profile guided optimization\" (b_pgo), which seems \npossibly related.\n\n> I don't think we should add --disable-thread-safety, platforms without it also\n> aren't going to support ninja / meson... 
IIRC Thomas was planning to submit a\n> patch getting rid of it independently...\n\nsure\n\n>> From 049b34b6a8dd949f0eb7987cad65f6682a6ec786 Mon Sep 17 00:00:00 2001\n>> From: Peter Eisentraut <peter@eisentraut.org>\n>> Date: Wed, 11 May 2022 09:06:13 +0200\n>> Subject: [PATCH 3/9] meson: prereq: Refactor dtrace postprocessing make rules\n>>\n>> Move the dtrace postprocessing sed commands into a separate file so\n>> that it can be shared by meson. Also split the rule into two for\n>> proper dependency declaration.\n> \n> Hm. Using sed may be problematic on windows...\n\nThis code is only used when dtrace is enabled, which probably doesn't \napply on Windows.\n\n\n", "msg_date": "Mon, 16 May 2022 17:47:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 14.05.22 01:17, Andres Freund wrote:\n> On 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n>> This currently only works on macOS. The dtrace -G calls needed on\n>> other platforms are not implemented yet.\n> \n> I looked into that part. The make rule passes all the backend object files as\n> an option, but it's not clear to me where / why that's needed. 
On linux it\n> certainly works to not pass in the object files...\n> \n> Maybe CI will show problems on freebsd or such...\n\nYes, it failed for me on freebsd.\n\n>> Therefore, the option is not auto by default.\n> \n> It probably shouldn't be auto either way, there's some overhead associated\n> with the probes...\n\nok\n\n\n", "msg_date": "Mon, 16 May 2022 17:48:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 2022-05-16 17:48:08 +0200, Peter Eisentraut wrote:\n> On 14.05.22 01:17, Andres Freund wrote:\n> > On 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n> > > This currently only works on macOS. The dtrace -G calls needed on\n> > > other platforms are not implemented yet.\n> > \n> > I looked into that part. The make rule passes all the backend object files as\n> > an option, but it's not clear to me where / why that's needed. On linux it\n> > certainly works to not pass in the object files...\n> > \n> > Maybe CI will show problems on freebsd or such...\n> \n> Yes, it failed for me on freebsd.\n\nYep, I saw those shortly after... I've implemented that bit now, although it\nneeds a bit more cleanup.\n\n\n", "msg_date": "Mon, 16 May 2022 09:13:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-16 17:47:24 +0200, Peter Eisentraut wrote:\n> On 12.05.22 21:30, Andres Freund wrote:\n> > On 2022-05-11 12:18:58 +0200, Peter Eisentraut wrote:\n> > > I fixed the Perl detection issue in my macOS environment that I had reported\n> > > a while ago.\n> > \n> > Hm. 
I wonder if it's right to check with is_file() - perhaps there are\n> > platforms that have split off the include directory?\n> \n> The existing code uses \"test -f\", so using is_file() would keep it working\n> the same way.\n\nI merged it that way. Merged.\n\n\n> > > From 049b34b6a8dd949f0eb7987cad65f6682a6ec786 Mon Sep 17 00:00:00 2001\n> > > From: Peter Eisentraut <peter@eisentraut.org>\n> > > Date: Wed, 11 May 2022 09:06:13 +0200\n> > > Subject: [PATCH 3/9] meson: prereq: Refactor dtrace postprocessing make rules\n> > > \n> > > Move the dtrace postprocessing sed commands into a separate file so\n> > > that it can be shared by meson. Also split the rule into two for\n> > > proper dependency declaration.\n> > \n> > Hm. Using sed may be problematic on windows...\n> \n> This code is only used when dtrace is enabled, which probably doesn't apply\n> on Windows.\n\nFair point. Merged.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 16 May 2022 09:15:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Here are some more patches that clean up various minor issues.", "msg_date": "Wed, 18 May 2022 10:30:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-05-18 10:30:12 +0200, Peter Eisentraut wrote:\n> Here are some more patches that clean up various minor issues.\n\nI rebased the meson tree, squashed a lot of the existing commits, merged your\nchanges, and fixed a few more differences between autoconf and meson.\n\n\nFor me the difference in defines now boils down to:\n\n- CONFIGURE_ARGS - empty in meson, not clear what to fill it with\n- GETTIMEOFDAY_1ARG - test doesn't exist - I suspect it might not be necessary\n- PACKAGE_STRING, PACKAGE_TARNAME - unclear if they should be implemented?\n- 
AC_APPLE_UNIVERSAL_BUILD logic - which I don't think we need?\n- pg_restrict is defined in a simplistic way\n- \"missing\" a bunch of defines that don't appear to be referenced:\n HAVE_FSEEKO\n HAVE_GSSAPI_GSSAPI_H\n HAVE_INTTYPES_H\n HAVE_LDAP_H\n HAVE_LIBCRYPTO\n HAVE_LIBLDAP\n HAVE_LIBM\n HAVE_LIBPAM\n HAVE_LIBSSL\n HAVE_LIBXML2\n HAVE_LIBXSLT\n HAVE_MEMORY_H\n HAVE_PTHREAD\n HAVE_PTHREAD_PRIO_INHERIT\n HAVE_STDINT_H\n HAVE_STDLIB_H\n HAVE_STRING_H\n HAVE_SYS_STAT_H\n HAVE_SYS_TYPES_H\n HAVE_UNISTD_H\n SIZEOF_BOOL\n SIZEOF_OFF_T\n STDC_HEADERS\n- meson additional defines, seems harmless:\n HAVE_GETTIMEOFDAY - only defined on windows rn\n HAVE_SHM_UNLINK\n HAVE_SSL_NEW\n HAVE_STRTOQ\n HAVE_STRTOUQ\n HAVE_CRYPTO_NEW_EX_DATA\n- a bunch of additional #undef's\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 May 2022 12:48:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 18.05.22 21:48, Andres Freund wrote:\n> - CONFIGURE_ARGS - empty in meson, not clear what to fill it with\n\nOk to leave empty for now.\n\n> - GETTIMEOFDAY_1ARG - test doesn't exist - I suspect it might not be necessary\n\nMight be obsolete, consider removing.\n\n> - PACKAGE_STRING, PACKAGE_TARNAME - unclear if they should be implemented?\n\nleave out for now\n\n> - AC_APPLE_UNIVERSAL_BUILD logic - which I don't think we need?\n\nno\n\n> - \"missing\" a bunch of defines that don't appear to be referenced:\n\nYeah, looks like these are implicitly defined by some autoconf check but \nthen the result is only used within configure.ac itself, so isn't needed \nafterwards.\n\n> - meson additional defines, seems harmless:\n> HAVE_GETTIMEOFDAY - only defined on windows rn\n> HAVE_SHM_UNLINK\n> HAVE_SSL_NEW\n> HAVE_STRTOQ\n> HAVE_STRTOUQ\n> HAVE_CRYPTO_NEW_EX_DATA\n\nYeah, that's the opposite of the previous.\n\nI don't see any other issues in pg_config.h either. 
Obviously, some \nniche platforms might uncover some issues, but it looks good for now.\n\n\n", "msg_date": "Tue, 24 May 2022 20:08:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi Tom,\n\nin the meson unconference session you'd spotted flex flags for psqlscanslash.l\n(I think) being \"hardcoded\". As far as I can tell that's largely just copied\nfrom the Makefile):\n\nsrc/backend/parser/Makefile:scan.c: FLEXFLAGS = -CF -p -p\nsrc/backend/utils/adt/Makefile:jsonpath_scan.c: FLEXFLAGS = -CF -p -p\nsrc/bin/psql/Makefile:psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\nsrc/fe_utils/Makefile:psqlscan.c: FLEXFLAGS = -Cfe -p -p\n\nnote that it's not even FLEXFLAGS += or such.\n\nI honestly don't know enough about the various flex flags to judge what a\nbetter approach would be? Looks like these flags are case specific? Perhaps we\ncould group them, i.e. have centrally defined \"do compress\" \"don't compress\"\nflex flags?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 May 2022 17:47:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> in the meson unconference session you'd spotted flex flags for psqlscanslash.l\n> (I think) being \"hardcoded\". As far as I can tell that's largely just copied\n> from the Makefile):\n\n> src/backend/parser/Makefile:scan.c: FLEXFLAGS = -CF -p -p\n> src/backend/utils/adt/Makefile:jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> src/bin/psql/Makefile:psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n> src/fe_utils/Makefile:psqlscan.c: FLEXFLAGS = -Cfe -p -p\n\nHmm, OK. 
There *is* a FLEXFLAGS definition supplied by configure, and\nI believe many of our scanners do use it, but evidently we're just\noverriding it for the ones where we really care about using specific\nflags. It also looks like the configure-supplied version is usually\nempty, so the fact that this variable exists may be mostly a holdover\nfrom Autoconf practice rather than something we ever cared about.\n\nI think the main thing I didn't like about the way you have it in the\nmeson file is the loss of greppability. I could investigate this\nquestion in a few seconds just now, but if we drop the use of\nFLEXFLAGS as a macro it'll become much harder to figure out which\nplaces use what.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 May 2022 21:38:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-05-25 21:38:33 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > in the meson unconference session you'd spotted flex flags for psqlscanslash.l\n> > (I think) being \"hardcoded\". As far as I can tell that's largely just copied\n> > from the Makefile):\n> \n> > src/backend/parser/Makefile:scan.c: FLEXFLAGS = -CF -p -p\n> > src/backend/utils/adt/Makefile:jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> > src/bin/psql/Makefile:psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n> > src/fe_utils/Makefile:psqlscan.c: FLEXFLAGS = -Cfe -p -p\n> \n> Hmm, OK. There *is* a FLEXFLAGS definition supplied by configure, and\n> I believe many of our scanners do use it, but evidently we're just\n> overriding it for the ones where we really care about using specific\n> flags. 
It also looks like the configure-supplied version is usually\n> empty, so the fact that this variable exists may be mostly a holdover\n> from Autoconf practice rather than something we ever cared about.\n\nYea, it looks like that.\n\nISTM that it'd still be good to have something like FLEXFLAGS. But it doesn't\nlook great, nor really intentional, that FLEXFLAGS is overwritten rather than\nappended?\n\n\n> I think the main thing I didn't like about the way you have it in the\n> meson file is the loss of greppability. I could investigate this\n> question in a few seconds just now, but if we drop the use of\n> FLEXFLAGS as a macro it'll become much harder to figure out which\n> places use what.\n\nI disliked a bunch of repetitiveness as I had it, so I'm polishing that part\njust now.\n\nWhat would you want to grep for? Places that specify additional flags? Or just\nplaces using flex?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 May 2022 19:41:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-25 21:38:33 -0400, Tom Lane wrote:\n>> I think the main thing I didn't like about the way you have it in the\n>> meson file is the loss of greppability.\n\n> What would you want to grep for? Places that specify additional flags? Or just\n> places using flex?\n\nWell, the consistency of having a single name for \"flags given to\nflex\" seems to me to be worth something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 May 2022 22:58:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi Andres,\n\nThanks for working on this! 
I'm very enthusiastic about this effort and I was\nglad to see on PGCon Unconference that the majority of the community seems\nto be as well.\n\n> The halfway decent list includes, I think:\n> 1) cmake\n> 2) bazel\n> 3) meson\n\nWas SCons considered as an option? It is widely adopted and written in Python\nas well. Personally, I like the fact that with SCons you write config files\nin pure Python, not some dialect you have to learn additionally. There is\na free e-book available [1].\n\nWhat pros and cons do you see that make Meson a better choice?\n\n[1]: https://scons.org/doc/production/PDF/scons-user.pdf\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 26 May 2022 11:47:13 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-05-26 11:47:13 +0300, Aleksander Alekseev wrote:\n> Thanks for working on this! I'm very enthusiastic about this effort and I was\n> glad to see on PGCon Unconference that the majority of the community seems\n> to be as well.\n> \n> > The halfway decent list includes, I think:\n> > 1) cmake\n> > 2) bazel\n> > 3) meson\n> \n> Was SCons considered as an option?\n> What pros and cons do you see that make Meson a better choice?\n\nI looked at it and quickly discarded it. From what I could see there's not\nbeen meaningful moves to it in the last couple years, if anything adoption has\nbeen dropping. 
And I don't think we want to end up relying on yet another half\nmaintained tool.\n\nNot having a ninja backend etc didn't strike me as great either - the builds\nwith scons I've done weren't fast at all.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 May 2022 12:14:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi Andres,\n\n> Not having a ninja backend etc didn't strike me as great either - the builds\n> with scons I've done weren't fast at all.\n\nI must admit, personally I never used Scons, I just know that it was considered\n(an / the only?) alternative to CMake for many years. The Scons 4.3.0 release\nnotes say that Ninja is supported [1], but according to the user guide [2]\nNinja support is considered experimental.\n\nDon't get me wrong, I don't insist on using Scons. I was just curious if it was\nconsidered. Actually, a friend of mine pointed out that the fact that Scons\nbuild files are literally a Python code could be a disadvantage. There is less\ncontrol of this code, basically it can do anything. It could complicate the\ndiagnosis of certain issues, etc.\n\nSince you invested so much effort into Meson already let's just focus on it.\n\nI tried the branch on GitHub on MacOS Monterey 12.3.1 and Ubuntu 20.04 LTS.\nI was going to test it against several third party extensions, but it looks like\nit is a bit early for this. 
On Ubuntu I got the following error:\n\n```\n../src/include/parser/kwlist.h:332:25: error: ‘PARAMETER’ undeclared here (not\nin a function)\n332 | PG_KEYWORD(\"parameter\", PARAMETER, UNRESERVED_KEYWORD, BARE_LABEL)\n\n../src/interfaces/ecpg/preproc/keywords.c:32:55: note: in definition of macro\n‘PG_KEYWORD’\n32 | #define PG_KEYWORD(kwname, value, category, collabel) value,\n```\n\nOn MacOS I got multiple errors regarding LDAP:\n\n```\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/\nLDAP.framework/Headers/ldap.h:1:10: error: #include nested too deeply\n#include <ldap.h>\n\n../src/interfaces/libpq/fe-connect.c:4816:2: error: use of undeclared\nidentifier 'LDAP'\n LDAP *ld = NULL;\n ^\n\n../src/interfaces/libpq/fe-connect.c:4817:2: error: use of undeclared\nidentifier 'LDAPMessage'\n LDAPMessage *res,\n ^\n... etc...\n```\n\nI didn't invest much time into investigating these issues. For now I just\nwanted to report them. Please let me know if you need any help with these\nand/or additional information.\n\n[1]: https://scons.org/scons-430-is-available.html\n[2]: https://scons.org/doc/production/PDF/scons-user.pdf\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 31 May 2022 16:49:17 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-05-31 16:49:17 +0300, Aleksander Alekseev wrote:\n> I tried the branch on GitHub on MacOS Monterey 12.3.1 and Ubuntu 20.04 LTS.\n> I was going to test it against several third party extensions, but it looks like\n> it is a bit early for this. On Ubuntu I got the following error:\n\nWhat do those extensions use to build? 
Since the unconference I added some\nrudimentary PGXS compatibility, but it's definitely not complete yet.\n\n\n> ```\n> ../src/include/parser/kwlist.h:332:25: error: ‘PARAMETER’ undeclared here (not\n> in a function)\n> 332 | PG_KEYWORD(\"parameter\", PARAMETER, UNRESERVED_KEYWORD, BARE_LABEL)\n> \n> ../src/interfaces/ecpg/preproc/keywords.c:32:55: note: in definition of macro\n> ‘PG_KEYWORD’\n> 32 | #define PG_KEYWORD(kwname, value, category, collabel) value,\n> ```\n\nHuh. I've not seen this before - could you provide a bit more detail about\nwhat you did? CI isn't testing ubuntu, but it is testing Debian, so I'd expect\nthis to work.\n\n\n> On MacOS I got multiple errors regarding LDAP:\n\nAh, yes. Sorry, that's an open issue that I need to fix. -Dldap=disabled for\nthe rescue. There's some crazy ordering dependency in macos framework\nheaders. The ldap framework contains an \"ldap.h\" header that includes\n\"ldap.h\". So if you end up with the framework on the include path, you get\nendless recursion.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 May 2022 12:25:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On 06.05.22 23:27, Andres Freund wrote:\n> I added pkgconfig since then. They're not exactly the same, but pretty close,\n> except for one thing: Looks like some of the ecpg libraries really should link\n> to some other ecpg libs? I think we're missing something there... That then\n> leads to missing requirements in the .pc files.\n\nI took a closer look at the generated pkgconfig files. I think they are \nok. There are a couple of insignificant textual differences that we \ncould reduce by patching Makefile.shlib. 
But technically they are ok.\n\nThere is one significant difference: the ecpg libraries now get a \nRequires.private for openssl, which I think is technically correct since \nboth libpgcommon and libpgport require openssl.\n\nAttached is a tiny patch to make the description in one file backward \nconsistent.", "msg_date": "Wed, 1 Jun 2022 06:55:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi Andres,\n\n> What do those extensions use to build? Since the unconference I added some\n> rudimentary PGXS compatibility, but it's definitely not complete yet.\n\nWe mostly use CMake and Cargo, the Rust package manager. So I don't\nanticipate many problems here, just want to make sure it's going to\nwork as expected.\n\n> > ```\n> > ../src/include/parser/kwlist.h:332:25: error: ‘PARAMETER’ undeclared here (not\n> > in a function)\n> > 332 | PG_KEYWORD(\"parameter\", PARAMETER, UNRESERVED_KEYWORD, BARE_LABEL)\n> >\n> > ../src/interfaces/ecpg/preproc/keywords.c:32:55: note: in definition of macro\n> > ‘PG_KEYWORD’\n> > 32 | #define PG_KEYWORD(kwname, value, category, collabel) value,\n> > ```\n>\n> Huh. I've not seen this before - could you provide a bit more detail about\n> what you did? 
CI isn't testing ubuntu, but it is testing Debian, so I'd expect\n> this to work.\n\nI used PIP to install Meson, since the default APT package is too old, v0.53:\n\n$ pip3 install --user meson\n$ meson --version\n0.62.1\n$ ninja --version\n1.10.0\n\nThe branch was checked out as it was described in the first email.\nThen to reproduce the issue:\n\n$ git status\nOn branch meson\nYour branch is up to date with 'andres/meson'.\n$ git fetch andres\n$ git rebase -i andres/meson\n$ meson setup build --buildtype debug\n$ cd build\n$ ninja\n\nThis is pretty much the default Ubuntu 20.04.4 LTS system with all the\nrecent updates installed, so it shouldn't be a problem to reproduce\nthe issue with a VM.\n\n> > On MacOS I got multiple errors regarding LDAP:\n>\n> Ah, yes. Sorry, that's an open issue that I need to fix. -Dldap=disabled for\n> the rescue. There's some crazy ordering dependency in macos framework\n> headers. The ldap framework contains an \"ldap.h\" header that includes\n> \"ldap.h\". So if you end up with the framework on the include path, you get\n> endless recursion.\n\nThanks, this helped. I did the following:\n\n$ meson configure -Dldap=disabled\n$ meson configure -Dssl=openssl\n$ meson configure -Dprefix=/Users/eax/pginstall\n$ ninja\n$ meson test\n$ meson install\n\n... and it terminated successfully. I was also able to configure and\nrun Postgres instance using my regular scripts, with some\nmodifications [1]\n\nThen I decided to compile TimescaleDB against the newly installed\nPostgres. Turns out there is a slight problem.\n\nThe extension uses CMake and also requires PostgreSQL to be compiled\nwith OpenSSL support. CMakeLists.txt looks for a\n\"--with-(ssl=)?openssl\" regular expression in the \"pg_config\n--configure\" output. The output is empty although Postgres was\ncompiled with OpenSSL support. 
The full output of pg_config looks like\nthis:\n\n```\nCONFIGURE =\nCC = not recorded\nCPPFLAGS = not recorded\nCFLAGS = not recorded\nCFLAGS_SL = not recorded\n... etc ...\n```\n\nI get a bunch of errors from the compiler if I remove this particular\ncheck from CMakeLists, but I have to investigate these a bit more\nsince the branch is based on PG15 and we don't officially support PG15\nyet. It worked last time we checked a month or so ago, but the\nsituation may have changed.\n\n[1]: https://github.com/afiskon/pgscripts/blob/master/single-install.sh\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Jun 2022 12:39:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-01 06:55:06 +0200, Peter Eisentraut wrote:\n> \n> On 06.05.22 23:27, Andres Freund wrote:\n> > I added pkgconfig since then. They're not exactly the same, but pretty close,\n> > except for one thing: Looks like some of the ecpg libraries really should link\n> > to some other ecpg libs? I think we're missing something there... That then\n> > leads to missing requirements in the .pc files.\n> \n> I took a closer look at the generated pkgconfig files. I think they are ok.\n> There are a couple of insignificant textual differences that we could reduce\n> by patching Makefile.shlib. But technically they are ok.\n\nThanks for checking!\n\n\n> There is one significant difference: the ecpg libraries now get a\n> Requires.private for openssl, which I think is technically correct since\n> both libpgcommon and libpgport require openssl.\n\nYea, I noticed those too. It's not great, somehow. 
But I don't really see a\nbetter alternative for now.\n\n\n> Attached is a tiny patch to make the description in one file backward\n> consistent.\n\nApplied.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Jun 2022 13:53:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-06-01 12:39:50 +0300, Aleksander Alekseev wrote:\n> > > ```\n> > > ../src/include/parser/kwlist.h:332:25: error: ‘PARAMETER’ undeclared here (not\n> > > in a function)\n> > > 332 | PG_KEYWORD(\"parameter\", PARAMETER, UNRESERVED_KEYWORD, BARE_LABEL)\n> > >\n> > > ../src/interfaces/ecpg/preproc/keywords.c:32:55: note: in definition of macro\n> > > ‘PG_KEYWORD’\n> > > 32 | #define PG_KEYWORD(kwname, value, category, collabel) value,\n> > > ```\n> >\n> > Huh. I've not seen this before - could you provide a bit more detail about\n> > what you did? CI isn't testing ubuntu, but it is testing Debian, so I'd expect\n> > this to work.\n> \n> I used PIP to install Meson, since the default APT package is too old, v0.53:\n> \n> $ pip3 install --user meson\n> $ meson --version\n> 0.62.1\n> $ ninja --version\n> 1.10.0\n> \n> The branch was checked out as it was described in the first email.\n> Then to reproduce the issue:\n> \n> $ git status\n> On branch meson\n> Your branch is up to date with 'andres/meson'.\n> $ git fetch andres\n> $ git rebase -i andres/meson\n> $ meson setup build --buildtype debug\n> $ cd build\n> $ ninja\n> \n> This is pretty much the default Ubuntu 20.04.4 LTS system with all the\n> recent updates installed, so it shouldn't be a problem to reproduce\n> the issue with a VM.\n\nWill test.\n\n\n> > > On MacOS I got multiple errors regarding LDAP:\n> >\n> > Ah, yes. Sorry, that's an open issue that I need to fix. -Dldap=disabled for\n> > the rescue. There's some crazy ordering dependency in macos framework\n> > headers. 
The ldap framework contains an \"ldap.h\" header that includes\n> > \"ldap.h\". So if you end up with the framework on the include path, you get\n> > endless recursion.\n> \n> Thanks, this helped.\n\nCool. I think I pushed a fix/workaround for the issue now. Still can't decide\nwhether it's apple's or meson's fault.\n\n\n> I did the following:\n> \n> $ meson configure -Dldap=disabled\n> $ meson configure -Dssl=openssl\n> $ meson configure -Dprefix=/Users/eax/pginstall\n\nFYI, you can set multiple options in one go ;)\n\n\n> ... and it terminated successfully. I was also able to configure and\n> run Postgres instance using my regular scripts, with some\n> modifications [1]\n\nCool.\n\n\n> Then I decided to compile TimescaleDB against the newly installed\n> Postgres. Turns out there is a slight problem.\n> \n> The extension uses CMake and also requires PostgreSQL to be compiled\n> with OpenSSL support. CMakeLists.txt looks for a\n> \"--with-(ssl=)?openssl\" regular expression in the \"pg_config\n> --configure\" output. The output is empty although Postgres was\n> compiled with OpenSSL support.\n\nMakes sense. Currently we don't fill the --configure thing, because there\nconfigure wasn't used. We could try to generate something compatible from\nmeson options, but I'm not sure that's a good plan.\n\n\n\n> The full output of pg_config looks like\n> this:\n> \n> ```\n> CONFIGURE =\n> CC = not recorded\n> CPPFLAGS = not recorded\n> CFLAGS = not recorded\n> CFLAGS_SL = not recorded\n> ... etc ...\n> ```\n> \n> I get a bunch of errors from the compiler if I remove this particular\n> check from CMakeLists, but I have to investigate these a bit more\n> since the branch is based on PG15 and we don't officially support PG15\n> yet. It worked last time we checked a month or so ago, but the\n> situation may have changed.\n\nI suspect the errors might be due to CC/CPPFLAGS/... not being defined. 
I can\ntry to make it output something roughly compatible with the old output, for\nmost of those I already started to compute values for the PGXS compat stuff I\nwas hacking on recently.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Jun 2022 14:05:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi Andres,\n\n> Cool. I think I pushed a fix/workaround for the issue now. Still can't decide\n> whether it's apple's or meson's fault.\n\nMany thanks! The fix solved the problem, I can compile with -Dldap=enabled now.\nThe code passes the tests too.\n\n> > $ meson configure -Dldap=disabled\n> > $ meson configure -Dssl=openssl\n> > $ meson configure -Dprefix=/Users/eax/pginstall\n>\n> FYI, you can set multiple options in one go ;)\n\nThanks! ;)\n\n> Makes sense. Currently we don't fill the --configure thing, because there\n> configure wasn't used. We could try to generate something compatible from\n> meson options, but I'm not sure that's a good plan.\n\nIf pg_config output was 100% backward compatible with Autotools one, that would\nsimplify the lives of the extension developers for sure. However, considering\nthat at PGCon we agreed that both Autotools and Meson will be maintained for\nseveral releases, personally I wouldn't say that this compatibility is\nnecessary, nor is it realistically deliverable. Nevertheless, IMO there should\nbe a stable and documented way to determine the PostgreSQL version (this can be\ndone with `pg_config --version` for both Autotools and Meson), the build tool\nused (no way to determine) and the build options (no way to determine\nfor Meson).\n\n> I suspect the errors might be due to CC/CPPFLAGS/... not being defined. 
I can\n> try to make it output something roughly compatible with the old output, for\n> most of those I already started to compute values for the PGXS compat stuff I\n> was hacking on recently.\n\nYes, that could explain the problem. Just for the record, I get several errors\nregarding src/export.h in TimescaleDB code [1]:\n\n```\n/Users/eax/projects/c/timescaledb/src/export.h:26:5: error: pasting formed\n ')87628', an invalid preprocessing token [clang-diagnostic-error]\n#if TS_EMPTY(PGDLLEXPORT)\n ^\n/Users/eax/projects/c/timescaledb/src/export.h:17:22: note: expanded from\n macro 'TS_EMPTY'\n#define TS_EMPTY(x) (TS_CAT(x, 87628) == 87628)\n ^\n/Users/eax/projects/c/timescaledb/src/export.h:15:23: note: expanded from\n macro 'TS_CAT'\n#define TS_CAT(x, y) x##y\n ^\n/Users/eax/projects/c/timescaledb/src/export.h:26:14: error: function-like\n macro '__attribute__' is not defined [clang-diagnostic-error]\n#if TS_EMPTY(PGDLLEXPORT)\n ^\n/Users/eax/pginstall/include/postgresql/server/c.h:1339:21: note: expanded\n from macro 'PGDLLEXPORT'\n#define PGDLLEXPORT __attribute__((visibility(\"default\")))\n ^\n/Users/eax/projects/c/timescaledb/src/export.h:30:2: error: \"PGDLLEXPORT is\n already defined\" [clang-diagnostic-error]\n#error \"PGDLLEXPORT is already defined\"\n ^\n1 warning and 3 errors generated.\nError while processing /Users/eax/projects/c/timescaledb/src/extension.c\n```\n\n[1]: https://github.com/timescale/timescaledb/blob/main/src/export.h\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 2 Jun 2022 15:34:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 15:34:23 +0300, Aleksander Alekseev wrote:\n> Hi Andres,\n> \n> > Cool. I think I pushed a fix/workaround for the issue now. Still can't decide\n> > whether it's apple's or meson's fault.\n> \n> Many thanks! 
The fix solved the problem, I can compile with -Dldap=enabled now.\n> The code passes the tests too.\n\nCool.\n\n\n> > I suspect the errors might be due to CC/CPPFLAGS/... not being defined. I can\n> > try to make it output something roughly compatible with the old output, for\n> > most of those I already started to compute values for the PGXS compat stuff I\n> > was hacking on recently.\n> \n> Yes, that could explain the problem. Just for the record, I get several errors\n> regarding src/export.h in TimescaleDB code [1]:\n> \n\nI think this is timescale's issue. Why are you defining / undefining\nPGDLLEXPORT?\n\nPart of the patch series is to use visibility attributes, and your #if\nTS_EMPTY(PGDLLEXPORT) thing can't handle that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 08:22:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-01 12:39:50 +0300, Aleksander Alekseev wrote:\n> > > ```\n> > > ../src/include/parser/kwlist.h:332:25: error: ‘PARAMETER’ undeclared here (not\n> > > in a function)\n> > > 332 | PG_KEYWORD(\"parameter\", PARAMETER, UNRESERVED_KEYWORD, BARE_LABEL)\n> > >\n> > > ../src/interfaces/ecpg/preproc/keywords.c:32:55: note: in definition of macro\n> > > ‘PG_KEYWORD’\n> > > 32 | #define PG_KEYWORD(kwname, value, category, collabel) value,\n> > > ```\n> >\n> > Huh. I've not seen this before - could you provide a bit more detail about\n> > what you did? 
CI isn't testing ubuntu, but it is testing Debian, so I'd expect\n> > this to work.\n> \n> I used PIP to install Meson, since the default APT package is too old, v0.53:\n> \n> $ pip3 install --user meson\n> $ meson --version\n> 0.62.1\n> $ ninja --version\n> 1.10.0\n> \n> The branch was checked out as it was described in the first email.\n> Then to reproduce the issue:\n> \n> $ git status\n> On branch meson\n> Your branch is up to date with 'andres/meson'.\n> $ git fetch andres\n> $ git rebase -i andres/meson\n> $ meson setup build --buildtype debug\n> $ cd build\n> $ ninja\n> \n> This is pretty much the default Ubuntu 20.04.4 LTS system with all the\n> recent updates installed, so it shouldn't be a problem to reproduce\n> the issue with a VM.\n\nChatting with a colleague (who unbeknownst to me hit something similar in the\npast) I think we figured it out. It's not due to Ubuntu 20.04 or such. It's\nlikely due to previously having an in-tree build with autoconf, doing make\nclean, doing a git pull, then building with meson. The meson build doesn't yet\nhandle pre-existing flex / bison output.\n\nI had tried to defend against conflicts with in-tree builds by detecting an\nin-tree pg_config.h, but that doesn't help with files that aren't removed by\nmake clean. 
Like bison / flex output.\n\nAnd I didn't notice this problem because it doesn't cause visible issues until\nthe lexer / grammar changes...\n\n\nI'm not quite sure what the proper behaviour is when doing an out-of-tree\nbuild with meson (all builds are out-of-tree), with a pre-existing flex /\nbison output in the source tree that is out of date.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 09:39:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm not quite sure what the proper behaviour is when doing an out-of-tree\n> build with meson (all builds are out-of-tree), with a pre-existing flex /\n> bison output in the source tree that is out of date.\n\nDefinitely sounds like a gotcha.\n\nOn the one hand, there's been some discussion already of removing all\nderived files from tarballs and just insisting that users provide all\nneeded tools when building from source. If we did that, it could be\nsufficient for the meson build to check that no such files are present\nin the source tree. (Checking a couple of them would be enough, likely.)\n\nOn the other hand, I'm not sure that I want such a change to be forced\nby a toolchain change. It definitely seems a bit contrary to the plan\nwe'd formed of allowing meson and make-based builds to coexist for\na few years, because we'd be breaking at least some make-based build\nprocesses.\n\nCould we have the meson build check that, say, if gram.c exists it\nis newer than gram.y? 
Or get it to ignore an in-tree gram.c?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 13:08:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 13:08:49 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm not quite sure what the proper behaviour is when doing an out-of-tree\n> > build with meson (all builds are out-of-tree), with a pre-existing flex /\n> > bison output in the source tree that is out of date.\n> \n> Definitely sounds like a gotcha.\n> \n> On the one hand, there's been some discussion already of removing all\n> derived files from tarballs and just insisting that users provide all\n> needed tools when building from source. If we did that, it could be\n> sufficient for the meson build to check that no such files are present\n> in the source tree. (Checking a couple of them would be enough, likely.)\n\nThere already is a check for pg_config.h, so the most obvious source of this\nis addressed. Just didn't think about the files that make clean doesn't remove\n:/.\n\n\n> On the other hand, I'm not sure that I want such a change to be forced\n> by a toolchain change. It definitely seems a bit contrary to the plan\n> we'd formed of allowing meson and make-based builds to coexist for\n> a few years, because we'd be breaking at least some make-based build\n> processes.\n\nAgreed. I think it'd be pretty reasonable to not include flex / bison\noutput. They're not hard to acquire. The docs are perhaps another story.\n\nI think it might be fine to say that make reallyclean (*) is required if\nthere's some conflicting in-source tree file?\n\n\n> Could we have the meson build check that, say, if gram.c exists it\n> is newer than gram.y? Or get it to ignore an in-tree gram.c?\n\nI suspect the problem with ignoring is gram.h, that's probably a bit harder to\nignore. 
Right now I'm leaning towards either always erroring out if there's\nbison/flex output in the source tree (with a hint towards make\nreallyclean(*)), or erroring out if they're out of date (again with a hint\ntowards reallyclean)?\n\nAlternatively we could just remove the generated .c/h files from the source\ndir, as a part of regenerating them in the build dir? But I like the idea of\nthe source dir being readonly outside of explicit targets modifying sources\n(e.g. update-unicode or such).\n\nGreetings,\n\nAndres Freund\n\n(*) do we really not have a target that removes bison / flex output?\n\n\n", "msg_date": "Thu, 2 Jun 2022 10:26:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> (*) do we really not have a target that removes bison / flex output?\n\nmaintainer-clean\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 13:33:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 13:33:51 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > (*) do we really not have a target that removes bison / flex output?\n> \n> maintainer-clean\n\nDon't think so:\n\n# gram.c, gram.h, and scan.c are in the distribution tarball, so they\n# are not cleaned here.\nclean distclean maintainer-clean:\n\trm -f lex.backup\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 10:48:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-06-02 13:33:51 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> (*) do we really not have a target that removes bison / flex output?\n\n>> 
maintainer-clean\n\n> Don't think so:\n\nSee about line 300 in src/backend/Makefile. In any case, it's\neasy to show by experiment that it does.\n\n$ make maintainer-clean\n...\n$ git status --ignored\nOn branch master\nYour branch is up to date with 'origin/master'.\n\nnothing to commit, working tree clean\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 15:05:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 15:05:10 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-06-02 13:33:51 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> (*) do we really not have a target that removes bison / flex output?\n> \n> >> maintainer-clean\n> \n> > Don't think so:\n> \n> See about line 300 in src/backend/Makefile. In any case, it's\n> easy to show by experiment that it does.\n> \n> $ make maintainer-clean\n> ...\n> $ git status --ignored\n> On branch master\n> Your branch is up to date with 'origin/master'.\n> \n> nothing to commit, working tree clean\n\nOh. I executed maintainer-clean inside src/backend/parser/, and thus didn't\nsee it getting cleaned up.\n\nIt seems pretty darn grotty that src/backend/parser/Makefile explicitly states\nthat gram.c ... aren't cleaned \"here\", but then src/backend/Makefile does\nclean them up.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 12:17:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Oh. I executed maintainer-clean inside src/backend/parser/, and thus didn't\n> see it getting cleaned up.\n\n> It seems pretty darn grotty that src/backend/parser/Makefile explicitly states\n> that gram.c ... 
aren't cleaned \"here\", but then src/backend/Makefile does\n> clean them up.\n\nI agree the factorization of this ain't great. I'd think about improving\nit, were it not that we're trying to get rid of it.\n\n(But with meson, the whole idea of building or cleaning just part of the\ntree is out the window anyway, no?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 15:53:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 15:53:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Oh. I executed maintainer-clean inside src/backend/parser/, and thus didn't\n> > see it getting cleaned up.\n> \n> > It seems pretty darn grotty that src/backend/parser/Makefile explicitly states\n> > that gram.c ... aren't cleaned \"here\", but then src/backend/Makefile does\n> > clean them up.\n> \n> I agree the factorization of this ain't great. I'd think about improving\n> it, were it not that we're trying to get rid of it.\n\n+1. I think I just wanted to excuse my confusion...\n\n\n> (But with meson, the whole idea of building or cleaning just part of the\n> tree is out the window anyway, no?)\n\nCleaning parts of the tree isn't supported as far as I know (not that I've\nneeded it). You can build parts of the tree by specifying the target\n(e.g. ninja src/backend/postgres) or by specifying meta-targets (e.g. ninja\ncontrib backend). I've thought about contributing a patch to meson to\nautomatically generate targets for each directory that has sub-targets - it's\njust a few lines.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 14:13:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi hackers,\n\n> See about line 300 in src/backend/Makefile. 
In any case, it's\n> easy to show by experiment that it does.\n\n`make maintainer-clean` did the trick, thanks. I suggest modifying meson.build\naccordingly:\n\n-run make distclean in the source tree.\n+run `make maintainer-clean` in the source tree.\n\n> I think this is timescale's issue. Why are you defining / undefining\n> PGDLLEXPORT?\n\nThat's a great question.\n\nAs I understand some time ago the developers had a problem with a collision of\nexported symbols on *nix platforms [1] and chose to solve it by re-defining\nPGDLLEXPORT to __attribute__((visibility (\"default\"))) for GCC and CLang.\nI agree that this is a questionable approach. Redefining a macro provided\nby Postgres doesn't strike me as a good idea. I tried to remove this\nre-definition, but it didn't go well [2]. So apparently it should be addressed\nsomehow differently.\n\n> Part of the patch series is to use visibility attributes, and your #if\n> TS_EMPTY(PGDLLEXPORT) thing can't handle that.\n\nOut of curiosity, how come a patchset that adds an alternative build system\nchanges the visibility attributes? I would guess they should be the same\nfor both Autotools and Meson. 
Is it necessary in order to make Meson work?\nIf not, maybe it should be a separate patch.\n\n[1]: https://github.com/timescale/timescaledb/commit/027b7b29420a742d7615c70d9f19b2b99c488c2c\n[2]: https://github.com/timescale/timescaledb/pull/4413\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 3 Jun 2022 12:35:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-06-03 12:35:45 +0300, Aleksander Alekseev wrote:\n> > Part of the patch series is to use visibility attributes, and your #if\n> > TS_EMPTY(PGDLLEXPORT) thing can't handle that.\n> \n> Out of curiosity, how come a patchset that adds an alternative build system\n> changes the visibility attributes?\n\nIt was the simplest path - on windows (and AIX) extension symbols need to be\nexplicitly exported. We did that by building the objects constituting\nextension libraries, collecting the symbols, generating an export file with\nall the symbols, which then is passed to the linker. 
It was a lot less work\nto just add the necessary PGDLLEXPORT annotations than make that export file\ngeneration work for extensions.\n\n\n> I would guess they should be the same for both Autotools and Meson.\n\nIt is, the patch adds it to both.\n\n\n> Is it necessary in order to make Meson work?\n\nYes, or at least the simplest path.\n\n\n> If not, maybe it should be a separate patch.\n\nIt is.\n\nhttps://commitfest.postgresql.org/38/3396/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Jun 2022 09:23:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Tue, May 24, 2022 at 08:08:26PM +0200, Peter Eisentraut wrote:\n> On 18.05.22 21:48, Andres Freund wrote:\n>> - GETTIMEOFDAY_1ARG - test doesn't exist - I suspect it might not be necessary\n> \n> Might be obsolete, consider removing.\n\nI just came across this one independently of what you are doing for\nmeson, and based on a lookup of the buildfarm, I think that it can be\nremoved. One reference about GETTIMEOFDAY_1ARG on the -hackers list\ncomes from here, from 20 years ago:\nhttps://www.postgresql.org/message-id/a1eeu5$1koe$1@news.tht.net\n--\nMichael", "msg_date": "Mon, 6 Jun 2022 16:54:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "I looked at some of the \"prereq\" patches again to see what state they \nare in:\n\ncommit 351a12f48e395b31cce4aca239b934174b36ea9d\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Wed Apr 20 22:46:54 2022\n\n prereq: deal with \\ paths in basebackup_to_shell tests.\n\nThis is a new component in PG15, so a fix might be in scope for PG15 \ntoo. But I don't know if this change is really necessary. There are \nother tests that use the GZIP and TAR environment variables (e.g., \npg_verifybackup). 
If this is a problem there too, we should think of a \ngeneral solution. If not, it could use some explanation.\n\n\ncommit c00642483a53f4ee6e351085c7628363c293ee61\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Fri Mar 25 21:44:48 2022\n\n meson: prereq: unicode: allow to specify output directory.\n\nOK with attached fixup (but see below).\n\n\ncommit 31313056e153e099f236a29b752f7610c4f7764f\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Thu Jan 20 08:36:50 2022\n\n meson: prereq: generate-errcodes.pl: accept output file\n\nThis is ok, but seems unnecessary, since meson can capture the output of \na single file. (See also similar script generate-errcodes-table.pl in \ndoc/, which uses capture.)\n\n\ncommit e4e77c0e20f3532be4ed270a7cf8b965b7cafa49\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Thu Jan 20 08:36:50 2022\n\n meson: prereq: add output path arg in generate-lwlocknames.pl\n\nWe should make the command-line interface here the same as the unicode \nscript: Either make the output directory a positional argument or an \noption. I don't have a strong feeling about it either way, but perhaps \nthe solution with the option is more elegant and would also not require \nchanging the makefiles. Also, we should decide on short or long option: \nThe code declares a long option, but the build uses a short option. \nIt's confusing that that even works.\n\n\ncommit 7866620afa65223f6e657da972f501615fd32d3b\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Wed Apr 20 21:01:31 2022\n\n meson: prereq: output and depencency tracking work.\n\nThis could be split into multiple parts with more detailed explanations. 
\n I see where you're going but not everything is fully clear to me \n(especially the guc-file.c.h stuff).", "msg_date": "Wed, 8 Jun 2022 08:27:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Attached is a patch that finishes up the work to move the snowball SQL \nscript generation into a separate script.", "msg_date": "Wed, 8 Jun 2022 14:33:16 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-06-08 08:27:06 +0200, Peter Eisentraut wrote:\n> I looked at some of the \"prereq\" patches again to see what state they are\n> in:\n> \n> commit 351a12f48e395b31cce4aca239b934174b36ea9d\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Wed Apr 20 22:46:54 2022\n> \n> prereq: deal with \\ paths in basebackup_to_shell tests.\n> \n> This is a new component in PG15, so a fix might be in scope for PG15 too.\n\nYea, I should probably post that to the relevant thread. I think at that point\nI was just trying to get a rebase not to fail anymore...\n\n\n> But I don't know if this change is really necessary. There are other tests\n> that use the GZIP and TAR environment variables (e.g., pg_verifybackup). If\n> this is a problem there too, we should think of a general solution. If not,\n> it could use some explanation.\n\nI got failures on windows without it - which we just don't see on windows\nbecause currently nothing runs these tests :(. 
The pg_verifybackup case likely\nis unproblematic because it uses the array form of building subcommands,\ninstead of string interpolation.\n\n\n> commit c00642483a53f4ee6e351085c7628363c293ee61\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Fri Mar 25 21:44:48 2022\n> \n> meson: prereq: unicode: allow to specify output directory.\n> \n> OK with attached fixup (but see below).\n\nMerged.\n\n\n> commit 31313056e153e099f236a29b752f7610c4f7764f\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Thu Jan 20 08:36:50 2022\n> \n> meson: prereq: generate-errcodes.pl: accept output file\n> \n> This is ok, but seems unnecessary, since meson can capture the output of a\n> single file. (See also similar script generate-errcodes-table.pl in doc/,\n> which uses capture.)\n\nNot sure why I didn't do that. It might be because the meson capture stuff has\na noticeable overhead, particularly on windows, because it starts up a python\ninterpreter. Since nearly the whole build depends on generate-errcodes.pl to\nhave run...\n\n\n> commit e4e77c0e20f3532be4ed270a7cf8b965b7cafa49\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Thu Jan 20 08:36:50 2022\n> \n> meson: prereq: add output path arg in generate-lwlocknames.pl\n> \n> We should make the command-line interface here the same as the unicode\n> script: Either make the output directory a positional argument or an option.\n> I don't have a strong feeling about it either way, but perhaps the solution\n> with the option is more elegant and would also not require changing the\n> makefiles.\n\nI don't really have an opinion what's better here, so I'll go with your\npreference / the option.\n\n\n> Also, we should decide on short or long option: The code\n> declares a long option, but the build uses a short option. 
It's confusing\n> that that even works.\n\nGetopt::Long auto-generates short options afaict...\n\n\n> commit 7866620afa65223f6e657da972f501615fd32d3b\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Wed Apr 20 21:01:31 2022\n> \n> meson: prereq: output and depencency tracking work.\n> \n> This could be split into multiple parts with more detailed explanations. I\n> see where you're going but not everything is fully clear to me (especially\n> the guc-file.c.h stuff).\n\nWill take a stab at doing so.\n\n\n> From 51c6d3544ae9e652c7aac26102a8bf5a116fb182 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Tue, 7 Jun 2022 22:54:26 +0200\n> Subject: [PATCH] fixup! meson: prereq: unicode: allow to specify output\n> directory.\n\nMerged.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Jun 2022 11:23:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2022-06-08 14:33:16 +0200, Peter Eisentraut wrote:\n> Attached is a patch the finishes up the work to move the snowball SQL script\n> generation into a separate script.\n\nThat looks good, merged. I did split the commit, because there's not yet a\nmeson.build \"at the time\" of the prereq: commits.\n\nOne thing I'm not quite sure about: Why does the makefile need awareness of\nthe stop files, but Install.pm doesn't? I suspect currently the patch leads to\nstopwords not being installed on windows anymore?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Jun 2022 11:27:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 14.06.22 20:27, Andres Freund wrote:\n> One thing I'm not quite sure about: Why does the makefile need awareness of\n> the stop files, but Install.pm doesn't? 
I suspect currently the patch leads to\n> stopwords not being installed on windows anymore?\n\nInstall.pm contains this elsewhere:\n\n GenerateTsearchFiles($target);\n CopySetOfFiles(\n 'Stopword files',\n [ glob(\"src\\\\backend\\\\snowball\\\\stopwords\\\\*.stop\") ],\n $target . '/share/tsearch_data/');\n\nIt's a bit confusing that the \"generate\" function that we are patching \nalso installs some of the files right away, while the rest is installed \nby the calling function.\n\n\n", "msg_date": "Tue, 14 Jun 2022 20:47:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "On 2022-06-14 20:47:59 +0200, Peter Eisentraut wrote:\n> On 14.06.22 20:27, Andres Freund wrote:\n> > One thing I'm not quite sure about: Why does the makefile need awareness of\n> > the stop files, but Install.pm doesn't? I suspect currently the patch leads to\n> > stopwords not being installed on windows anymore?\n> \n> Install.pm contains this elsewhere:\n> \n> GenerateTsearchFiles($target);\n> CopySetOfFiles(\n> 'Stopword files',\n> [ glob(\"src\\\\backend\\\\snowball\\\\stopwords\\\\*.stop\") ],\n> $target . '/share/tsearch_data/');\n> \n> It's a bit confusing that the \"generate\" function that we are patching also\n> installs some of the files right away, while the rest is installed by the\n> calling function.\n\nUgh, that's confusing indeed. Thanks for the explanation.\n\n\n", "msg_date": "Tue, 14 Jun 2022 11:51:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nAttached is an updated version of the meson patchset. There has been a steady\nstream of incremental work over the last month, with patches from Peter\nEisentraut and Nazir Yavuz.\n\nI tried to address the review comments Peter had downthread about the prep\npatches. 
The one that I know is still outstanding is that there are still\ndifferent ways of passing output directories as parameters to a bunch of\nscripts added, will resolve that next (some have been fixed).\n\nNow the patchset contains a, somewhat hacky and incomplete, implementation of\npgxs, even when using meson. Basically a compatible Makefile.global.in is\ngenerated.\n\nThere's a lot of small and medium sized changes.\n\nAs usual the changes are also in my git branch [1], which gets updated fairly\nregularly.\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/tree/meson", "msg_date": "Fri, 1 Jul 2022 02:33:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "vcvarsall isn't needed in cirrus' \"check_world\" scripts.\n\nI'm missing any way to re/run cirrus only for msbuild OR ninja OR homegrown\nwith something more granular than \"ci-os-only: windows\". (The same thing\napplies to the mingw patch).\n\nI'll mail shortly about ccache.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:01:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Hi,\n\nOn 2022-07-01 14:01:11 -0500, Justin Pryzby wrote:\n> vcvarsall isn't needed in cirrus' \"check_world\" scripts.\n\nE.g. for the ecpg tests it isn't, except that we don't currently build ecpg on\nwindows. But I plan to fix that.\n\n\n> I'm missing any way to re/run cirrus only for msbuild OR ninja OR homegrown\n> with something more granular than \"ci-os-only: windows\". (The same thing\n> applies to the mingw patch).\n\nNot sure that's really worth adding - I don't foresee merging all those\ntasks. 
But I'm open to proposals.\n\n\n> I'll mail shortly about ccache.\n\nThere's a meson PR that should fix some of the issues, need to test it...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 12:16:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "On 01.07.22 11:33, Andres Freund wrote:\n> Attached is an updated version of the meson patchset. There has been a steady\n> stream of incremental work over the last month, with patches from Peter\n> Eisentraut and Nazir Yavuz.\n> \n> I tried to address the review comments Peter had downthread about the prep\n> patches. The one that I know is still outstanding is that there's still\n> different ways of passing output directories as parameters to a bunch of\n> scripts added, will resolve that next (some have been fixed).\n\nHere is my rough assessment of where we are with this patch set:\n\n08b4330ded prereq: deal with \\ paths in basebackup_to_shell tests.\n\nThis still needs clarification, per my previous review.\n\n3bf5b317d5 meson: prereq: Specify output directory for psql sql help script.\n2e5ed807f8 meson: prereq: ecpg: add and use output directory argument \nfor parse.pl.\n4e7fab01c5 meson: prereq: msvc: explicit output file for pgflex.pl\ncdcd3da4c4 meson: prereq: add output path arg in generate-lwlocknames.pl\n1f655486e4 meson: prereq: generate-errcodes.pl: accept output file\ne834c48758 meson: prereq: unicode: allow to specify output directory.\n\nYou said you are still finalizing these. 
I think we can move ahead with \nthese once that is done.\n\n9f4a9b1749 meson: prereq: move snowball_create.sql creation into perl file.\n\nThis looks ready, except I think it needs to be hooked into the\ndistprep target, since it's now a Perl script running at build time.\n\n8951a6721e meson: prereq: Refactor dtrace postprocessing make rules\n\nThis looks ready.\n\nbda6a45bae meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n\nI understand the intention behind this, but I think it changes the\nbehavior in an undesirable way. Before this patch, you can go into\nsrc/test/ssl/ and run make check manually. This was indeed the only\nway to do it before PG_TEST_EXTRA. With this patch, this would now\nskip all the tests unless you set PG_TEST_EXTRA, even if you run the\nspecific test directly.\n\nI think this needs a different idea.\n\neb852cc023 meson: prereq: Can we get away with not export-all'ing libraries?\n\nThis is also at <https://commitfest.postgresql.org/38/3396/>, which\nhasn't seen any activity in a while. I think this needs a resolution\none way or the other before we can proceed to the main act.\n\n2cc276ced6 meson: prereq: add src/tools/gen_versioning_script.pl.\n\nNote that in the make build system we can only use perl before\ndistprep. So it's not clear whether a script like this would help\nunify the code. Of course, we could still use it with the\nunderstanding that it will be separate.\n\n351ac51a89 meson: prereq: remove LLVM_CONFIG from Makefile.global.in\n\nThis can be committed. 
AFAICT, LLVM_CONFIG is only used within\nconfigure.\n\ndff7b5a960 meson: prereq: regress: allow to specify director containing \nexpected files.\n\nThis could use a bit more explanation, but it doesn't look\ncontroversial so far.\n\n243f99da38 wip: split TESTDIR into two.\n\nThis one has already caused a bit of confusion, but the explanation at\n\nhttps://www.postgresql.org/message-id/flat/20220601211112.td2ato4wjqf7afnv%40alap3.anarazel.de#1f250dee73cf0da29a6d2c020c3bde08\n\nseems reasonable. But it clearly needs further work.\n\n88dd280835 meson: Add meson based buildsystem.\n1ee3073a3c meson: ci: Build both with meson and as before.\n\nThese are for later. ;-)\n\nIn the meantime, also of interest to this effort:\n\n- If we're planning to remove the postmaster symlink in PG16, maybe we\n should start a discussion on that.\n\n- This patch is for unifying the list of languages in NLS, as\n previously discussed: https://commitfest.postgresql.org/38/3737/\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:03:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Hi\n\nOn 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> On 01.07.22 11:33, Andres Freund wrote:\n> > Attached is an updated version of the meson patchset. There has been a steady\n> > stream of incremental work over the last month, with patches from Peter\n> > Eisentraut and Nazir Yavuz.\n> > \n> > I tried to address the review comments Peter had downthread about the prep\n> > patches. 
The one that I know is still outstanding is that there's still\n> > different ways of passing output directories as parameters to a bunch of\n> > scripts added, will resolve that next (some have been fixed).\n> \n> Here is my rough assessment of where we are with this patch set:\n> \n> 08b4330ded prereq: deal with \\ paths in basebackup_to_shell tests.\n> \n> This still needs clarification, per my previous review.\n\nHm. I thought I had explained that bit, but apparently not. Well, it's pretty\nsimple - without this, the tests fail on windows for me, as soon as one of the\nbinaries is in a directory with spaces (which is common on windows). Imagine\nwhat happens with e.g.\n  qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%f.gz\"}\nif $gzip contains spaces.\n\n\nThis doesn't happen currently on CI because nothing runs these tests on\nwindows yet.\n\n\n> bda6a45bae meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n> \n> I understand the intention behind this, but I think it changes the\n> behavior in an undesirable way. Before this patch, you can go into\n> src/test/ssl/ and run make check manually. This was indeed the only\n> way to do it before PG_TEST_EXTRA. With this patch, this would now\n> skip all the tests unless you set PG_TEST_EXTRA, even if you run the\n> specific test directly.\n\nIt's not a free lunch, I agree. But I think the downsides outweigh the upsides\nby far. Not seeing that tests were skipped in the test output is quite\nproblematic imo.
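To make that concrete, the decision each gated suite has to make is roughly the following (an illustrative Python sketch of the logic only - the actual TAP tests are Perl, and the function and names here are made up):

```python
import os

def should_skip(test_group, env=None):
    """Return a skip reason if test_group (e.g. 'ssl', 'kerberos') is not
    explicitly enabled via PG_TEST_EXTRA, otherwise None (meaning: run)."""
    if env is None:
        env = os.environ
    # PG_TEST_EXTRA is a whitespace-separated list of opted-in test groups.
    enabled = env.get("PG_TEST_EXTRA", "").split()
    if test_group not in enabled:
        # Reporting a skip, instead of silently running zero tests, is what
        # makes the omission visible in the test summary.
        return "test group '%s' not enabled in PG_TEST_EXTRA" % test_group
    return None
```

The important part is that the suite reports itself as skipped instead of silently running nothing.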
And with meson's testrunner we're going to need something\nthat deals with skipping these tests - and it's more important to have them\nskipped, so that they show up in the test summary.\n\nIt's not like it's hard to set PG_TEST_EXTRA for a single command invocation?\n\n\n\n> 243f99da38 wip: split TESTDIR into two.\n> \n> This one has already caused a bit of confusion, but the explanation at\n> \n> https://www.postgresql.org/message-id/flat/20220601211112.td2ato4wjqf7afnv%40alap3.anarazel.de#1f250dee73cf0da29a6d2c020c3bde08\n> \n> seems reasonable. But it clearly needs further work.\n\nYea. I kind of want to get some of the preparatory stuff out of the way first.\n\n\n> 88dd280835 meson: Add meson based buildsystem.\n> 1ee3073a3c meson: ci: Build both with meson and as before.\n> \n> These are for later. ;-)\n> \n> In the meantime, also of interest to this effort:\n> \n> - If we're planning to remove the postmaster symlink in PG16, maybe we\n> should start a discussion on that.\n\nYea.\n\n\n> - This patch is for unifying the list of languages in NLS, as\n> previously discussed: https://commitfest.postgresql.org/38/3737/\n\nThere seems little downside to doing so, so ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 06:21:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "On 06.07.22 15:21, Andres Freund wrote:\n>> Here is my rough assessment of where we are with this patch set:\n>>\n>> 08b4330ded prereq: deal with \\ paths in basebackup_to_shell tests.\n>>\n>> This still needs clarification, per my previous review.\n> Hm. I thought I had explained that bit, but apparently not. Well, it's pretty\n> simple - without this, the test fail on windows for me, as soon as one of the\n> binaries is in a directory with spaces (which is common on windows). 
Iimagine\n> what happens with e.g.\n> qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%f.gz\"}\n> if $gzip contains spaces.\n> \n> \n> This doesn't happen currently on CI because nothing runs these tests on\n> windows yet.\n\nHmm, maybe this patch looked different the last time I saw it. I see \nyour point.\n\nThe quoting of \"$gzip\" is clearly necessary.\n\nWhat about the backslash replacements s{\\\\}{/}g ? Is that also required \nfor passing the path through the shell? If so, the treatment of $tar in \nthat way doesn't seem necessary, since that doesn't get called through \nan intermediate shell. (That would then also explain why $gzip in the \npg_basebackup tests doesn't require that treatment, which had previously \nconfused me.)\n\nIf my understanding of this is correct, then I suggest:\n\n1. Add a comment to the $gzip =~ s{\\\\}{/}g ... line.\n2. Remove the $tar =~ s{\\\\}{/}g ... line.\n3. Backpatch to PG15.\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:09:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "On 06.07.22 15:21, Andres Freund wrote:\n>> - This patch is for unifying the list of languages in NLS, as\n>> previously discussed:https://commitfest.postgresql.org/38/3737/\n> There seems little downside to doing so, so ...\n\nThis has been committed, so on the next rebase, the languages arguments \ncan be removed from the i18n.gettext() calls.\n\n\n\n", "msg_date": "Wed, 13 Jul 2022 08:39:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "On 06.07.22 15:21, Andres Freund wrote:\n>> bda6a45bae meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n>>\n>> I understand the intention behind this, but I think it changes the\n>> behavior in an undesirable way. 
Before this patch, you can go into\n>> src/test/ssl/ and run make check manually. This was indeed the only\n>> way to do it before PG_TEST_EXTRA. With this patch, this would now\n>> skip all the tests unless you set PG_TEST_EXTRA, even if you run the\n>> specific test directly.\n> It's not a free lunch, I agree. But I think the downsides outweigh the upsides\n> by far. Not seeing that tests were skipped in the test output is quite\n> problematic imo. And with meson's testrunner we're going to need something\n> that deals with skipping these tests - and it's more important to have them\n> skipped, so that they show up in the test summary.\n> \n> It's not like it's hard to set PG_TEST_EXTRA for a single command invocation?\n\nIt's probably ok. I have it set in my environment all the time anyway, \nso it wouldn't affect me. But it's the sort of thing people might be \nparticular about, so I thought it was worth pointing out.\n\n\n\n", "msg_date": "Wed, 13 Jul 2022 13:52:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Hi,\n\nOn 2022-07-07 12:09:32 +0200, Peter Eisentraut wrote:\n> On 06.07.22 15:21, Andres Freund wrote:\n> > > Here is my rough assessment of where we are with this patch set:\n> > > \n> > > 08b4330ded prereq: deal with \\ paths in basebackup_to_shell tests.\n> > > \n> > > This still needs clarification, per my previous review.\n> > Hm. I thought I had explained that bit, but apparently not. Well, it's pretty\n> > simple - without this, the test fail on windows for me, as soon as one of the\n> > binaries is in a directory with spaces (which is common on windows). 
Iimagine\n> > what happens with e.g.\n> > qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%f.gz\"}\n> > if $gzip contains spaces.\n> > \n> > \n> > This doesn't happen currently on CI because nothing runs these tests on\n> > windows yet.\n> \n> Hmm, maybe this patch looked different the last time I saw it.\n\nDon't think it had changed since I wrote it first.\n\n\n> The quoting of \"$gzip\" is clearly necessary.\n> \n> What about the backslash replacements s{\\\\}{/}g ? Is that also required for\n> passing the path through the shell?\n\nI don't recall the details, but it's definitely needed for embedding it into\npostgresql.conf. We could double the escapes instead, but that doesn't seem an\nimprovement (and wouldn't work when passing it to the shell anymore).\n\n\n> If so, the treatment of $tar in that\n> way doesn't seem necessary, since that doesn't get called through an\n> intermediate shell. (That would then also explain why $gzip in the\n> pg_basebackup tests doesn't require that treatment, which had previously\n> confused me.)\n\nYea, it doesn't immediately look like it's needed, and the test passes without\nit. I guess I might just have tried to be complete...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Jul 2022 13:04:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Hi,\n\nAttached is v10 of the meson patchset. Lots of small changes, I don't think\nanything major. I tried to address most of Peter's feedback for the earlier\npatches.\n\nAfter this I plan to clean up the \"export\" patch, since that's I think the\nnext bigger step, and an improvement on its own. The step after will be to\ndiscuss where we want the output of tests to reside, whether the naming scheme\nfor tests is good etc.\n\nI did try to address Peter's criticism around inconsistency of the added\nparameters to perl scripts. I hope it's more consistent now. 
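Schematically, the shared shape of those generator-script parameters looks like this (an illustrative Python sketch with made-up names - the real generators are Perl scripts):

```python
import argparse
import os

def parse_generator_args(argv):
    """Common argument shape for build-time code generators: an explicit
    --outdir, so vpath/meson builds can direct output anywhere instead of
    relying on the script's working directory."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--outdir", default=".",
                        help="directory to write generated files to")
    parser.add_argument("inputs", nargs="*")
    return parser.parse_args(argv)

# Example invocation; the file names are hypothetical.
args = parse_generator_args(["--outdir", "build/generated", "errcodes.txt"])
target = os.path.join(args.outdir, "errcodes.h")
```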
I used the\nopportunity to make src/tools/msvc use the \"output directory\" parameters,\nproviding coverage for those paths (and removing a few unnecessary chdirs, but\n...).\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 14 Jul 2022 22:08:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2022-07-13 08:39:45 +0200, Peter Eisentraut wrote:\n> On 06.07.22 15:21, Andres Freund wrote:\n> > > - This patch is for unifying the list of languages in NLS, as\n> > > previously discussed:https://commitfest.postgresql.org/38/3737/\n> > There seems little downside to doing so, so ...\n> \n> This has been committed, so on the next rebase, the languages arguments can\n> be removed from the i18n.gettext() calls.\n\nThat's done in v10 I posted yesterday.\n\n\nOn 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> 3bf5b317d5 meson: prereq: Specify output directory for psql sql help script.\n> 2e5ed807f8 meson: prereq: ecpg: add and use output directory argument for\n> parse.pl.\n> 4e7fab01c5 meson: prereq: msvc: explicit output file for pgflex.pl\n> cdcd3da4c4 meson: prereq: add output path arg in generate-lwlocknames.pl\n> 1f655486e4 meson: prereq: generate-errcodes.pl: accept output file\n> e834c48758 meson: prereq: unicode: allow to specify output directory.\n> \n> You said you are still finalizing these. I think we can move ahead with\n> these once that is done.\n\nI tried to address the pending things in these patches.\n\n\n> 9f4a9b1749 meson: prereq: move snowball_create.sql creation into perl file.\n> \n> This looks ready, except I think it needs to be hooked into the\n> distprep target, since it's now a Perl script running at build time.\n\nDone. I vacillated between doing the distprep rule in\nsrc/backend/snowball/Makefile (as most things outside of the backend do) and\nsrc/backend/Makefile (most backend things). 
Ended up with doing it in\nsnowball/Makefile.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Jul 2022 09:05:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v9" }, { "msg_contents": "Hi Andres,\n\n> Attached is v10 of the meson patchset. Lots of small changes, I don't think\n> anything major. I tried to address most of Peter's feedback for the earlier\n> patches.\n>\n> After this I plan to clean up the \"export\" patch, since that's I think the\n> next bigger step, and an improvement on its own. The step after will be to\n> discuss where we want the output of tests to reside, whether the naming scheme\n> for tests is good etc.\n>\n> I did try to address Peter's criticism around inconsistency of the added\n> parameters to perl scripts. I hope it's more consistent now. I used the\n> opportunity to make src/tools/msvc use the \"output directory\" parameters,\n> providing coverage for those paths (and removing a few unnecessary chdirs, but\n> ...).\n\nThanks for continuing to work on this!\n\nJust a quick question - is there a reason for changing the subject of\nthe emails?\n\nNot all email clients handle this well, e.g. Google Mail considers\nthis being 10 separate threads. The CF application and/or\npgsql-hackers@ archive also don't recognise this as a continuation of\nthe original thread. So all the discussions in -v8, -v9, -v9 etc\nthreads get lost.\n\nMay I suggest using a single thread?\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 18 Jul 2022 11:05:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi again,\n\n> Just a quick question - is there a reason for changing the subject of\n> the emails?\n>\n> Not all email clients handle this well, e.g. Google Mail considers\n> this being 10 separate threads.
The CF application and/or\n> pgsql-hackers@ archive also don't recognise this as a continuation of\n> the original thread. So all the discussions in -v8, -v9, -v9 ets\n> threads get lost.\n>\n> May I suggest using a single thread?\n\nOK, the part about the archive is wrong - I scrolled right to the end\nof the thread, didn't notice v10 patch above and assumed it was lost.\nSorry for the confusion. However, the part about various email clients\nis accurate.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 18 Jul 2022 11:12:10 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "On 15.07.22 07:08, Andres Freund wrote:\n> Attached is v10 of the meson patchset. Lots of small changes, I don't think\n> anything major. I tried to address most of Peter's feedback for the earlier\n> patches.\n\nThe following patches are ok to commit IMO:\n\na1c5542929 prereq: Deal with paths containing \\ and spaces in basebackup_to_shell tests\ne37951875d meson: prereq: psql: Output dir and dependency generation for sql_help\n18cc9fbd02 meson: prereq: ecpg: Add and use output directory argument for preproc/*.pl\n58a32694e9 meson: prereq: Move snowball_create.sql creation into perl file\n59b8bffdaf meson: prereq: Add output path arg in generate-lwlocknames.pl\n2db97b00d5 meson: prereq: generate-errcodes.pl: Accept output file\nfb8f52f21d meson: prereq: unicode: Allow to specify output directory\n8f1e4410d6 meson: prereq: Refactor dtrace postprocessing make rules\n3d18a20b11 meson: prereq: Add --outdir option to gen_node_support.pl\n\n\n", "msg_date": "Mon, 18 Jul 2022 11:33:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 11:12:10 +0300, Aleksander Alekseev wrote:\n> > Just a quick question - is 
there a reason for changing the subject of\n> > the emails?\n> >\n> > Not all email clients handle this well, e.g. Google Mail considers\n> > this being 10 separate threads. The CF application and/or\n> > pgsql-hackers@ archive also don't recognise this as a continuation of\n> > the original thread. So all the discussions in -v8, -v9, -v9 ets\n> > threads get lost.\n> >\n> > May I suggest using a single thread?\n> \n> OK, the part about the archive is wrong - I scrolled right to the end\n> of the thread, didn't notice v10 patch above and assumed it was lost.\n> Sorry for the confusion. However, the part about various email clients\n> is accurate.\n\nFor me the thread is too long to look through without some separation. I\nwouldn't do the version in the subject for a small patchset / thread, but at\nthis size I think it's reasonable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Jul 2022 08:44:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 11:33:09 +0200, Peter Eisentraut wrote:\n> The following patches are ok to commit IMO:\n> \n> a1c5542929 prereq: Deal with paths containing \\ and spaces in basebackup_to_shell tests\n> e37951875d meson: prereq: psql: Output dir and dependency generation for sql_help\n> 18cc9fbd02 meson: prereq: ecpg: Add and use output directory argument for preproc/*.pl\n> 58a32694e9 meson: prereq: Move snowball_create.sql creation into perl file\n> 59b8bffdaf meson: prereq: Add output path arg in generate-lwlocknames.pl\n> 2db97b00d5 meson: prereq: generate-errcodes.pl: Accept output file\n> fb8f52f21d meson: prereq: unicode: Allow to specify output directory\n> 8f1e4410d6 meson: prereq: Refactor dtrace postprocessing make rules\n> 3d18a20b11 meson: prereq: Add --outdir option to gen_node_support.pl\n\nI pushed these. 
Thanks for the reviews and patches!\n\nThe symbol export stuff has also been pushed (discussed in a separate thread).\n\nIt's nice to see the meson patchset length reduced by this much.\n\nI pushed a rebased version of the remaining branches to git. I'll be on\nvacation for a bit, I'm not sure I can get a new version with further cleanups\nout before.\n\n\nGiven that we can't use src/tools/gen_versioning_script.pl for the make build,\ndue to not depending on perl for tarball builds, I'm inclined to rewrite it in\npython (which we depend on via meson anyway) and consider it a meson specific\nwrapper?\n\n\nBilal, Peter previously commented on the pg_regress change for ecpg, perhaps\nyou can comment on that?\n\nIn https://postgr.es/m/0e81e45c-c9a5-e95b-2782-ab2dfec8bf57%40enterprisedb.com\nOn 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> dff7b5a960 meson: prereq: regress: allow to specify director containing\n> expected files.\n> \n> This could use a bit more explanation, but it doesn't look\n> controversial so far.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:23:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 23:23:27 +0300, Andres Freund wrote:\n\n> Bilal, Peter previously commented on the pg_regress change for ecpg,\n> perhaps\n> you can comment on that?\n>\n> In\n> https://postgr.es/m/0e81e45c-c9a5-e95b-2782-ab2dfec8bf57%40enterprisedb.com\n> On 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> > dff7b5a960 meson: prereq: regress: allow to specify director containing\n> > expected files.\n> >\n> > This could use a bit more explanation, but it doesn't look\n> > controversial so far\n\n\nWhile testing ECPG, C and exe files are generated by meson so these files\nare in the meson's build directory but expected files are in the source\ndirectory.
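Schematically, the two locations involved look like this (an illustrative Python sketch with made-up names - the real lookup happens inside pg_regress, and the concrete paths here are only examples):

```python
from pathlib import PurePosixPath

def ecpg_test_paths(inputdir, expecteddir, test):
    """In a vpath build the generated test inputs live under the build tree
    (inputdir), while the checked-in expected output stays in the source tree
    (expecteddir). Without a separate expecteddir, both lookups would be
    relative to inputdir and the expected files would not be found."""
    return {
        # generated test program, produced by the build system
        "program": PurePosixPath(inputdir) / test,
        # reference output, checked into the source tree
        "expected": PurePosixPath(expecteddir) / "expected" / (test + ".stdout"),
    }

paths = ecpg_test_paths("build/src/interfaces/ecpg/test",
                        "src/interfaces/ecpg/test",
                        "sql/insupd")
```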
However, there was no way to set different paths for inputs (C\nand exe files') and expected files' directory. So, I added `--expecteddir`\nto separately set expected files' directory.\n\nGreetings,\n\nNazir Bilal Yavuz\n\nOn Mon, 18 Jul 2022 at 23:23, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-07-18 11:33:09 +0200, Peter Eisentraut wrote:\n> > The following patches are ok to commit IMO:\n> >\n> > a1c5542929 prereq: Deal with paths containing \\ and spaces in\n> basebackup_to_shell tests\n> > e37951875d meson: prereq: psql: Output dir and dependency generation for\n> sql_help\n> > 18cc9fbd02 meson: prereq: ecpg: Add and use output directory argument\n> for preproc/*.pl\n> > 58a32694e9 meson: prereq: Move snowball_create.sql creation into perl\n> file\n> > 59b8bffdaf meson: prereq: Add output path arg in generate-lwlocknames.pl\n> > 2db97b00d5 meson: prereq: generate-errcodes.pl: Accept output file\n> > fb8f52f21d meson: prereq: unicode: Allow to specify output directory\n> > 8f1e4410d6 meson: prereq: Refactor dtrace postprocessing make rules\n> > 3d18a20b11 meson: prereq: Add --outdir option to gen_node_support.pl\n>\n> I pushed these. Thanks for the reviews and patches!\n>\n> The symbol export stuff has also been pushed (discussed in a separate\n> thread).\n>\n> It's nice to see the meson patchset length reduced by this much.\n>\n> I pushed a rebased version of the remaining branches to git.
I'll be on\n> vacation for a bit, I'm not sure I can get a new version with further\n> cleanups\n> out before.\n>\n>\n> Given that we can't use src/tools/gen_versioning_script.pl for the make\n> build,\n> due to not depending on perl for tarball builds, I'm inclined to rewrite it\n> python (which we depend on via meson anyway) and consider it a meson\n> specific\n> wrapper?\n>\n>\n> Bilal, Peter previously commented on the pg_regress change for ecpg,\n> perhaps\n> you can comment on that?\n>\n> In\n> https://postgr.es/m/0e81e45c-c9a5-e95b-2782-ab2dfec8bf57%40enterprisedb.com\n> On 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> > dff7b5a960 meson: prereq: regress: allow to specify director containing\n> > expected files.\n> >\n> > This could use a bit more explanation, but it doesn't look\n> > controversial so far.\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Thu, 21 Jul 2022 15:16:33 +0300", "msg_from": "Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nSorry for the first email.\n\nOn Mon, 18 Jul 2022 at 23:23, Andres Freund <andres@anarazel.de> wrote:\n>\n> In\nhttps://postgr.es/m/0e81e45c-c9a5-e95b-2782-ab2dfec8bf57%40enterprisedb.com\n> On 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> > dff7b5a960 meson: prereq: regress: allow to specify director containing\n> > expected files.\n> >\n> > This could use a bit more explanation, but it doesn't look\n> > controversial so far.\n\nWhile testing ECPG, C and exe files are generated by meson so these files\nare in the meson's build directory but expected files are in the source\ndirectory.
So, I added `--expecteddir`\nto separately set expected files' directory.\n\nGreetings,\n\nNazir Bilal Yavuz\n\nHi,Sorry for the first email.On Mon, 18 Jul 2022 at 23:23, Andres Freund <andres@anarazel.de> wrote:>> In https://postgr.es/m/0e81e45c-c9a5-e95b-2782-ab2dfec8bf57%40enterprisedb.com> On 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:> > dff7b5a960 meson: prereq: regress: allow to specify director containing> > expected files.> >> > This could use a bit more explanation, but it doesn't look> > controversial so far.While testing ECPG, C and exe files are generated by meson so these files are in the meson's build directory but expected files are in the source directory. However; there was no way to set different paths for inputs (C and exe files') and expected files' directory. So, I added `--expecteddir` to separately set expected files' directory.Greetings,Nazir Bilal Yavuz", "msg_date": "Thu, 21 Jul 2022 15:26:05 +0300", "msg_from": "Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2021-10-31 16:24:48 -0700, Andres Freund wrote:\n> - support for building docs.\n> I couldn't get dbtoepub work in a vpath style build, so I changed that\n> to also use pandoc. No idea if anybody uses the epub rules?\n\ncombing through various FIXMEs in the meson patch I started to look into docs\nagain. A few questions / noteworthy points:\n\n- I still haven't gotten dbtoepub to work in vpath style builds (epub\n generation is instead using pandoc). Could somebody using these comment on\n the quality difference?\n\n We don't seem to offer these for download anywhere...\n\n Worth noting that dbtoepub takes approximately forever (>25min). Not that\n pandoc is fast, but ...\n\n Unfortunately pandoc doesn't understand <part> level stuff. That'd probably\n easy enough to tweak, but ...\n\n- We have logic to change the man section (grep for sqlmansect) for some\n platforms. 
The only remaining platform is solaris. I'm inclined to not\n implement that.\n\n- I've not implemented the texinfo targets - don't think they're really used?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Aug 2022 20:53:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v" }, { "msg_contents": "Hi,\n\nOn 2022-07-21 15:26:05 +0300, Bilal Yavuz wrote:\n> > On 2022-07-06 11:03:31 +0200, Peter Eisentraut wrote:\n> > > dff7b5a960 meson: prereq: regress: allow to specify director containing\n> > > expected files.\n> > >\n> > > This could use a bit more explanation, but it doesn't look\n> > > controversial so far.\n> \n> While testing ECPG, C and exe files are generated by meson so these files\n> are in the meson's build directory but expected files are in the source\n> directory. However; there was no way to set different paths for inputs (C\n> and exe files') and expected files' directory. So, I added `--expecteddir`\n> to separately set expected files' directory.\n\nAttached is a version of this patch that also removes the copying of these\nfiles from ecpg's makefile.\n\nBilal's version checked different directories for expected files, but I don't\nthink that's necessary. Bilal, do you remember why you added that?\n\n\nI'm somewhat tempted to rename ecpg's pg_regress to pg_regress_ecpg as part of\nthis, given the .c file is named pg_regress_ecpg.c and that pg_regress is a\npre-existing binary.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 8 Aug 2022 08:53:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nI was looking at re-unifying gendef2.pl that the meson patchset had introduced\nfor temporary ease during hacking with gendef.pl. 
Testing that I noticed that\neither I or my machine is very confused, or gendef.pl's check whether it can\nskip work is bogus.\n\nI noticed that, despite having code to avoid rerunning when the input files\nare older than the .def file, it always runs.\n\n# if the def file exists and is newer than all input object files, skip\n# its creation\nif (-f $deffile\n    && (-M $deffile > max(map { -M } <$ARGV[0]/*.obj>)))\n{\n    print \"Not re-generating $defname.DEF, file already exists.\\n\";\n    exit(0);\n}\n\nMy understanding of -M is that it returns the time delta between the file\nmodification and the start of the script. Which makes the use of max() bogus,\nsince it'll return the oldest time any input has been modified, not the\nnewest. And the condition needs to be inverted, because we want to skip the\nwork if $deffile is *newer*, right?\n\nAm I missing something here?\n\n\nI'm tempted to just remove the not-regenerating logic - gendef.pl shouldn't\nrun if there's nothing to do, and it'll e.g. not notice if there's an\nadditional input that wasn't there during the last invocation of gendef.pl.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Aug 2022 00:10:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "\nOn 2022-08-09 Tu 03:10, Andres Freund wrote:\n> Hi,\n>\n> I was looking at re-unifying gendef2.pl that the meson patchset had introduced\n> for temporary ease during hacking with gendef.pl.
Testing that I noticed that\n> either I and my machine is very confused, or gendef.pl's check whether it can\n> skip work is bogus.\n>\n> I noticed that, despite having code to avoid rerunning when the input files\n> are older than the .def file, it always runs.\n>\n> # if the def file exists and is newer than all input object files, skip\n> # its creation\n> if (-f $deffile\n> && (-M $deffile > max(map { -M } <$ARGV[0]/*.obj>)))\n> {\n> print \"Not re-generating $defname.DEF, file already exists.\\n\";\n> exit(0);\n> }\n>\n> My understanding of -M is that it returns the time delta between the file\n> modification and the start of the script. Which makes the use of max() bogus,\n> since it'll return the oldest time any input has been modified, not the\n> newest. And the condition needs to be inverted, because we want to skip the\n> work if $deffile is *newer*, right?\n>\n> Am I missing something here?\n\n\nNo, you're right, this is bogus. Reversing the test and using min\ninstead of max is the obvious fix.\n\n\n> I'm tempted to just remove the not-regenerating logic - gendef.pl shouldn't\n> run if there's nothing to do, and it'll e.g. not notice if there's an\n> additional input that wasn't there during the last invocation of gendef.pl.\n>\n\nMaybe, need to think about that more.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Aug 2022 08:37:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 8/8/22 18:53, Andres Freund wrote:\n> Bilal's version checked different directories for expected files, but I don't\n> think that's necessary. Bilal, do you remember why you added that?\nThis was for not breaking autoconf build. 
Autoconf wasn't using \nexpecteddir, so I checked different directories.\n\nGreetings,\n\nNazir Bilal Yavuz\n\n\n", "msg_date": "Wed, 10 Aug 2022 10:58:47 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v10" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 10:26:09 -0700, Andres Freund wrote:\n> > Could we have the meson build check that, say, if gram.c exists it\n> > is newer than gram.y? Or get it to ignore an in-tree gram.c?\n>\n> I suspect the problem with ignoring is gram.h, that's probably a bit harder to\n> ignore.\n\nI tried to ignore various generated files in the source tree, but I don't\nthink it's doable for all of them. Consider\ne.g. src/backend/utils/misc/guc-file.c which is gets built via #include\n\"guc-file.c\" from gram.c\n\nBecause it's a \"\" include, the search path starts in the current directory and\nonly then -I is searched. To my knowledge there's no way of changing\nthat. Quoting the gcc manpage:\n\n -I dir\n -iquote dir\n -isystem dir\n -idirafter dir\n Add the directory dir to the list of directories to be searched for header files during preprocessing. If dir begins with = or $SYSROOT, then\n the = or $SYSROOT is replaced by the sysroot prefix; see --sysroot and -isysroot.\n\n Directories specified with -iquote apply only to the quote form of the directive, \"#include \"file\"\". Directories specified with -I, -isystem, or\n -idirafter apply to lookup for both the \"#include \"file\"\" and \"#include <file>\" directives.\n\n You can specify any number or combination of these options on the command line to search for header files in several directories. The lookup\n order is as follows:\n\n 1. For the quote form of the include directive, the directory of the current file is searched first.\n\n 2. 
For the quote form of the include directive, the directories specified by -iquote options are searched in left-to-right order, as they appear\n on the command line.\n\n 3. Directories specified with -I options are scanned in left-to-right order.\n [...]\n\nExcept for copying guc.c from source to build tree before building, I don't\nsee a way of ignoring the in-build-tree guc-file.c.\n\nNot sure what a good way of dealing with this is. For now I'll make it just\nerror out if there's any known such file in the source tree, but that's not a\ngood solution forever. If it were just \"normal\" build leftovers I'd propose\nto (optionally) just remove them, but that's not good for tarballs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Aug 2022 10:19:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nAttached is a new version of the meson patchset. Plenty changes:\n\n- Added a postgresql-extension.pc pkg-config file. That allows building server\n extensions without integrating directly with the postgres buildsystem. I\n have tested that this allows to build a simple out-of-tree extension on\n linux and windows - the latter is something that we didn't really support\n before. I think we could add something similar to the autoconf build in the\n back branches, which'd make it easier to build extensions using this\n mechanism across server versions.\n\n- A significant number of the preparatory patches has been committed\n\n- Lots of cleanup / simplification around exporting symbols, including\n reunifying gendef.pl that I had previously copied\n\n- Ecpg is now built and tested on windows, thanks to the above\n\n- If there are any leftover generated files in the source tree, we now error\n out, with instructions for how to fix it. 
That might need a better answer at\n some point (think building from tarball), but I think that's good enough for\n now.\n\n It might be worth generating a file to perform the cleanups, it can be a\n long list.\n\n- CI for Openbsd, Netbsd (thanks Bilal!), that found a few minor issues\n\n- I hadn't fully implemented the defaults for semaphores. Turns out named\n semaphores are really slow on openbsd and netbsd.\n\n- I went through all the \"configure\" tests to see if there are mismatches, and\n either fixed them or added FIXMEs. There's maybe a handful.\n\n- The PGXS compat layer is good enough to build at least a few moderately\n complicated extensions (postgis, postgis), but currently their tests fail\n against 15 (independent of the buildsystem)...\n\n- Improved configure summary to show CFLAGS\n\n- Some other CI improvements, we e.g. didn't use the same test concurrency and\n CFLAGSs between the meson and autoconf tasks.\n\n- Lots of small cleanups\n\n- The testrunner now creates a test.start file when starting and either a\n test.success or test.failure when ending. I'd like to use that to select the\n list of log files etc to report in CI / the buildfarm, while still allowing\n concurrent testing. Andrew, does that make sense to you?\n\n- Lots of other small stuff\n\n\nI think this is getting closer to being initially mergeable. As we'd\ndiscussed, we're more likely to succeed if we accept working somewhat\nincrementally on this.\n\n\nSamay, with a bit of input from me, started on adding a docs chapter for\nbuilding with meson. 
I hope to include that in the next version.\n\nI'll next send out an email discussing where test outputs should be when\nrunning them with meson and how tests and \"testsuites\" should be named.\n\nGreetings,\n\nAndres", "msg_date": "Wed, 10 Aug 2022 17:20:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On Thu, Aug 11, 2022 at 12:19 AM Andres Freund <andres@anarazel.de> wrote:\n> I tried to ignore various generated files in the source tree, but I don't\n> think it's doable for all of them. Consider\n> e.g. src/backend/utils/misc/guc-file.c which is gets built via #include\n> \"guc-file.c\" from gram.c\n\nWith a bit of work, we could probably get rid of those includes. See\n27199058d98ef7f for one example.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Aug 2022 10:33:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> With a bit of work, we could probably get rid of those includes. See\n> 27199058d98ef7f for one example.\n\nYeah --- it would mean creating gram.h files for all the bison grammars\nnot just a few of them, but it's certainly do-able if there's motivation\nto make the changes. Most of the files that are done that way date\nfrom before we knew about flex's %top.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Aug 2022 23:37:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "I wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>> With a bit of work, we could probably get rid of those includes. 
See\n>> 27199058d98ef7f for one example.\n\n> Yeah --- it would mean creating gram.h files for all the bison grammars\n> not just a few of them, but it's certainly do-able if there's motivation\n> to make the changes. Most of the files that are done that way date\n> from before we knew about flex's %top.\n\nBTW, 72b1e3a21 is another useful precedent in this area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Aug 2022 23:45:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "On Thu, Aug 11, 2022 at 10:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > With a bit of work, we could probably get rid of those includes. See\n> > 27199058d98ef7f for one example.\n>\n> Yeah --- it would mean creating gram.h files for all the bison grammars\n> not just a few of them, but it's certainly do-able if there's motivation\n> to make the changes. Most of the files that are done that way date\n> from before we knew about flex's %top.\n\nI'll volunteer to work on this unless an easier solution happens to\ncome along in the next couple days. (aside: guc-file.l doesn't have a\ngrammar, so not yet sure if that makes the issue easier or harder...)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Aug 2022 10:57:33 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Hi,\n\nOn 2022-08-11 10:57:33 +0700, John Naylor wrote:\n> I'll volunteer to work on this unless an easier solution happens to\n> come along in the next couple days.\n\nCool!\n\n\n> (aside: guc-file.l doesn't have a grammar, so not yet sure if that makes the\n> issue easier or harder...)\n\nI think we should consider compiling it separately from guc.c. 
guc.c already\ncompiles quite slowly (iirc beat only by ecpg and main grammar), and it's a\nrelatively commonly changed source file.\n\nIt might even be a good idea to split guc.c so it only contains the settings\narrays + direct dependencies...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Aug 2022 21:07:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> I'll volunteer to work on this unless an easier solution happens to\n> come along in the next couple days. (aside: guc-file.l doesn't have a\n> grammar, so not yet sure if that makes the issue easier or harder...)\n\nThat one's probably mostly about the issue mentioned in the other\ncommit you identified. Without %top, it's impossible to make a\nstandalone flex module honor the rule about thou-shalt-have-no-\nother-includes-before-postgres.h. So embedding it in some other\nfile was originally a necessity for that. Now that we know how\nto fix that, it's just a matter of making sure that any other stuff\nthe scanner needs is available from a .h file.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Aug 2022 00:32:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "Starting a new thread to control clutter. [was: Re: [RFC] building\npostgres with meson]\n\nmotivation: https://www.postgresql.org/message-id/20220810171935.7k5zgnjwqzalzmtm%40awork3.anarazel.de\n\nOn Thu, Aug 11, 2022 at 11:07 AM Andres Freund <andres@anarazel.de> wrote:\n> I think we should consider compiling it separately from guc.c. guc.c already\n> compiles quite slowly (iirc beat only by ecpg and main grammar), and it's a\n> relatively commonly changed source file.\n\nDone in the attached, and will do the rest in time. 
It seemed most\nstraightforward to put ProcessConfigFileInternal() in guc.c since\nthat's where most of its callers are, and it relies on some vars and\ntypes declared there. There are a couple new extern declarations in\nguc.h that are only for guc.c and guc-file.c:\n\n+/* functions shared between guc.c and guc-file.l */\n+extern int guc_name_compare(const char *namea, const char *nameb);\n+extern ConfigVariable *ProcessConfigFileInternal(GucContext context,\n+ bool applySettings, int elevel);\n+extern void record_config_file_error(const char *errmsg,\n+ const char *config_file,\n+ int lineno,\n+ ConfigVariable **head_p,\n+ ConfigVariable **tail_p);\n\nThese might be better placed in a new guc_internal.h. Thoughts?\n\n> It might even be a good idea to split guc.c so it only contains the settings\n> arrays + direct dependencies...\n\nPerhaps this can be a TODO item, one which falls under \"[E] marks\nitems that are easier to implement\". I've been slacking on removing\nthe old/intractable cruft from the TODO list, but we should also be\nsticking small nice-but-not-necessary things in there. That said, if\nthis idea has any bearing on the guc_internal.h idea, it might be\nbetter dealt with now.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Aug 2022 13:01:25 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "build remaining Flex files standalone" }, { "msg_contents": "Here are the rest. Most of it was pretty straightforward, with the\nmain exception of jsonpath_scan.c, which is not quite finished. That\none passes tests but still has one compiler warning. I'm unsure how\nmuch of what is there already is really necessary or was cargo-culted\nfrom elsewhere without explanation. For starters, I'm not sure why the\ngrammar has a forward declaration of \"union YYSTYPE\". 
It's noteworthy\nthat it used to compile standalone, but with a bit more stuff, and\nthat was reverted in 550b9d26f80fa30. I can hack on it some more later\nbut I ran out of steam today.\n\nOther questions thus far:\n\n- \"BISONFLAGS += -d\" is now in every make file with a .y file -- can\nwe just force that everywhere?\n\n- Include order seems to matter for the grammar's .h file. I didn't\ntest if that was the case every time, and after a few miscompiles just\nalways made it the last inclusion, but I'm wondering if we should keep\nthose inclusions outside %top{} and put it at the start of the next\n%{} ?\n\n- contrib/cubeparse.y now has a global variable -- not terrific, but I\nwanted to get something working first.\n\n- I'm actually okay with guc-file.c now, but I'll still welcome\ncomments on that.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 Aug 2022 15:39:06 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Hi,\n\nThanks for your work on this!\n\nOn 2022-08-13 15:39:06 +0700, John Naylor wrote:\n> Here are the rest. Most of it was pretty straightforward, with the\n> main exception of jsonpath_scan.c, which is not quite finished. That\n> one passes tests but still has one compiler warning. I'm unsure how\n> much of what is there already is really necessary or was cargo-culted\n> from elsewhere without explanation. For starters, I'm not sure why the\n> grammar has a forward declaration of \"union YYSTYPE\". It's noteworthy\n> that it used to compile standalone, but with a bit more stuff, and\n> that was reverted in 550b9d26f80fa30. I can hack on it some more later\n> but I ran out of steam today.\n\nI'm not sure either...\n\n\n> Other questions thus far:\n> \n> - \"BISONFLAGS += -d\" is now in every make file with a .y file -- can\n> we just force that everywhere?\n\nHm. 
Not sure it's worth it, extensions might use our BISON stuff...\n\n\n> - Include order seems to matter for the grammar's .h file. I didn't\n> test if that was the case every time, and after a few miscompiles just\n> always made it the last inclusion, but I'm wondering if we should keep\n> those inclusions outside %top{} and put it at the start of the next\n> %{} ?\n\nI think we have a few of those dependencies already, see e.g.\n/*\n * NB: include gram.h only AFTER including scanner.h, because scanner.h\n * is what #defines YYLTYPE.\n */\n\n\n> From d723ba14acf56fd432e9e263db937fcc13fc0355 Mon Sep 17 00:00:00 2001\n> From: John Naylor <john.naylor@postgresql.org>\n> Date: Thu, 11 Aug 2022 19:38:37 +0700\n> Subject: [PATCH v201 1/9] Build guc-file.c standalone\n\nMight be worth doing some of the moving around here separately from the\nparser/scanner specific bits.\n\n\n> +/* functions shared between guc.c and guc-file.l */\n> +extern int\tguc_name_compare(const char *namea, const char *nameb);\n> +extern ConfigVariable *ProcessConfigFileInternal(GucContext context,\n> +\t\t\t\t\t\t\t\t\t\t\t\t bool applySettings, int elevel);\n> +extern void record_config_file_error(const char *errmsg,\n> +\t\t\t\t\t\t\t\t\t const char *config_file,\n> +\t\t\t\t\t\t\t\t\t int lineno,\n> +\t\t\t\t\t\t\t\t\t ConfigVariable **head_p,\n> +\t\t\t\t\t\t\t\t\t ConfigVariable **tail_p);\n> \n> /*\n> * The following functions are not in guc.c, but are declared here to avoid\n> -- \n> 2.36.1\n> \n\nI think I prefer your suggestion of a guc_internal.h upthread.\n\n\n\n> From 7d4ecfcb3e91f3b45e94b9e64c7c40f1bbd22aa8 Mon Sep 17 00:00:00 2001\n> From: John Naylor <john.naylor@postgresql.org>\n> Date: Fri, 12 Aug 2022 15:45:24 +0700\n> Subject: [PATCH v201 2/9] Build booscanner.c standalone\n\n> -# bootscanner is compiled as part of bootparse\n> -bootparse.o: bootscanner.c\n> +# See notes in src/backend/parser/Makefile about the following two rules\n> +bootparse.h: bootparse.c\n> +\ttouch $@\n> 
+\n> +bootparse.c: BISONFLAGS += -d\n> +\n> +# Force these dependencies to be known even without dependency info built:\n> +bootparse.o bootscan.o: bootparse.h\n\nWonder if we could / should wrap this is something common. It's somewhat\nannoying to repeat this stuff everywhere.\n\n\n\n> diff --git a/src/test/isolation/specscanner.l b/src/test/isolation/specscanner.l\n> index aa6e89268e..2dc292c21d 100644\n> --- a/src/test/isolation/specscanner.l\n> +++ b/src/test/isolation/specscanner.l\n> @@ -1,4 +1,4 @@\n> -%{\n> +%top{\n> /*-------------------------------------------------------------------------\n> *\n> * specscanner.l\n> @@ -9,7 +9,14 @@\n> *\n> *-------------------------------------------------------------------------\n> */\n> +#include \"postgres_fe.h\"\n\nMiniscule nitpick: I think we typically leave an empty line between header and\nfirst include.\n\n\n> diff --git a/contrib/cube/cubedata.h b/contrib/cube/cubedata.h\n> index dbe7d4f742..0b373048b5 100644\n> --- a/contrib/cube/cubedata.h\n> +++ b/contrib/cube/cubedata.h\n> @@ -67,3 +67,7 @@ extern void cube_scanner_finish(void);\n> \n> /* in cubeparse.y */\n> extern int\tcube_yyparse(NDBOX **result);\n> +\n> +/* All grammar constructs return strings */\n> +#define YYSTYPE char *\n\nWhy does this need to be defined in a semi-public header? If we do this in\nmultiple files we'll end up with the danger of macro redefinition warnings.\n\n\n> +extern int scanbuflen;\n\nThe code around scanbuflen seems pretty darn grotty. Allocating enough memory\nfor the entire list by allocating the entire string size... 
I don't know\nanything about contrib/cube, but isn't that in effect O(inputlen^2) memory?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Aug 2022 11:11:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "For v3, I addressed some comments and added .h files to the\nheaderscheck exceptions.\n\nOn Tue, Aug 16, 2022 at 1:11 AM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-08-13 15:39:06 +0700, John Naylor wrote:\n> > Here are the rest. Most of it was pretty straightforward, with the\n> > main exception of jsonpath_scan.c, which is not quite finished. That\n> > one passes tests but still has one compiler warning. I'm unsure how\n> > much of what is there already is really necessary or was cargo-culted\n> > from elsewhere without explanation. For starters, I'm not sure why the\n> > grammar has a forward declaration of \"union YYSTYPE\". It's noteworthy\n> > that it used to compile standalone, but with a bit more stuff, and\n> > that was reverted in 550b9d26f80fa30. I can hack on it some more later\n> > but I ran out of steam today.\n\nI've got it in half-way decent shape now, with an *internal.h header\nand some cleanups.\n\n> > - Include order seems to matter for the grammar's .h file. 
I didn't\n> > test if that was the case every time, and after a few miscompiles just\n> > always made it the last inclusion, but I'm wondering if we should keep\n> > those inclusions outside %top{} and put it at the start of the next\n> > %{} ?\n>\n> I think we have a few of those dependencies already, see e.g.\n> /*\n> * NB: include gram.h only AFTER including scanner.h, because scanner.h\n> * is what #defines YYLTYPE.\n> */\n\nWent with something like this in all cases:\n\n/*\n * NB: include bootparse.h only AFTER including bootstrap.h, because bootstrap.h\n * includes node definitions needed for YYSTYPE.\n */\n\nFuture cleanup: I see this in headerscheck:\n\n# We can't make these Bison output files compilable standalone\n# without using \"%code require\", which old Bison versions lack.\n# parser/gram.h will be included by parser/gramparse.h anyway.\n\nThat directive has been supported in Bison since 2.4.2.\n\n> > From d723ba14acf56fd432e9e263db937fcc13fc0355 Mon Sep 17 00:00:00 2001\n> > From: John Naylor <john.naylor@postgresql.org>\n> > Date: Thu, 11 Aug 2022 19:38:37 +0700\n> > Subject: [PATCH v201 1/9] Build guc-file.c standalone\n>\n> Might be worth doing some of the moving around here separately from the\n> parser/scanner specific bits.\n\nDone in 0001/0003.\n\n> > +/* functions shared between guc.c and guc-file.l */\n> > [...]\n> I think I prefer your suggestion of a guc_internal.h upthread.\n\nStarted in 0002, but left open the headerscheck failure.\n\nAlso, if such a thing is meant to be #include'd only by two generated\nfiles, maybe it should just live in the directory where they live, and\nnot in the src/include dir?\n\n> > From 7d4ecfcb3e91f3b45e94b9e64c7c40f1bbd22aa8 Mon Sep 17 00:00:00 2001\n> > From: John Naylor <john.naylor@postgresql.org>\n> > Date: Fri, 12 Aug 2022 15:45:24 +0700\n> > Subject: [PATCH v201 2/9] Build booscanner.c standalone\n>\n> > -# bootscanner is compiled as part of bootparse\n> > -bootparse.o: bootscanner.c\n> > +# See notes 
in src/backend/parser/Makefile about the following two rules\n> > +bootparse.h: bootparse.c\n> > + touch $@\n> > +\n> > +bootparse.c: BISONFLAGS += -d\n> > +\n> > +# Force these dependencies to be known even without dependency info built:\n> > +bootparse.o bootscan.o: bootparse.h\n>\n> Wonder if we could / should wrap this is something common. It's somewhat\n> annoying to repeat this stuff everywhere.\n\nI haven't looked at the Meson effort recently, but if the build rule\nis less annoying there, I'm inclined to leave this as a wart until\nautotools are retired.\n\n> > diff --git a/src/test/isolation/specscanner.l b/src/test/isolation/specscanner.l\n> > index aa6e89268e..2dc292c21d 100644\n> > --- a/src/test/isolation/specscanner.l\n> > +++ b/src/test/isolation/specscanner.l\n> > @@ -1,4 +1,4 @@\n> > -%{\n> > +%top{\n> > /*-------------------------------------------------------------------------\n> > *\n> > * specscanner.l\n> > @@ -9,7 +9,14 @@\n> > *\n> > *-------------------------------------------------------------------------\n> > */\n> > +#include \"postgres_fe.h\"\n>\n> Miniscule nitpick: I think we typically leave an empty line between header and\n> first include.\n\nIn a small unscientific sample it seems like the opposite is true\nactually, but I'll at least try to be consistent within the patch set.\n\n> > diff --git a/contrib/cube/cubedata.h b/contrib/cube/cubedata.h\n> > index dbe7d4f742..0b373048b5 100644\n> > --- a/contrib/cube/cubedata.h\n> > +++ b/contrib/cube/cubedata.h\n> > @@ -67,3 +67,7 @@ extern void cube_scanner_finish(void);\n> >\n> > /* in cubeparse.y */\n> > extern int cube_yyparse(NDBOX **result);\n> > +\n> > +/* All grammar constructs return strings */\n> > +#define YYSTYPE char *\n>\n> Why does this need to be defined in a semi-public header? If we do this in\n> multiple files we'll end up with the danger of macro redefinition warnings.\n\nI tried to put all the Flex/Bison stuff in another *_internal header,\nbut that breaks the build. 
Putting just this one symbol in a header is\nsilly, but done that way for now. Maybe two copies of the symbol?\n\nAnother future cleanup: \"%define api.prefix {cube_yy}\" etc would cause\nit to be spelled CUBE_YYSTYPE (other macros too), sidestepping this\nproblem (requires Bison 2.6). IIUC, doing it our way has been\ndeprecated for 9 years.\n\n> > +extern int scanbuflen;\n>\n> The code around scanbuflen seems pretty darn grotty. Allocating enough memory\n> for the entire list by allocating the entire string size... I don't know\n> anything about contrib/cube, but isn't that in effect O(inputlen^2) memory?\n\nNeither do I.\n\n\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Aug 2022 17:41:43 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Hi,\n\nOn 2022-08-16 17:41:43 +0700, John Naylor wrote:\n> For v3, I addressed some comments and added .h files to the\n> headerscheck exceptions.\n\nThanks!\n\n\n> /*\n> * NB: include bootparse.h only AFTER including bootstrap.h, because bootstrap.h\n> * includes node definitions needed for YYSTYPE.\n> */\n> \n> Future cleanup: I see this in headerscheck:\n> \n> # We can't make these Bison output files compilable standalone\n> # without using \"%code require\", which old Bison versions lack.\n> # parser/gram.h will be included by parser/gramparse.h anyway.\n> \n> That directive has been supported in Bison since 2.4.2.\n\n2.4.2 is from 2010. 
So I think we could just start relying on it?\n\n\n> > > +/* functions shared between guc.c and guc-file.l */\n> > > [...]\n> > I think I prefer your suggestion of a guc_internal.h upthread.\n> \n> Started in 0002, but left open the headerscheck failure.\n> \n> Also, if such a thing is meant to be #include'd only by two generated\n> files, maybe it should just live in the directory where they live, and\n> not in the src/include dir?\n\nIt's not something we've done for the backend afaics, but I don't see a reason\nnot to start at some point.\n\n\n> > > From 7d4ecfcb3e91f3b45e94b9e64c7c40f1bbd22aa8 Mon Sep 17 00:00:00 2001\n> > > From: John Naylor <john.naylor@postgresql.org>\n> > > Date: Fri, 12 Aug 2022 15:45:24 +0700\n> > > Subject: [PATCH v201 2/9] Build booscanner.c standalone\n> >\n> > > -# bootscanner is compiled as part of bootparse\n> > > -bootparse.o: bootscanner.c\n> > > +# See notes in src/backend/parser/Makefile about the following two rules\n> > > +bootparse.h: bootparse.c\n> > > + touch $@\n> > > +\n> > > +bootparse.c: BISONFLAGS += -d\n> > > +\n> > > +# Force these dependencies to be known even without dependency info built:\n> > > +bootparse.o bootscan.o: bootparse.h\n> >\n> > Wonder if we could / should wrap this is something common. 
It's somewhat\n> > annoying to repeat this stuff everywhere.\n> \n> I haven't looked at the Meson effort recently, but if the build rule\n> is less annoying there, I'm inclined to leave this as a wart until\n> autotools are retired.\n\nThe only complicating thing in the rules there is the dependencies from one .c\nfile to another .c file.\n\n\n> > > diff --git a/contrib/cube/cubedata.h b/contrib/cube/cubedata.h\n> > > index dbe7d4f742..0b373048b5 100644\n> > > --- a/contrib/cube/cubedata.h\n> > > +++ b/contrib/cube/cubedata.h\n> > > @@ -67,3 +67,7 @@ extern void cube_scanner_finish(void);\n> > >\n> > > /* in cubeparse.y */\n> > > extern int cube_yyparse(NDBOX **result);\n> > > +\n> > > +/* All grammar constructs return strings */\n> > > +#define YYSTYPE char *\n> >\n> > Why does this need to be defined in a semi-public header? If we do this in\n> > multiple files we'll end up with the danger of macro redefinition warnings.\n> \n> I tried to put all the Flex/Bison stuff in another *_internal header,\n> but that breaks the build. Putting just this one symbol in a header is\n> silly, but done that way for now. Maybe two copies of the symbol?\n\nThe problem is that if it's in a header you can't include another header with\nsuch a define. That's fine if it's a .h that's just intended to be included by\na limited set of files, but for something like a header for a datatype that\nmight need to be included to e.g. define a PL transform or a new operator or\n... This would be solved by the %code requires thing, right?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Aug 2022 18:14:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-16 17:41:43 +0700, John Naylor wrote:\n>> That directive has been supported in Bison since 2.4.2.\n\n> 2.4.2 is from 2010. 
So I think we could just start relying on it?\n\nApple is still shipping 2.3. Is this worth enough to make Mac\nusers install a non-default Bison? I seriously doubt it.\n\nI don't say that there won't be a reason that justifies that\nat some point, but letting headerscheck test autogenerated\nfiles seems of only microscopic benefit :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Aug 2022 21:47:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On Wed, Aug 17, 2022 at 8:14 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > > > +/* functions shared between guc.c and guc-file.l */\n> > > > [...]\n> > > I think I prefer your suggestion of a guc_internal.h upthread.\n> >\n> > Started in 0002, but left open the headerscheck failure.\n> >\n> > Also, if such a thing is meant to be #include'd only by two generated\n> > files, maybe it should just live in the directory where they live, and\n> > not in the src/include dir?\n>\n> It's not something we've done for the backend afaics, but I don't see a reason\n> not to start at some point.\n\nBTW, I forgot to mention I did this for the json path parser, which\nmakes the makefile code simpler than what was there before\n550b9d26f80fa30. AFAICS, we could also do the same for gramparse.h,\nwhich is internal to parser.c. If I'm not mistaken, the only reason we\nsymlink gram.h to src/include/* is so that gramparse.h can include it.\nSo keeping gramparse.h in the backend could allow removing some gram.h\nmakefile incantations.\n\n> > > Why does this need to be defined in a semi-public header? If we do this in\n> > > multiple files we'll end up with the danger of macro redefinition warnings.\n> >\n> > I tried to put all the Flex/Bison stuff in another *_internal header,\n> > but that breaks the build. Putting just this one symbol in a header is\n> > silly, but done that way for now. 
Maybe two copies of the symbol?\n>\n> The problem is that if it's in a header you can't include another header with\n> such a define. That's fine if it's a .h that's just intended to be included by\n> a limited set of files, but for something like a header for a datatype that\n> might need to be included to e.g. define a PL transform or a new operator or\n> ... This would be solved by the %code requires thing, right?\n\nI believe it would.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Aug 2022 09:53:01 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On 11.08.22 02:20, Andres Freund wrote:\n> Attached is a new version of the meson patchset. Plenty changes:\n\nI have various bits of comments on this.\n\n- There are various references to \"pch\" (pre-compiled headers). Is\n there more discussion anywhere about this? I don't know what this\n would entail or whether there are any drawbacks to be aware of. The\n new *_pch.h files don't have any comments. Maybe this should be a\n separate patch later.\n\n- About relativize_shared_library_references: We have had several\n patches over the years for working around SIP stuff, and some of\n them did essentially this, but we decided not to go ahead with them.\n We could revisit that, but it should be a separate patch, not mixed\n in with this.\n\n- postgresql-extension.pc: Similarly, this ought to be a separate\n patch. If we want people to use this, we'll need it in the makefile\n build system anyway.\n\n- -DFRONTEND is used somewhat differently from the makefiles. For\n example, meson sets -DFRONTEND for pg_controldata, but the\n makefiles don't. Conversely, the makefiles set -DFRONTEND for\n ecpglib, but meson does not. This should be checked again to make\n sure it all matches up.\n\n- Option name spelling should be make consistent about underscores\n versus hyphens. 
Built-in meson options use underscores, so we\n should make the user-defined ones like that as well (some already\n do). (wal-blocksize krb-srvnam system-tzdata tap-tests bsd-auth)\n\n- I have found the variable name \"cdata\" for configuration_data() to\n be less than clear. I see some GNOME projects do it that way, is\n that where it's from? systemd uses \"conf\", maybe that's better.\n\n- In the top-level meson.build, the \"renaming\" of the Windows system\n name\n\n host_system = host_machine.system() == 'windows' ? 'win32' : \nhost_machine.system()\n build_system = build_machine.system() == 'windows' ? 'win32' : \nbuild_machine.system()\n\n seems unnecessary to me. Why not stick with the provided names?\n\n- The c99_test ought to be not needed if the c_std project option is\n used. Was there a problem with that?\n\n- Is there a way to split up the top-level meson.build somehow? Maybe\n just throw some stuff into included files? This might get out of\n hand at some point.\n\n- The PG_SYSROOT detection gives different results. On my system,\n configure produces\n \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk,\n meson produces\n \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk.\n src/template/darwin goes out of its way to get a version-specific\n result, so we need to carry that over somehow. (The difference does\n result in differences in the built binaries.)\n\n\nThen, some patches from me:\n\n0001-Change-shared-library-installation-naming-on-macOS.patch\n\nThis changes the makefiles to make the shared library file naming on\nmacOS match what meson produces. 
I don't know what the situation is\non other platforms.\n\n0002-meson-Fix-installation-name-of-libpgfeutils.patch\n\nThis was presumably an accidental mistake.\n\n0003-meson-Libraries-need-to-be-built-with-DSO_MAJOR_VERS.patch\n\nThis is needed to make NLS work for the libraries.\n\n0004-meson-Add-darwin_versions-argument-for-libraries.patch\n\nThis is to make the output match what Makefile.shlib produces.\n\n0005-meson-Fix-link-order-of-support-libraries.patch\n0006-meson-Make-link-order-of-external-libraries-match-ma.patch\n0007-WIP-meson-Make-link-order-of-object-files-match-make.patch\n\nI have analyzed the produced binaries between both build systems to\nmake sure they match. If we link the files and libraries in different\norders, that becomes difficult. So this fixes this up a bit. 0005 is\nneeded for correctness in general, I think. 0006 is mostly cosmetic.\nYou probably wanted to make the library order alphabetical in the\nmeson files, which I'd support, but then we should change the\nmakefiles to match. Similarly, 0007, which is clearly a bit messy at\nthe moment, but we should try to sort that out either in the old or\nthe new build files.\n\n\nAnd finally some comments on your patches:\n\nmeson: prereq: Don't add HAVE_LDAP_H HAVE_WINLDAP_H to pg_config.h.\n\nThis can go ahead.\n\nmeson: prereq: fix warning compat_informix/rnull.pgc with msvc\n\n- $float f = 3.71;\n+ $float f = (float) 3.71;\n\nThis could use float literals like\n\n+ $float f = 3.71f;", "msg_date": "Wed, 17 Aug 2022 15:50:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-17 15:50:23 +0200, Peter Eisentraut wrote:\n> - There are various references to \"pch\" (pre-compiled headers). Is\n> there more discussion anywhere about this? I don't know what this\n> would entail or whether there are any drawbacks to be aware of. 
The\n> new *_pch.h files don't have any comments. Maybe this should be a\n> separate patch later.\n\nIt's mainly to make windows builds a bit faster. I've no objection to\nseparating this out.\n\n\n> - About relativize_shared_library_references: We have had several\n> patches over the years for working around SIP stuff, and some of\n> them did essentially this, but we decided not to go ahead with them.\n> We could revisit that, but it should be a separate patch, not mixed\n> in with this.\n\nThe prior approaches all had issues because they didn't support relative\nreferences IIRC (and thus broke being able to relocate the installation),\nwhich this does.\n\nI just found it very annoying to work on macs without this. And there were at\nleast two \"bug\" reports of testers of the meson branch that were just due to\nSIP.\n\nI'm ok with splitting it out, but I also think it's a lower risk opportunity\nto test that this works.\n\n\n> - postgresql-extension.pc: Similarly, this ought to be a separate\n> patch. If we want people to use this, we'll need it in the makefile\n> build system anyway.\n\nMakes sense. I'd like to keep it in the same patch for a short while longer,\nto deduplicate some of the code, but then will split it out.\n\n\n> - -DFRONTEND is used somewhat differently from the makefiles. For\n> example, meson sets -DFRONTEND for pg_controldata, but the\n> makefiles don't. Conversely, the makefiles set -DFRONTEND for\n> ecpglib, but meson does not. This should be checked again to make\n> sure it all matches up.\n\nYes, should sync that up.\n\nFWIW, meson does add -DFRONTEND for ecpglib. There were a few places that did\nadd it twice, I'll push a cleanup of that in a bit.\n\n\n> - Option name spelling should be make consistent about underscores\n> versus hyphens. Built-in meson options use underscores, so we\n> should make the user-defined ones like that as well (some already\n> do). 
(wal-blocksize krb-srvnam system-tzdata tap-tests bsd-auth)\n\nNo objection.\n\n\n> - I have found the variable name \"cdata\" for configuration_data() to\n> be less than clear. I see some GNOME projects to it that way, is\n> that where it's from? systemd uses \"conf\", maybe that's better.\n\nI don't know where it's from - I don't think I ever looked at gnome\nbuildsystem stuff. It seems to be the obvious abbreviation for\nconfiguration_data()... I don't object to conf, but it's not a clear\nimprovement to me.\n\n\n> - In the top-level meson.build, the \"renaming\" of the Windows system\n> name\n>\n> host_system = host_machine.system() == 'windows' ? 'win32' :\n> host_machine.system()\n> build_system = build_machine.system() == 'windows' ? 'win32' :\n> build_machine.system()\n>\n> seems unnecessary to me. Why not stick with the provided names?\n\nBecause right now we also use it for things like choosing the \"source\" for\npg_config_os.h (i.e. include/port/{darwin,linux,win32,..}.h). And it seemed\neasier to just have one variable name for all of it.\n\n\n> - The c99_test ought to be not needed if the c_std project option is\n> used. Was there a problem with that?\n\nWe don't want to force -std=c99 when not necessary, I think. We sometimes use\nfeatures from newer (and from gnu) language versions after testing\navailability, and if we hardcode the version those will either fail or elicit\nwarnings.\n\n\n> - Is there a way to split up the top-level meson.build somehow? Maybe\n> just throw some stuff into included files? This might get out of\n> hand at some point.\n\nWe can put stuff into config/meson.build or such. But I don't think it's\nclearly warranted at this point.\n\n\n> - The PG_SYSROOT detection gives different results. 
On my system,\n> configure produces\n>\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk,\n> meson produces\n>\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk.\n> src/template/darwin goes out of its way to get a version-specific\n> result, so we need to carry that over somehow. (The difference does\n> result in differences in the built binaries.)\n\nTBH, I don't really understand the SYSROOT stuff all that well, never having\nused a mac in anger (well, only in anger, but ...).\n\nWhat do you think about extracting the relevant portion of src/template/darwin\ninto a dedicated shell script that gets called by both?\n\n\n> Then, some patches from me:\n>\n> 0001-Change-shared-library-installation-naming-on-macOS.patch\n>\n> This changes the makefiles to make the shared library file naming on\n> macOS match what meson produces. I don't know what the situation is\n> on other platforms.\n\nNo opinion on the matter. Seems best to apply separately if we want to?\n\n\n> 0002-meson-Fix-installation-name-of-libpgfeutils.patch\n>\n> This was presumably an accidental mistake.\n\nYes, merged.\n\n\n> 0003-meson-Libraries-need-to-be-built-with-DSO_MAJOR_VERS.patch\n>\n> This is needed to make NLS work for the libraries.\n\nOh, huh. Yes, merged.\n\n\n> 0004-meson-Add-darwin_versions-argument-for-libraries.patch\n>\n> This is to make the output match what Makefile.shlib produces.\n\n:/, merged. Would be good to clean up at some point.\n\n\n> 0005-meson-Fix-link-order-of-support-libraries.patch\n> 0006-meson-Make-link-order-of-external-libraries-match-ma.patch\n> 0007-WIP-meson-Make-link-order-of-object-files-match-make.patch\n>\n> I have analyzed the produced binaries between both build systems to\n> make sure they match. If we link the files and libraries in different\n> orders, that becomes difficult. So this fixes this up a bit. 
0005 is\n> needed for correctness in general, I think.\n\nMakes sense.\n\n\n> 0006 is mostly cosmetic. You probably wanted to make the library order\n> alphabetical in the meson files, which I'd support, but then we should\n> change the makefiles to match.\n\nTrying to match makefile order doesn't seem like a good plan, given that it's\neffectively random, and might change depending on dependencies of linked to\nlibraries etc.\n\nIsn't the use of AC_CHECK_LIB for at least lz4, zstd completely bogus? We get\nwhether they're available via pkg-config, but then completely ignore the\nlinker flag for the library name. The comment says:\n # We only care about -I, -D, and -L switches;\n # note that -llz4 will be added by AC_CHECK_LIB below.\nbut without any further explanation. This seems to be from 4d399a6fbeb.\n\n\nThe repetition of lz4, zstd in pg_rewind, pg_waldump and backend makes me\nwonder if we should put them in a xlogreader_deps or such. It's otherwise not\nobvious why pg_rewind, pg_waldump need lz4/zstd.\n\n\n> Similarly, 0007, which is clearly a bit\n> messy at the moment, but we should try to sort that out either in the old or\n> the new build files.\n\nI am against trying to maintain bug-for-bug compatibility on filename\nordering. But obviously ok with fixing the ordering to make sense on both\nsides.\n\nWhat was your decision point about when to adjust makefile ordering and when\nmeson ordering?\n\n\n\n> And finally some comments on your patches:\n\nAny comment on the pg_regress_ecpg commit? 
I'd like to get that out of the\nway, and it seems considerably cleaner than the hackery we do right now to\nmake VPATH builds work.\n\n\n> meson: prereq: Don't add HAVE_LDAP_H HAVE_WINLDAP_H to pg_config.h.\n>\n> This can go ahead.\n>\n> meson: prereq: fix warning compat_informix/rnull.pgc with msvc\n>\n> - $float f = 3.71;\n> + $float f = (float) 3.71;\n>\n> This could use float literals like\n>\n> + $float f = 3.71f;\n\nI tried that first, but it fails:\n../src/interfaces/ecpg/test/compat_informix/rnull.pgc:19: ERROR: trailing junk after numeric literal\n\nShould have noted that. I don't feel like fixing ecpg's parser etc...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Aug 2022 14:53:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On Wed, Aug 17, 2022 at 8:14 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-16 17:41:43 +0700, John Naylor wrote:\n> > For v3, I addressed some comments and added .h files to the\n> > headerscheck exceptions.\n>\n> Thanks!\n>\n>\n> > /*\n> > * NB: include bootparse.h only AFTER including bootstrap.h, because bootstrap.h\n> > * includes node definitions needed for YYSTYPE.\n> > */\n> >\n> > Future cleanup: I see this in headerscheck:\n> >\n> > # We can't make these Bison output files compilable standalone\n> > # without using \"%code require\", which old Bison versions lack.\n> > # parser/gram.h will be included by parser/gramparse.h anyway.\n> >\n> > That directive has been supported in Bison since 2.4.2.\n>\n> 2.4.2 is from 2010. 
So I think we could just start relying on it?\n>\n>\n> > > > +/* functions shared between guc.c and guc-file.l */\n> > > > [...]\n> > > I think I prefer your suggestion of a guc_internal.h upthread.\n> >\n> > Started in 0002, but left open the headerscheck failure.\n> >\n> > Also, if such a thing is meant to be #include'd only by two generated\n> > files, maybe it should just live in the directory where they live, and\n> > not in the src/include dir?\n>\n> It's not something we've done for the backend afaics, but I don't see a reason\n> not to start at some point.\n>\n>\n> > > > From 7d4ecfcb3e91f3b45e94b9e64c7c40f1bbd22aa8 Mon Sep 17 00:00:00 2001\n> > > > From: John Naylor <john.naylor@postgresql.org>\n> > > > Date: Fri, 12 Aug 2022 15:45:24 +0700\n> > > > Subject: [PATCH v201 2/9] Build booscanner.c standalone\n> > >\n> > > > -# bootscanner is compiled as part of bootparse\n> > > > -bootparse.o: bootscanner.c\n> > > > +# See notes in src/backend/parser/Makefile about the following two rules\n> > > > +bootparse.h: bootparse.c\n> > > > + touch $@\n> > > > +\n> > > > +bootparse.c: BISONFLAGS += -d\n> > > > +\n> > > > +# Force these dependencies to be known even without dependency info built:\n> > > > +bootparse.o bootscan.o: bootparse.h\n> > >\n> > > Wonder if we could / should wrap this is something common. 
It's somewhat\n> > > annoying to repeat this stuff everywhere.\n> >\n> > I haven't looked at the Meson effort recently, but if the build rule\n> > is less annoying there, I'm inclined to leave this as a wart until\n> > autotools are retired.\n>\n> The only complicating thing in the rules there is the dependencies from one .c\n> file to another .c file.\n>\n>\n> > > > diff --git a/contrib/cube/cubedata.h b/contrib/cube/cubedata.h\n> > > > index dbe7d4f742..0b373048b5 100644\n> > > > --- a/contrib/cube/cubedata.h\n> > > > +++ b/contrib/cube/cubedata.h\n> > > > @@ -67,3 +67,7 @@ extern void cube_scanner_finish(void);\n> > > >\n> > > > /* in cubeparse.y */\n> > > > extern int cube_yyparse(NDBOX **result);\n> > > > +\n> > > > +/* All grammar constructs return strings */\n> > > > +#define YYSTYPE char *\n> > >\n> > > Why does this need to be defined in a semi-public header? If we do this in\n> > > multiple files we'll end up with the danger of macro redefinition warnings.\n\nFor v4, I #defined YYSTYPE\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Aug 2022 14:40:36 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "> > > > > index dbe7d4f742..0b373048b5 100644\n> > > > > --- a/contrib/cube/cubedata.h\n> > > > > +++ b/contrib/cube/cubedata.h\n> > > > > @@ -67,3 +67,7 @@ extern void cube_scanner_finish(void);\n> > > > >\n> > > > > /* in cubeparse.y */\n> > > > > extern int cube_yyparse(NDBOX **result);\n> > > > > +\n> > > > > +/* All grammar constructs return strings */\n> > > > > +#define YYSTYPE char *\n> > > >\n> > > > Why does this need to be defined in a semi-public header? If we do this in\n> > > > multiple files we'll end up with the danger of macro redefinition warnings.\n>\n> For v4, I #defined YYSTYPE\n\nSorry for the misfire. 
Continuing on, I #defined YYSTYPE in cubescan.l\nbefore #including cubeparse.h.\n\nI also added scanbuflen to the %parse-param to prevent resorting to a\nglobal variable. The rest of the patches are unchanged.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Aug 2022 14:43:28 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "I wrote\n> [v4]\n\nThis piece is a leftover from the last version, and forgot to remove\nit, will fix:\n\ndiff --git a/contrib/cube/cubeparse.y b/contrib/cube/cubeparse.y\nindex 7577c4515c..e3b750b695 100644\n--- a/contrib/cube/cubeparse.y\n+++ b/contrib/cube/cubeparse.y\n@@ -7,6 +7,7 @@\n #include \"postgres.h\"\n\n #include \"cubedata.h\"\n+#include \"cube_internal.h\"\n #include \"utils/float.h\"\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Aug 2022 15:00:03 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On 17.08.22 23:53, Andres Freund wrote:\n> Any comment on the pg_regress_ecpg commit? I'd like to get that out of the\n> way, and it seems considerably cleaner than the hackery we do right now to\n> make VPATH builds work.\n\nThat one looks like a very good improvement.\n\n\n", "msg_date": "Sat, 20 Aug 2022 09:38:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-20 09:38:48 +0200, Peter Eisentraut wrote:\n> On 17.08.22 23:53, Andres Freund wrote:\n> > Any comment on the pg_regress_ecpg commit? 
I'd like to get that out of the\n> > way, and it seems considerably cleaner than the hackery we do right now to\n> > make VPATH builds work.\n> \n> That one looks like a very good improvement.\n\nThanks for checking! Pushed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Aug 2022 11:01:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-09 08:37:16 -0400, Andrew Dunstan wrote:\n> On 2022-08-09 Tu 03:10, Andres Freund wrote:\n> > Hi,\n> >\n> > I was looking at re-unifying gendef2.pl that the meson patchset had introduced\n> > for temporary ease during hacking with gendef.pl. Testing that I noticed that\n> > either I and my machine is very confused, or gendef.pl's check whether it can\n> > skip work is bogus.\n> >\n> > I noticed that, despite having code to avoid rerunning when the input files\n> > are older than the .def file, it always runs.\n> >\n> > # if the def file exists and is newer than all input object files, skip\n> > # its creation\n> > if (-f $deffile\n> > && (-M $deffile > max(map { -M } <$ARGV[0]/*.obj>)))\n> > {\n> > print \"Not re-generating $defname.DEF, file already exists.\\n\";\n> > exit(0);\n> > }\n> >\n> > My understanding of -M is that it returns the time delta between the file\n> > modification and the start of the script. Which makes the use of max() bogus,\n> > since it'll return the oldest time any input has been modified, not the\n> > newest. And the condition needs to be inverted, because we want to skip the\n> > work if $deffile is *newer*, right?\n> >\n> > Am I missing something here?\n> \n> \n> No, you're right, this is bogus. Reversing the test and using min\n> instead of max is the obvious fix.\n> \n> \n> > I'm tempted to just remove the not-regenerating logic - gendef.pl shouldn't\n> > run if there's nothing to do, and it'll e.g. 
not notice if there's an\n> > additional input that wasn't there during the last invocation of gendef.pl.\n> >\n> \n> Maybe, need to think about that more.\n\nAny thoughts?\n\nI'd like to commit 0003 in\nhttps://postgr.es/m/20220811002012.ju3rrz47i2e5tdha%40awork3.anarazel.de\nfairly soon.\n\nI did fix the bogus \"die\" message I added during some debugging since posting\nthat...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Aug 2022 17:42:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson" }, { "msg_contents": "I have looked at your branch at 0545eec895:\n\n258f6dc0a7 Don't hardcode tmp_check/ as test directory for tap tests\n8ecc33cf04 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n\nI think these patches are split up a bit incorrectly. If you apply\nthe first patch by itself, then the output appears in tab_comp_dir/\ndirectly under the source directory. And then the second patch moves\nit to tmp_check/tap_comp_dir/. If there is an intent to apply these\npatches separately somehow, this should be cleaned up.\n\nI haven't checked the second patch in detail yet, but it looks like\nthe thought was that the first patch is about ready to go.\n\n834a40e609 meson: prereq: Extend gendef.pl in preparation for meson\n\nI'm not qualified to check that in detail, but it looks reasonable\nenough to me.\n\nSee attached patch (0001) for a perlcritic fix.\n\n97a0b096e8 meson: prereq: Add src/tools/gen_export.pl\n\nThis produces leading whitespace in the output files that at least on\ndarwin wasn't there before. See attached patch (0002). This should\nbe checked again on other platforms as well.\n\nOther than that this looks good. Attached is a small cosmetic patch (0003).\n\n40e363b263 meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n\nSince I last looked, this has been turned into a meson option. Which\nis probably the best solution. 
But then we should probably make this\na configure option as well. Otherwise, it could get a bit confusing.\nFor example, I just unset PG_TEST_EXTRA in my environment to test\nsomething with the meson build, but I was unaware that meson captures\nthe value at setup time, so my unsetting had no effect.\n\nIn any case, maybe adjust the regular expressions to check for word\nboundaries, to maintain the original \"whitespace-separated\"\nspecification. For example,\n\n elsif ($ENV{PG_TEST_EXTRA} !~ /\\bssl\\b/)\n\ne0a8387660 solaris: Use versioning scripts instead of -Bsymbolic\n\nThis looks like a good idea. The documentation clearly states that\n-Bsymbolic shouldn't be used, at least not in the way we have been\ndoing. Might as well go ahead with this and give it a whirl on the\nbuild farm.\n\n0545eec895 meson: Add docs\n\nWe should think more about how to arrange the documentation. We\nprobably don't want to copy-and-paste all the introductory and\nrequirements information. I think we can make this initially much\nbriefer, like the Windows installation chapter. For example, instead\nof documenting each setup option again, just mention which ones exist\nand then point (link) to the configure chapter for details.\n\n\nI spent a bit of time with the test suites. I think there is a\nproblem in that selecting a test suite directly, like\n\n meson test -C _build --suite recovery\n\ndoesn't update the tmp_install. So if this is the first thing you run\nafter a build, everything will fail. 
Also, if you run this later, the\ntmp_install doesn't get updated, so you're not testing up-to-date\ncode.", "msg_date": "Wed, 24 Aug 2022 11:39:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-24 11:39:06 +0200, Peter Eisentraut wrote:\n> I have looked at your branch at 0545eec895:\n> \n> 258f6dc0a7 Don't hardcode tmp_check/ as test directory for tap tests\n> 8ecc33cf04 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n> \n> I think these patches are split up a bit incorrectly. If you apply\n> the first patch by itself, then the output appears in tab_comp_dir/\n> directly under the source directory. And then the second patch moves\n> it to tmp_check/tap_comp_dir/. If there is an intent to apply these\n> patches separately somehow, this should be cleaned up.\n\nHow is that happening with that version of the patch? The test puts\ntap_comp_dir under TESTDIR, and TESTDIR is $(CURDIR)/tmp_check. There was an\nearlier version of the patch that was split one more time that did have that\nproblem, but I don't quite see how that version has it?\n\n\n> I haven't checked the second patch in detail yet, but it looks like\n> the thought was that the first patch is about ready to go.\n> \n> 834a40e609 meson: prereq: Extend gendef.pl in preparation for meson\n> \n> I'm not qualified to check that in detail, but it looks reasonable\n> enough to me.\n> \n> See attached patch (0001) for a perlcritic fix.\n\nThanks.\n\n\n> 97a0b096e8 meson: prereq: Add src/tools/gen_export.pl\n> \n> This produces leading whitespace in the output files that at least on\n> darwin wasn't there before. See attached patch (0002). This should\n> be checked again on other platforms as well.\n\nHm, to me the indentation as is makes more sense, but ...\n\n> Other than that this looks good. 
Attached is a small cosmetic patch (0003).\n\nI wonder if we should rewrite this in python - I chose perl because I thought\nwe could share it, but as you pointed out, that's not possible, because we\ndon't want to depend on perl during the autoconf build from a tarball.\n\n\n> e0a8387660 solaris: Use versioning scripts instead of -Bsymbolic\n> \n> This looks like a good idea. The documentation clearly states that\n> -Bsymbolic shouldn't be used, at least not in the way we have been\n> doing. Might as well go ahead with this and give it a whirl on the\n> build farm.\n\nCool. I looked at this because I was confused about getting warnings with\nautoconf that I wasn't getting with meson.\n\n\n> 0545eec895 meson: Add docs\n> \n> We should think more about how to arrange the documentation. We\n> probably don't want to copy-and-paste all the introductory and\n> requirements information. I think we can make this initially much\n> briefer, like the Windows installation chapter. For example, instead\n> of documenting each setup option again, just mention which ones exist\n> and then point (link) to the configure chapter for details.\n\nThe current docs, including the windows ones, are already hard to follow. I\nthink we should take some care to not make the meson bits even more\nconfusing. Cross referencing left and right seems problematic from that angle.\n\n\n> I spent a bit of time with the test suites. I think there is a\n> problem in that selecting a test suite directly, like\n> \n> meson test -C _build --suite recovery\n> \n> doesn't update the tmp_install. So if this is the first thing you run\n> after a build, everything will fail. Also, if you run this later, the\n> tmp_install doesn't get updated, so you're not testing up-to-date\n> code.\n\nAt the moment creation of the tmp_install is its own test suite. I don't know\nif that's the best way, or what the best way is, but that explains that\nfact. 
You can do the above without the issue by specifying\n--suite setup --suite recovery.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Aug 2022 08:30:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-17 14:53:17 -0700, Andres Freund wrote:\n> > - In the top-level meson.build, the \"renaming\" of the Windows system\n> > name\n> >\n> > host_system = host_machine.system() == 'windows' ? 'win32' :\n> > host_machine.system()\n> > build_system = build_machine.system() == 'windows' ? 'win32' :\n> > build_machine.system()\n> >\n> > seems unnecessary to me. Why not stick with the provided names?\n> \n> Because right now we also use it for things like choosing the \"source\" for\n> pg_config_os.h (i.e. include/port/{darwin,linux,win32,..}.h). And it seemed\n> easier to just have one variable name for all of it.\n\nI am now changing this so that there's an additional 'portname' variable for\nthis purpose. Otherwise the meson names are used.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Aug 2022 10:42:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 24, 2022 at 8:30 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-08-24 11:39:06 +0200, Peter Eisentraut wrote:\n> > I have looked at your branch at 0545eec895:\n> >\n> > 258f6dc0a7 Don't hardcode tmp_check/ as test directory for tap tests\n> > 8ecc33cf04 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n> >\n> > I think these patches are split up a bit incorrectly. If you apply\n> > the first patch by itself, then the output appears in tab_comp_dir/\n> > directly under the source directory. And then the second patch moves\n> > it to tmp_check/tap_comp_dir/. 
If there is an intent to apply these\n> > patches separately somehow, this should be cleaned up.\n>\n> How is that happening with that version of the patch? The test puts\n> tap_comp_dir under TESTDIR, and TESTDIR is $(CURDIR)/tmp_check. There was\n> an\n> earlier version of the patch that was split one more time that did have\n> that\n> problem, but I don't quite see how that version has it?\n>\n>\n> > I haven't checked the second patch in detail yet, but it looks like\n> > the thought was that the first patch is about ready to go.\n> >\n> > 834a40e609 meson: prereq: Extend gendef.pl in preparation for meson\n> >\n> > I'm not qualified to check that in detail, but it looks reasonable\n> > enough to me.\n> >\n> > See attached patch (0001) for a perlcritic fix.\n>\n> Thanks.\n>\n>\n> > 97a0b096e8 meson: prereq: Add src/tools/gen_export.pl\n> >\n> > This produces leading whitespace in the output files that at least on\n> > darwin wasn't there before. See attached patch (0002). This should\n> > be checked again on other platforms as well.\n>\n> Hm, to me the indentation as is makes more sense, but ...\n>\n> > Other than that this looks good. Attached is a small cosmetic patch\n> (0003).\n>\n> I wonder if we should rewrite this in python - I chose perl because I\n> thought\n> we could share it, but as you pointed out, that's not possible, because we\n> don't want to depend on perl during the autoconf build from a tarball.\n>\n>\n> > e0a8387660 solaris: Use versioning scripts instead of -Bsymbolic\n> >\n> > This looks like a good idea. The documentation clearly states that\n> > -Bsymbolic shouldn't be used, at least not in the way we have been\n> > doing. Might as well go ahead with this and give it a whirl on the\n> > build farm.\n>\n> Cool. I looked at this because I was confused about getting warnings with\n> autoconf that I wasn't getting with meson.\n>\n>\n> > 0545eec895 meson: Add docs\n> >\n> > We should think more about how to arrange the documentation. 
We\n> > probably don't want to copy-and-paste all the introductory and\n> > requirements information. I think we can make this initially much\n> > briefer, like the Windows installation chapter. For example, instead\n> > of documenting each setup option again, just mention which ones exist\n> > and then point (link) to the configure chapter for details.\n>\n> The current docs, including the windows ones, are already hard to follow. I\n> think we should take some care to not make the meson bits even more\n> confusing. Cross referencing left and right seems problematic from that\n> angle.\n>\n\nOn Configure options:\n\nTo add to the above, very few sections are an exact copy paste. The\narguments and default behaviors of quite a few configure options are\ndifferent. The change in default behavior and arguments is primarily due to\n\"auto\" features which get enabled if the dependencies are found. Whereas\nwith make, we have explicit --enable or --disable options which don't take\nany arguments.\n\nAlso, a few instructions / commands which worked with make will need to be\ndone a bit differently due to environment variables etc. which also had to\nbe communicated.\n\nCommunicating these differences and nuances with cross referencing would\nmake it confusing as most of this information is in the explanation\nparagraph.\n\nOn requirements:\n\nThey are also a bit different eg. readline is not a \"required\" thing\nanymore, perl, flex, bison are required etc. Also, these are bullet points\nwith information inlined and not separate sections, so cross-referencing\nhere also would be hard.\n\nRegards,\nSamay\n\n\n> > I spent a bit of time with the test suites. I think there is a\n> > problem in that selecting a test suite directly, like\n> >\n> > meson test -C _build --suite recovery\n> >\n> > doesn't update the tmp_install. So if this is the first thing you run\n> > after a build, everything will fail. 
Also, if you run this later, the\n> > tmp_install doesn't get updated, so you're not testing up-to-date\n> > code.\n>\n> At the moment creation of the tmp_install is its own test suite. I don't\n> know\n> if that's the best way, or what the best way is, but that explains that\n> fact. You can do the above without the issue by specifying\n> --suite setup --suite recovery.\n>\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Wed, 24 Aug 2022 11:21:24 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On 24.08.22 17:30, Andres Freund wrote:\n>> 258f6dc0a7 Don't hardcode tmp_check/ as test directory for tap tests\n>> 8ecc33cf04 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n>>\n>> I think these patches are split up a bit incorrectly. If you apply\n>> the first patch by itself, then the output appears in tab_comp_dir/\n>> directly under the source directory. 
And then the second patch moves\n>> it to tmp_check/tap_comp_dir/. If there is an intent to apply these\n>> patches separately somehow, this should be cleaned up.\n> \n> How is that happening with that version of the patch? The test puts\n> tap_comp_dir under TESTDIR, and TESTDIR is $(CURDIR)/tmp_check. There was an\n> earlier version of the patch that was split one more time that did have that\n> problem, but I don't quite see how that version has it?\n\nOk, I see now how this works. It's a bit weird since the meaning of \nTESTDIR is changed. I'm not sure if this could create cross-branch \nconfusion.\n\n>> 97a0b096e8 meson: prereq: Add src/tools/gen_export.pl\n>>\n>> This produces leading whitespace in the output files that at least on\n>> darwin wasn't there before. See attached patch (0002). This should\n>> be checked again on other platforms as well.\n> \n> Hm, to me the indentation as is makes more sense, but ...\n\nMaybe for the 'gnu' format, but on darwin (and others) it's just a flat \nlist, so indenting it is pointless.\n\n> I wonder if we should rewrite this in python - I chose perl because I thought\n> we could share it, but as you pointed out, that's not possible, because we\n> don't want to depend on perl during the autoconf build from a tarball.\n\nGiven that the code is already written, I wouldn't do it. Introducing \nPython into the toolchain would require considerations of minimum \nversions, finding the right binaries, formatting, style checking, etc., \nwhich would probably be a distraction right now. I also think that this \nscript, whose purpose is to read an input file line by line and print it \nback out slightly differently, is not going to be done better in Python.\n\n\n", "msg_date": "Thu, 25 Aug 2022 15:29:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nAttached is v12 of the meson patchset. 
Lots of improvements:\n\n- initial set of docs for building with meson, contributed by Samay\n\n- PGXS, .pc generation for extensions, making macos tests work when SIP is\n enabled, precompiled headers support are all now separate commits\n\n as suggested by Peter Eisentraut\n\n- aix, solaris builds work now (both on gcc only)\n\n- most of the operating system specific considerations are now collected in\n one place\n\n There's still the odd check around, but it's mostly for stuff where it seems\n to make sense to leave decentralized (e.g. do we need to invoke the dtrace\n binary on darwin, using wldap32 on windows, ...)\n\n- split out the existing PG_SYSROOT selection logic from darwin's template\n into src/tools/darwin_sysroot\n\n Peter E. rightfully complained that the logic I had so far wasn't\n equivalent, and it's finicky enough that it doesn't seem like a good idea\n to have two copies. Not sure about the location, perhaps it should be in\n config/ instead?\n\n- loads of cleanups, rebasing, etc\n\n\nThe known things that I think need to be fixed before we could consider test\ndriving this on a larger scale are:\n\n- the various global variables assembled in the toplevel meson.build need\n comments explaining them (e.g. 
cflags, cflags_sl, ...)\n\n- choice of semaphore API needs to be cleaned up, that should be easy now, but\n I thought that I needed to get a new version out first\n\n- there's a few configure tests denoted with FIXMEs, most importantly I\n haven't caught up to the PGAC_LDAP_SAFE\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 27 Aug 2022 11:04:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-08-27 11:04:47 -0700, Andres Freund wrote:\n> - choice of semaphore API needs to be cleaned up, that should be easy now, but\n> I thought that I needed to get a new version out first\n\nEvery time I look at the existing selection code I get confused, which is part\nof why I haven't tackled this yet.\n\nIf I understand the current logic right, we check for sem_open, sem_init if\nand only if PREFERRED_SEMAPHORES is set to UNNAMED_POSIX or NAMED_POSIX (no\ntemplate defaults to named). Which also means that we don't link in rt or\npthread if USE_*_POSIX_SEMAPHORES is set directly in the template, as darwin\ndoes.\n\nI read the configure.ac code combined with the templates as resulting in the\nfollowing precedence order:\n\n1) windows uses windows semaphores\n\n2) freebsd, linux use unnamed posix semaphores if available\n\n3) macOS < 10.2 uses named semaphores, without linking in rt/pthread\n4) macOS >= 10.2 uses sysv semaphores\n\n5) sysv semaphores are used\n\nCorrect?\n\n\nGiven that Mac OS 10.2 was released in 2002, I think we can safely consider\nthat unsupported - even prairiedog was a few years younger... :). 
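If that reading is correct, the precedence could be restated as a small sketch to check the logic against (hypothetical code — the function name and argument shapes are mine, not anything from configure.ac or the templates):

```python
# Hedged sketch of the semaphore-API precedence described above.
# Rules: 1) windows -> win32; 2) linux/freebsd -> unnamed posix if available;
# 3) darwin < 10.2 -> named posix; 4) darwin >= 10.2 -> sysv; 5) else sysv.
def choose_semaphores(platform, have_unnamed_posix=False, macos_version=(12, 0)):
    if platform == 'windows':                                    # rule 1
        return 'win32'
    if platform in ('linux', 'freebsd') and have_unnamed_posix:  # rule 2
        return 'unnamed_posix'
    if platform == 'darwin' and macos_version < (10, 2):         # rule 3
        return 'named_posix'
    return 'sysv'                                                # rules 4 and 5
```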
Given the\ndownsides of named semaphores and that we apparently haven't used that code in\nyears, I wonder if we should remove it?\n\nHowever, there has been a thread about defaulting to named semas on openbsd,\nbut then Tom benchmarked out that that's not going to fly for performance\nreasons ([1]).\n\n\nFWIW, I did notice that netbsd does have working unnamed semaphores. I don't\nknow how long ago they were added, but they apparently didn't work quite right\nin 2018 [1]. No meaningful performance change in the main regression tests,\nI'll run a concurrent check world comparison in the background...\n\n\nShould the choice be configurable with meson? I'm inclined to say not for now.\n\nRegards,\n\nAndres\n\n[1] https://postgr.es/m/3010886.1634950831%40sss.pgh.pa.us\n[2] http://www.polarhome.com/service/man/?qf=sem_init&tf=2&of=NetBSD&sf=3\n\n\n", "msg_date": "Sat, 27 Aug 2022 18:02:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-08-27 18:02:40 -0700, Andres Freund wrote:\n> FWIW, I did notice that netbsd does have working unnamed semaphores. I don't\n> know how long ago they were added, but they apparently didn't work quite right\n> in 2018 [1]. No meaningful performance change in the main regression tests,\n> I'll run a concurrent check world comparison in the background...\n\nUnnamed ones are substantially worse unfortunately. On an 8 core netbsd 9.3\nVM:\n\nsysv:\nreal 4m39.777s\nuser 7m35.534s\nsys 7m33.831s\n\nunnamed posix\nreal 5m44.035s\nuser 7m23.326s\nsys 11m58.946s\n\nThe difference in system time is even more substantial than the wall clock\ntime. And repeated runs were even worse.\n\nI also had the ecpg tests hang in one run with unnamed posix semas, until I\nkilled 'alloc'. 
Didn't reproduce since though.\n\n\nSo clearly we shouldn't go and start auto-detecting unnamed posix sema\nsupport.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 27 Aug 2022 18:39:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Sun, Aug 28, 2022 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-27 18:02:40 -0700, Andres Freund wrote:\n> > FWIW, I did notice that netbsd does have working unnamed semaphores. I don't\n> > know how long ago they were added, but they apparently didn't work quite right\n> > in 2018 [1]. No meaningful performance chance in the main regression tests,\n> > I'll run a concurrent check world comparison in the background...\n>\n> Unnamed ones are substantially worse unfortunately. On an 8 core netbsd 9.3\n> VM:\n\nI could update my experimental patch to add home made semaphores using\natomics and futexes. It needs NetBSD 10, though. Also works on\nOpenBSD and macOS.\n\n\n", "msg_date": "Sun, 28 Aug 2022 15:38:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "with_temp_install is repeated twice in prove_check:\n\n> Subject: [PATCH v12 02/15] Split TESTDIR into TESTLOGDIR and TESTDATADIR \n> \n> - TESTDIR='$(CURDIR)/tmp_check' $(with_temp_install)\n> PGPORT='6$(DEF_PGPORT)' \\\n> + TESTLOGDIR='$(CURDIR)/tmp_check/log' $(with_temp_install) \\\n> + TESTDATADIR='$(CURDIR)/tmp_check' $(with_temp_install) \\\n> + PGPORT='6$(DEF_PGPORT)' \\\n\nBefore running an individual test like \"meson test recovery/017_shm\",\nit's currently necessary to first manually run \"meson test tmp_install\".\nIs it possible to make that happen automatically ?\n\nYou're running tap tests via a python script. 
There's no problem with\nthat, but it's different from what's done by the existing makefiles.\nI was able to remove the python indirection - maybe that's better to\ntalk about on the CI thread? That moves some setup for TAP tests\n(TESTDIR, PATH, cd) from Makefile into the existing perl, which means\nless duplication.\n\n\n", "msg_date": "Sun, 28 Aug 2022 12:08:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-08-28 12:08:07 -0500, Justin Pryzby wrote:\n> with_temp_install is repeated twice in prove_check:\n>\n> > Subject: [PATCH v12 02/15] Split TESTDIR into TESTLOGDIR and TESTDATADIR\n> >\n> > - TESTDIR='$(CURDIR)/tmp_check' $(with_temp_install)\n> > PGPORT='6$(DEF_PGPORT)' \\\n> > + TESTLOGDIR='$(CURDIR)/tmp_check/log' $(with_temp_install) \\\n> > + TESTDATADIR='$(CURDIR)/tmp_check' $(with_temp_install) \\\n> > + PGPORT='6$(DEF_PGPORT)' \\\n\nOops, must have screwed up resolving a conflict...\n\n\n> Before running an individual test like \"meson test recovery/017_shm\",\n> it's currently necessary to first manually run \"meson test tmp_install\".\n> Is it possible to make that happen automatically ?\n\nNot in a trivial way that I found. We don't want to reinstall all the time -\nit's *quite* expensive on older machines. We could have a lock file in the\ntest setup so that the first test run installs it, with the others getting\nstalled, but that has pretty obvious disadvantages too (like the test timing\nbeing distorted).\n\nMedium term I think we should consider simply not needing the temp install.\n\nFWIW, you can do the above as 'meson test tmp_install recovery/017_shm'.\n\n\n> You're running tap tests via a python script. 
That moves some setup for TAP tests\n> (TESTDIR, PATH, cd) from Makefile into the existing perl, which means\n> less duplication.\n\nI'm doubtful it's worth removing. You'd need to move removing the files from\nthe last run into both pg_regress and the tap test infrastructure. And I do\nthink it's nice to afterwards have markers which tests failed, so we can only\ncollect their logs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 28 Aug 2022 13:37:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "I found that the perl test modules are not installed. See attached \npatch to correct this.\n\nTo the patches:\n\n4e15ee0e24 Don't hardcode tmp_check/ as test directory for tap tests\n1a3169bc3f Split TESTDIR into TESTLOGDIR and TESTDATADIR\n\nIt's a bit weird that the first patch changes the meaning of TESTDIR\nand the second patch removes it. Maybe these patches should be\nsquashed together?\n\n\n96d1d0a0cf meson: prereq: Extend gendef.pl in preparation for meson\n\nok\n\n\n581721fa99 meson: prereq: Add src/tools/gen_export.pl\n\nStill wondering about the whitespace changes I reported recently, but\nthat can also be fine-tuned later.\n\n\n4245cc888e meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n\nok\n\n\n3afe803e0f meson: prereq: Fix warning compat_informix/rnull.pgc with msvc\n\nok\n\n\nae7733f46c meson: prereq: Move darwin sysroot determination into \nseparate file\n\nok\n\n\na1fb97a81b meson: Add meson based buildsystem\n\nI'm not a fan of all this business to protect the two build systems\nfrom each other. I don't like the build process touching a file under\nversion control every time. How necessary is this? What happens\notherwise?\n\nconversion_helpers.txt: should probably be removed now.\n\ndoc/src/sgml/resolv.xsl: I don't understand what this is doing. 
Maybe\nat least add a comment in the file.\n\nsrc/common/unicode/meson.build: The comment at the top of the file\nshould be moved next to the files it is describing (similar to how it\nis in the makefile). I don't see CLDR_VERSION set anywhere. Is that\npart implemented?\n\nsrc/port/win32ver.rc.in: This is redundant with src/port/win32ver.rc.\n(Note that the latter is also used as an input file for text\nsubstitution. So having another file named *.in next to it would be\nsuper confusing.)\n\nsrc/tools/find_meson: Could use a brief comment what it does.\n\nsrc/tools/pgflex: Could use a not-brief comment about what it does,\nwhy it's needed. Also a comment where it's used. Also run this\nthrough pycodestyle.\n\nsrc/tools/rcgen: This is connected with the comment on win32ver.rc.in\nabove. We already have this equivalent code in\nsrc/makefiles/Makefile.win32. Let's figure out a way to share this\ncode. (It could be a Perl script, which is already required on\nWindows.) Also pycodestyle.\n\nsrc/tools/testwrap: also documentation/comments/pycodestyle\n\n\ncd193eb3e8 meson: ci: Build both with meson and as before\n\nI haven't reviewed this one in detail. Maybe add a summary in the\ncommit message, like these are the new jobs, these are the changes to\nexisting jobs. It looks like there is more in there than just adding\na few meson jobs.\n\n\nIf the above are addressed, I think this will be just about at the\npoint where the above patches can be committed.\n\nEverything past these patches I'm mentally postponing right now.", "msg_date": "Wed, 31 Aug 2022 10:28:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 24.08.22 17:30, Andres Freund wrote:\n>> 0545eec895 meson: Add docs\n>>\n>> We should think more about how to arrange the documentation. 
We\n>> probably don't want to copy-and-paste all the introductory and\n>> requirements information. I think we can make this initially much\n>> briefer, like the Windows installation chapter. For example, instead\n>> of documenting each setup option again, just mention which ones exist\n>> and then point (link) to the configure chapter for details.\n> The current docs, including the windows ones, are already hard to follow. I\n> think we should take some care to not make the meson bits even more\n> confusing. Cross referencing left and right seems problematic from that angle.\n\nIf you look at the current structure of the installation chapter\n\n17.1. Short Version\n17.2. Requirements\n17.3. Getting the Source\n17.4. Installation Procedure\n17.5. Post-Installation Setup\n17.6. Supported Platforms\n17.7. Platform-Specific Notes\n\nonly 17.1, small parts of 17.2, and 17.4 should differ between make and \nmeson. There is no conceivable reason why the meson installation \nchapter should have a different \"Getting the Source\" section. 
I think we could do one \nchapter, like\n\n- Short Version\n- Requirements\n- Getting the Source\n- Installation Procedure\n- Installation Procedure using Meson\n- Post-Installation Setup\n- Supported Platforms\n- Platform-Specific Notes\n\nAlternatively, if people prefer two separate chapters, let's think about \nsome source-code level techniques to share the common contents.\n\n\n", "msg_date": "Wed, 31 Aug 2022 10:42:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "Hi,\n\nOn 2022-08-31 10:28:05 +0200, Peter Eisentraut wrote:\n> I found that the perl test modules are not installed. See attached patch to\n> correct this.\n>\n> To the patches:\n>\n> 4e15ee0e24 Don't hardcode tmp_check/ as test directory for tap tests\n> 1a3169bc3f Split TESTDIR into TESTLOGDIR and TESTDATADIR\n>\n> It's a bit weird that the first patch changes the meaning of TESTDIR\n> and the second patch removes it. Maybe these patches should be\n> squashed together?\n\nHm, to me they seem topically separate enough, but I don't have a strong\nopinion on it.\n\n\n> 581721fa99 meson: prereq: Add src/tools/gen_export.pl\n>\n> Still wondering about the whitespace changes I reported recently, but\n> that can also be fine-tuned later.\n\nI'll look into it in a bit.\n\n\n> a1fb97a81b meson: Add meson based buildsystem\n>\n> I'm not a fan of all this business to protect the two build systems\n> from each other. I don't like the build process touching a file under\n> version control every time. How necessary is this? What happens\n> otherwise?\n\nI added it after just about everyone trying meson hit problems due to\nconflicts between (past) in-tree configure builds and meson, due to files left\nin tree (picking up the wrong .h files, cannot entirely be fixed with -I\narguments, due to the \"\" includes). 
By adding the relevant check to the meson\nconfigure phase, and by triggering meson re-configure whenever an in-tree\nconfigure build is done, these issues can be detected.\n\nIt'd of course be nicer to avoid the potential for such conflicts, but that\nappears to be a huge chunk of work, see the bison/flex subthread.\n\nSo I don't really see an alternative.\n\n\n> conversion_helpers.txt: should probably be removed now.\n\nDone.\n\n\n> doc/src/sgml/resolv.xsl: I don't understand what this is doing. Maybe\n> at least add a comment in the file.\n\nIt's only used for building epubs. Perhaps I should extract that into a\nseparate patch as well? The relevant section is:\n\n> #\n> # epub\n> #\n>\n> # This was previously implemented using dbtoepub - but that doesn't seem to\n> # support running in build != source directory (i.e. VPATH builds already\n> # weren't supported).\n> if pandoc.found() and xsltproc.found()\n> # XXX: Wasn't able to make pandoc successfully resolve entities\n> # XXX: Perhaps we should just make all targets use this, to avoid repeatedly\n> # building whole thing? It's comparatively fast though.\n> postgres_full_xml = custom_target('postgres-full.xml',\n> input: ['resolv.xsl', 'postgres.sgml'],\n> output: ['postgres-full.xml'],\n> depends: doc_generated + [postgres_sgml_valid],\n> command: [xsltproc, '--path', '@OUTDIR@/', xsltproc_flags,\n> '-o', '@OUTPUT@', '@INPUT@'],\n> build_by_default: false,\n> )\n\nAs noted, I couldn't make pandoc resolve our entities, so I used resolv.xsl to\nresolve them before calling pandoc.\n\nI'll rename it to resolve-entities.xsl and add a comment.\n\n\n> src/common/unicode/meson.build: The comment at the top of the file\n> should be moved next to the files it is describing (similar to how it\n> is in the makefile).\n\nDone.\n\n\n> I don't see CLDR_VERSION set anywhere. Is that part implemented?\n\nNo, I didn't implement the generation parts of contrib/unaccent. 
I started\ntackling the src/common/unicode bits after John Naylor asked whether that\ncould be done, but considered that good enough...\n\n\n> src/port/win32ver.rc.in: This is redundant with src/port/win32ver.rc.\n> (Note that the latter is also used as an input file for text\n> substitution. So having another file named *.in next to it would be\n> super confusing.)\n\nYea, this stuff isn't great. I think the better solution, both for meson and\nfor configure, would be to move to do all the substitution to the C\npreprocessor.\n\n\n> src/tools/find_meson: Could use a brief comment what it does.\n\nAdded.\n\n\n> src/tools/pgflex: Could use a not-brief comment about what it does,\n> why it's needed. Also a comment where it's used. Also run this\n> through pycodestyle.\n\nWorking on that.\n\n\n> cd193eb3e8 meson: ci: Build both with meson and as before\n>\n> I haven't reviewed this one in detail. Maybe add a summary in the\n> commit message, like these are the new jobs, these are the changes to\n> existing jobs. It looks like there is more in there than just adding\n> a few meson jobs.\n\nI don't think we want to commit this as-is. It contains CI for a lot of\nplatforms - that's very useful for working on meson, but too much for\nin-tree. 
I guess I'll split it into two, one patch for converting a reasonable\nsubset of the current CI tasks to meson and another to add (back) the current\narray of tested platforms.\n\n\n> If the above are addressed, I think this will be just about at the\n> point where the above patches can be committed.\n\nWoo!\n\n\n> Everything past these patches I'm mentally postponing right now.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Aug 2022 11:11:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 31, 2022 at 1:42 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 24.08.22 17:30, Andres Freund wrote:\n> >> 0545eec895 meson: Add docs\n> >>\n> >> We should think more about how to arrange the documentation. We\n> >> probably don't want to copy-and-paste all the introductory and\n> >> requirements information. I think we can make this initially much\n> >> briefer, like the Windows installation chapter. For example, instead\n> >> of documenting each setup option again, just mention which ones exist\n> >> and then point (link) to the configure chapter for details.\n> > The current docs, including the windows ones, are already hard to\n> follow. I\n> > think we should take some care to not make the meson bits even more\n> > confusing. Cross referencing left and right seems problematic from that\n> angle.\n>\n> If you look at the current structure of the installation chapter\n>\n> 17.1. Short Version\n> 17.2. Requirements\n> 17.3. Getting the Source\n> 17.4. Installation Procedure\n> 17.5. Post-Installation Setup\n> 17.6. Supported Platforms\n> 17.7. Platform-Specific Notes\n>\n> only 17.1, small parts of 12.2, and 17.4 should differ between make and\n> meson. There is no conceivable reason why the meson installation\n> chapter should have a different \"Getting the Source\" section. 
And some\n> of the post-installation and platform-specific information doesn't\n> appear at all on the meson chapter.\n>\n> I think we can try to be a bit more ingenious in how we weave this\n> together in the best way. What I really wouldn't want is two separate\n> chapters that duplicate the entire process. I think we could do one\n> chapter, like\n>\n> - Short Version\n> - Requirements\n> - Getting the Source\n> - Installation Procedure\n> - Installation Procedure using Meson\n> - Post-Installation Setup\n> - Supported Platforms\n> - Platform-Specific Notes\n>\n\nI spent some more time thinking about the structure of the docs. The\ngetting the source, supported platforms, post installation setup and\nplatform specific notes sections are going to be mostly common. We do\nexpect some differences in supported platforms and platform specific notes\nbut I think they should be manageable without confusing readers.\n\nThe others (short version, requirements, and installation procedure) are\npretty different and I feel combining them will end up confusing readers or\nrequire creating autoconf / make and meson versions of many things at many\ndifferent places. Also, if we keep it separate, it'll be easier to remove\nmake / autoconf specific sections if (when?) we want to do that.\n\nSo, I was thinking of the following structure:\n- Supported Platforms\n- Getting the Source\n- Building with make and autoconf\n -- Short version\n -- Requirements\n -- Installation Procedure and its subsections\n- Building with Meson\n -- Short version\n -- Requirements\n -- Installation Procedure and its subsections\n- Post-installation Setup\n- Platform specific notes\n\nIt has the disadvantage of short version moving to a bit later in the\nchapter but I think it's a good structure to reduce duplication and also\nkeep sections which are different separate. Thoughts on this approach? 
If\nthis looks good, I can submit a patch rearranging things this way.\n\nAs a follow up patch, we could also try to fit the Windows part into this\nmodel. We could add a Building with visual C++ or Microsoft windows SDK\nsection. It doesn't have a short version but follows the remaining template\nof requirements and installation procedure subsections (Building, Cleaning\nand Installing and Running Regression tests) well.\n\nRegards,\nSamay\n\n>\n> Alternatively, if people prefer two separate chapters, let's think about\n> some source-code level techniques to share the common contents.\n>", "msg_date": "Thu, 1 Sep 2022 16:12:14 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" },
{ "msg_contents": "On Thu, Sep 1, 2022 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> [v12]\n\n+# Build a small utility static lib for the parser. This makes it easier to not\n+# depend on gram.h already having been generated for most of the other code\n+# (which depends on generated headers having been generated). The generation\n+# of the parser is slow...\n\nIt's not obvious whether this is intended to be a Meson-only\noptimization or a workaround for something awkward to specify.\n\n+ # FIXME: -output option is only available in perl 5.9.3 - but that's\n+ # probably a fine minimum requirement?\n\nSince we've retired some buildfarm animals recently, it seems the\noldest perl there is 5.14? ... which came out in 2011, so it seems\nlater on we could just set that as the minimum.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Sep 2022 14:17:26 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" },
{ "msg_contents": "Hi,\n\nOn 2022-09-02 14:17:26 +0700, John Naylor wrote:\n> On Thu, Sep 1, 2022 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > [v12]\n> \n> +# Build a small utility static lib for the parser. 
This makes it easier to not\n> +# depend on gram.h already having been generated for most of the other code\n> +# (which depends on generated headers having been generated). The generation\n> +# of the parser is slow...\n> \n> It's not obvious whether this is intended to be a Meson-only\n> optimization or a workaround for something awkward to specify.\n\nIt is an optimization. The parser generation is by far the slowest part of a\nbuild. If other files can only be compiled once gram.h is generated, there's a\nlong initial period where little can happen. So instead of having all .c files\nhave a dependency on gram.h having been generated, the above makes only\nscan.c, gram.c compilation depend on gram.h. It only matters for the first\ncompilation, because such dependencies are added as order-only dependencies,\nsupplanted by more precise compiler generated dependencies after.\n\nSee the attached dep and nodep.png. That's ui.perfetto.dev displaying the\n.ninja_log file, showing the time for building the backend on my\nworkstation. The difference is probably apparent.\n\nIt's still pretty annoying that so much of the build is initially idle,\nwaiting for genbki.pl to finish.\n\nPart of that is due to some ugly dependencies of src/common on backend headers\nthat IMO probably shouldn't exist (e.g. src/common/relpath.c includes\ncatalog/pg_tablespace_d.h). Looks like it'd not be hard to get at least the\n_shlib version of src/common and libpq build without waiting for that. But for\nall the backend code I don't really see a way, so it'd be nice to make genbki\nfaster at some point.\n\n\n> + # FIXME: -output option is only available in perl 5.9.3 - but that's\n> + # probably a fine minimum requirement?\n> \n> Since we've retired some buildfarm animals recently, it seems the\n> oldest perl there is 5.14? ... 
which came out in 2011, so it seems\n> later on we could just set that as the minimum.\n\nAt the moment we document 5.8.3 as our minimum, supposedly based on some\nbuildfarm animal - but that's probably outdated. Perhaps time to start that\ndiscussion? Or maybe it's fine to just have the meson stuff have that\ndependency for now. Seems exceedingly unlikely anybody would care.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 2 Sep 2022 09:35:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-09-02 09:35:15 -0700, Andres Freund wrote:\n> Part of that is due to some ugly dependencies of src/common on backend headers\n> that IMO probably shouldn't exist (e.g. src/common/relpath.c includes\n> catalog/pg_tablespace_d.h). Looks like it'd not be hard to get at least the\n> _shlib version of src/common and libpq build without waiting for that. But for\n> all the backend code I don't really see a way, so it'd be nice to make genbki\n> faster at some point.\n\nThis reminded me of a question I had. 
Requires a bit of an intro...\n\n\nBecause ninja's build specification ends up as a fairly clean DAG, and because\nit also collects compiler generated dependencies as a DAG, it's possible to\ncheck whether the build specification contains sufficient dependencies.\n\nBefore ninja 1.11 there was\nhttps://github.com/llvm/llvm-project/blob/main/llvm/utils/check_ninja_deps.py\nand ninja 1.11 has \"ninja -t missingdeps\" built in.\n\nIntentionally removing some of the dependencies to show the output:\n\n$ ninja -t missingdeps\nMissing dep: src/interfaces/ecpg/preproc/ecpg.p/.._ecpglib_typename.c.o uses src/include/catalog/pg_type_d.h (generated by CUSTOM_COMMAND)\n...\nMissing dep: src/bin/scripts/reindexdb.p/reindexdb.c.o uses src/include/catalog/pg_class_d.h (generated by CUSTOM_COMMAND)\nMissing dep: contrib/oid2name/oid2name.p/oid2name.c.o uses src/include/catalog/pg_class_d.h (generated by CUSTOM_COMMAND)\nMissing dep: contrib/vacuumlo/vacuumlo.p/vacuumlo.c.o uses src/include/catalog/pg_class_d.h (generated by CUSTOM_COMMAND)\nMissing dep: src/test/modules/libpq_pipeline/libpq_pipeline.p/libpq_pipeline.c.o uses src/include/catalog/pg_type_d.h (generated by CUSTOM_COMMAND)\nProcessed 2299 nodes.\nError: There are 62 missing dependency paths.\n62 targets had depfile dependencies on 25 distinct generated inputs (from 1 rules) without a non-depfile dep path to the generator.\nThere might be build flakiness if any of the targets listed above are built alone, or not late enough, in a clean output directory.\n\nObviously that can only work after building, as the compiler generated\ndependencies are needed.\n\nI find this exceedingly helpful, because it supplies a very high guarantee\nthat the build specification will not fail on a different machine due to\ndifferent performance characteristics.\n\nThe question:\n\nIs it worth running ninja -t missingdeps as a test? 
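As a concrete sketch of what such a test wrapper could look like — the helper name and version handling below are made up for illustration, this is not actual tree code:

```shell
# Hypothetical sketch: only run "ninja -t missingdeps" when the installed
# ninja is >= 1.11, since older versions don't have the tool built in.
ninja_has_missingdeps() {
    # $1 is a ninja version string such as "1.10.2" or "1.11.1"
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 11 ]; }
}

for v in 1.10.2 1.11.1 2.0.0; do
    if ninja_has_missingdeps "$v"; then
        echo "$v: would run 'ninja -t missingdeps'"
    else
        echo "$v: too old, would skip the check"
    fi
done
```

In a real wrapper the version would come from "$(ninja --version)" and the success branch would run "ninja -C <builddir> -t missingdeps", failing the test on a nonzero exit code.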
At the time we run tests\nwe'll obviously have built and thus collected \"real\" dependencies, so we would\nhave the necessary information to determine whether dependencies are missing.\nI think it'd be fine to do so only for ninja >= 1.11, rather than falling back\nto the llvm python implementation, which is much slower (0.068s vs\n3.760s). And also because it's not as obvious how to include the python script.\n\nAlternatively, we could just document that ninja -t missingdeps is worth\nrunning. Perhaps at the top of the toplevel build.meson file?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 2 Sep 2022 09:57:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-02 14:17:26 +0700, John Naylor wrote:\n>> + # FIXME: -output option is only available in perl 5.9.3 - but that's\n>> + # probably a fine minimum requirement?\n>> \n>> Since we've retired some buildfarm animals recently, it seems the\n>> oldest perl there is 5.14? ... which came out in 2011, so it seems\n>> later on we could just set that as the minimum.\n\n> At the moment we document 5.8.3 as our minimum, supposedly based on some\n> buildfarm animal - but that's probably outdated.\n\nYeah, definitely. prairiedog was the only animal running such an old\nversion, and it's gone. I don't think we have anything testing ancient\nbison or flex anymore, either. 
I'm a fan of actually testing whatever\nwe claim as the minimum supported version of any tool, so there's some\nwork to be done here, on buildfarm config or docs or both.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Sep 2022 13:11:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Thu, Sep 1, 2022 at 4:12 PM samay sharma <smilingsamay@gmail.com> wrote:\n\n> Hi,\n>\n> On Wed, Aug 31, 2022 at 1:42 AM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>> On 24.08.22 17:30, Andres Freund wrote:\n>> >> 0545eec895 meson: Add docs\n>> >>\n>> >> We should think more about how to arrange the documentation. We\n>> >> probably don't want to copy-and-paste all the introductory and\n>> >> requirements information. I think we can make this initially much\n>> >> briefer, like the Windows installation chapter. For example, instead\n>> >> of documenting each setup option again, just mention which ones exist\n>> >> and then point (link) to the configure chapter for details.\n>> > The current docs, including the windows ones, are already hard to\n>> follow. I\n>> > think we should take some care to not make the meson bits even more\n>> > confusing. Cross referencing left and right seems problematic from that\n>> angle.\n>>\n>> If you look at the current structure of the installation chapter\n>>\n>> 17.1. Short Version\n>> 17.2. Requirements\n>> 17.3. Getting the Source\n>> 17.4. Installation Procedure\n>> 17.5. Post-Installation Setup\n>> 17.6. Supported Platforms\n>> 17.7. Platform-Specific Notes\n>>\n>> only 17.1, small parts of 12.2, and 17.4 should differ between make and\n>> meson. There is no conceivable reason why the meson installation\n>> chapter should have a different \"Getting the Source\" section. 
And some\n>> of the post-installation and platform-specific information doesn't\n>> appear at all on the meson chapter.\n>>\n>> I think we can try to be a bit more ingenious in how we weave this\n>> together in the best way. What I really wouldn't want is two separate\n>> chapters that duplicate the entire process. I think we could do one\n>> chapter, like\n>>\n>> - Short Version\n>> - Requirements\n>> - Getting the Source\n>> - Installation Procedure\n>> - Installation Procedure using Meson\n>> - Post-Installation Setup\n>> - Supported Platforms\n>> - Platform-Specific Notes\n>>\n>\n> I spent some more time thinking about the structure of the docs. The\n> getting the source, supported platforms, post installation setup and\n> platform specific notes sections are going to be mostly common. We do\n> expect some differences in supported platforms and platform specific notes\n> but I think they should be manageable without confusing readers.\n>\n> The others; short version, requirements, and installation procedure are\n> pretty different and I feel combining them will end up confusing readers or\n> require creating autoconf / make and meson versions of many things at many\n> different places. Also, if we keep it separate, it'll be easier to remove\n> make / autoconf specific sections if (when?) we want to do that.\n>\n> So, I was thinking of the following structure:\n> - Supported Platforms\n> - Getting the Source\n> - Building with make and autoconf\n> -- Short version\n> -- Requirements\n> -- Installation Procedure and it's subsections\n> - Building with Meson\n> -- Short version\n> -- Requirements\n> -- Installation Procedure and it's subsections\n> - Post-installation Setup\n> - Platform specific notes\n>\n> It has the disadvantage of short version moving to a bit later in the\n> chapter but I think it's a good structure to reduce duplication and also\n> keep sections which are different separate. Thoughts on this approach? 
If\n> this looks good, I can submit a patch rearranging things this way.\n>\n\nAnother thing I'd like input on. A common question I've heard from people\nwho've tried out the docs is How do we do the equivalent of X in make with\nmeson. As meson will be new for a bunch of people who are fluent with make,\nI won't be surprised if this is a common ask. To address that, I was\nplanning to add a page to specify the key things one needs to keep in mind\nwhile \"migrating\" from make to meson and having a translation table of\ncommonly used commands.\n\nI was planning to add it in the meson section, but if we go ahead with the\nstructure proposed above, it doesn't fit it into one as cleanly. Maybe, it\nstill goes in the meson section? Thoughts?\n\nRegards,\nSamay\n\n\n>\n> As a follow up patch, we could also try to fit the Windows part into this\n> model. We could add a Building with visual C++ or Microsoft windows SDK\n> section. It doesn't have a short version but follows the remaining template\n> of requirements and installation procedure subsections (Building, Cleaning\n> and Installing and Running Regression tests) well.\n>\n> Regards,\n> Samay\n>\n>>\n>> Alternatively, if people prefer two separate chapters, let's think about\n>> some source-code level techniques to share the common contents.\n>>\n>", "msg_date": "Fri, 2 Sep 2022 10:16:53 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" },
{ "msg_contents": "Hi,\n\nSplit off from the meson thread at https://postgr.es/m/990067.1662138678%40sss.pgh.pa.us\n\nOn 2022-09-02 13:11:18 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-09-02 14:17:26 +0700, John Naylor wrote:\n> >> + # FIXME: -output option is only available in perl 5.9.3 - but that's\n> >> # probably a fine minimum requirement?\n> >>\n> >> Since we've retired some buildfarm animals recently, it seems the\n> >> oldest perl there is 5.14? ... which came out in 2011, so it seems\n> >> later on we could just set that as the minimum.\n>\n> > At the moment we document 5.8.3 as our minimum, supposedly based on some\n> > buildfarm animal - but that's probably outdated.\n>\n> Yeah, definitely. prairiedog was the only animal running such an old\n> version, and it's gone. I don't think we have anything testing ancient\n> bison or flex anymore, either. I'm a fan of actually testing whatever\n> we claim as the minimum supported version of any tool, so there's some\n> work to be done here, on buildfarm config or docs or both.\n\n5.8.3 is from 2004-Jan-14, that's impressive :). I don't see any benefit in\nsetting up a buildfarm animal running that old a version.\n\nFor the meson stuff it'd suffice to set 5.9.3. as the minimum version for\nplperl (or I could try to work around it). 
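For illustration, the check itself is just a numeric comparison of dotted version strings; a throwaway sketch (not actual configure/meson code, the helper name is made up):

```shell
# Hypothetical sketch of a "perl >= minimum version?" style check, done as
# a purely numeric comparison of up to three dotted components.
version_ge() {
    # succeeds when $1 >= $2
    [ "$(printf '%s\n%s\n' "$1" "$2" \
        | sort -t. -k1,1n -k2,2n -k3,3n \
        | head -n1)" = "$2" ]
}

for candidate in 5.8.3 5.9.3 5.14.2; do
    if version_ge "$candidate" 5.9.3; then
        echo "perl $candidate satisfies a 5.9.3 minimum"
    else
        echo "perl $candidate is below a 5.9.3 minimum"
    fi
done
```

A real check would get the candidate version from the interpreter itself, e.g. something along the lines of perl -MConfig -e 'print $Config{version}'.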
However, supporting a perl version\nfrom 2006-Jan-28 doesn't strike me as particularly useful either.\n\n\nRelevant somewhat recent discussion / work:\nhttps://postgr.es/m/87y278s6iq.fsf%40wibble.ilmari.org\nhttps://www.postgresql.org/message-id/E1mYY6Z-0006OL-QN%40gemulon.postgresql.org\n\n\nI looked at which buildfarm animals currently use 5.14 (mentioned by John),\nand it's frogfish, snapper and skate. The latter two do build with plperl.\n\n\nI started a query on the buildfarm machine to collect the perl versions, but\nit's just awfully slow...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 2 Sep 2022 11:15:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "minimum perl version" }, { "msg_contents": "Hi John,\n\nAre you planning to press ahead with these?\n\n\n> Subject: [PATCH v4 01/11] Preparatory refactoring for compiling guc-file.c\n> standalone\n> Subject: [PATCH v4 02/11] Move private declarations shared between guc.c and\n> guc-file.l to new header\n> Subject: [PATCH v4 03/11] Build guc-file.c standalone\n\n01-03 are a bit more complicated, but still look not far off. 
There's a FIXME\nabout failing headercheck.\n\n\n> Subject: [PATCH v4 04/11] Build bootscanner.c standalone\n> Subject: [PATCH v4 05/11] Build repl_scanner.c standalone\n> Subject: [PATCH v4 06/11] Build syncrep_scanner.c standalone\n> Subject: [PATCH v4 07/11] Build specscanner.c standalone\n> Subject: [PATCH v4 08/11] Build exprscan.c standalone\n\nLGTM\n\n\n> Subject: [PATCH v4 09/11] Build cubescan.c standalone\n> \n> Pass scanbuflen as a parameter to yyparse rather than\n> resorting to a global variable.\n\nNice.\n\n\n> Subject: [PATCH v4 10/11] Build segscan.c standalone\n> Subject: [PATCH v4 11/11] Build jsonpath_scan.c standalone\n\nLGTM.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 2 Sep 2022 11:29:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I started a query on the buildfarm machine to collect the perl versions, but\n> it's just awfully slow...\n\nThis is from March, but it's probably still accurate enough.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 02 Sep 2022 14:31:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "Hi,\n\nOn 2022-09-02 14:31:57 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I started a query on the buildfarm machine to collect the perl versions, but\n> > it's just awfully slow...\n>\n> This is from March, but it's probably still accurate enough.\n\nThanks.\n\nMine did just finish. 
Over the last month there were the following perl\nversion on HEAD:\n\n perl_version | last_report | array_agg\n--------------+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n {5,8,3} | 2022-08-04 09:38:04 | {prairiedog}\n {5,14,2} | 2022-09-02 16:40:12 | {skate,lapwing,snapper,frogfish}\n {5,16,3} | 2022-09-02 16:52:17 | {prion,dhole,buri,parula,mantid,chub,clam,snakefly,rhinoceros,quokka}\n {5,18,2} | 2022-09-02 06:42:13 | {shelduck}\n {5,20,2} | 2022-09-02 16:15:34 | {curculio,chipmunk,topminnow}\n {5,22,1} | 2022-09-02 16:02:11 | {spurfowl,cuon,batfish}\n {5,24,1} | 2022-09-02 17:00:17 | {urocryon,grison,mussurana,butterflyfish,ayu,chimaera,tadarida}\n {5,24,3} | 2022-09-02 09:04:12 | {fairywren}\n {5,26,1} | 2022-09-02 18:40:18 | {elasmobranch,avocet,bichir,blossomcrown,trilobite,cavefish,cotinga,demoiselle,perch,hippopotamus,jay}\n {5,26,2} | 2022-09-02 09:02:03 | {vulpes,wobbegong}\n {5,26,3} | 2022-09-02 12:04:01 | {jacana}\n {5,28,0} | 2022-09-02 17:00:17 | {myna}\n {5,28,1} | 2022-09-02 16:02:01 | {sungazer,hornet,hoverfly,ibisbill,kittiwake,mandrill,tern}\n {5,28,2} | 2022-09-01 23:39:33 | {bonito}\n {5,30,0} | 2022-09-02 14:16:16 | {branta,moonjelly,urutau,seawasp}\n {5,30,1} | 2022-09-02 02:59:06 | {wrasse}\n {5,30,2} | 2022-09-02 16:05:24 | {massasauga}\n {5,30,3} | 2022-09-02 17:00:06 | {longfin,sifaka,gombessa}\n {5,32,0} | 2022-09-02 16:00:05 | {margay}\n {5,32,1} | 2022-09-02 17:49:36 | {lorikeet,alabio,guaibasaurus,eelpout,tayra,peripatus,plover,gull,mereswine,warbler,morepork,mule,loach,boomslang,florican,copperhead,conchuela}\n {5,34,0} | 2022-09-02 16:30:04 | 
{culicidae,komodoensis,grassquit,mamba,francolin,mylodon,olingo,flaviventris,petalura,phycodurus,piculet,pogona,dragonet,devario,desmoxytes,rorqual,serinus,kestrel,crake,skink,chickadee,cardinalfish,tamandua,xenodermus,thorntail,calliphoridae,idiacanthus}\n {5,34,1} | 2022-09-02 16:05:33 | {sidewinder,malleefowl,pollock}\n {5,36,0} | 2022-09-02 03:01:08 | {dangomushi,caiman}\n(23 rows)\n\n5.14 would be a trivial lift as far as the buildfarm is concerned. The Debian\n7 animals couldn't trivially be updated to a newer perl. It's from 2013-05-04,\nso I wouldn't feel bad about dropping support for it - but probably wouldn't\npersonally bother just for this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 2 Sep 2022 12:03:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> 5.14 would be a trivial lift as far as the buildfarm is concerned.\n\nYeah, that seems like a reasonable new minimum for Perl. I might\nsee about setting up an animal running 5.14.0, just so we can say\n\"5.14\" in the docs without fine print.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Sep 2022 15:11:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "> On 2 Sep 2022, at 21:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> 5.14 would be a trivial lift as far as the buildfarm is concerned.\n> \n> Yeah, that seems like a reasonable new minimum for Perl. 
I might\n> see about setting up an animal running 5.14.0, just so we can say\n> \"5.14\" in the docs without fine print.\n\nMaybe perlbrew can be used, as per the instructions in src/test/perl/README?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 2 Sep 2022 21:17:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On Sat, Sep 3, 2022 at 1:29 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi John,\n>\n> Are you planning to press ahead with these?\n\nI was waiting for feedback on the latest set, so tomorrow I'll see\nabout the FIXME and remove the leftover bogus include. I was thinking\nof applying the guc-file patches separately and then squashing the\nrest since they're *mostly* mechanical:\n\n> > Subject: [PATCH v4 01/11] Preparatory refactoring for compiling guc-file.c\n> > standalone\n> > Subject: [PATCH v4 02/11] Move private declarations shared between guc.c and\n> > guc-file.l to new header\n> > Subject: [PATCH v4 03/11] Build guc-file.c standalone\n>\n> 01-03 are a bit more complicated, but still look not far off. There's a FIXME\n> about failing headercheck.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 3 Sep 2022 10:03:57 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On Sat, Sep 3, 2022 at 1:29 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > Subject: [PATCH v4 01/11] Preparatory refactoring for compiling guc-file.c\n> > standalone\n> > Subject: [PATCH v4 02/11] Move private declarations shared between guc.c and\n> > guc-file.l to new header\n> > Subject: [PATCH v4 03/11] Build guc-file.c standalone\n>\n> 01-03 are a bit more complicated, but still look not far off. 
There's a FIXME\n> about failing headercheck.\n\nFixed by adding utils/guc.h to the new internal header, which now\nlives in the same directory as guc.c and guc-file.l, similar to how I\ndid json path in the last patch. Also removed the bogus include from\nv4 to . Pushed 01 and 02 separately, then squashed and pushed the\nrest.\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Sep 2022 12:16:10 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On Fri, Sep 2, 2022 at 11:35 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-02 14:17:26 +0700, John Naylor wrote:\n> > On Thu, Sep 1, 2022 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > > [v12]\n> >\n> > +# Build a small utility static lib for the parser. This makes it easier to not\n> > +# depend on gram.h already having been generated for most of the other code\n> > +# (which depends on generated headers having been generated). The generation\n> > +# of the parser is slow...\n> >\n> > It's not obvious whether this is intended to be a Meson-only\n> > optimization or a workaround for something awkward to specify.\n>\n> It is an optimization. The parser generation is by far the slowest part of a\n> build. If other files can only be compiled once gram.h is generated, there's a\n> long initial period where little can happen. So instead of having all .c files\n> have a dependency on gram.h having been generated, the above makes only\n> scan.c, gram.c compilation depend on gram.h. 
It only matters for the first\n> compilation, because such dependencies are added as order-only dependencies,\n> supplanted by more precise compiler generated dependencies after.\n\nOkay, I think the comment could include some of this info for clarity.\n\n> It's still pretty annoying that so much of the build is initially idle,\n> waiting for genbki.pl to finish.\n>\n> Part of that is due to some ugly dependencies of src/common on backend headers\n> that IMO probably shouldn't exist (e.g. src/common/relpath.c includes\n> catalog/pg_tablespace_d.h).\n\nTechnically, *_d.h headers are not backend, that's why it's safe to\ninclude them anywhere. relpath.c in its current form has to know the\ntablespace OIDs, which I guess is what you think is ugly. (I agree\nit's not great)\n\n> Looks like it'd not be hard to get at least the\n> _shlib version of src/common and libpq build without waiting for that. But for\n> all the backend code I don't really see a way, so it'd be nice to make genbki\n> faster at some point.\n\nThe attached gets me a ~15% reduction in clock time by having\nCatalog.pm parse the .dat files in one sweep, when we don't care about\nformatting, i.e. most of the time:\n\nmaster:\nUser time (seconds): 0.48\nMaximum resident set size (kbytes): 36112\n\npatch:\nUser time (seconds): 0.41\nMaximum resident set size (kbytes): 35808\n\nThat's pretty simple -- I think going beyond that would require some\nperl profiling.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 4 Sep 2022 13:12:52 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-09-04 12:16:10 +0700, John Naylor wrote:\n> Pushed 01 and 02 separately, then squashed and pushed the rest.\n\nThanks a lot! 
It does look a good bit cleaner to me now.\n\nI think, as a followup improvement, we should move gramparse.h to\nsrc/backend/parser, and stop installing gram.h, gramparse.h. gramparse.h\nalready had this note:\n\n * NOTE: this file is only meant to be included in the core parsing files,\n * i.e., parser.c, gram.y, and scan.l.\n * Definitions that are needed outside the core parser should be in parser.h.\n\nWhat do you think?\n\n\nI looked for projects including gramparse.h ([1]), and found libpg-query, pgpool,\nslony1 and orafce:\n- libpg-query, pgpool are partial copies of our code so will catch up when\n they sync up,\n- slony1's [2] is a configure check, one that has long seemed outdated, because it's\n grepping for standard_conforming strings, which was moved out in 6566e37e027\n in 2009.\n- As far as I can tell orafce's include in sqlscan.l is vestigial, it compiles\n without it. And the include in parse_keywords.c is just required because it\n needs to include parser/scanner.h.\n\nGreetings,\n\nAndres Freund\n\n[1] https://codesearch.debian.net/search?q=gramparse.h&literal=1&perpkg=1\n[2] https://git.postgresql.org/gitweb/?p=slony1-engine.git;a=blob;f=config/acx_libpq.m4;h=7653357c0a731e36ec637df5ab378832d9279c19;hb=HEAD#l530\n\n\n", "msg_date": "Sun, 4 Sep 2022 11:17:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Hi,\n\nOn 2022-09-04 13:12:52 +0700, John Naylor wrote:\n> On Fri, Sep 2, 2022 at 11:35 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-09-02 14:17:26 +0700, John Naylor wrote:\n> > > On Thu, Sep 1, 2022 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > [v12]\n> > >\n> > > +# Build a small utility static lib for the parser. 
This makes it easier to not\n> > > +# depend on gram.h already having been generated for most of the other code\n> > > +# (which depends on generated headers having been generated). The generation\n> > > +# of the parser is slow...\n> > >\n> > > It's not obvious whether this is intended to be a Meson-only\n> > > optimization or a workaround for something awkward to specify.\n> >\n> > It is an optimization. The parser generation is by far the slowest part of a\n> > build. If other files can only be compiled once gram.h is generated, there's a\n> > long initial period where little can happen. So instead of having all .c files\n> > have a dependency on gram.h having been generated, the above makes only\n> > scan.c, gram.c compilation depend on gram.h. It only matters for the first\n> > compilation, because such dependencies are added as order-only dependencies,\n> > supplanted by more precise compiler generated dependencies after.\n> \n> Okay, I think the comment could include some of this info for clarity.\n\nWorking on that.\n\n\n> > It's still pretty annoying that so much of the build is initially idle,\n> > waiting for genbki.pl to finish.\n> >\n> > Part of that is due to some ugly dependencies of src/common on backend headers\n> > that IMO probably shouldn't exist (e.g. src/common/relpath.c includes\n> > catalog/pg_tablespace_d.h).\n> \n> Technically, *_d.h headers are not backend, that's why it's safe to\n> include them anywhere. relpath.c in its current form has to know the\n> tablespace OIDs, which I guess is what you think is ugly. (I agree\n> it's not great)\n\nYea, I'm not saying it's unsafe in a produces-wrong-results way, just that it\nseems architecturally dubious / circular.\n\n\n> > Looks like it'd not be hard to get at least the\n> > _shlib version of src/common and libpq build without waiting for that. 
But for\n> > all the backend code I don't really see a way, so it'd be nice to make genbki\n> > faster at some point.\n> \n> The attached gets me a ~15% reduction in clock time by having\n> Catalog.pm parse the .dat files in one sweep, when we don't care about\n> formatting, i.e. most of the time:\n\nCool. Seems worthwhile.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 4 Sep 2022 14:10:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Mon, Sep 5, 2022 at 4:11 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-04 13:12:52 +0700, John Naylor wrote:\n> > On Fri, Sep 2, 2022 at 11:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2022-09-02 14:17:26 +0700, John Naylor wrote:\n> > > > On Thu, Sep 1, 2022 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > [v12]\n> > > >\n> > > > +# Build a small utility static lib for the parser. This makes it easier to not\n> > > > +# depend on gram.h already having been generated for most of the other code\n> > > > +# (which depends on generated headers having been generated). The generation\n> > > > +# of the parser is slow...\n> > > >\n> > > > It's not obvious whether this is intended to be a Meson-only\n> > > > optimization or a workaround for something awkward to specify.\n> > >\n> > > It is an optimization. The parser generation is by far the slowest part of a\n> > > build. If other files can only be compiled once gram.h is generated, there's a\n> > > long initial period where little can happen. So instead of having all .c files\n> > > have a dependency on gram.h having been generated, the above makes only\n> > > scan.c, gram.c compilation depend on gram.h. 
It only matters for the first\n> > > compilation, because such dependencies are added as order-only dependencies,\n> > > supplanted by more precise compiler generated dependencies after.\n> >\n> > Okay, I think the comment could include some of this info for clarity.\n>\n> Working on that.\n>\n>\n> > > It's still pretty annoying that so much of the build is initially idle,\n> > > waiting for genbki.pl to finish.\n> > >\n> > > Part of that is due to some ugly dependencies of src/common on backend headers\n> > > that IMO probably shouldn't exist (e.g. src/common/relpath.c includes\n> > > catalog/pg_tablespace_d.h).\n> >\n> > Technically, *_d.h headers are not backend, that's why it's safe to\n> > include them anywhere. relpath.c in its current form has to know the\n> > tablespace OIDs, which I guess is what you think is ugly. (I agree\n> > it's not great)\n>\n> Yea, I'm not saying it's unsafe in a produces-wrong-results way, just that it\n> seems architecturally dubious / circular.\n>\n>\n> > > Looks like it'd not be hard to get at least the\n> > > _shlib version of src/common and libpq build without waiting for that. But for\n> > > all the backend code I don't really see a way, so it'd be nice to make genbki\n> > > faster at some point.\n> >\n> > The attached gets me a ~15% reduction in clock time by having\n> > Catalog.pm parse the .dat files in one sweep, when we don't care about\n> > formatting, i.e. most of the time:\n>\n> Cool. Seems worthwhile.\n\nOkay, here's a cleaned up version with more idiomatic style and a new\ncopy of the perlcritic exception.\n\nNote that the indentation hasn't changed. My thought there: perltidy\nwill be run again next year, at which time it will be part of a listed\nwhitespace-only commit. 
Any objections, since that could confuse\nsomeone before then?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Sep 2022 15:02:36 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Mon, Sep 5, 2022 at 1:18 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-04 12:16:10 +0700, John Naylor wrote:\n> > Pushed 01 and 02 separately, then squashed and pushed the rest.\n>\n> Thanks a lot! It does look a good bit cleaner to me now.\n>\n> I think, as a followup improvement, we should move gramparse.h to\n> src/backend/parser, and stop installing gram.h, gramparse.h. gramparse.h\n> already had this note:\n>\n> * NOTE: this file is only meant to be included in the core parsing files,\n> * i.e., parser.c, gram.y, and scan.l.\n> * Definitions that are needed outside the core parser should be in parser.h.\n>\n> What do you think?\n\n+1 for the concept, but haven't looked at the details.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Sep 2022 15:03:54 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On 02.09.22 01:12, samay sharma wrote:\n> So, I was thinking of the following structure:\n> - Supported Platforms\n> - Getting the Source\n> - Building with make and autoconf\n>   -- Short version\n>   -- Requirements\n>   -- Installation Procedure and it's subsections\n> - Building with Meson\n>   -- Short version\n>   -- Requirements\n>   -- Installation Procedure and it's subsections\n> - Post-installation Setup\n> - Platform specific notes\n\nI like that.\n\n> As a follow up patch, we could also try to fit the Windows part into \n> this model. We could add a Building with visual C++ or Microsoft windows \n> SDK section. 
It doesn't have a short version but follows the remaining \n> template of requirements and installation procedure subsections \n> (Building, Cleaning and Installing and Running Regression tests) well.\n\nWe were thinking about removing the old Windows build system for PG 16. \nLet's see how that goes. Otherwise, yes, that would be good as well.\n\n\n", "msg_date": "Wed, 7 Sep 2022 06:46:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On 02.09.22 19:16, samay sharma wrote:\n> Another thing I'd like input on. A common question I've heard from \n> people who've tried out the docs is How do we do the equivalent of X in \n> make with meson. As meson will be new for a bunch of people who are \n> fluent with make, I won't be surprised if this is a common ask. To \n> address that, I was planning to add a page to specify the key things one \n> needs to keep in mind while \"migrating\" from make to meson and having a \n> translation table of commonly used commands.\n> \n> I was planning to add it in the meson section, but if we go ahead with \n> the structure proposed above, it doesn't fit it into one as cleanly. \n> Maybe, it still goes in the meson section? 
Thoughts?\n\nThis could go into the wiki.\n\nFor example, we have \n<https://wiki.postgresql.org/wiki/Working_with_Git>, which was added \nduring the CVS->Git transition.\n\nThis avoids that we make the PostgreSQL documentation a substitute \nmanual for a third-party product.\n\n\n", "msg_date": "Wed, 7 Sep 2022 06:48:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "\nOn 31.08.22 20:11, Andres Freund wrote:\n>> src/port/win32ver.rc.in: This is redundant with src/port/win32ver.rc.\n>> (Note that the latter is also used as an input file for text\n>> substitution. So having another file named *.in next to it would be\n>> super confusing.)\n> Yea, this stuff isn't great. I think the better solution, both for meson and\n> for configure, would be to move to do all the substitution to the C\n> preprocessor.\n\nYeah, I think if we can get rid of the evil date-based versioning, then\nthis could be done like\n\ndiff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32\nindex 17d6819644..609156382f 100644\n--- a/src/makefiles/Makefile.win32\n+++ b/src/makefiles/Makefile.win32\n@@ -65,21 +65,12 @@ endif\n # win32ver.rc or furnish a rule for generating one. 
Set $(PGFILEDESC) to\n # signal win32ver.rc availability to the dll build rule below.\n ifndef PGXS\n-win32ver.rc: $(top_srcdir)/src/port/win32ver.rc\n- sed -e 's;FILEDESC;$(PGFILEDESC);' \\\n- -e 's;VFT_APP;$(PGFTYPE);' \\\n- -e 's;_ICO_;$(PGICOSTR);' \\\n- -e 's;\\(VERSION.*\\),0 *$$;\\1,'`date '+%y%j' | sed 's/^0*//'`';' \\\n- -e '/_INTERNAL_NAME_/$(if $(shlib),s;_INTERNAL_NAME_;\"$(basename $(shlib))\";,d)' \\\n- -e '/_ORIGINAL_NAME_/$(if $(shlib),s;_ORIGINAL_NAME_;\"$(shlib)\";,d)' \\\n- $< >$@\n-\n # Depend on Makefile.global to force rebuild on re-run of configure.\n win32ver.rc: $(top_builddir)/src/Makefile.global\n endif\n\n-win32ver.o: win32ver.rc\n- $(WINDRES) -i $< -o $@ --include-dir=$(top_builddir)/src/include --include-dir=$(srcdir)\n+win32ver.o: $(top_srcdir)/src/port/win32ver.rc\n+ $(WINDRES) -i $< -o $@ --include-dir=$(top_builddir)/src/include --include-dir=$(srcdir) -D FILEDESC=$(PGFILEDESC) -D VFT_APP=$(PGFTYPE) -D_ICO_=$(PGICOSTR) -D_INTERNAL_NAME_=$(if $(shlib),s;_INTERNAL_NAME_;\"$(basename $(shlib))\";,d) -D_ORIGINAL_NAME_=$(if $(shlib),s;_ORIGINAL_NAME_;\"$(shlib)\";,d)\n\n\nProbably needs some careful checking of the quoting. But that should be\nthe right thing in principle.\n\n\n", "msg_date": "Wed, 7 Sep 2022 07:00:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 02.09.22 18:57, Andres Freund wrote:\n> Is it worth running ninja -t missingdeps as a test? At the time we run tests\n> we'll obviously have built and thus collected \"real\" dependencies, so we would\n> have the necessary information to determine whether dependencies are missing.\n> I think it'd be fine to do so only for ninja >= 1.11, rather than falling back\n> to the llvm python implementation, which is much slower (0.068s vs\n> 3.760s). 
And also because it's not as obvious how to include the python script.\n> \n> Alternatively, we could just document that ninja -t missingdeps is worth\n> running. Perhaps at the top of the toplevel build.meson file?\n\nIn the GNU/make world there is a distinction between \"check\" and \n\"maintainer-check\" for this kind of thing.\n\nI think here if we put these kinds of things into a different, what's \nthe term, \"suite\", then that would be a clear way to collect them and be \nable to run them all easily.\n\n\n\n", "msg_date": "Wed, 7 Sep 2022 07:10:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 31.08.22 20:11, Andres Freund wrote:\n>> doc/src/sgml/resolv.xsl: I don't understand what this is doing. Maybe\n>> at least add a comment in the file.\n> It's only used for building epubs. Perhaps I should extract that into a\n> separate patch as well? The relevant section is:\n> \n>> #\n>> # epub\n>> #\n>>\n>> # This was previously implemented using dbtoepub - but that doesn't seem to\n>> # support running in build != source directory (i.e. VPATH builds already\n>> # weren't supported).\n>> if pandoc.found() and xsltproc.found()\n>> # XXX: Wasn't able to make pandoc successfully resolve entities\n>> # XXX: Perhaps we should just make all targets use this, to avoid repeatedly\n>> # building whole thing? 
It's comparatively fast though.\n>> postgres_full_xml = custom_target('postgres-full.xml',\n>> input: ['resolv.xsl', 'postgres.sgml'],\n>> output: ['postgres-full.xml'],\n>> depends: doc_generated + [postgres_sgml_valid],\n>> command: [xsltproc, '--path', '@OUTDIR@/', xsltproc_flags,\n>> '-o', '@OUTPUT@', '@INPUT@'],\n>> build_by_default: false,\n>> )\n> A noted, I couldn't make pandoc resolve our entities, so I used resolv.xsl\n> them, before calling pandoc.\n> \n> I'll rename it to resolve-entities.xsl and add a comment.\n\nWe can have xmllint do this. The following gets the epub build working \nwith vpath:\n\ndiff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile\nindex 4ae7ca2be7..33b72d03db 100644\n--- a/doc/src/sgml/Makefile\n+++ b/doc/src/sgml/Makefile\n@@ -184,8 +184,12 @@ XSLTPROC_FO_FLAGS += --stringparam img.src.path \n'$(srcdir)/'\n\n epub: postgres.epub\n postgres.epub: postgres.sgml $(ALLSGML) $(ALL_IMAGES)\n- $(XMLLINT) --noout --valid $<\n- $(DBTOEPUB) -o $@ $<\n+ $(XMLLINT) $(XMLINCLUDE) --output tmp.sgml --noent --valid $<\n+ifeq ($(vpath_build),yes)\n+ $(MKDIR_P) images\n+ cp $(ALL_IMAGES) images/\n+endif\n+ $(DBTOEPUB) -o $@ tmp.sgml\n\n\nThis could also be combined with the idea of the postgres.sgml.valid \nthing you have in the meson patch set.\n\nI'll finish this up and produce a proper patch.\n\n\n", "msg_date": "Wed, 7 Sep 2022 09:19:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 07.09.22 09:19, Peter Eisentraut wrote:\n> This could also be combined with the idea of the postgres.sgml.valid \n> thing you have in the meson patch set.\n> \n> I'll finish this up and produce a proper patch.\n\nSomething like this.\n\nThis does make the rules more straightforward and avoids repeated \nxmllint calls. 
I suppose this also helps writing the meson rules in a \nsimpler way.\n\nA possible drawback is that the intermediate postgres-full.xml file is \n >10MB, but I guess we're past the point where we are worrying about \nthat kind of thing.\n\nI don't know if there is any performance difference between xsltproc \nreading one big file versus many smaller files.", "msg_date": "Wed, 7 Sep 2022 09:53:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 2022-Sep-06, John Naylor wrote:\n\n> Note that the indentation hasn't changed. My thought there: perltidy\n> will be run again next year, at which time it will be part of a listed\n> whitespace-only commit. Any objections, since that could confuse\n> someone before then?\n\nI think a good plan is to commit the fix without tidy, then commit the\ntidy separately, then add the latter commit to .git-blame-ignore-revs.\nThat avoids leaving the code untidy for a year.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n Are you not unsure you want to delete Firefox?\n [Not unsure] [Not not unsure] [Cancel]\n http://smylers.hates-software.com/2008/01/03/566e45b2.html\n\n\n", "msg_date": "Wed, 7 Sep 2022 10:35:58 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 04.09.22 20:17, Andres Freund wrote:\n> I think, as a followup improvement, we should move gramparse.h to\n> src/backend/parser, and stop installing gram.h, gramparse.h. 
gramparse.h\n> already had this note:\n> \n> * NOTE: this file is only meant to be included in the core parsing files,\n> * i.e., parser.c, gram.y, and scan.l.\n> * Definitions that are needed outside the core parser should be in parser.h.\n> \n> What do you think?\n\nI found in my notes:\n\n* maybe gram.h and gramparse.h should not be installed\n\nSo, yeah. ;-)\n\n\n", "msg_date": "Wed, 7 Sep 2022 11:27:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On Wed, Sep 7, 2022 at 3:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-06, John Naylor wrote:\n>\n> > Note that the indentation hasn't changed. My thought there: perltidy\n> > will be run again next year, at which time it will be part of a listed\n> > whitespace-only commit. Any objections, since that could confuse\n> > someone before then?\n>\n> I think a good plan is to commit the fix without tidy, then commit the\n> tidy separately, then add the latter commit to .git-blame-ignore-revs.\n> That avoids leaving the code untidy for a year.\n\nOkay, done that way. I also made sure we got the same info for error\nreporting. 
It's not identical, but arguably better, going from:\n\nBareword found where operator expected at (eval 4480) line 3, near \"'btree' xxx\"\n(Missing operator before xxx?)\n../../../src/include/catalog/pg_amop.dat: error parsing line 20:\n\nto:\n\nBareword found where operator expected at (eval 12) line 20, near \"'btree' xxx\"\n(Missing operator before xxx?)\nerror parsing ../../../src/include/catalog/pg_amop.dat\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Sep 2022 14:10:45 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 6, 2022 at 9:48 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 02.09.22 19:16, samay sharma wrote:\n> > Another thing I'd like input on. A common question I've heard from\n> > people who've tried out the docs is How do we do the equivalent of X in\n> > make with meson. As meson will be new for a bunch of people who are\n> > fluent with make, I won't be surprised if this is a common ask. To\n> > address that, I was planning to add a page to specify the key things one\n> > needs to keep in mind while \"migrating\" from make to meson and having a\n> > translation table of commonly used commands.\n> >\n> > I was planning to add it in the meson section, but if we go ahead with\n> > the structure proposed above, it doesn't fit it into one as cleanly.\n> > Maybe, it still goes in the meson section? Thoughts?\n>\n> This could go into the wiki.\n>\n> For example, we have\n> <https://wiki.postgresql.org/wiki/Working_with_Git>, which was added\n> during the CVS->Git transition.\n>\n\nThat's a good idea. 
I'll add a page to the wiki about this topic and share\nit on the list for review.\n\n\n>\n>\n> This avoids that we make the PostgreSQL documentation a substitute\n> manual for a third-party product.\n>\n\nRegards,\nSamay\n\n", "msg_date": "Thu, 8 Sep 2022 00:20:33 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On 2022-Sep-07, Peter Eisentraut wrote:\n\n> A possible drawback is that the intermediate postgres-full.xml file is\n> >10MB, but I guess we're past the point where we are worrying about that\n> kind of thing.\n\nI think we are, but maybe mark it .PRECIOUS? 
IIUC that would prevent it\nfrom being removed if there's a problem in the other recipes.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n", "msg_date": "Thu, 8 Sep 2022 09:42:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 2022-09-08 14:10:45 +0700, John Naylor wrote:\n> Okay, done that way.\n\nThanks!\n\n\n", "msg_date": "Thu, 8 Sep 2022 10:01:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Tue, Sep 6, 2022 at 9:46 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 02.09.22 01:12, samay sharma wrote:\n> > So, I was thinking of the following structure:\n> > - Supported Platforms\n> > - Getting the Source\n> > - Building with make and autoconf\n> > -- Short version\n> > -- Requirements\n> > -- Installation Procedure and it's subsections\n> > - Building with Meson\n> > -- Short version\n> > -- Requirements\n> > -- Installation Procedure and it's subsections\n> > - Post-installation Setup\n> > - Platform specific notes\n>\n> I like that.\n>\n\nAttached is a docs-only patch with that structure. We need to update the\nplatform specific notes section to add meson specific nuances. Also, in\nterms of supported platforms, if there are platforms which work with make\nbut not with meson, we have to add that too.\n\nRegards,\nSamay\n\n>\n> > As a follow up patch, we could also try to fit the Windows part into\n> > this model. We could add a Building with visual C++ or Microsoft windows\n> > SDK section. 
It doesn't have a short version but follows the remaining\n> > template of requirements and installation procedure subsections\n> > (Building, Cleaning and Installing and Running Regression tests) well.\n>\n> We were thinking about removing the old Windows build system for PG 16.\n> Let's see how that goes. Otherwise, yes, that would be good as well.\n>", "msg_date": "Thu, 8 Sep 2022 15:26:38 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v11" }, { "msg_contents": "On Wed, Sep 7, 2022 at 4:27 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 04.09.22 20:17, Andres Freund wrote:\n> > I think, as a followup improvement, we should move gramparse.h to\n> > src/backend/parser, and stop installing gram.h, gramparse.h. gramparse.h\n> > already had this note:\n> >\n> > * NOTE: this file is only meant to be included in the core parsing files,\n> > * i.e., parser.c, gram.y, and scan.l.\n> > * Definitions that are needed outside the core parser should be in parser.h.\n> >\n> > What do you think?\n>\n> I found in my notes:\n>\n> * maybe gram.h and gramparse.h should not be installed\n>\n> So, yeah. ;-)\n\nIt seems gramparse.h isn't installed now? In any case, here's a patch\nto move gramparse to the backend dir and stop symlinking/ installing\ngram.h. 
Confusingly, MSVC didn't seem to copy gram.h to src/include,\nso I'm not yet sure how it still managed to build...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 9 Sep 2022 12:18:20 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Hi,\n\nOn 2022-08-31 11:11:54 -0700, Andres Freund wrote:\n> > If the above are addressed, I think this will be just about at the\n> > point where the above patches can be committed.\n>\n> Woo!\n\nThere was a lot less progress over the last ~week than I had hoped. The reason\nis that I was trying to figure out the reason for the occasional failures of\necpg tests getting compiled when building on windows in CI, with msbuild.\n\nI went into many layers of rabbitholes while investigating. Wasting an absurd\namount of time.\n\n\nThe problem:\n\nOccasionally ecpg test files would fail to compile, exiting with -1073741819:\nC:\\BuildTools\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(241,5): error MSB8066: Custom build for 'C:\\cirrus\\build\\meson-private\\custom_target.rule' exited with code -1073741819. [c:\\cirrus\\build\\src\\interfaces\\ecpg\\test\\sql\\3701597@@twophase.c@cus.vcxproj]\n\n-1073741819 is 0xc0000005, which in turn is STATUS_ACCESS_VIOLATION, i.e. a\nsegfault. This happens in roughly 1/3 of the builds, but with \"streaks\" of not\nhappening and more frequently happening.\n\nHowever, despite our CI images having a JIT debugger configured (~coredump\nhandler), no crash report was triggered. The problem never occurs in my\nwindows VM.\n\n\nAt first I thought that might be because it's an assertion failure or such,\nwhich only causes a dump when a bunch of magic is done (see main.c). 
But\ndespite adding all the necessary magic to ecpg.exe, no dump.\n\nUnfortunately, adding debug output reduces the frequency of the issue\nsubstantially.\n\nEventually I figured out that it's not actually ecpg.exe that is crashing. It\nis meson's python wrapper around built binaries as part of the build (for\nsetting PATH, working directory, without running into cmd.exe issues). A\nmodified meson wrapper showed that ecpg.exe completes successfully.\n\nThe only thing the meson wrapper does after running the command is to call\nsys.exit(returncode), and I had printed out the returncode, which is 0.\n\n\nI looked through a lot of the python code, to see why no crashdump and no\ndetails are forthcoming. There weren't any relevant\nSetErrorMode(SEM_NOGPFAULTERRORBOX) calls. I tried to set PYTHONFAULTHANDLER,\nbut still no stack trace.\n\nNext I suspected that cmd.exe might be crashing and causing the\nproblem. Modified meson to add 'echo %ERRORLEVEL%' to the msbuild\ncustombuild. Which indeed shows the STATUS_ACCESS_VIOLATION returncode after\nrunning python. So it's not cmd.exe.\n\n\nThe problem even persisted when replacing meson's sys.exit() with os._exit(),\nwhich indeed just calls _exit().\n\nI tried to reproduce the problem using a python with debugging enabled. The\nproblem doesn't occur despite quite a few runs.\n\n\nI found scattered other reports of this problem happening on windows. Went\ndown a few more rabbitholes. 
Too boring to repeat here.\n\n\nAt this point I finally figured out that the reason the crash reports don't\nhappen is that everything started by cirrus-ci on windows has an errormode of\nSEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX.\n\nA good bit later I figured out that while cirrus-ci isn't intentionally\nsetting that, golang does so *unconditionally* on windows:\nhttps://github.com/golang/go/blob/54182ff54a687272dd7632c3a963e036ce03cb7c/src/runtime/signal_windows.go#L14\nhttps://github.com/golang/go/blob/54182ff54a687272dd7632c3a963e036ce03cb7c/src/runtime/os_windows.go#L553\nArgh. I should have checked what the error mode is earlier, but this is just\nvery sneaky.\n\n\nSo I modified meson to change the errormode and tried to reproduce the issue\nagain, to finally get a stackdump. And tried again. And again. Without a\nsingle relevant failure (I saw tests fail in ways that are discussed on the\nlist, but that's irrelevant here).\n\nI've run this through enough attempts by now that I'm quite confident that the\nproblem does not occur when the errormode does not include\nSEM_NOOPENFILEERRORBOX. I'll want a few more runs to be certain, but...\n\n\nGiven that the problem appears to happen after _exit() is called, and only\nwhen SEM_NOOPENFILEERRORBOX is not set, it seems likely to be an OS / C\nruntime bug. Presumably it's related to something that python does first, but\nI don't see how anything could justify crashing only if SEM_NOOPENFILEERRORBOX\nis set (rather than the opposite).\n\nI have no idea how to debug this further, given that the problem is quite rare\n(can't attach a debugger and wait), only happens when crashdumps are prevented\nfrom happening (so no idea where it crashes) and is made less common by debug\nprintfs.\n\n\nSo for now the best way forward I can see is to change the error mode for CI\nruns. 
Which is likely a good idea anyway, so we can see crashdumps for\nbinaries other than postgres.exe (which does SetErrorMode() internally). I\nmanaged to do so by setting CIRRUS_SHELL to a python wrapper around cmd.exe\nthat does SetErrorMode(). I'm sure there's easier ways, but I couldn't figure\nout any.\n\n\nI'd like to reclaim my time. But I'm afraid nobody will be listening to that\nplea...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Sep 2022 16:58:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Fri, Sep 9, 2022 at 12:18 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> It seems gramparse.h isn't installed now? In any case, here's a patch\n> to move gramparse to the backend dir and stop symlinking/ installing\n> gram.h.\n\nLooking more closely at src/include/Makefile, this is incorrect -- all\nfiles in SUBDIRS are copied over. So moving gramparse.h to the backend\nwill automatically not install it. The explicit install rule for\ngram.h was for vpath builds.\n\nCI builds fine. For v2 I only adjusted the commit message. I'll push\nin a couple days unless there are objections.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 12 Sep 2022 14:49:50 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "Hi,\n\nOn 2022-09-07 07:00:17 +0200, Peter Eisentraut wrote:\n> On 31.08.22 20:11, Andres Freund wrote:\n> > > src/port/win32ver.rc.in: This is redundant with src/port/win32ver.rc.\n> > > (Note that the latter is also used as an input file for text\n> > > substitution. So having another file named *.in next to it would be\n> > > super confusing.)\n> > Yea, this stuff isn't great. 
I think the better solution, both for meson and\n> > for configure, would be to move to do all the substitution to the C\n> > preprocessor.\n>\n> Yeah, I think if we can get rid of the evil date-based versioning, then\n> this could be done like\n\n> -win32ver.o: win32ver.rc\n> - $(WINDRES) -i $< -o $@ --include-dir=$(top_builddir)/src/include --include-dir=$(srcdir)\n> +win32ver.o: $(top_srcdir)/src/port/win32ver.rc\n> + $(WINDRES) -i $< -o $@ --include-dir=$(top_builddir)/src/include --include-dir=$(srcdir) -D FILEDESC=$(PGFILEDESC) -D VFT_APP=$(PGFTYPE) -D_ICO_=$(PGICOSTR) -D_INTERNAL_NAME_=$(if $(shlib),s;_INTERNAL_NAME_;\"$(basename $(shlib))\";,d) -D_ORIGINAL_NAME_=$(if $(shlib),s;_ORIGINAL_NAME_;\"$(shlib)\";,d)\n\nI tried this and while it works for some places, it doesn't work for all. It\nlooks like windres uses broken quoting when internally invoking cpp. It\nescapes e.g. whitespaces, but it doesn't escape at least < and >. Which\ndoesn't work well with descriptions like\n\nPGFILEDESC\t= \"cyrillic <-> mic text conversions\"\n\nresulting in this:\n\nstrace --string-limit=2000 -f -e execve \\\nx86_64-w64-mingw32-windres -DPGFILEDESC=\"cyrillic <-> mic text conversions\" -DPGFTYPE=VFT_DLL -DPGNAME=cyrillic_and_mic -DPGFILEENDING=dll -I../../../../../../src/include -I/home/andres/src/postgresql/src/include -I/home/andres/src/postgresql/src/include/port/win32 \"-I/home/andres/src/postgresql/src/include/port/win32\" -DWIN32_STACK_RLIMIT=4194304 -i /home/andres/src/postgresql/src/port/win32ver.rc -o win32ver.o\n...\n[pid 1788987] execve(\"/bin/sh\", [\"sh\", \"-c\", \"x86_64-w64-mingw32-gcc -E -xc -DRC_INVOKED -DPGFILEDESC=cyrillic\\\\ <->\\\\ mic\\\\ text\\\\ conversions -DPGFTYPE=VFT_DLL -DPGNAME=cyrillic_and_mic -DPGFILEENDING=dll -I../../../../../../src/include -I/home/andres/src/postgresql/src/include -I/home/andres/src/postgresql/src/include/port/win32 -I/home/andres/src/postgresql/src/include/port/win32 -DWIN32_STACK_RLIMIT=4194304 
/home/andres/src/postgresql/src/port/win32ver.rc\"], 0x7ffd47edc790 /* 67 vars */) = 0\nsh: 1: cannot open -: No such file\n[pid 1788987] +++ exited with 2 +++\n--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1788987, si_uid=1000, si_status=2, si_utime=0, si_stime=0} ---\nx86_64-w64-mingw32-windres: preprocessing failed.\n\ngiven this shoddy quoting, I think it's probably not wise to go down this\npath?\n\nWe could invoke the preprocessor ourselves, but that requires feeding the\ncompiler via stdin (otherwise it'll just warn \"linker input file unused\nbecause linking not done\") and defining -DRC_INVOKED (otherwise there'll be\nsyntax errors). That feels like too much magic?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Sep 2022 18:06:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On Sat, Sep 3, 2022 at 2:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > 5.14 would be a trivial lift as far as the buildfarm is concerned.\n>\n> Yeah, that seems like a reasonable new minimum for Perl. I might\n> see about setting up an animal running 5.14.0, just so we can say\n> \"5.14\" in the docs without fine print.\n\nUntil such time as that happens, here is a draft to require 5.14.2.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 13 Sep 2022 17:53:33 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On Tue, Sep 13, 2022 at 5:53 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> Until such time as that happens, here is a draft to require 5.14.2.\n\nAs soon as I hit send, it occurred to me that we don't check the perl\nversion on Windows, since (I seem to recall) 5.8.3 was too old to be\nan option on that platform. 
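[Editor's note: the version gate itself is tiny; a sketch of the same check in python rather than autoconf — the probe command and helper names are illustrative, not the actual configure code.]

```python
import subprocess

MIN_PERL_VERSION = (5, 14, 0)


def parse_perl_version(text):
    # Accepts either "5.14.2" or "v5.14.2" style version strings.
    return tuple(int(part) for part in text.strip().lstrip("v").split("."))


def perl_is_new_enough(perl="perl"):
    # $^V stringifies as e.g. "v5.14.2"; substr() drops the leading "v".
    out = subprocess.check_output(
        [perl, "-e", "print substr($^V, 1)"], text=True)
    return parse_perl_version(out) >= MIN_PERL_VERSION
```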
We'll have to add a new check somewhere.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Sep 2022 18:00:30 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On 2022-09-12 14:49:50 +0700, John Naylor wrote:\n> CI builds fine. For v2 I only adjusted the commit message. I'll push\n> in a couple days unless there are objections.\n\nMakes sense to me. Thanks for working on it!\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:10:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Sat, Sep 3, 2022 at 2:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, that seems like a reasonable new minimum for Perl. I might\n>> see about setting up an animal running 5.14.0, just so we can say\n>> \"5.14\" in the docs without fine print.\n\n> Until such time as that happens, here is a draft to require 5.14.2.\n\nI've just switched longfin to use built-from-source perl 5.14.0.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 19:47:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On Wed, Sep 14, 2022 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I've just switched longfin to use built-from-source perl 5.14.0.\n\nIn that case, here is a quick update with commit message. Not yet any\nchange for MSVC, but I can put together something later.\n\nSince we're much less willing to support older Windows and Visual\nStudio versions, maybe it's low-enough risk to defer the check to the\nMeson conversion? 
I understand our MSVC process will then go away much\nmore quickly than autoconf...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Sep 2022 10:30:33 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Wed, Sep 14, 2022 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've just switched longfin to use built-from-source perl 5.14.0.\n\n> In that case, here is a quick update with commit message. Not yet any\n> change for MSVC, but I can put together something later.\n\nLooks reasonable just by eyeball, did not test.\n\n> Since we're much less willing to support older Windows and Visual\n> Studio versions, maybe it's low-enough risk defer the check to the\n> Meson conversion? I understand our MSVC process will then go away much\n> more quickly than autoconf...\n\nAgreed --- the MSVC scripts are on a pretty short leash now.\nNot clear it's worth fixing them for this point. If we've\nfailed to get rid of them by the time v16 release approaches,\nmaybe it'd be worth doing something then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 23:46:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On Wed, Sep 14, 2022 at 6:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-09-12 14:49:50 +0700, John Naylor wrote:\n> > CI builds fine. For v2 I only adjusted the commit message. I'll push\n> > in a couple days unless there are objections.\n>\n> Makes sense to me. 
Thanks for working on it!\n\nThis is done.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Sep 2022 11:27:45 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: build remaining Flex files standalone" }, { "msg_contents": "On Wed, Sep 14, 2022 at 10:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Wed, Sep 14, 2022 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I've just switched longfin to use built-from-source perl 5.14.0.\n>\n> > In that case, here is a quick update with commit message. Not yet any\n> > change for MSVC, but I can put together something later.\n>\n> Looks reasonable just by eyeball, did not test.\n>\n> > Since we're much less willing to support older Windows and Visual\n> > Studio versions, maybe it's low-enough risk defer the check to the\n> > Meson conversion? I understand our MSVC process will then go away much\n> > more quickly than autoconf...\n>\n> Agreed --- the MSVC scripts are on a pretty short leash now.\n> Not clear it's worth fixing them for this point. If we've\n> failed to get rid of them by the time v16 release approaches,\n> maybe it'd be worth doing something then.\n\nOkay, pushed with no further MSVC changes, after doing a round on CI.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Sep 2022 12:40:43 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: minimum perl version" }, { "msg_contents": "On 07.09.22 09:53, Peter Eisentraut wrote:\n> On 07.09.22 09:19, Peter Eisentraut wrote:\n>> This could also be combined with the idea of the postgres.sgml.valid \n>> thing you have in the meson patch set.\n>>\n>> I'll finish this up and produce a proper patch.\n> \n> Something like this.\n> \n> This does make the rules more straightforward and avoids repeated \n> xmllint calls.  
I suppose this also helps writing the meson rules in a \n> simpler way.\n\ncommitted this\n\n\n\n", "msg_date": "Wed, 14 Sep 2022 20:20:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "On 08.09.22 09:42, Alvaro Herrera wrote:\n> On 2022-Sep-07, Peter Eisentraut wrote:\n> \n>> A possible drawback is that the intermediate postgres-full.xml file is\n>>> 10MB, but I guess we're past the point where we are worrying about that\n>> kind of thing.\n> \n> I think we are, but maybe mark it .PRECIOUS? IIUC that would prevent it\n> from being removed if there's a problem in the other recipes.\n\nI don't think .PRECIOUS is the right tool here. There are existing uses \nof .SECONDARY in doc/src/sgml/Makefile; I integrated my patch there.\n\n\n\n", "msg_date": "Wed, 14 Sep 2022 20:21:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nAttached is v13 of the meson patchset. The biggest changes are:\n\n- fix for the occasional ecpg.c crashes - which turned out to be crashes of\n python, which in turn likely are due to a bug in the windows CRT\n\n- split out and improved the patch to add resource files on windows. This\n doesn't yet add them to all binaries, but I think the infrastructure looks\n better now, and there's no duplicated win32ver.rc anymore.\n\n- several rebasing adjustments, most notably the parser stuff and the\n introduction of postgres-full.xml\n\n- improved structure of docs, based on Peter's review (Samay)\n\n- generation of proper dependencies for xmllint/xsltproc, by parsing xsltproc\n --load-trace. Previously the meson build didn't rebuild docs properly (meson\n doesn't have \"glob-style\" dependencies). 
(Bilal)\n\n- numerous small improvements and fixes\n\n- added a patch to drop DLLTOOL/DLLWRAP from configure.ac / Makefile.global.in\n - we've removed the use of them in 2014. This way the pgxs emulation doesn't\n need to care.\n\n- noticed that libpgport.a had and needed a dependency on errcodes.h - that\n seemed wrong. The dependency is due to src/port/*p{read,write}v?.c including\n postgres.h - which seems wrong. So I added a patch changing them to include\n c.h.\n\n\nOne thing I just realized is that the existing autoconf/make and\nsrc/tools/msvc buildsystems don't generate static libraries for e.g. libpq. So\nfar the meson build generates both static and shared libraries on windows\ntoo.\n\nMeson solves the naming conflict that presumably led to us not generating\nstatic libraries on windows by naming the link library for dlls differently\nthan static libraries.\n\nI'm inclined to build the static lib on windows as long as we do it on other\nplatforms.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 14 Sep 2022 19:26:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm inclined to build the static lib on windows as long as we do it on other\n> platforms.\n\nMaybe I spent too much time working for Red Hat, but I'm kind of\nunhappy that we build static libraries at all. They are maintenance\nhazards and therefore security hazards by definition, because if\nyou find a problem in $package_x you will have to find and rebuild\nevery other package that has statically-embedded code from $package_x.\nSo Red Hat has, or at least had, a policy against packages exporting\nsuch libraries.\n\nI realize that there are people for whom other considerations outweigh\nthat, but I don't think that we should install static libraries by\ndefault. 
Long ago it was pretty common for configure scripts to\noffer --enable-shared and --enable-static options ... should we\nresurrect that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Sep 2022 01:10:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-15 01:10:16 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm inclined to build the static lib on windows as long as we do it on other\n> > platforms.\n> \n> Maybe I spent too much time working for Red Hat, but I'm kind of\n> unhappy that we build static libraries at all.\n\nYea, I have been wondering about that too.\n\nOddly enough, given our current behaviour, the strongest case for static\nlibraries IMO is on windows, due to the lack of a) rpath b) a general library\nsearch path.\n\nPeter IIRC added the static libraries to the meson port just to keep the set\nof installed files the same, which makes sense.\n\n\n> They are maintenance hazards and therefore security hazards by definition,\n> because if you find a problem in $package_x you will have to find and\n> rebuild every other package that has statically-embedded code from\n> $package_x. So Red Hat has, or least had, a policy against packages\n> exporting such libraries.\n\nIt obviously is a bad idea for widely used system packages. I think there are\na few situations, e.g. a downloadable self-contained and relocatable\napplication, where shared libraries provide less of a benefit.\n\n\n> I realize that there are people for whom other considerations outweigh\n> that, but I don't think that we should install static libraries by\n> default. Long ago it was pretty common for configure scripts to\n> offer --enable-shared and --enable-static options ... should we\n> resurrect that?\n\nIt'd be easy enough. I don't really have an opinion on whether it's worth\nhaving the options. 
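[Editor's note: for illustration, the meson side of such a switch is small — roughly this shape, with made-up option and variable names, not the real build rules.]

```meson
# meson_options.txt
option('static_libs', type: 'feature', value: 'auto',
  description: 'also build static client libraries')

# in the library's build file: always build the shared library, build the
# static one only when the feature is enabled (or left on auto)
libpq_shared = shared_library('pq', libpq_sources, dependencies: libpq_deps)
if not get_option('static_libs').disabled()
  libpq_static = static_library('pq', libpq_sources, dependencies: libpq_deps)
endif
```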
I think most packaging systems have ways of not including\nfiles even if $software installs them.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Sep 2022 22:17:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Thu, Sep 15, 2022 at 2:26 PM Andres Freund <andres@anarazel.de> wrote:\n> - noticed that libpgport.a had and needed a dependency on errcodes.h - that\n> seemed wrong. The dependency is due to src/port/*p{read,write}v?.c including\n> postgres.h - which seems wrong. So I added a patch changing them to include\n> c.h.\n\nOops. +1\n\nGCC 12 produces a bunch of warnings by default with meson, and that\nturned out to be because the default optimisation level is -O3.\nThat's a change from the make build, which uses -O2. Should we set a\ndefault of 2, or is there some meson-way-of-doing-things reason why\nnot?\n\n\n", "msg_date": "Fri, 16 Sep 2022 09:14:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-16 09:14:20 +1200, Thomas Munro wrote:\n> GCC 12 produces a bunch of warnings by default with meson, and that\n> turned out to be because the default optimisation level is -O3.\n> That's a change from the make build, which uses -O2. Should we set a\n> default of 2, or is there some meson-way-of-doing-things reason why\n> not?\n\nWe can change the defaults - the only downside is that there's a convenience\nsetting 'buildtype' (debug, debugoptimized, release, minsize, custom, plain)\nthat changes multiple settings (optimization level, amount of debugging\ninformation) and that doesn't work as nicely if you change the default\ncompiler optimization setting.\n\nThey made a similar discovery as us, deriving the defaults of settings based\non other settings quickly can become confusing. 
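[Editor's note: for reference, the derivation in question is roughly the following, per meson's documentation — worth re-checking against the meson version in use.]

```
# 'buildtype' is convenience sugar over two independent options:
#   buildtype=debug           ->  -Ddebug=true  -Doptimization=0
#   buildtype=debugoptimized  ->  -Ddebug=true  -Doptimization=2
#   buildtype=release         ->  -Ddebug=false -Doptimization=3
# so matching the autoconf default of -O2 would be e.g.:
#   meson configure -Dbuildtype=debugoptimized   # or just -Doptimization=2
```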
I think they're looking at how\nto make that UI a bit nicer.\n\nI'd prefer to defer fine-tuning the default settings till a bunch of this has\ngone in, but I won't insist on that course.\n\nTheir default warning flags passed to compilers trigger a bunch of warnings in\nour build (irrespective of -O*), so I lowered the warning level. But I think\ntheir set of settings likely is sensible, and we should just disable a bunch of\nwarnings we don't care about. But I haven't done that for now, to keep the set\nof warning flags the same between meson and autoconf.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Sep 2022 15:11:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-16 09:14:20 +1200, Thomas Munro wrote:\n> On Thu, Sep 15, 2022 at 2:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > - noticed that libpgport.a had and needed a dependency on errcodes.h - that\n> > seemed wrong. The dependency is due to src/port/*p{read,write}v?.c including\n> > postgres.h - which seems wrong. 
So I added a patch changing them to include\n>>> c.h.\n\n>> Oops. +1\n\n> Looks like this has been the case since\n> 0d56acfbaa799553c0c6ea350fd6e68d81025994 in 14. Any opinions on whether we\n> should backpatch the \"fix\"?\n\n+1, those files have no business including all of postgres.h\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 16:22:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-16 16:22:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-09-16 09:14:20 +1200, Thomas Munro wrote:\n> >> On Thu, Sep 15, 2022 at 2:26 PM Andres Freund <andres@anarazel.de> wrote:\n> >>> - noticed that libpgport.a had and needed a dependency on errcodes.h - that\n> >>> seemed wrong. The dependency is due to src/port/*p{read,write}v?.c including\n> >>> postgres.h - which seems wrong. So I added a patch changing them to include\n> >>> c.h.\n> \n> >> Oops. +1\n> \n> > Looks like this has been the case since\n> > 0d56acfbaa799553c0c6ea350fd6e68d81025994 in 14. Any opinions on whether we\n> > should backpatch the \"fix\"?\n> \n> +1, those files have no business including all of postgres.h\n\nDone.\n\nI've been wondering whether we should protect against this kind of issue on\nthe buildsystem level. Whenever building frontend code, add something like\n-DBUILDING_FRONTEND, and error out if postgres.h is included without going\nthrough postgres_fe.h.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 17 Sep 2022 09:58:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 15.09.22 04:26, Andres Freund wrote:\n> Attached is v13 of the meson patchset. The biggest changes are:\n\nDid something about warning flags change from the previous patch set? 
I \nsee it's building with -Wextra now, which combined with -Werror causes \nthe build to fail for me. I have never encountered that with any of the \nprevious patch sets.\n\n\n\n\n", "msg_date": "Sun, 18 Sep 2022 20:24:06 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi, \n\nOn September 18, 2022 5:24:06 PM PDT, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>On 15.09.22 04:26, Andres Freund wrote:\n>> Attached is v13 of the meson patchset. The biggest changes are:\n>\n>Did something about warning flags change from the previous patch set? I see it's building with -Wextra now, which combined with -Werror causes the build to fail for me. I have never encountered that with any of the previous patch sets.\n\nIn older versions of the patch the default warning level was set to include Wextra, and I had added my local flags to suppress uninteresting warnings. Comparing the warning flags I reduced the warning level and removed the suppressing flags - but changing default options only affects new build trees. To change existing ones do meson configure -Dwarning_level=1\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sun, 18 Sep 2022 17:29:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 19.09.22 02:29, Andres Freund wrote:\n> Hi,\n> \n> On September 18, 2022 5:24:06 PM PDT, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> On 15.09.22 04:26, Andres Freund wrote:\n>>> Attached is v13 of the meson patchset. The biggest changes are:\n>>\n>> Did something about warning flags change from the previous patch set? I see it's building with -Wextra now, which combined with -Werror causes the build to fail for me. 
I have never encountered that with any of the previous patch sets.\n> \n> In older versions of the patch the default warning level was set to include Wextra, and I had added my local flags to suppress uninteresting warnings. Comparing the warning flags I reduced the warning level and removed the suppressing flags - but changing default options only affects new build trees. To change existing ones do meson configure -Dwarning_level=1\n\nOk that was the reason. It works now.\n\nIMO, the following commits are ready to be pushed now:\n\nb7d7fe009731 Remove DLLTOOL, DLLWRAP from configure / Makefile.global.in\n979f26889544 Don't hardcode tmp_check/ as test directory for tap tests\n9fc657fbb7e2 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n6de8f1de0ffa meson: prereq: Extend gendef.pl in preparation for meson\n7054861f0fef meson: prereq: Add src/tools/gen_export.pl\n1aa586f2921c meson: prereq: Refactor PG_TEST_EXTRA logic in autoconf build\n5a9731dcc2e6 meson: prereq: port: Include c.h instead of postgres.h in *p{read,write}*.c\n1939bdcfbfea meson: Add meson based buildsystem\n\n\n\n", "msg_date": "Mon, 19 Sep 2022 05:25:59 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-19 05:25:59 -0400, Peter Eisentraut wrote:\n> IMO, the following commits are ready to be pushed now:\n\nSlowly working through them.\n\n\nTo have some initial \"translation\" for other developers I've started a wiki\npage with a translation table. 
Still very WIP:\nhttps://wiki.postgresql.org/wiki/Meson\n\nFor now, a bit of polishing aside, I'm just planning to add a minimal\nexplanation of what's happening, and a reference to this thread.\n\n\n> b7d7fe009731 Remove DLLTOOL, DLLWRAP from configure / Makefile.global.in\n> 979f26889544 Don't hardcode tmp_check/ as test directory for tap tests\n> 9fc657fbb7e2 Split TESTDIR into TESTLOGDIR and TESTDATADIR\n> 6de8f1de0ffa meson: prereq: Extend gendef.pl in preparation for meson\n> 5a9731dcc2e6 meson: prereq: port: Include c.h instead of postgres.h in *p{read,write}*.c\n\nDone\n\n\n> 7054861f0fef meson: prereq: Add src/tools/gen_export.pl\n\nThis one I'm planning to merge with the \"main\" commit, given there's no other user.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Sep 2022 19:16:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-19 19:16:30 -0700, Andres Freund wrote:\n> To have some initial \"translation\" for other developers I've started a wiki\n> page with a translation table. Still very WIP:\n> https://wiki.postgresql.org/wiki/Meson\n> \n> For now, a bit of polishing aside, I'm just planning to add a minimal\n> explanation of what's happening, and a reference to this thread.\n\nI added installation instructions for meson for a bunch of platforms, but\nfailed to figure out how to do so in a rhel9 container. I don't have a rhel\nsubscription, and apparently the repos with developer tools now require a\nsubscription. 
Great way to make it easy for projects to test anything on RHEL.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:11:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Wed, Sep 21, 2022 at 7:11 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-19 19:16:30 -0700, Andres Freund wrote:\n> > To have some initial \"translation\" for other developers I've started a\nwiki\n> > page with a translation table. Still very WIP:\n> > https://wiki.postgresql.org/wiki/Meson\n> >\n> > For now, a bit of polishing aside, I'm just planning to add a minimal\n> > explanation of what's happening, and a reference to this thread.\n>\n> I added installation instructions for meson for a bunch of platforms, but\n\nSmall typo: The homebrew section is still labeled with \"find MacPorts\nlibraries\".\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 21 Sep 2022 09:52:48 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-21 09:52:48 +0700, John Naylor wrote:\n> On Wed, Sep 21, 2022 at 7:11 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-09-19 19:16:30 -0700, Andres Freund wrote:\n> > > To have some initial \"translation\" for other developers I've started a\n> wiki\n> > > page with a translation table. Still very WIP:\n> > > https://wiki.postgresql.org/wiki/Meson\n> > >\n> > > For now, a bit of polishing aside, I'm just planning to add a minimal\n> > > explanation of what's happening, and a reference to this thread.\n> >\n> > I added installation instructions for meson for a bunch of platforms, but\n> \n> Small typo: The homebrew section is still labeled with \"find MacPorts\n> libraries\".\n\nThanks, fixed. 
I wrote these blindly, so there's probably more wrong than this\n- although Thomas was helpful enough to provide some information / testing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Sep 2022 22:15:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-19 19:16:30 -0700, Andres Freund wrote:\n> On 2022-09-19 05:25:59 -0400, Peter Eisentraut wrote:\n> > IMO, the following commits are ready to be pushed now:\n> \n> Slowly working through them.\n\nI've attached an updated version of the main meson commit (and pushed all of\nthem to my git tree obviously). Changes:\n\n- Added a longer commit message\n- Stopped building doc/src/sgml/postgres-full.xml by default - somehow I\n thought we did so by default for the autoconf build, but that's not the\n case. Thomas noticed that that was extremely slow on one of his machines,\n which turns out to be because it's downloading the dtd's.\n- Added a missing dependency on check_rules.pl's result, lost that in a\n cleanup, oops\n- Fixed a few typos, via codespell\n\nI'm planning to commit this today, unless somebody wants to argue against\nthat.\n\n\nAfter that I am planning to split the \"ci\" commit so that it converts a few of\nthe CI tasks to use meson, without adding all the other platforms I added for\ndevelopment. I think that's important to get in soon, given that it'll\nprobably take a bit until the buildfarm grows meson coverage and because it\nprovides cfbot coverage which seems important for now as well.\n\nI think we should:\n\n- convert windows to build with ninja - it builds faster, runs all tests,\n parallelizes tests. 
That means that msbuild based builds don't have coverage\n via CI / cfbot, but we don't currently have the resources to test both.\n- add a linux build using meson, we currently can afford building both with\n autoconf and meson for linux\n\nI'm less clear on whether we should convert macos / freebsd to meson at this\npoint?\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 21 Sep 2022 09:46:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think we should:\n\n> - convert windows to build with ninja - it builds faster, runs all tests,\n> parallelizes tests. That means that msbuild based builds don't have coverage\n> via CI / cfbot, but we don't currently have the resources to test both.\n\nCheck. The sooner we can get rid of the custom MSVC scripts, the better,\nbecause now we'll be on the hook to maintain *three* build systems.\n\n> - add a linux build using meson, we currently can afford building both with\n> autoconf and meson for linux\n\nRight.\n\n> I'm less clear on whether we should convert macos / freebsd to meson at this\n> point?\n\nWe certainly could debug/polish the meson stuff just on linux and windows\nfor now, but is there a reason to wait on the others?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Sep 2022 13:56:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-21 13:56:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think we should:\n> \n> > - convert windows to build with ninja - it builds faster, runs all tests,\n> > parallelizes tests. That means that msbuild based builds don't have coverage\n> > via CI / cfbot, but we don't currently have the resources to test both.\n> \n> Check. 
The sooner we can get rid of the custom MSVC scripts, the better,\n> because now we'll be on the hook to maintain *three* build systems.\n\nAgreed. I think the only \"major\" missing thing is the windows resource file\ngeneration stuff, which is mostly done in one of the \"later\" commits. Also\nneed to test a few more of the optional dependencies (ICU, gettext, ...) on\nwindows (I did test zlib, lz4, zstd). And of course get a bit of wider\nexposure than \"just me and CI\".\n\n\n> > I'm less clear on whether we should convert macos / freebsd to meson at this\n> > point?\n> \n> We certainly could debug/polish the meson stuff just on linux and windows\n> for now, but is there a reason to wait on the others?\n\nNo - freebsd and macos have worked in CI for a long time. I was wondering\nwhether we want more coverage for autoconf in CI, but thinking about it\nfurther, it's more important to have the meson coverage.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Sep 2022 11:21:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Wed, Sep 21, 2022 at 09:46:30AM -0700, Andres Freund wrote:\n> I think we should:\n> \n> - convert windows to build with ninja - it builds faster, runs all tests,\n> parallelizes tests. That means that msbuild based builds don't have coverage\n> via CI / cfbot, but we don't currently have the resources to test both.\n\n+1\n\nIf multiple Windows (or other) tasks are going to exist, I think they\nshould have separate \"ci-os-only\" conditions, like windows-msvc,\nwindows-ninja, ...
It should be possible to run only one.\n\n\n", "msg_date": "Wed, 21 Sep 2022 13:22:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-21 09:46:30 -0700, Andres Freund wrote:\n> After that I am planning to split the \"ci\" commit so that it converts a few of\n> the CI tasks to use meson, without adding all the other platforms I added for\n> development. I think that's important to get in soon, given that it'll\n> probably take a bit until the buildfarm grows meson coverage and because it\n> provides cfbot coverage which seems important for now as well.\n> \n> I think we should:\n> \n> - convert windows to build with ninja - it builds faster, runs all tests,\n> parallelizes tests. That means that msbuild based builds don't have coverage\n> via CI / cfbot, but we don't currently have the resources to test both.\n\nI was working on that and hit an issue that took me a while to resolve: Once I\ntested only the \"main\" meson commit plus CI the windows task was running out\nof memory. There was an outage of the CI provider at the same time, so I first\nblamed it on that. But it turns out to be \"legitimately\" high memory usage\nrelated to debug symbols - the only reason CI didn't show that before was that\nit's incidentally fixed as an indirect consequence of using precompiled\nheaders, in a later commit. Argh. 
It can also be fixed by the option required\nto use ccache at some point, so I'll do that for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Sep 2022 16:10:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-21 09:46:30 -0700, Andres Freund wrote:\n> I'm planning to commit this today, unless somebody wants to argue against\n> that.\n\nAnd done!\n\nChanges:\n- fixed a few typos (thanks Thomas)\n- less duplication in the CI tasks\n- removed an incomplete implementation of the target for abbrevs.txt - do we\n even want to have that?\n- plenty hand wringing on my part\n\n\nI also rebased my meson git tree, which still has plenty additional test\nplatforms (netbsd, openbsd, debian sid, fedora rawhide, centos 8, centos 7,\nopensuse tumbleweed), but without the autoconf versions of those targets. I\nalso added a commit that translates most of the CompilerWarnings task to\nmeson. 
Still need to add a headerscheck / cpluspluscheck target.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Sep 2022 22:57:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "\nOn 2022-09-22 Th 01:57, Andres Freund wrote:\n> Hi,\n>\n> On 2022-09-21 09:46:30 -0700, Andres Freund wrote:\n>> I'm planning to commit this today, unless somebody wants to argue against\n>> that.\n> And done!\n>\n> Changes:\n> - fixed a few typos (thanks Thomas)\n> - less duplication in the CI tasks\n> - removed an incomplete implementation of the target for abbrevs.txt - do we\n> even want to have that?\n> - plenty hand wringing on my part\n>\n>\n> I also rebased my meson git tree, which still has plenty additional test\n> platforms (netbsd, openbsd, debian sid, fedora rawhide, centos 8, centos 7,\n> opensuse tumbleweed), but without the autoconf versions of those targets. I\n> also added a commit that translates most of the CompilerWarnings task to\n> meson. Still need to add a headerscheck / cpluspluscheck target.\n>\n\nGreat. Now I'll start on buildfarm support. 
Given my current\ncommitments, this will take me a while, but I hope to have a working\nclient by about the beginning of November.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Sep 2022 04:29:15 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-21 09:46:30 -0700, Andres Freund wrote:\n>> I'm planning to commit this today, unless somebody wants to argue against\n>> that.\n\n> And done!\n\nYay!\n\nInitial reports from the cfbot are mostly promising, but there are a few\npatches where all the meson builds fail while all the autoconf ones pass,\nso there's something for you to look at. So far CF entries 3464, 3733,\n3771, 3808 look that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 10:49:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 2022-Sep-22, Tom Lane wrote:\n\n> Initial reports from the cfbot are mostly promising, but there are a few\n> patches where all the meson builds fail while all the autoconf ones pass,\n> so there's something for you to look at. 
So far CF entries 3464, 3733,\n> 3771, 3808 look that way.\n\nHmm, but those patches add files, which means they're now outdated: they\nneed to add these files to the respective meson.build file.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)\n\n\n", "msg_date": "Thu, 22 Sep 2022 16:56:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Sep-22, Tom Lane wrote:\n>> Initial reports from the cfbot are mostly promising, but there are a few\n>> patches where all the meson builds fail while all the autoconf ones pass,\n>> so there's something for you to look at. So far CF entries 3464, 3733,\n>> 3771, 3808 look that way.\n\n> Hmm, but those patches add files, which means they're now outdated: they\n> need to add these files to the respective meson.build file.\n\nAh, right, the joys of maintaining multiple build systems. I wonder\nif there's any way to avoid that by scraping file lists from one\ngroup to the other. We got a little spoiled perhaps by the MSVC\nscripts managing to do that in most cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 11:04:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 16:56:57 +0200, Alvaro Herrera wrote:\n> On 2022-Sep-22, Tom Lane wrote:\n> > Initial reports from the cfbot are mostly promising, but there are a few\n> > patches where all the meson builds fail while all the autoconf ones pass,\n> > so there's something for you to look at. 
So far CF entries 3464, 3733,\n> > 3771, 3808 look that way.\n> \n> Hmm, but those patches add files, which means they're now outdated: they\n> need to add these files to the respective meson.build file.\n\nYea, I looked through all of these and they all need a simple addition of\na file to be built or installed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 08:05:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 04:29:15 -0400, Andrew Dunstan wrote:\n> Great. Now I'll start on buildfarm support. Given my current\n> commitments, this will take me a while, but I hope to have a working\n> client by about the beginning of November.\n\nGreat! Let me know if there's something I can do to help.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 08:05:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 2022-Sep-22, Tom Lane wrote:\n\n> Ah, right, the joys of maintaining multiple build systems. I wonder\n> if there's any way to avoid that by scraping file lists from one\n> group to the other.\n\nOr maybe we could have a file common to both, say OBJS, which both\nscrape in their own way. That could be easier than one scraping the\nother.\n\n> We got a little spoiled perhaps by the MSVC scripts managing to do\n> that in most cases.\n\nRight ...
and it was so annoying in the cases it couldn't.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n", "msg_date": "Thu, 22 Sep 2022 17:21:54 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "I gave the meson build system a try, and it seems to work nicely. It\ndidn't take long at all to adapt my workflow.\n\nA few notes from my experience:\n\n* I'm using an Ubuntu-based distribution, and the version of meson that apt\ninstalled was not new enough for Postgres. I ended up cloning meson [0]\nand using the newest tag. This is no big deal.\n\n* The installed binaries were unable to locate libraries like libpq. I\nended up setting the extra_lib_dirs option to the directory where these\nlibraries were installed to fix this. This one is probably worth\ninvestigating further.\n\n* meson really doesn't like it when there are things leftover from\nconfigure/make. Whenever I switch from make to meson, I have to run 'make\nmaintainer-clean'.\n\nOtherwise, all of my usual build options, ccache, etc. are working just\nlike before. Nice work!\n\n[0] https://github.com/mesonbuild/meson\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Sep 2022 13:05:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Thu, Sep 22, 2022 at 1:05 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Otherwise, all of my usual build options, ccache, etc. are working just\n> like before. Nice work!\n\n+1\n\nIs it generally recommended that individual hackers mostly switch over\nto Meson for their day to day work soon? 
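[Editorial note] The first-time setup workflow Nathan describes above can be sketched roughly as follows. This is an illustration, not from the thread: the paths and option values are hypothetical, and each command is only echoed rather than executed, since a standalone example cannot assume meson or a postgres checkout is present.

```shell
# Sketch of switching a working tree from configure/make to meson.
# All paths are hypothetical; "run" echoes each command instead of
# executing it, so this stays runnable without meson installed.
run() { echo "+ $*"; }

run pip3 install --user meson ninja   # if the distro's packaged meson is too old
run make maintainer-clean             # clear leftovers from a configure/make build
run meson setup build --prefix=/opt/pgsql -Dextra_lib_dirs=/opt/deps/lib
run ninja -C build install
```

The -Dextra_lib_dirs value mirrors the workaround mentioned above for installed binaries failing to locate libraries such as libpq.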
I'm guessing that this\nquestion doesn't really have a clear answer yet, but thought I'd ask,\njust in case.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Sep 2022 13:21:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 13:05:33 -0700, Nathan Bossart wrote:\n> I gave the meson build system a try, and it seems to work nicely. It\n> didn't take long at all to adapt my workflow.\n> \n> A few notes from my experience:\n> \n> * I'm using an Ubuntu-based distribution, and the version of meson that apt\n> installed was not new enough for Postgres. I ended up cloning meson [0]\n> and using the newest tag. This is no big deal.\n\nI assume this is 20.04 LTS? If so, we're missing it by one version of meson\ncurrently. There's unfortunately a few features that'd be a bit painful to not\nhave.\n\n\n> * The installed binaries were unable to locate libraries like libpq. I\n> ended up setting the extra_lib_dirs option to the directory where these\n> libraries were installed to fix this. This one is probably worth\n> investigating further.\n\nI think that should be \"fixed\" in a later commit in the meson tree - any\nchance you could try that?\n\nhttps://github.com/anarazel/postgres/tree/meson\n\n\n> * meson really doesn't like it when there are things leftover from\n> configure/make. Whenever I switch from make to meson, I have to run 'make\n> maintainer-clean'.\n\nYes. I recommend building out-of-tree with autoconf as well.\n\n\n> Otherwise, all of my usual build options, ccache, etc. are working just\n> like before. 
Nice work!\n\nCool!\n\nThanks for testing,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 13:28:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 13:21:28 -0700, Peter Geoghegan wrote:\n> Is it generally recommended that individual hackers mostly switch over\n> to Meson for their day to day work soon? I'm guessing that this\n> question doesn't really have a clear answer yet, but thought I'd ask,\n> just in case.\n\nIt'll probably depend on who you ask ;)\n\nI'm likely the most biased person on this, but for me the reliable incremental\nbuilds and the readability of the test output are big enough wins that the\nanswer is pretty clear... Doesn't hurt that running all tests is faster too.\n\n\nThe currently existing limitations are imo mostly around making it usable for\nproduction, particularly on windows.\n\n\ntime to run all tests (cassert, -Og), in a fully built tree:\n\nmake:\n\ntime make -j48 -s -Otarget check-world\nreal\t2m44.206s\nuser\t6m29.121s\nsys\t1m54.069s\n\ntime make -j48 -s -Otarget check-world PROVE_FLAGS='-j4'\nreal\t1m1.577s\nuser\t7m32.579s\nsys\t2m17.767s\n\n\nmeson:\n\ntime meson test\nreal\t0m42.178s\nuser\t7m8.533s\nsys\t2m17.711s\n\n\nFWIW, I just rebased my older patch to cache and copy initdb during the\ntests. 
The %user saved is impressive enough to pursue it again...\n\ntime make -j48 -s -Otarget check-world PROVE_FLAGS='-j4'\nreal\t0m52.655s\nuser\t2m19.504s\nsys\t1m26.264s\n\ntime meson test:\n\nreal\t0m36.370s\nuser\t2m14.748s\nsys\t1m36.741s\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 14:50:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Thu, Sep 22, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm likely the most biased person on this, but for me the reliable incremental\n> builds and the readability of the test output are big enough wins that the\n> answer is pretty clear... Doesn't hurt that running all tests is faster too.\n\nIt's nice that things are much more discoverable now. For example, if\nyou want to run some random test on its own then you just...do it in\nthe obvious, discoverable way. It took me about 2 minutes to figure\nout how to do that, without reading any documentation.\n\nOTOH doing the same thing with the old autoconf-based build system\nrequires the user to know the exact magical incantation for Postgres\ntests. You just have to know to run the 2 or 3 tests that are\nundocumented (or poorly documented) dependencies first. 
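[Editorial note] The "obvious, discoverable way" of running tests being praised here looks roughly like the sketch below. The build directory and suite name are hypothetical, and the commands are echoed rather than executed, since no configured postgres tree is available in a standalone example.

```shell
# Discoverable meson test workflow, sketched. "run" echoes instead of
# executing, since this example has no configured postgres build tree.
run() { echo "+ $*"; }

run meson test -C build --list              # enumerate every registered test
run meson test -C build regress/regress     # run a single suite by name (hypothetical id)
run meson test -C build --num-processes 8   # whole suite, parallelized
```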
That seems\nlike an enormous usability improvement, especially for those of us\nthat haven't been working on Postgres for years.\n\n> time to run all tests (cassert, -Og), in a fully built tree:\n\n> time make -j48 -s -Otarget check-world PROVE_FLAGS='-j4'\n> real 1m1.577s\n> user 7m32.579s\n> sys 2m17.767s\n\n> time meson test\n> real 0m42.178s\n> user 7m8.533s\n> sys 2m17.711s\n\nSold!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Sep 2022 15:04:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Thu, Sep 22, 2022 at 01:28:09PM -0700, Andres Freund wrote:\n> On 2022-09-22 13:05:33 -0700, Nathan Bossart wrote:\n>> * I'm using an Ubuntu-based distribution, and the version of meson that apt\n>> installed was not new enough for Postgres. I ended up cloning meson [0]\n>> and using the newest tag. This is no big deal.\n> \n> I assume this is 20.04 LTS? If so, we're missing it by one version of meson\n> currently. There's unfortunately a few features that'd be a bit painful to not\n> have.\n\nYes. I imagine I'll upgrade to 22.04 LTS soon, which appears to provide a\nnew enough version of meson.\n\n>> * The installed binaries were unable to locate libraries like libpq. I\n>> ended up setting the extra_lib_dirs option to the directory where these\n>> libraries were installed to fix this. This one is probably worth\n>> investigating further.\n> \n> I think that should be \"fixed\" in a later commit in the meson tree - any\n> chance you could try that?\n\nYup, after cherry-picking 9bc60bc, this is fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Sep 2022 15:37:29 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "... 
btw, shouldn't the CF entry [1] get closed now?\nThe cfbot's unhappy that the last patch no longer applies.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/39/3395/\n\n\n", "msg_date": "Sat, 24 Sep 2022 13:52:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-24 13:52:29 -0400, Tom Lane wrote:\n> ... btw, shouldn't the CF entry [1] get closed now?\n\nUnfortunately not - there's quite a few followup patches that haven't been\n[fully] reviewed and thus not applied yet.\n\n\n> The cfbot's unhappy that the last patch no longer applies.\n\nRebased patches attached.\n\nSeveral patches here are quite trivial (e.g. 0003) or just part of the series\nto increase cfbot/ci coverage (0002).\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 24 Sep 2022 11:09:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Thu, Sep 22, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> meson:\n>\n> time meson test\n> real 0m42.178s\n> user 7m8.533s\n> sys 2m17.711s\n\nI find that a more or less comparable test run on my workstation\n(which has a Ryzen 9 5950X) takes just over 38 seconds. 
I think that\nthe improvement is far more pronounced on that machine compared to a\nmuch older workstation.\n\nOne more question about this, that wasn't covered by the Wiki page: is\nthere some equivalent to \"make installcheck\" with meson builds?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Sep 2022 16:56:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-24 16:56:20 -0700, Peter Geoghegan wrote:\n> On Thu, Sep 22, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> > meson:\n> >\n> > time meson test\n> > real 0m42.178s\n> > user 7m8.533s\n> > sys 2m17.711s\n> \n> I find that a more or less comparable test run on my workstation\n> (which has a Ryzen 9 5950X) takes just over 38 seconds. I think that\n> the improvement is far more pronounced on that machine compared to a\n> much older workstation.\n\nCool!\n\n\n> One more question about this, that wasn't covered by the Wiki page: is\n> there some equivalent to \"make installcheck\" with meson builds?\n\nNot yet. Nothing impossible, just not done yet. Partially because installcheck\nis so poorly defined (run against an already running server for pg_regress vs\nusing \"system\" installed binaries for tap tests).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 24 Sep 2022 17:13:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Sat, Sep 24, 2022 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > One more question about this, that wasn't covered by the Wiki page: is\n> > there some equivalent to \"make installcheck\" with meson builds?\n>\n> Not yet. Nothing impossible, just not done yet. 
Partially because installcheck\n> is so poorly defined (run against an already running server for pg_regress vs\n> using \"system\" installed binaries for tap tests).\n\nGot it. I can work around that by just having an old autoconf-based\nvpath build directory. I'll need to do this when I run Valgrind.\n\nMy workaround would be annoying if I needed to run \"installcheck\"\nanywhere near as frequently as I run \"make check-world\". But that\nisn't the case. meson delivers a significant improvement in the metric\nthat really matters to me, so I can't really complain.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Sep 2022 17:33:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-24 17:33:49 -0700, Peter Geoghegan wrote:\n> On Sat, Sep 24, 2022 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > One more question about this, that wasn't covered by the Wiki page: is\n> > > there some equivalent to \"make installcheck\" with meson builds?\n> >\n> > Not yet. Nothing impossible, just not done yet. Partially because installcheck\n> > is so poorly defined (run against an already running server for pg_regress vs\n> > using \"system\" installed binaries for tap tests).\n> \n> Got it. I can work around that by just having an old autoconf-based\n> vpath build directory. I'll need to do this when I run Valgrind.\n> \n> My workaround would be annoying if I needed to run \"installcheck\"\n> anywhere near as frequently as I run \"make check-world\". But that\n> isn't the case. meson delivers a significant improvement in the metric\n> that really matters to me, so I can't really complain.\n\nMy gut feeling is that we should use this opportunity to split 'installcheck'\ninto two. 
\"test a running server\" and \"test installed binaries\".\n\nI think the cleanest way to do this with meson would be to utilize meson\ntests's \"setups\".\n$ meson test --setup 'running-server'\nwould run all [selected] tests compatible with running against a running\nserver. And\n$ meson test --setup 'installed'\nwould test installed binaries.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 25 Sep 2022 12:38:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-25 12:38:06 -0700, Andres Freund wrote:\n> On 2022-09-24 17:33:49 -0700, Peter Geoghegan wrote:\n> > On Sat, Sep 24, 2022 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > One more question about this, that wasn't covered by the Wiki page: is\n> > > > there some equivalent to \"make installcheck\" with meson builds?\n> > >\n> > > Not yet. Nothing impossible, just not done yet. Partially because installcheck\n> > > is so poorly defined (run against an already running server for pg_regress vs\n> > > using \"system\" installed binaries for tap tests).\n> >\n> > Got it. I can work around that by just having an old autoconf-based\n> > vpath build directory. I'll need to do this when I run Valgrind.\n> >\n> > My workaround would be annoying if I needed to run \"installcheck\"\n> > anywhere near as frequently as I run \"make check-world\". But that\n> > isn't the case. meson delivers a significant improvement in the metric\n> > that really matters to me, so I can't really complain.\n>\n> My gut feeling is that we should use this opportunity to split 'installcheck'\n> into two. \"test a running server\" and \"test installed binaries\".\n>\n> I think the cleanest way to do this with meson would be to utilize meson\n> tests's \"setups\".\n> $ meson test --setup 'running-server'\n> would run all [selected] tests compatible with running against a running\n> server. 
And\n> $ meson test --setup 'installed'\n> would test installed binaries.\n\nI've added support for a 'running' setup in the attached rebased series. A\nbunch of preparatory changes were necessary - as it turns out we've introduced\na bunch of role name conflicts between tests.\n\nI had to set it up so that the main regress and isolationtester tests don't\nrun in parallel with other tests, because they don't pass reliably due to\nchecking pg_locks etc.\n\nI also found a problem independent of meson [1] / installcheck.\n\n\n# run all tests that support running against existing server\nmeson test --setup running\n\n# run just the main pg_regress tests against existing server\nmeson test --setup running main/regress-running\n\n\nI've also worked some more on cleaning up other patches in the series,\nparticularly the precompiled headers one (interesting because it reduces\nwindows compile times noticably).\n\n\nPeter, would this address your use case?\n\n\nI think it'd make sense to add a few toplevel targets to run tests in certain\nways, but I've not done that here.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de", "msg_date": "Sun, 25 Sep 2022 17:38:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nI tried to use meson and ninja and they are really efficient.\nBut when I tried to specify \"c_args\", it did not take effect.\n\nAttached my steps:\n[In the HEAD (7d708093b7)]\n$ meson setup build --prefix /home/wangw/install/parallel_apply/ -Dcassert=true -Dtap_tests=enabled -Dicu=enabled -Dc_args='-fno-omit-frame-pointer'\n\nLog:\n......\n Compiler Flags\n CPP FLAGS : -D_GNU_SOURCE\n C FLAGS, functional: -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n C FLAGS, warnings : -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute 
-Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation\n......\n\nAfter I made the below modifications, the specified \"c_args\" took effect.\n```\n@@ -2439,6 +2439,10 @@ endif\n\n # Set up compiler / linker arguments to be used everywhere, individual targets\n # can add further args directly, or indirectly via dependencies\n+\n+tmp_c_args = get_option('c_args')\n+cflags += tmp_c_args\n+\n add_project_arguments(cflags, language: ['c'])\n add_project_arguments(cppflags, language: ['c'])\n add_project_arguments(cflags_warn, language: ['c'])\n```\n\nI might have missed something. Just to confirm, is there another way to add CFLAGS?\n\nRegards,\nWang wei\n\n\n", "msg_date": "Mon, 26 Sep 2022 06:24:42 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 06:24:42 +0000, wangw.fnst@fujitsu.com wrote:\n> I tried to use meson and ninja and they are really efficient.\n> But when I tried to specify \"c_args\", it did not take effect.\n\nThey should take effect, but won't be shown in the summary section\ncurrently. That currently only shows the flags chosen by the configure step,\nrather than user specified ones.\n\n\n> After I made the below modifications, the specified \"c_args\" took effect.\n> ```\n> @@ -2439,6 +2439,10 @@ endif\n> \n> # Set up compiler / linker arguments to be used everywhere, individual\n> targets\n> # can add further args directly, or indirectly via dependencies\n> +\n> +tmp_c_args = get_option('c_args')\n> +cflags += tmp_c_args\n> +\n> add_project_arguments(cflags, language: ['c'])\n> add_project_arguments(cppflags, language: ['c'])\n> add_project_arguments(cflags_warn, language: ['c'])\n> ```\n\nThat'll likely end up with the same cflags added multiple times.
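[Editorial note] To illustrate the point that user-supplied c_args do reach the compiler even though the configure summary omits them, here is a hedged sketch. The flag choice is just an example, and the commands are echoed rather than executed, since meson, ninja, and a postgres checkout are assumed but not present here.

```shell
# Confirming that -Dc_args flags are applied. "run" echoes instead of
# executing, because this standalone example has no build environment.
run() { echo "+ $*"; }

run meson setup build -Dcassert=true -Dc_args=-fno-omit-frame-pointer
run ninja -C build -v     # verbose compile lines should show the extra flag
run grep -m1 -- -fno-omit-frame-pointer build/compile_commands.json
```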
You should\nsee them when building with ninja -v.\n\nHow about adding c_args to the summary, in a separate line? I think that'd\nclarify what's happening?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 25 Sep 2022 23:46:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Mon, Sep 26, 2022 at 14:47 PM Andres Freund <andres@anarazel.de> wrote:\n> Hi,\n> \n> On 2022-09-26 06:24:42 +0000, wangw.fnst@fujitsu.com wrote:\n> > I tried to use meson and ninja and they are really efficient.\n> > But when I tried to specify \"c_args\", it did not take effect.\n> \n> They should take effect, but won't be shown in the summary section\n> currently. That currently only shows the flags chosen by the configure step,\n> rather than user specified ones.\n> \n> \n> > After I made the below modifications, the specified \"c_args\" took effect.\n> > ```\n> > @@ -2439,6 +2439,10 @@ endif\n> >\n> > # Set up compiler / linker arguments to be used everywhere, individual\n> targets\n> > # can add further args directly, or indirectly via dependencies\n> > +\n> > +tmp_c_args = get_option('c_args')\n> > +cflags += tmp_c_args\n> > +\n> > add_project_arguments(cflags, language: ['c'])\n> > add_project_arguments(cppflags, language: ['c'])\n> > add_project_arguments(cflags_warn, language: ['c'])\n> > ```\n> \n> That'll likely end up with the same cflags added multiple times. You should\n> see them when building with ninja -v.\n\nThanks for sharing the information.\nI saw the user specified CFLAG when building with `ninja -v`.\n\nBut, after installing PG with command `ninja -v install`, pg_config does not\nshow the user specified CFLAG. Should we print this information there?\n\n> How about adding c_args to the summary, in a separate line? 
I think that'd\n> clarify what's happening?\n\nYes, I think it might be better.\n\nRegards,\nWang wei\n\n\n", "msg_date": "Mon, 26 Sep 2022 07:25:17 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Wed, Sep 21, 2022 at 7:11 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> I added installation instructions for meson for a bunch of platforms, but\n\nA couple more things for the wiki:\n\n1) /opt/homebrew/ seems to be an \"Apple silicon\" path? Either way it\ndoesn't exist on this machine. I was able to get a working build with\n\n/usr/local/Homebrew/Library/Homebrew/os/mac/pkgconfig\n\n(My homebrew install doesn't seem to have anything relevant for\nextra_include_dirs or extra_lib_dirs.)\n\n2) Also, \"ninja -v install\" has the same line count as \"ninja install\" --\nare there versions that do something different?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Sep 2022 15:18:29 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" },
 { "msg_contents": "On 2022-Sep-25, Andres Freund wrote:\n\n> From 3eb0ca196084da314d94d1e51c7b775012a4773c Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 21 Sep 2022 11:03:07 -0700\n> Subject: [PATCH v16 04/16] meson: Add windows resource files\n\n> diff --git a/src/backend/jit/llvm/meson.build b/src/backend/jit/llvm/meson.build\n> index de2e624ab58..5fb63768358 100644\n> --- a/src/backend/jit/llvm/meson.build\n> +++ b/src/backend/jit/llvm/meson.build\n> @@ -20,6 +20,12 @@ llvmjit_sources += files(\n> 'llvmjit_expr.c',\n> )\n> \n> +if host_system == 'windows'\n> + llvmjit_sources += rc_lib_gen.process(win32ver_rc, extra_args: [\n> + '--NAME', 'llvmjit',\n> + '--FILEDESC', 'llvmjit - JIT using LLVM',])\n> +endif\n\nThis is tediously imperative. Isn't there a more declarative way to\nhave it?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n", "msg_date": "Mon, 26 Sep 2022 10:41:01 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 24.09.22 20:09, Andres Freund wrote:\n> On 2022-09-24 13:52:29 -0400, Tom Lane wrote:\n>> ... 
btw, shouldn't the CF entry [1] get closed now?\n> \n> Unfortunately not - there's quite a few followup patches that haven't been\n> [fully] reviewed and thus not applied yet.\n\nHere is some review of the remaining ones (might not match exactly what \nyou attached, I was working off your branch):\n\n\n9f789350a7a7 meson: ci: wip: move compilerwarnings task to meson\n\nThis sounds reasonable to me in principle, but I haven't followed the\ncirrus stuff too closely, and it doesn't say why it's \"wip\". Perhaps\nothers could take a closer look.\n\n\nccf20a68f874 meson: ci: Add additional CI coverage\n\nIIUC, this is just for testing your branch, not meant for master?\n\n\n02d84c21b227 meson: prereq: win: remove date from version number in \nwin32ver.rc\n\ndo it\n\n\n5c42b3e7812e meson: WIP: Add some of the windows resource files\n\nWhat is the thinking on this now? What does this change over the\ncurrent state?\n\n\n9bc60bccfd10 meson: Add support for relative rpaths, fixing tests on \nMacOS w/ SIP\n\nI suggest a separate thread and/or CF entry for this. There have been\nvarious attempts to deal with SIP before, with varying results. This\nis not part of the meson transition as such.\n\n\n9f5be26c1215 meson: Add docs for building with meson\n\nI do like the overall layout of this.\n\nThe \"Supported Platforms\" section should be moved back to near the end\nof the chapter. I don't see a reason to move it forward, at least\nnone that is related to the meson issue.\n\nThe changes to the \"Getting the Source\" section are also not\nappropriate for this patch.\n\nIn the section \"Building and Installation with meson\":\n\n- Remove the \"git clone\" stuff.\n\n- The \"Running tests\" section should be moved to Chapter 33. Regression \nTests.\n\nSome copy-editing will probably be suitable, but I haven't focused on\nthat yet.\n\n\n9c00d355d0e9 meson: Add PGXS compatibility\n\nThis looks like a reasonable direction to me. How complete is it? 
It\nsays it works for some extensions but not others. How do we define\nthe target line here?\n\n\n3fd5e13dcad3 meson: Add postgresql-extension.pc for building extension \nlibraries\n\nSeparate thread for this as well. This is good and important, but we\nmust also add it to the make build.\n\n\n4b5bfa1c19aa meson: Add LLVM bitcode emission\n\nstill in progress\n\n\neb40f6e53104 meson: Add support for building with precompiled headers\n\nAny reason not to enable this by default? The benefits on non-Windows\nappear to be less dramatic, but they are not zero. Might be better to\nenable it consistently so that for example any breakage is easier\ncaught.\n\n\n377bfdea6042 meson: Add xmllint/xsltproc wrapper script to handle \ndependencies automatically\n\nIs this part of the initial transition, required for correctness, or\nis it an optional optimization? Could use more explanation. Maybe\nmove to separate thread also?\n\n\n\n", "msg_date": "Mon, 26 Sep 2022 15:01:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 10:41:01 +0200, Alvaro Herrera wrote:\n> On 2022-Sep-25, Andres Freund wrote:\n> \n> > From 3eb0ca196084da314d94d1e51c7b775012a4773c Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Wed, 21 Sep 2022 11:03:07 -0700\n> > Subject: [PATCH v16 04/16] meson: Add windows resource files\n> \n> > diff --git a/src/backend/jit/llvm/meson.build b/src/backend/jit/llvm/meson.build\n> > index de2e624ab58..5fb63768358 100644\n> > --- a/src/backend/jit/llvm/meson.build\n> > +++ b/src/backend/jit/llvm/meson.build\n> > @@ -20,6 +20,12 @@ llvmjit_sources += files(\n> > 'llvmjit_expr.c',\n> > )\n> > \n> > +if host_system == 'windows'\n> > + llvmjit_sources += rc_lib_gen.process(win32ver_rc, extra_args: [\n> > + '--NAME', 'llvmjit',\n> > + '--FILEDESC', 'llvmjit - JIT using LLVM',])\n> > +endif\n> 
\n> This is tediously imperative. Isn't there a more declarative way to\n> have it?\n\nI tried to come up with something better, without success. I think it's\nacceptable, even if not great.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 08:41:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 15:01:56 +0200, Peter Eisentraut wrote:\n> Here is some review of the remaining ones (might not match exactly what you\n> attached, I was working off your branch):\n\nThanks, and makes sense.\n\n\n> 9f789350a7a7 meson: ci: wip: move compilerwarnings task to meson\n>\n> This sounds reasonable to me in principle, but I haven't followed the\n> cirrus stuff too closely, and it doesn't say why it's \"wip\". Perhaps\n> others could take a closer look.\n\nIt's mostly WIP because it doesn't yet convert all the checks, specifically\nheaderscheck/cpluspluscheck isn't converted yet.\n\n\n> ccf20a68f874 meson: ci: Add additional CI coverage\n>\n> IIUC, this is just for testing your branch, not meant for master?\n\nYes. I think we might want to add openbsd / netbsd at some point, but that'll\nbe a separate thread. Until then it just catches a bunch of mistakes more\neasily.\n\n\n> 02d84c21b227 meson: prereq: win: remove date from version number in\n> win32ver.rc\n>\n> do it\n\nThe newest version has evolved a bit, changing Project.pm as well.\n\n\n> 5c42b3e7812e meson: WIP: Add some of the windows resource files\n>\n> What is the thinking on this now? What does this change over the\n> current state?\n\nThe newest commit has a lot more rc files added and has this summary:\n\n meson: Add windows resource files\n\n The generated resource files aren't exactly the same ones as the old\n buildsystems generate. 
Previously \"InternalName\" and \"OriginalFileName\" were\n mostly wrong / not set (despite being required), but that was hard to fix in\n at least the make build. Additionally, the meson build falls back to a\n \"auto-generated\" description when not set, and doesn't set it in a few cases -\n unlikely that anybody looks at these descriptions in detail.\n\nThe only thing missing rc files is the various ecpg libraries. The issue is\nthat we shouldn't add resource files to static libraries, so we need to split\nthe definitions. I'll go and do that next.\n\n\n> 9bc60bccfd10 meson: Add support for relative rpaths, fixing tests on MacOS\n> w/ SIP\n>\n> I suggest a separate thread and/or CF entry for this. There have been\n> various attempts to deal with SIP before, with varying results. This\n> is not part of the meson transition as such.\n\nI think I might need to split this one more time. We don't add all the rpaths\nwe add with autoconf before this commit, even not on macOS, which is not\ngreat... Nor do we have a --disable-rpath equivalent yet - I suspect we'll\nneed that.\n\nhttps://postgr.es/m/20220922223729.GA721620%40nathanxps13\n\n\n> 9f5be26c1215 meson: Add docs for building with meson\n>\n> I do like the overall layout of this.\n>\n> The \"Supported Platforms\" section should be moved back to near the end\n> of the chapter. I don't see a reason to move it forward, at least\n> none that is related to the meson issue.\n>\n> The changes to the \"Getting the Source\" section are also not\n> appropriate for this patch.\n\nWe don't really support building from a tarball with meson yet (you'd need to\nconfigure, maintainer-clean, configure meson), so it does make some sense...\n\n\n> 9c00d355d0e9 meson: Add PGXS compatibility\n>\n> This looks like a reasonable direction to me. How complete is it? It\n> 
says it works for some extensions but not others. How do we define\n> the target line here?\n\nYea, those are good questions.\n\n\n> How complete is it?\n\nIt's a bit hard to know. I think the most important stuff is there. But\nthere's no clear \"API\" around pgxs. E.g. we don't (yet?) have an exactly\nequivalent definition of 'host', because that's very config.guess specific.\n\nThere's lots of shortcuts - e.g. with meson we don't need an equivalent to\nPGAC_CHECK_STRIP, so we need to make up something for Makefile.global.\n\nNoah suggested using $(error something), but that only works if $variable is\nonly used in recursively expanded variables - the errors end up confusing.\n\n\n> It says it works for some extensions but not others.\n\nI think that's slightly outdated - IIRC it was about pgbouncer, but after a\nfix the remaining failure is shared between autoconf and meson builds.\n\n\n> 3fd5e13dcad3 meson: Add postgresql-extension.pc for building extension\n> libraries\n>\n> Separate thread for this as well. This is good and important, but we\n> must also add it to the make build.\n\nMakes sense.\n\n\n> eb40f6e53104 meson: Add support for building with precompiled headers\n>\n> Any reason not to enable this by default? The benefits on non-Windows\n> appear to be less dramatic, but they are not zero. Might be better to\n> enable it consistently so that for example any breakage is easier\n> caught.\n\nThere's no real reason not to - the wins are small on linux, so introducing\nPCH didn't seem necessary. 
I'm also not sure how well pch works across random\ncompiler versions - it's so crucial on windows that it seems like a more well\nworn path there.\n\nlinux, gcc 12:\n\nb_pch=false:\nreal\t0m16.233s\nuser\t6m40.375s\nsys\t0m48.953s\n\nb_pch=true:\nreal\t0m15.983s\nuser\t6m20.357s\nsys\t0m49.967s\n\n\nfreebsd VM, clang:\n\nb_pch=false:\n\nreal\t0m23.035s\nuser\t3m11.241s\nsys\t0m31.171s\n\nb_pch=true:\n\nreal\t0m21.643s\nuser\t2m57.143s\nsys\t0m30.246s\n\n\nSomewhat confirming my suspicions from above, gcc11 ICEs on freebsd with PCH,\nand gcc12 fails with an unhelpful:\n<command-line>: sorry, unimplemented: PCH allocation failure\n\n\n\n> 377bfdea6042 meson: Add xmllint/xsltproc wrapper script to handle\n> dependencies automatically\n>\n> Is this part of the initial transition, required for correctness, or\n> is it an optional optimization? Could use more explanation. Maybe\n> move to separate thread also?\n\nIt's required for correctness - in master we don't rebuild the docs when a\nfile changes. meson and ninja don't support wildcards (for good reasons - it\nmakes scanning for changes much more expensive). By using \"compiler\" generated\ndependencies this is solved in a reliably and notationally cheap way. So I\ndon't think it makes sense to split this one off into a separate thread?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 09:35:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 15:18:29 +0700, John Naylor wrote:\n> On Wed, Sep 21, 2022 at 7:11 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > I added installation instructions for meson for a bunch of platforms, but\n> \n> A couple more things for the wiki:\n> \n> 1) /opt/homebrew/ seems to be an \"Apple silicon\" path?\n\nYea, it's /usr/local on x86-64, based on what was required to make macos CI\nwork. 
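The /usr/local versus /opt/homebrew split mentioned here is just Homebrew's default prefix per CPU architecture. A wiki script that wants to cover both kinds of Mac could derive the prefix along these lines (hypothetical helper, not part of the patchset):

```python
import platform

def default_homebrew_prefix(machine=None):
    """Hypothetical helper: Homebrew's default prefix by CPU architecture.

    Homebrew installs under /opt/homebrew on Apple silicon (arm64) and
    under /usr/local on Intel Macs.
    """
    machine = machine or platform.machine()
    return "/opt/homebrew" if machine in ("arm64", "aarch64") else "/usr/local"

print(default_homebrew_prefix("arm64"))   # Apple silicon
print(default_homebrew_prefix("x86_64"))  # Intel Mac
```

A custom `HOMEBREW_PREFIX` can still override this, so a robust script would prefer `brew --prefix` when the `brew` binary is available.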
I updated the wiki page, half-blindly - it'd be nice if you could\nconfirm that that works?\n\n\nI needed something like the below to get (nearly?) all dependencies working:\n\n brewpath=\"/usr/local\"\n PKG_CONFIG_PATH=\"${brewpath}/lib/pkgconfig:${PKG_CONFIG_PATH}\"\n\n for pkg in icu4c krb5 openldap openssl zstd ; do\n pkgpath=\"${brewpath}/opt/${pkg}\"\n PKG_CONFIG_PATH=\"${pkgpath}/lib/pkgconfig:${PKG_CONFIG_PATH}\"\n PATH=\"${pkgpath}/bin:${pkgpath}/sbin:$PATH\"\n done\n\n export PKG_CONFIG_PATH PATH\n\n meson setup \\\n --buildtype=debug \\\n -Dextra_include_dirs=${brewpath}/include \\\n -Dextra_lib_dirs=${brewpath}/lib \\\n -Dcassert=true \\\n -Dssl=openssl -Duuid=e2fs -Ddtrace=auto \\\n -DPG_TEST_EXTRA=\"$PG_TEST_EXTRA\" \\\n build\n\nthe per-package stuff is needed because some libraries aren't installed into\n/usr/local (or /opt/homebrew), but only in a subdirectory within that.\n\n\n> Either way it doesn't exist on this machine. I was able to get a working\n> build with\n> \n> /usr/local/Homebrew/Library/Homebrew/os/mac/pkgconfig\n\nHm - what did you need this path for - I don't think that should be needed.\n\n\n\n> (My homebrew install doesn't seem to have anything relevant for\n> extra_include_dirs or extra_lib_dirs.)\n\nI think libintl.h / libintl.dylib are only in there. With meson that's the\nonly need for extra_include_dirs / extra_lib_dirs I found on arm apple.\n\n\n> 2) Also, \"ninja -v install\" has the same line count as \"ninja install\" --\n> are there versions that do something different?\n\nYea, that looks like a copy-and-pasto (not even from me :)). 
Think I fixed it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 12:06:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 09:35:16 -0700, Andres Freund wrote:\n> > 9c00d355d0e9 meson: Add PGXS compatibility\n> >\n> > This looks like a reasonable direction to me. How complete is it? It\n> > says it works for some extensions but not others. How do we define\n> > the target line here?\n>\n> Yea, those are good questions.\n>\n>\n> > How complete is it?\n>\n> It's a bit hard to know. I think the most important stuff is there. But\n> there's no clear \"API\" around pgxs. E.g. we don't (yet?) have an exactly\n> equivalent definition of 'host', because that's very config.guess specific.\n>\n> There's lots of shortcuts - e.g. with meson we don't need an equivalent to\n> PGAC_CHECK_STRIP, so we need to make up something for Makefile.global.\n>\n> Noah suggested using $(error something), but that only works if $variable is\n> only used in recursively expanded variables - the errors end up confusing.\n\nLooking through a few of the not-nicely-replaced things, I think we can\nsimplify at least some away:\n\n- RANLIB: most platforms use AROPT = crs, making ranlib unnecessary. {free,\n net, open}bsd don't currently, but all support it from what I know\n\n- with_gnu_ld: this is only used on solaris, to set export_dynamic = -Wl,-E\n when using a gnu ld. How about moving this to configure instead, and just\n checking if -Wl,-E links?\n\n- FLEXFLAGS: As a configure input this is afaict unused and undocumented - and\n it's not clear why it'd be useful? Not that an empty replacement is a\n meaningful effort\n\n\nI'm not sure what to do about:\n- autodepend - I'm inclined to set it to true when using a gcc like\n compiler. I think extension authors won't be happy if suddenly their\n extensions don't rebuild reliably anymore. 
An --enable-depend like\n setting doesn't make sense for meson, so we don't have anything to source it\n from.\n- {LDAP,UUID,ICU}_{LIBS,CFLAGS} - might some extension need them?\n\n\nFor some others I think it's ok to not have replacement. Would be good for\nsomebody to check my thinking though:\n\n- LIBOBJS, PG_CRC32C_OBJS, TAS: Not needed because we don't build\n the server / PLs with the generated makefile\n- ZIC: only needed to build tzdata as part of server build\n- MSGFMT et al: translation doesn't appear to be supported by pgxs, correct?\n- XMLLINT et al: docs don't seem to be supported by pgxs\n- GENHTML et al: supporting coverage for pgxs-in-meson build doesn't seem worth it\n- WINDRES: I don't think extensions are bothering to generate rc files on windows\n\n\nI'll include an updated pgxs-compat patch in the next post of the series (in a\nfew hours).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 12:44:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Sun, Sep 25, 2022 at 5:38 PM Andres Freund <andres@anarazel.de> wrote:\n> # run just the main pg_regress tests against existing server\n> meson test --setup running main/regress-running\n\n> Peter, would this address your use case?\n\nI tried out your v16 patchset, which seems to mostly work as I'd hoped\nit would. Some feedback:\n\n* I gather that \"running\" as it appears in commands like \"meson test\n--setup running\" refers to a particular setup named \"running\", that\nyou invented as part of creating a meson-ish substitute for\ninstallcheck. 
Can \"running\" be renamed to something that makes it\nobvious that it's a Postgres thing, and not a generic meson thing?\n\nMaybe some kind of consistent naming convention would work best here.\nThis setup could be \"pg_against_running_server\", or something along\nthose lines.\n\n* It would be nice if failed tests told me exactly which \"diffs\" file\nI needed to look at, without my having to look for the message through\nthe meson log (or running with -v). Is this possible?\n\nTo be fair I should probably just be running -v when I run tests\nagainst an existing running server, anyway -- so maybe I'm asking for\nthe wrong thing. It would at least be slightly better if I always got\nto see a path to a .diffs file for failed tests, even without -v. But\nit's just a \"nice to have\" thing -- it's not worth going out of your\nway to make it work like that\n\n* Just FYI, there are considerations about the libpq that we link to\nhere (actually this isn't particularly related to the new installcheck\nwork, but thought I'd mention it in passing).\n\nI'm using Debian Unstable here. Like Nathan, I found that I needed a\ncustom -Dextra_lib_dirs just so that binaries would link against the\ninstallation's own libpq, rather than the system libpq. This is\nimportant-ish because linking to the wrong libpq means that I get an\nerror about not being able to connect via trust authentication to a\nunix socket from the directory /var/run/postgresql -- confusion over\nwhere to look for sockets visibly breaks many things.\n\nThe workaround that I have is fine, but this still seems like\nsomething that should \"just work\". I believe that there is a pending\npatch for this already, so enough said here.\n\n> I think it'd make sense to add a few toplevel targets to run tests in certain\n> ways, but I've not done that here.\n\nI usually run \"installcheck-world\" (not just installcheck) when I want\nto do a generic smoke test with Vaglrind. 
Sometimes that will fail\nrelatively early for very silly reasons, for example because I don't\nhave exactly the expected plan in some harmless way (I try to account\nfor this by running Valgrind in a shellscript that tries to match\n\"make check\", but that doesn't always work). It is nice that I won't\nhave to worry about such minor issues derailing everything for a long\nrunning and unsupervised Valgrind test. (Maybe I could have worked\naround this before now, but I guess I never tried.)\n\nMore generally, I think that we should be encouraging users to think\nof the tests as something that you can run in any order. People should\nbe encouraged to think in terms of the meson abstractions, such as\ntest setups.\n\nI found that \"meson test --setup running --list\" will show me what\ntests I'll be running if I want to do \"installcheck\" style testing,\nwithout having to run any tests at all -- another small but important\nimprovement. This seems worth drawing attention to on the meson Wiki\npage as a non-obvious improvement over \"installcheck\". I might even\nfind it useful to hard-code some of these tests in a shellscript that\nruns only a subset of \"--setup running\" tests that happen to be\ninteresting for Valgrind testing right now.\n\nBTW the meson wiki page encourages users to think of \"meson setup\"\nand \"meson configure\" as equivalent to autoconf configure. I get why\nyou explained it like that, but that confused me at first. What I\nsince figured out (which will be absurdly obvious to you) is that you\nreally need to decouple the generic from the specific -- very much\nunlike autoconf. I found it useful to separate stuff that I know will\nnever change for a given build directory (such as the prefix install\npath) from other things that are variable configuration settings\n(things like the optimization level used by GCC). 
I now have a\nscripted way of running \"meson setup\" for the former stuff (which is\ngeneric), and a scripted way of running \"meson configure\" for the\nlatter set of stuff (with variations for \"standard\" release and debug\nbuilds, building Valgrind, etc).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 26 Sep 2022 12:47:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 12:47:14 -0700, Peter Geoghegan wrote:\n> On Sun, Sep 25, 2022 at 5:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > # run just the main pg_regress tests against existing server\n> > meson test --setup running main/regress-running\n> \n> > Peter, would this address your use case?\n> \n> I tried out your v16 patchset, which seems to mostly work as I'd hoped\n> it would.\n\nThanks & cool.\n\n\n> Some feedback:\n> * I gather that \"running\" as it appears in commands like \"meson test\n> --setup running\" refers to a particular setup named \"running\", that\n> you invented as part of creating a meson-ish substitute for\n> installcheck. Can \"running\" be renamed to something that makes it\n> obvious that it's a Postgres thing, and not a generic meson thing?\n\nYes. The only caveat is that it makes lines longer, because it's included in\nthe printed test line (there's no real requirement to have the test suite and\nthe setup named the same, but it seems confusing not to)\n\n\n> Maybe some kind of consistent naming convention would work best here.\n> This setup could be \"pg_against_running_server\", or something along\n> those lines.\n\n\n\n> * It would be nice if failed tests told me exactly which \"diffs\" file\n> I needed to look at, without my having to look for the message through\n> the meson log (or running with -v). Is this possible?\n\nYou can use --print-errorlogs to print the log output iff a test fails. 
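Until the test wrappers print such a pointer themselves, a small out-of-tree helper can approximate what Peter asks for by scanning the build tree for the regression.diffs files that pg_regress leaves behind when expected and actual output differ (hypothetical sketch; the exact testrun directory layout is an assumption):

```python
import os

def find_regression_diffs(build_dir):
    """Hypothetical helper: list pg_regress *.diffs files under a build dir.

    pg_regress writes a regression.diffs file next to a suite's output
    when a test fails, so collecting those gives the files to inspect.
    """
    hits = []
    for root, _dirs, files in os.walk(build_dir):
        hits.extend(
            os.path.join(root, f) for f in files if f.endswith(".diffs")
        )
    return sorted(hits)
```

For example, `find_regression_diffs("build/testrun")` after a failed `meson test` run would print candidate files to open, without grepping the meson log.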
It's a\nbit painful that some tests have very verbose output :(. I don't really see a\nway that meson can help us here, this is pretty much on \"our\" end.\n\n\n> BTW the meson wiki page encourages users to think of \"meson setup\"\n> and \"meson configure\" as equivalent to autoconf configure. I get why\n> you explained it like that, but that confused me at first. What I\n> since figured out (which will be absurdly obvious to you) is that you\n> really need to decouple the generic from the specific -- very much\n> unlike autoconf. I found it useful to separate stuff that I know will\n> never change for a given build directory (such as the prefix install\n> path) from other things that are variable configuration settings\n> (things like the optimization level used by GCC). I now have a\n> scripted way of running \"meson setup\" for the former stuff (which is\n> generic), and a scripted way of running \"meson configure\" for the\n> latter set of stuff (with variations for \"standard\" release and debug\n> builds, building Valgrind, etc).\n\nHm. I'm not entirely sure what you mean here. The only thing that you can't\nchange in an existing build-dir with meson configure is the compiler.\n\nI personally have different types of build dirs set up in parallel\n(e.g. assert, optimize, assert-32, assert-w64). I'll occasionally\nenable/disable\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 13:27:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Mon, Sep 26, 2022 at 1:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > Some feedback:\n> > * I gather that \"running\" as it appears in commands like \"meson test\n> > --setup running\" refers to a particular setup named \"running\", that\n> > you invented as part of creating a meson-ish substitute for\n> > installcheck. Can \"running\" be renamed to something that makes it\n
Can \"running\" be renamed to something that makes it\n> > obvious that it's a Postgres thing, and not a generic meson thing?\n>\n> Yes. The only caveat is that it makes lines longer, because it's included in\n> the printed test line (there's no real requirement to have the test suite and\n> the setup named the same,b ut it seems confusing not to)\n\nProbably doesn't have to be too long. And I'm not sure of the details.\nJust a small thing from my point of view.\n\n> > * It would be nice if failed tests told me exactly which \"diffs\" file\n> > I needed to look at, without my having to look for the message through\n> > the meson log (or running with -v). Is this possible?\n>\n> You can use --print-errorlogs to print the log output iff a test fails. It's a\n> bit painful that some tests have very verbose output :(. I don't really see a\n> way that meson can help us here, this is pretty much on \"our\" end.\n\nMakes sense. Thanks.\n\n> Hm. I'm not entirely sure what you mean here. The only thing that you can't\n> change in a existing build-dir with meson configure is the compiler.\n\nI do understand that it doesn't particularly matter to meson itself.\nThe point I was making was one about how I personally find it\nconvenient to set those things that I know will never change in\npractice (because they're fundamentally things that I know that I\nwon't ever need to change) during \"meson setup\", while doing\neverything else using \"meson configure\". I like to automate everything\nusing shell scripts. I will very occasionally have to run \"meson\nsetup\" via a zsh function anyway, so why not couple that process with\nthe process of setting \"immutable for this build directory\" settings?\n\nWith autoconf, I will run one of various zsh functions that run\nconfigure in some specific way -- there are various \"build types\",\nsuch as debug, release, and Valgrind. 
But with meson it makes sense to\nsplit it in two -- have a generic zsh function for generic once-off\nbuild directory setup (including even the mkdir), that also sets\ngeneric, \"immutable\" settings, and a specialized zsh function that\nchanges things in a way that is particular to that kind of build (like\nwhether asserts are enabled, optimization level, and so on).\n\n> I personally have different types of build dirs set up in parallel\n> (e.g. assert, optimize, assert-32, assert-w64). I'll occasionally\n> enable/disable\n\nI know that other experienced hackers do it that way. I have found\nthat ccache works well enough that I don't feel the need for multiple\nbuild directories per branch.\n\nPerhaps I've assumed more than I should about my approach being\nbroadly representative. It might ultimately be easier to just have\nmultiple build directories per branch/source directory -- one per\n\"build type\" per branch. That has the advantage of not requiring each\n\"build type\" zsh function to remember to reset anything that might\nhave been set by one of its sibling zsh functions for some other build\ntype (there is no need to \"reset the setting to its default\"). That\napproach is more like scripting autoconf/configure would be, in that\nyou basically never change any settings for a given build directory in\npractice (you maybe invent a new kind of build type instead, or you\nupdate the definition of an existing standard build type based on a\nnew requirement for that build type).\n\nIt's really handy that meson lets you quickly change one setting\nagainst an existing build directory. I'm slightly worried that that\nwill allow me to shoot myself in the foot, though. Perhaps I'll change\nsome exotic setting in an ad-hoc way, and then forget to unset it\nafterwards, leading to (say) a mysterious performance degradation for\nwhat is supposed to be one of my known standard build types. 
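One hypothetical guard against that kind of forgotten ad-hoc change is to keep a baseline snapshot of each standard build type's options and diff the live values against it; turning `meson configure` output into the dicts is left out of this sketch:

```python
def option_drift(baseline, current):
    """Hypothetical guard: report options that differ from a build type's baseline.

    baseline/current map option names to values, e.g. as parsed from
    `meson configure` output for a given build directory.
    """
    drift = {}
    for name, expected in baseline.items():
        actual = current.get(name, expected)
        if actual != expected:
            drift[name] = (expected, actual)
    return drift

baseline = {"buildtype": "debug", "cassert": "true", "b_pch": "false"}
current = {"buildtype": "debug", "cassert": "true", "b_pch": "true"}
print(option_drift(baseline, current))  # only b_pch differs here
```

Run from the build-type zsh function, a non-empty result would flag a setting that was changed ad hoc and never reset.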
There is\nno risk of that with my autoconf/configure workflow, because I'll\nrerun the relevant configure zsh function before long anyway, making\nit impossible for me to accidentally keep something that I never meant\nto keep.\n\nI like being able to throw everything away and quickly rebuild \"from\nscratch\" (in reality rebuild using ccache and a cache for configure)\ndue to superstition/defensive paranoia/learned helplessness. This has\nalways worked well enough because ccache works fairly well. I'm not\nsure how useful that kind of mindset will be with meson just yet, and\nif I'm just thinking about it in the wrong way, so forgive me for\nrambling like this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 26 Sep 2022 14:15:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\n> I'll include an updated pgxs-compat patch in the next post of the series (in a\n> few hours).\n\nAttached is version 17. Other changes:\n\n- Added a new patch to fix the display of user provided CFLAGS in the meson\n summary and to add them to pg_config output, addressing the report by Wang Wei\n at [1]. Planning to apply this soon. We can fine tune this later, the\n current situation is confusing.\n\n- Added a new patch to set rpath to $libdir. I'd hoped we'd quickly go for\n relative rpaths (so the install is relocatable, making it trivial to use\n tmp_install), but I now think that might take a bit longer. I'm planning to\n push this soon, as multiple people have been hit by this.\n\n- Added a patch to separately define static / shared libraries for the ecpg\n runtime libraries. 
This is a prerequisite patch for adding windows resource\n files, since the resource files should only be defined for shared libraries.\n\n- The patch adding windows resource files is, I think, now complete, including\n adding resource files to the ecpg libs.\n\n- A few more improvements for the PGXS compatibility. The pieces depending on\n the changes discussed below are left in a separate patch for now, as I'm not\n sure they'll survive as-is... There's a few more things needed, but I think\n it's getting closer.\n\n- Made some of the ecpg libraries use precompiled headers as well (gaining\n maybe 500ms in a debug build)\n\n One interesting question for this patch is where to add a note about when it\n is sensible for a target to use a precompiled header, and when not. At the\n moment meson generates a separate precompiled header \"object\" for each\n target (as the flags can differ), so for a full build precompiled headers\n can only be a win when a target has > 1 source file.\n\n- Tweaked the patch adding tests against running instances a bit, mainly by\n using a different suite name for the 'running' tests (otherwise meson test\n --suite something does bad things) and removing the 'tmp-install', 'running'\n suites. Haven't yet renamed 'running', as had been suggested by Peter\n Geoghegan, his suggestion seemed a bit long.\n\n- Reordered the series so that the patches that might take a while (including\n being moved into a separate CF entry & thread) are last. I left the CI\n patches at the start, because they make it easier to test parts of the\n patchseries (e.g. [2] just checks up to 0004)\n\n\nOn 2022-09-26 12:44:35 -0700, Andres Freund wrote:\n> Looking through a few of the not-nicely-replaced things, I think we can\n> simplify at least some away:\n>\n> - RANLIB: most platforms use AROPT = crs, making ranlib unnecessary. 
{free,\n> net, open}bsd don't currently, but all support it from what I know\n\nDone in the attached 0009.\n\n\n> - with_gnu_ld: this is only used on solaris, to set export_dynamic = -Wl,-E\n> when using a gnu ld. How about moving this to configure instead, and just\n> checking if -Wl,-E links?\n\nDone in 0011. Together with 0010, which gets rid of the need for $(LD) on aix\nby using $(CC) -r instead, this allows us to get rid of libtool.m4\n\nRight now 0011 adds a PGAC_PROG_CC_LD_EXPORT_DYNAMIC() which tests for\n-Wl,-E. It's used on solaris only. Based on its return value\nSOLARIS_EXPORT_DYNAMIC is set in Makefile.global.\n\nI'm not convinced by the precise structure I came up with in 0011, I'd welcome\nfeedback. But the idea as a whole seems promising to me.\n\n\n0008 unifies CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C. We really don't need two\ndifferent variables for this - on the makefile level we really don't need to\ncare.\n\n\nI'm wondering about moving the bulk of the pgxs compatibility stuff from\nsrc/meson.build to src/makefiles/meson.build. Will look a bit uglier ('../'\nreferences), but src/meson.build feels a bit too prominent somehow.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/OS3PR01MB62751847BC9CD2DB7B29AC129E529%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n[2] https://cirrus-ci.com/build/6353192312111104", "msg_date": "Mon, 26 Sep 2022 18:19:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Tue, Sep 27, 2022 at 2:19 PM Andres Freund <andres@anarazel.de> wrote:\n> Subject: [PATCH v17 15/23] windows: Set UMDF_USING_NTSTATUS globally, include ntstatus.h\n\nNo Windows expertise here, but this looks reasonable. 
I originally\ntried to contain UMDF_USING_NTSTATUS to small translation units for\nfear of unintended consequences, but PCH requires you not to mess with\nmacros that affect the compilation of a header as seen by different\ntranslation units, which is an incompatible goal. If this is passing\non MSVC and MingGW then +1 from me.\n\nYou mentioned WIN32_NO_STATUS in the commit message -- a mistake?\nDigging out my old emails/notes... that's another way to be allowed to\ninclude both ntstatus.h and windows.h/etc in the same translation\nunit, but not the one we're using. I assume it's worse because you\nhave to define it and then undefine it, which sounds more antithetical\nto the PCH dream. Admittedly UMDF_USING_NTSTATUS -- from the\n\"User-Mode Driver Framework\" -- is a weird thing to be getting tangled\nup with because we aren't writing a driver here, but it seems to be a\nwell known and widely used alternative, and is nicer because you only\nhave to define it.\n\n> Subject: [PATCH v17 16/23] windows: adjust FD_SETSIZE via commandline define\n\nRight, we have to fix that across translation units for the same\nreason. But why as -D and not in win32_port.h? I followed the\ndiscussion from 9acda73118 to try to find the answer to that and saw\nthat Michael wanted to put it there, but wanted to minimise the blast\nradius at the time:\n\nhttps://www.postgresql.org/message-id/20190826054000.GE7005%40paquier.xyz\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:29:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-27 17:29:27 +1300, Thomas Munro wrote:\n> On Tue, Sep 27, 2022 at 2:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > Subject: [PATCH v17 15/23] windows: Set UMDF_USING_NTSTATUS globally, include ntstatus.h\n> \n> No Windows expertise here, but this looks reasonable. 
I originally\n> tried to contain UMDF_USING_NTSTATUS to small translation units for\n> fear of unintended consequences, but PCH requires you not to mess with\n> macros that affect the compilation of a header as seen by different\n> translation units, which is an incompatible goal. If this is passing\n> on MSVC and MingGW then +1 from me.\n\nYes, passes both.\n\n\n> You mentioned WIN32_NO_STATUS in the commit message -- a mistake?\n\nArgh. An earlier iteration. Works on mingw, but making it work with msvc\nrequired a lot more modifications IIRC.\n\n\n> Digging out my old emails/notes... that's another way to be allowed to\n> include both ntstatus.h and windows.h/etc in the same translation\n> unit, but not the one we're using. I assume it's worse because you\n> have to define it and then undefine it, which sounds more antithetical\n> to the PCH dream. Admittedly UMDF_USING_NTSTATUS -- from the\n> \"User-Mode Driver Framework\" -- is a weird thing to be getting tangled\n> up with because we aren't writing a driver here, but it seems to be a\n> well known and widely used alternative, and is nicer because you only\n> have to define it.\n\nIt's definitely weird. But it appears to be widely used...\n\n\n> > Subject: [PATCH v17 16/23] windows: adjust FD_SETSIZE via commandline define\n> \n> Right, we have to fix that across translation units for the same\n> reason. But why as -D and not in win32_port.h? I followed the\n> discussion from 9acda73118 to try to find the answer to that and saw\n> that Michael wanted to put it there, but wanted to minimise the blast\n> radius at the time:\n> \n> https://www.postgresql.org/message-id/20190826054000.GE7005%40paquier.xyz\n\nI guess a similar consideration. I was a bit worried about the references to\nFD_SETSIZE in src/backend/port/win32/socket.c. 
Multi kB on-stack arrays in\npostmaster seem like they could cause issues.\n\nISTM we really ought to move away from stuff using FD_SETSIZE on windows...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 21:48:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Tue, Sep 27, 2022 at 2:06 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-09-26 15:18:29 +0700, John Naylor wrote:\n> > Either way it doesn't exist on this machine. I was able to get a working\n> > build with\n> >\n> > /usr/local/Homebrew/Library/Homebrew/os/mac/pkgconfig\n>\n> Hm - what did you need this path for - I don't think that should be\nneeded.\n\nI just cargo-culted the pattern from Arm (before I figured out it was Arm)\nand used the \"find\" command to look for the directories by name. I tried\nagain without specifying any of the three directory flags, and I can run\nthe tests getting:\n\nOk: 233\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 2\nTimeout: 0\n\n...which is fine for me since I don't do much development on MacOS nowadays.\n\n> > 1) /opt/homebrew/ seems to be an \"Apple silicon\" path?\n>\n> Yea, it's /usr/local on x86-64, based on what was required to make macos\nCI\n> work. 
I updated the wiki page, half-blindly - it'd be nice if you could\n> confirm that that works?\n\nNot sure if you intended for me to try the full script in your last\nresponse or just what's in the wiki page, but for the latter (on commit\nbed0927aeb0c6), it fails at\n\n[1656/2199] Linking target src/bin/psql/psql\nFAILED: src/bin/psql/psql\nclang -o src/bin/psql/psql\nsrc/bin/psql/psql.p/meson-generated_.._psqlscanslash.c.o\nsrc/bin/psql/psql.p/meson-generated_.._sql_help.c.o\nsrc/bin/psql/psql.p/command.c.o src/bin/psql/psql.p/common.c.o\nsrc/bin/psql/psql.p/copy.c.o src/bin/psql/psql.p/crosstabview.c.o\nsrc/bin/psql/psql.p/describe.c.o src/bin/psql/psql.p/help.c.o\nsrc/bin/psql/psql.p/input.c.o src/bin/psql/psql.p/large_obj.c.o\nsrc/bin/psql/psql.p/mainloop.c.o src/bin/psql/psql.p/prompt.c.o\nsrc/bin/psql/psql.p/startup.c.o src/bin/psql/psql.p/stringutils.c.o\nsrc/bin/psql/psql.p/tab-complete.c.o src/bin/psql/psql.p/variables.c.o\n-L/usr/local/opt/readline/lib -L/usr/local/opt/gettext/lib\n-L/usr/local/opt/zlib/lib -L/usr/local/opt/openssl/lib\n-I/usr/local/opt/readline/include -I/usr/local/opt/gettext/include\n-I/usr/local/opt/zlib/include -I/usr/local/opt/openssl/include\n-Wl,-dead_strip_dylibs -Wl,-headerpad_max_install_names\n-Wl,-undefined,error -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk\n-Wl,-rpath,@loader_path/../../interfaces/libpq -Wl,-rpath,/usr/local/lib\n-Wl,-rpath,/usr/local/Cellar/zstd/1.5.2/lib src/fe_utils/libpgfeutils.a\nsrc/common/libpgcommon.a src/common/libpgcommon_ryu.a\nsrc/common/libpgcommon_config_info.a src/port/libpgport.a\nsrc/port/libpgport_crc.a src/interfaces/libpq/libpq.5.dylib -lm\n/usr/local/lib/libintl.dylib -ledit -lz\n/usr/local/Cellar/zstd/1.5.2/lib/libzstd.dylib -lz -lz -lz\nUndefined symbols for architecture x86_64:\n \"_rl_completion_suppress_quote\", referenced from:\n _psql_completion in tab-complete.c.o\n _quote_file_name in tab-complete.c.o\n _complete_from_files in tab-complete.c.o\n 
\"_rl_filename_dequoting_function\", referenced from:\n _initialize_readline in tab-complete.c.o\n \"_rl_filename_quote_characters\", referenced from:\n _initialize_readline in tab-complete.c.o\n \"_rl_filename_quoting_function\", referenced from:\n _initialize_readline in tab-complete.c.o\nld: symbol(s) not found for architecture x86_64\nclang: error: linker command failed with exit code 1 (use -v to see\ninvocation)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Sep 2022 14:41:02 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 26.09.22 18:35, Andres Freund wrote:\n>> 9f5be26c1215 meson: Add docs for building with meson\n>>\n>> I do like the overall layout of this.\n>>\n>> The \"Supported Platforms\" section should be moved back to near the end\n>> of the chapter. I don't see a reason to move it forward, at least\n>> none that is related to the meson issue.\n>>\n>> The changes to the \"Getting the Source\" section are also not\n>> appropriate for this patch.\n> We don't really support building from a tarball with meson yet (you'd need to\n> confiure, maintainer-clean, configure meson), so it does make some sense...\n\nOkay, interesting point. I suggest that we write it as if that were \nfixed, and for the time being insert a <note> (or similar) explaining \nthe above restriction.
Otherwise we'll have to rewrite it again later.\n\n\n", "msg_date": "Tue, 27 Sep 2022 15:16:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Tue, Sep 27, 2022 at 2:41 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Tue, Sep 27, 2022 at 2:06 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-09-26 15:18:29 +0700, John Naylor wrote:\n\n> > Yea, it's /usr/local on x86-64, based on what was required to make\nmacos CI\n> > work. I updated the wiki page, half-blindly - it'd be nice if you could\n> > confirm that that works?\n>\n> Not sure if you intended for me to try the full script in your last\nresponse or just what's in the wiki page, but for the latter (on commit\nbed0927aeb0c6), it fails at\n>\n> [1656/2199] Linking target src/bin/psql/psql\n> FAILED: src/bin/psql/psql\n\nPer off-list discussion with Andres, the linking failure was caused by some\nenv variables set in my bash profile for the sake of Homebrew. After\nremoving those, the recipe in the wiki worked fine.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Sep 2022 11:53:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 27.09.22 03:19, Andres Freund wrote:\n> Attaches is version 17. Other changes:\n[23 attachments]\n\nHow shall we proceed here? The more progress we make, the more patches \nappear. ;-)\n\nMaybe close this commitfest entry now, and start new threads for each \nsubsequent topic.\n\n\n", "msg_date": "Fri, 30 Sep 2022 23:51:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 23:51:04 +0200, Peter Eisentraut wrote:\n> On 27.09.22 03:19, Andres Freund wrote:\n> > Attaches is version 17. Other changes:\n> [23 attachments]\n>\n> How shall we proceed here? The more progress we make, the more patches\n> appear. 
;-)\n\n> Maybe close this commitfest entry now, and start new threads for each\n> subsequent topic.\n\nI was thinking of starting at least the following threads / CF entries once a\nfew of the remaining things are resolved:\n\n- PGXS compatibility, plus related autoconf simplification patches\n- pkg-config files for building postgres extensions\n- relative rpath support\n\nI am a bit on the fence about whether it's worth doing so for:\n\n- installcheck equivalent\n- precompiled header support (would like it soon, because it reduces\n compile-test times substantially)\n\nand, for no really tangible reason, considered\n- resource files generation\n- docs\n- docs dependency\n\nto be part of this thread / CF entry.\n\nNow that I think about it more, I am inclined to also push the docs changes to\na new thread, just for wider visibility.\n\nI think it'd be ok to commit the docs dependency fix soon, without a separate\nthread, as it really fixes a \"build bug\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 15:35:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Mon, Sep 26, 2022 at 06:19:51PM -0700, Andres Freund wrote:\n> From 680ff3f7b4da1dbf21d0c7cd87af9bb5ee8b230c Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 21 Sep 2022 20:36:36 -0700\n> Subject: [PATCH v17 01/23] meson: ci: wip: move compilerwarnings task to meson\n\nThis patch isn't finished, but this part looks like a rebase conflict:\n\n- make -s -j${BUILD_JOBS} clean\n+ make -s -j${BUILD_JOBS} world-bin\n\nAlso, you wrote \"rm -fr build\" between building for gcc and clang, but\nsince they run in an \"always\" block, it'd be better to use separate\ndirs, to allow seeing logs for all the (failed) tasks, in case the\nlast one succeeds.\n\nOn Mon, Sep 26, 2022 at 06:19:51PM -0700, Andres Freund wrote:\n> From 
6025cb80d65fd7a8414241931df9f003a292052f Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Sun, 25 Sep 2022 12:07:29 -0700\n> Subject: [PATCH v17 16/23] windows: adjust FD_SETSIZE via commandline\n> define\n\n> +++ b/src/bin/pgbench/meson.build\n> @@ -27,6 +27,8 @@ pgbench = executable('pgbench',\n> pgbench_sources,\n> dependencies: [frontend_code, libpq, thread_dep],\n> include_directories: include_directories('.'),\n> + c_pch: pch_postgres_fe_h,\n> + c_args: host_system == 'windows' ? ['-DFD_SETSIZE=1024'] : [],\n> kwargs: default_bin_args,\n> )\n\nThis puts PCH into the preparatory commit.\n\nAlso, src/tools/msvc/Mkvcbuild.pm seems to use spaces rather than tabs.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 2 Oct 2022 12:25:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-10-02 12:25:20 -0500, Justin Pryzby wrote:\n> On Mon, Sep 26, 2022 at 06:19:51PM -0700, Andres Freund wrote:\n> > From 680ff3f7b4da1dbf21d0c7cd87af9bb5ee8b230c Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Wed, 21 Sep 2022 20:36:36 -0700\n> > Subject: [PATCH v17 01/23] meson: ci: wip: move compilerwarnings task to meson\n>\n> This patch isn't finished, but this part looks like a rebase conflict:\n>\n> - make -s -j${BUILD_JOBS} clean\n> + make -s -j${BUILD_JOBS} world-bin\n\nI don't think so - it's the first task building with autoconf / in-tree. I\nhowever shouldn't have added ccache to CC, that was an accident. I think I'll\nconvert it to a vpath build, seems cleaner.\n\n\n> Also, you wrote \"rm -fr build\" between building for gcc and clang, but\n> since they run in an \"always\" block, it'd be better to use separate\n> dirs, to allow seeing logs for the the all (failed) tasks, in case the\n> last one succeeds.\n\nHm, when are logs important for CompilerWarnings? I don't think we even\ncollect any? 
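As an aside, the separate-directories idea under discussion can be made concrete with a small sketch — purely illustrative, with invented function and directory names; this is not the actual CI configuration:

```shell
# Illustrative only: one build tree per compiler, so a failed gcc task's
# artifacts and logs aren't clobbered by a later clang task, and no
# "rm -fr build" is needed in between.
build_with_compiler () {
    local cc=$1 dir=build-$1
    # CC is read by meson only at setup time, so each directory pins its
    # compiler thereafter.
    CC=$cc meson setup "$dir" . && ninja -C "$dir"
}
# e.g.: build_with_compiler gcc; build_with_compiler clang
```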
Using a different builddir for the \"sibling\" tests (i.e. the two\ngcc and the two clang tests) would increase the times a bit because we'd\nregenerate the bison files etc.\n\nI guess it'll look a bit cleaner to use a build-gcc and a build-clang, just to\nget rid of the irregularity of needing that rm -rf.\n\n\n> On Mon, Sep 26, 2022 at 06:19:51PM -0700, Andres Freund wrote:\n> > From 6025cb80d65fd7a8414241931df9f003a292052f Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Sun, 25 Sep 2022 12:07:29 -0700\n> > Subject: [PATCH v17 16/23] windows: adjust FD_SETSIZE via commandline\n> > define\n>\n> > +++ b/src/bin/pgbench/meson.build\n> > @@ -27,6 +27,8 @@ pgbench = executable('pgbench',\n> > pgbench_sources,\n> > dependencies: [frontend_code, libpq, thread_dep],\n> > include_directories: include_directories('.'),\n> > + c_pch: pch_postgres_fe_h,\n> > + c_args: host_system == 'windows' ? ['-DFD_SETSIZE=1024'] : [],\n> > kwargs: default_bin_args,\n> > )\n>\n> This puts PCH into the preparatory commit.\n> Also, src/tools/msvc/Mkvcbuild.pm seems to use spaces rather than tabs.\n\nOops, will fix.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Oct 2022 11:05:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Sun, Oct 02, 2022 at 11:05:30AM -0700, Andres Freund wrote:\n> > Also, you wrote \"rm -fr build\" between building for gcc and clang, but\n> > since they run in an \"always\" block, it'd be better to use separate\n> > dirs, to allow seeing logs for the the all (failed) tasks, in case the\n> > last one succeeds.\n> \n> Hm, when are logs important for CompilerWarnings? I don't think we even\n> collect any? Using a different builddir for the \"sibling\" tests (i.e. 
the two\n> gcc and the two clang tests) would increase the times a bit because we'd\n> regenerate the bison files etc.\n> \n> I guess it'll look a bit cleaner to use a build-gcc and a build-clang, just to\n> get rid of the irregularity of needing that rm -rf.\n\nThe build logs are important when hacking on .cirrus.yml itself.\n\nYou're right that we don't normally save logs for CompilerWarnings; one or\nanother (unpublished) patch of mine adds that, and then also needed to change\nto use separate dirs in order to debug building while experimenting with your\npatch to use meson.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 2 Oct 2022 13:38:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Sun, Oct 02, 2022 at 01:38:37PM -0500, Justin Pryzby wrote:\n> On Sun, Oct 02, 2022 at 11:05:30AM -0700, Andres Freund wrote:\n> > > Also, you wrote \"rm -fr build\" between building for gcc and clang, but\n> > > since they run in an \"always\" block, it'd be better to use separate\n> > > dirs, to allow seeing logs for the the all (failed) tasks, in case the\n> > > last one succeeds.\n> > \n> > Hm, when are logs important for CompilerWarnings? I don't think we even\n> > collect any? Using a different builddir for the \"sibling\" tests (i.e. 
the two\n> > gcc and the two clang tests) would increase the times a bit because we'd\n> > regenerate the bison files etc.\n> > \n> > I guess it'll look a bit cleaner to use a build-gcc and a build-clang, just to\n> > get rid of the irregularity of needing that rm -rf.\n> \n> The build logs are important when hacking on .cirrus.yml itself.\n> \n> You're right that we don't normally save logs for CompilerWarnings; one or\n> another (unpublished) patch of mine adds that, and then also needed to change\n> to use separate dirs in order to debug building while experimenting with your\n> patch to use meson.\n\nFYI, this is what led me to make that suggestion.\n\nhttps://cirrus-ci.com/task/5920691940753408\n\nI had a patch laying around to change the \"compiler warnings\" task to\nuse debian \"testing\", which seems to have added some new flags in -Wall,\nwhich caused me to add (for now) some compiler flags like -Wno-error=...\n\nBut when I added them to the task's CFLAGS, it broke \"clang\" (which\ndoesn't support the warnings) in an obscure way[0], and no logs\navailable to show why.\n\n[0] Header \"uuid/uuid.h\" has symbol \"uuid_generate\" with dependency\nuuid: NO\n\nSo, I think it's worth reporting meson's build logs, even though no\ntests are run here.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 2 Oct 2022 19:19:35 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 26, 2022 at 6:02 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 24.09.22 20:09, Andres Freund wrote:\n> > On 2022-09-24 13:52:29 -0400, Tom Lane wrote:\n> >> ... 
btw, shouldn't the CF entry [1] get closed now?\n> >\n> > Unfortunately not - there's quite a few followup patches that haven't\n> been\n> > [fully] reviewed and thus not applied yet.\n>\n> Here is some review of the remaining ones (might not match exactly what\n> you attached, I was working off your branch):\n>\n>\n> 9f789350a7a7 meson: ci: wip: move compilerwarnings task to meson\n>\n> This sounds reasonable to me in principle, but I haven't followed the\n> cirrus stuff too closely, and it doesn't say why it's \"wip\". Perhaps\n> others could take a closer look.\n>\n>\n> ccf20a68f874 meson: ci: Add additional CI coverage\n>\n> IIUC, this is just for testing your branch, not meant for master?\n>\n>\n> 02d84c21b227 meson: prereq: win: remove date from version number in\n> win32ver.rc\n>\n> do it\n>\n>\n> 5c42b3e7812e meson: WIP: Add some of the windows resource files\n>\n> What is the thinking on this now? What does this change over the\n> current state?\n>\n>\n> 9bc60bccfd10 meson: Add support for relative rpaths, fixing tests on\n> MacOS w/ SIP\n>\n> I suggest a separate thread and/or CF entry for this. There have been\n> various attempts to deal with SIP before, with varying results. This\n> is not part of the meson transition as such.\n>\n>\n> 9f5be26c1215 meson: Add docs for building with meson\n>\n> I do like the overall layout of this.\n>\n> The \"Supported Platforms\" section should be moved back to near the end\n> of the chapter. I don't see a reason to move it forward, at least\n> none that is related to the meson issue.\n>\n\nAgreed that it's unrelated to meson. 
However, I think it's better to move\nit to the front as it's generally useful to know if your platform is\nsupported before you start performing the installation steps and get stuck\nsomewhere.\n\nDo you think I should submit that as a separate commit in the same\npatch-set or just move it out to a completely different patch submission?\n\n\n>\n> The changes to the \"Getting the Source\" section are also not\n> appropriate for this patch.\n>\n>\nGiven that many developers are now using Git for downloading the source\ncode, I think it makes sense to be in the Getting the source section. Also,\nmeson today doesn't cleanly build via the tarballs. Hence, I added it to\nthe section (and patchset).\n\nDo you think I should move this to a different patch?\n\n\n> In the section \"Building and Installation with meson\":\n>\n> - Remove the \"git clone\" stuff.\n\n\n> - The \"Running tests\" section should be moved to Chapter 33. Regression\n> Tests.\n>\n\nThe autoconf / make section also has a small section on how to run the\nregression tests. The \"Running tests\" section is meant to be the equivalent\nof that for meson (i.e. brief overview). I do intend to add a detailed\nsection to Chapter 33 with more info on how to interpret test results etc. 
This is good and important, but we\n> must also add it to the make build.\n>\n>\n> 4b5bfa1c19aa meson: Add LLVM bitcode emission\n>\n> still in progress\n>\n>\n> eb40f6e53104 meson: Add support for building with precompiled headers\n>\n> Any reason not to enable this by default? The benefits on non-Windows\n> appear to be less dramatic, but they are not zero. Might be better to\n> enable it consistently so that for example any breakage is easier\n> caught.\n>\n>\n> 377bfdea6042 meson: Add xmllint/xsltproc wrapper script to handle\n> dependencies automatically\n>\n> Is this part of the initial transition, required for correctness, or\n> is it an optional optimization? Could use more explanation. Maybe\n> move to separate thread also?", "msg_date": "Mon, 3 Oct 2022 00:39:14 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 15:35:26 -0700, Andres Freund wrote:\n> I was thinking of starting at least the following threads / CF entries once a\n> few of the remaining things are resolved:\n> \n> - PGXS compatibility, plus related autoconf simplification patches\n> - pkg-config files for building postgres extensions\n> - relative rpath support\n> \n> I am a bit on the fence about whether it's worth doing so for:\n> \n> - installcheck equivalent\n> - precompiled header support (would like it soon, because it reduces\n> compile-test times substantially)\n> \n> and, for no really tangible reason, considered\n> - resource files generation\n> - docs\n> - docs dependency\n> \n> to be part of this thread / CF entry.\n> \n> Now that I think about it more, I am inclined to also push the docs changes to\n> a new thread, just for wider visibility.\n> \n> I think it'd be ok to commit the docs dependency fix soon, without a separate\n> thread, as it really fixes a \"build bug\".\n\nI've not yet posted these different threads, but I've split up the meson tree\ninto subtrees corresponding to pretty much the above.\n\n\nThe meson tree now mainly merges those subtrees together. It still directly\ncontains the xml-tools dependency wrapper (to be merged soon) and the CI\nchanges (either later or never).\n\nI've attached a revised version of the xml-tools dependency wrapper (0001):\nCleanups, minor error handling improvements, and bit of comment polishing. I'd\nwelcome review. But as it fixes a build-dependency bug / FIXME, I'm planning\nto push it relatively soon otherwise.\n\n0002 fixes libpq's .pc file (static dependencies didn't show up anymore) and\nAIX compilation. 
AIX doesn't yet support link_whole (support was merged into\nmeson yesterday though). On the way it also improves comments and a bit of\ngeneric infrastructure. The price for now is that the static libpq is built\nseparately from the shared one, not reusing any objects. I felt that the\ncomplexity of reusing the objects isn't worth it for now.\n\nPeter, it'd be great if you could have a look at 0002.\n\n0003 mirrors the setup of libpq to the various ecpg libraries. This is a\nprerequisite to adding resource files.\n\n0004 adds the resource files\n\n\nI think after that we could close the CF entry (and create a bunch of followup\nentries, as discussed above). Although it somehow seems frivolous to start a\nseparate thread for \"installcheck equivalent\" :)\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 3 Oct 2022 20:25:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 03.10.22 09:39, samay sharma wrote:\n> 9f5be26c1215 meson: Add docs for building with meson\n> \n> I do like the overall layout of this.\n> \n> The \"Supported Platforms\" section should be moved back to near the end\n> of the chapter.  I don't see a reason to move it forward, at least\n> none that is related to the meson issue.\n> \n> \n> Agreed that it's unrelated to meson. However, I think it's better to \n> move it in the front as it's generally useful to know if your platform \n> is supported before you start performing the installation steps and get \n> stuck somewhere.\n\nThe way it is currently organized is that 17.2 says\n\n\"In general, a modern Unix-compatible platform should be able to run \nPostgreSQL. 
The platforms that had received specific testing at the time \nof release are described in Section 17.6 below.\"\n\nSo basically, it says, don't worry about it, your platform is probably \nsupported, but check below if you are interested in the details.\n\nI don't see a reason to turn this around.\n\n> \n> Do you think I should submit that as a separate commit in the same \n> patch-set or just move it out to a completely different patch submission?\n> \n> \n> The changes to the \"Getting the Source\" section are also not\n> appropriate for this patch.\n> \n> \n> Given that many developers are now using Git for downloading the source \n> code, I think it makes sense to be in the Getting the source section. \n> Also, meson today doesn't cleanly build via the tarballs. Hence, I added \n> it to the section (and patchset).\n\nSection 17.3 already contains a link to section I.1 about using Git.\n\n> Do you think I should move this to a different patch?\n\nIf you wanted to pursue these changes, then yes, but I think they are \nnot clear improvements, as mentioned above.\n\nI suggest focusing on getting the actual meson documentation finished \nand then considering polishing the overall flow if desired.\n\n\n\n", "msg_date": "Wed, 5 Oct 2022 08:40:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On 04.10.22 05:25, Andres Freund wrote:\n> I've attached a revised version of the xml-tools dependency wrapper (0001):\n> Cleanups, minor error handling improvements, and bit of comment polishing. I'd\n> welcome review. But as it fixes a build-dependency bug / FIXME, I'm planning\n> to push it relatively soon otherwise.\n> \n> 0002 fixes libpq's .pc file (static dependencies didn't show up anymore) and\n> AIX compilation. AIX doesn't yet support link_whole (support was merged into\n> meson yesterday though). 
On the way it also improves comments and a bit of\n> generic infrastructure. The price for now is that the static libpq is built\n> separately from the shared one, not reusing any objects. I felt that the\n> complexity of reusing the objects isn't worth it for now.\n> \n> Peter, it'd be great if you could have a look at 0002.\n> \n> 0003 mirrors the setup of libpq to the various ecpg libraries. This is a\n> prerequisite to adding resource files.\n> \n> 0004 adds the resource files\n\nThese patches look ok to me.\n\n\n", "msg_date": "Wed, 5 Oct 2022 10:16:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 10:16:06 +0200, Peter Eisentraut wrote:\n> On 04.10.22 05:25, Andres Freund wrote:\n> > I've attached a revised version of the xml-tools dependency wrapper (0001):\n> > Cleanups, minor error handling improvements, and bit of comment polishing. I'd\n> > welcome review. But as it fixes a build-dependency bug / FIXME, I'm planning\n> > to push it relatively soon otherwise.\n> > \n> > 0002 fixes libpq's .pc file (static dependencies didn't show up anymore) and\n> > AIX compilation. AIX doesn't yet support link_whole (support was merged into\n> > meson yesterday though). On the way it also improves comments and a bit of\n> > generic infrastructure. The price for now is that the static libpq is built\n> > separately from the shared one, not reusing any objects. I felt that the\n> > complexity of reusing the objects isn't worth it for now.\n> > \n> > Peter, it'd be great if you could have a look at 0002.\n> > \n> > 0003 mirrors the setup of libpq to the various ecpg libraries. This is a\n> > prerequisite to adding resource files.\n> > \n> > 0004 adds the resource files\n> \n> These patches look ok to me.\n\nThanks for checking.\n\nWith that I'm closing the original meson CF entry. 
Wohoo!\n\nI'll post two new threads, one about pgxs compatibility, one about precompiled\nheaders in a bit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 10:14:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nI noticed that `pg_config --configure` didn't show the options given when\nbuilding with meson. \n\nFor example, \nmeson setup build -Dcache=gcc.cache -Ddtrace=enabled -Dicu=enabled -Dcassert=true -Dprefix=/home/postgres/install_meson/\nmeson compile -C build \nmeson install -C build\n\n$ pg_config --configure\n\nThe options I specified (like dtrace) are not shown. I found they actually work\nin compilation.\nWhen specifying `-Ddtrace=enabled`, there is a log like this. And with\n`-Ddtrace=disabled`, no such log.\n\n[120/1834] /usr/bin/dtrace -C -h -s ../src/include/utils/../../backend/utils/probes.d -o src/include/utils/probes.h.tmp\n\nMaybe it would be better if pg_config can output this information, to be\nconsistent with the output when building with `./configure` and `make`.\n\nThe output when building with `./configure` and `make`:\n$ pg_config --configure\n '--prefix=/home/postgres/install/' '--cache' 'gcc.cache' '--enable-dtrace' '--with-icu' '--enable-cassert'\n\n\nRegards,\nShi yu\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:24:51 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-10-13 09:24:51 +0000, shiy.fnst@fujitsu.com wrote:\n> I noticed that `pg_config --configure` didn't show the options given when\n> building with meson.\n\nYes, that was noted somewhere on this thread.\n\n\n> Maybe it would be better if pg_config can output this information, to be\n> consistent with the output when building with `./configure` and `make`.\n>\n> The output when building with 
`./configure` and `make`:\n> $ pg_config --configure\n> '--prefix=/home/postgres/install/' '--cache' 'gcc.cache' '--enable-dtrace' '--with-icu' '--enable-cassert'\n\nIt'd be a fair amount of work, both initially and to maintain it, to generate\nsomething compatible. I can see some benefit in showing some feature\ninfluencing output in --configure, but compatible output doesn't seem worth it\nto me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:39:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Fri, Oct 14, 2022 12:40 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2022-10-13 09:24:51 +0000, shiy.fnst@fujitsu.com wrote:\n> > I noticed that `pg_config --configure` didn't show the options given when\n> > building with meson.\n> \n> Yes, that was noted somewhere on this thread.\n> \n> \n> > Maybe it would be better if pg_config can output this information, to be\n> > consistent with the output when building with `./configure` and `make`.\n> >\n> > The output when building with `./configure` and `make`:\n> > $ pg_config --configure\n> > '--prefix=/home/postgres/install/' '--cache' 'gcc.cache' '--enable-dtrace' '--\n> with-icu' '--enable-cassert'\n> \n> It'd be a fair amount of work, both initially and to maintain it, to generate\n> something compatible. I can see some benefit in showing some feature\n> influencing output in --configure, but compatible output doesn't seem worth it\n> to me.\n> \n\nI agree that there are some benefits to showing that, which helps to confirm the\nbuild options. 
Although that can be confirmed from the compile log, the log\nmay not be available all the time.\n\nAnd it's ok for me that the output is not exactly the same as before.\n\nRegards,\nShi yu\n\n\n", "msg_date": "Fri, 14 Oct 2022 03:21:09 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [RFC] building postgres with meson - v13" }, { "msg_contents": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com> writes:\n> On Fri, Oct 14, 2022 12:40 AM Andres Freund <andres@anarazel.de> wrote:\n>> It'd be a fair amount of work, both initially and to maintain it, to generate\n>> something compatible. 
I can see some benefit in showing some feature\n> >> influencing output in --configure, but compatible output doesn't seem worth it\n> >> to me.\n> \n> > And it's ok for me that the output is not exactly the same as before.\n> \n> Yeah, the output doesn't have to be exactly the same.\n\nSeems like we should have a different pg_config flag for meson options than\nfor configure, and perhaps separately an option to show the buildsystem?\n\nMaybe --buildsystem -> autoconf|meson, --configure -> existing CONFIGURE_ARGS\nfor autoconf, empty otherwise, --meson-options -> meson options?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 15 Oct 2022 12:10:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Seems like we should have a different pg_config flag for meson options than\n> for configure, and perhaps separately an option to show the buildsystem?\n\nYeah, probably a good idea, given that shoving the options for one\nbuildsystem into the other isn't likely to work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Oct 2022 17:39:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "> From 680ff3f7b4da1dbf21d0c7cd87af9bb5ee8b230c Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 21 Sep 2022 20:36:36 -0700\n> Subject: [PATCH v17 01/23] meson: ci: wip: move compilerwarnings task to meson\n> \n> ---\n> .cirrus.yml | 92 +++++++++++++-------------\n> src/tools/ci/linux-mingw-w64-64bit.txt | 13 ++++\n> 2 files changed, 59 insertions(+), 46 deletions(-)\n> create mode 100644 src/tools/ci/linux-mingw-w64-64bit.txt\n> \n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 7b5cb021027..eb33fdc4855 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -465,6 +465,10 @@ 
task:\n> ccache_cache:\n> folder: $CCACHE_DIR\n> \n> + ccache_stats_start_script:\n> + ccache -s\n> + ccache -z\n\nI realized that ccache -z clears out not only the global stats, but the\nper-file cache stats (from which the global stats are derived) - which\nobviously makes the cache work poorly.\n\nNewer ccache has CCACHE_STATSLOG, and --show-log-stats, which I think\ncan do what's wanted. I'll update my ci branch with that.\n\n\n", "msg_date": "Tue, 18 Oct 2022 12:09:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Sun, Aug 28, 2022 at 01:37:41PM -0700, Andres Freund wrote:\n> > You're running tap tests via a python script. There's no problem with\n> > that, but it's different from what's done by the existing makefiles.\n> > I was able to remove the python indirection - maybe that's better to\n> > talk about on the CI thread? That moves some setup for TAP tests\n> > (TESTDIR, PATH, cd) from Makefile into the existing perl, which means\n> > less duplication.\n> \n> I'm doubtful it's worth removing. You'd need to move removing the files from\n> the last run into both pg_regress and the tap test infrastructure. And I do\n> think it's nice to afterwards have markers which tests failed, so we can only\n> collect their logs.\n\nAre you planning on putting something in place to remove (or allow\nremoving) logs for successful tests ? Is that primarily for cirrus, or\nbuildfarm or ??\n\nIt is wasteful to upload thousands of logfiles to show a single\nfailure. 
That would make our cirrus tasks faster - compressing and\nuploading the logs takes over a minute.\n\nIt's also a lot friendlier to show fewer than 8 pages of test folders to\nsearch through to find the one that failed.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Nov 2022 17:16:46 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 17:16:46 -0600, Justin Pryzby wrote:\n> On Sun, Aug 28, 2022 at 01:37:41PM -0700, Andres Freund wrote:\n> > > You're running tap tests via a python script. There's no problem with\n> > > that, but it's different from what's done by the existing makefiles.\n> > > I was able to remove the python indirection - maybe that's better to\n> > > talk about on the CI thread? That moves some setup for TAP tests\n> > > (TESTDIR, PATH, cd) from Makefile into the existing perl, which means\n> > > less duplication.\n> > \n> > I'm doubtful it's worth removing. You'd need to move removing the files from\n> > the last run into both pg_regress and the tap test infrastructure. And I do\n> > think it's nice to afterwards have markers which tests failed, so we can only\n> > collect their logs.\n> \n> Are you planning on putting something in place to remove (or allow\n> removing) logs for successful tests ? Is that primarily for cirrus, or\n> buildfarm or ??\n\nWhat I'd like to do is to add a 'collect-logs-for-failed-test's script and/or\ntarget that moves those logs into a different folder. By default we'd then\ncollect all the files from that different folder in CI. 
I think that's better\nthan removing logs for successful tests.\n\nI'd like to use the same script for the BF as well - we've had too many cases\nwhere we had to adjust things in multiple places / code-bases.\n\nPerhaps we could also use that test to print the list of relevant logfiles at\nthe end of a \"local\" testrun?\n\n\n> It is wasteful to upload thousands of logfiles to show a single\n> failure. That would make our cirrus tasks faster - compressing and\n> uploading the logs takes over a minute.\n>\n> It's also a lot friendlier to show fewer than 8 pages of test folders to\n> search through to find the one that failed.\n\nIndeed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 15:53:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 04:29:15 -0400, Andrew Dunstan wrote:\n> Now I'll start on buildfarm support. Given my current commitments, this will\n> take me a while, but I hope to have a working client by about the beginning\n> of November.\n\nJust checking: Any progress on this? Anything I can help with?\n\nI'd like to move towards dropping src/tools/msvc at some point not too far\naway, and we can't do so before having buildfarm support. 
I was just reminded\nof this by looking at the windows-arm support patch...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 2 Dec 2022 09:40:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 14:15:35 -0700, Peter Geoghegan wrote:\n> On Mon, Sep 26, 2022 at 1:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Some feedback:\n> > > * I gather that \"running\" as it appears in commands like \"meson test\n> > > --setup running\" refers to a particular setup named \"running\", that\n> > > you invented as part of creating a meson-ish substitute for\n> > > installcheck. Can \"running\" be renamed to something that makes it\n> > > obvious that it's a Postgres thing, and not a generic meson thing?\n> >\n> > Yes. The only caveat is that it makes lines longer, because it's included in\n> > the printed test line (there's no real requirement to have the test suite and\n> > the setup named the same, but it seems confusing not to)\n> \n> Probably doesn't have to be too long. And I'm not sure of the details.\n> Just a small thing from my point of view.\n\nAttached is an updated version of that patch. I left the name as 'running'\nbecause a postgres- or pg- prefix felt too awkward. This just adds fairly\nminimal documentation for the 'running' setup; while we now have some basic\ndocs for building with meson, we don't yet have a \"translation\" of\nregress.sgml. Not sure how to structure that best, either.\n\nI plan to commit that soon. 
This likely isn't the be-all-end-all, but it's\nquite useful as-is.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 6 Dec 2022 19:25:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "> From 680ff3f7b4da1dbf21d0c7cd87af9bb5ee8b230c Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 21 Sep 2022 20:36:36 -0700\n> Subject: [PATCH v17 01/23] meson: ci: wip: move compilerwarnings task to meson\n\n> always:\n> gcc_warning_script: |\n> - time ./configure \\\n> - --cache gcc.cache \\\n> - --enable-dtrace \\\n> - ${LINUX_CONFIGURE_FEATURES} \\\n> - CC=\"ccache gcc\" CXX=\"ccache g++\" CLANG=\"ccache clang\"\n> - make -s -j${BUILD_JOBS} clean\n> - time make -s -j${BUILD_JOBS} world-bin\n> + mkdir build && cd build\n> + CC=\"ccache gcc\" CXX=\"ccache g++\" \\\n> + meson setup \\\n> + -Dwerror=true \\\n> + -Dcassert=false \\\n> + -Ddtrace=enabled \\\n> + ${LINUX_MESON_FEATURES} \\\n> + ..\n> + time ninja -j${BUILD_JOBS}\n\nWith gcc, autoconf uses -O2, so I think this should specify\nbuildtype=debugoptimized, or pass -Doptimization=2. Otherwise it ends\nup in release mode with -O3.\n\n\n", "msg_date": "Fri, 23 Dec 2022 10:51:08 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson - v13" }, { "msg_contents": "On Wed, 11 May 2022 at 06:19, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> After that, these configure options don't have an equivalent yet:\n>\n> --enable-profiling\n\nAfaics this still doesn't exist? Is there a common idiom to enable\nthis? Like, if I add in something to cflags is that enough? 
I seem to\nrecall we had some hack to actually get each backend's gmon.out to not\nstep on each other's which needed an explicit flag to enable?\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 14 Apr 2023 11:58:42 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nOn 2023-04-14 11:58:42 -0400, Greg Stark wrote:\n> On Wed, 11 May 2022 at 06:19, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > After that, these configure options don't have an equivalent yet:\n> >\n> > --enable-profiling\n> \n> Afaics this still doesn't exist? Is there a common idiom to enable\n> this? Like, if I add in something to cflags is that enough?\n\nYes. Or, well, you might also need to specify it when linking.\n\n\n> I seem to recall we had some hack to actually get each backend's gmon.out to\n> not step on each other's which needed an explicit flag to enable?\n\nI think that's enabled by default in gcc these days, if supported by the\nplatform?\n\nTBH, I really don't see the point of this style of profiling. It doesn't\nprovide an accurate view of where time is spent. You're much better off using\nperformance counter driven profiling with perf et al.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Apr 2023 09:06:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson -v8" }, { "msg_contents": "Hi,\n\nSmall update for posterity:\n\nOn 2022-09-09 16:58:36 -0700, Andres Freund wrote:\n> I've run this through enough attempts by now that I'm quite confident that the\n> problem does not occur when the errormode does not include\n> SEM_NOOPENFILEERRORBOX. I'll want a few more runs to be certain, but...\n> \n> \n> Given that the problem appears to happen after _exit() is called, and only\n> when SEM_NOOPENFILEERRORBOX is not set, it seems likely to be an OS / C\n> runtime bug. 
Presumably it's related to something that python does first, but\n> I don't see how anything could justify crashing only if SEM_NOOPENFILEERRORBOX\n> is set (rather than the opposite).\n\nThese SEM_NOOPENFILEERRORBOX references should have been SEM_NOGPFAULTERRORBOX\n- I guess after staring at these names for a while, I couldn't quite see the\ndifference anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Jun 2023 10:10:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: [RFC] building postgres with meson - v12" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed that `make install` updates modification time for all\ninstalled headers. This leads to recompilation of all dependent\nobjects, which is inconvenient for example when working on a\nthird-party extension. A way to solve this would be to pass\n`INSTALL=\"install -p\"` to `configure`, to make `install` preserve the\ntimestamp. After this, a new problem arises -- the\n`src/include/Makefile` doesn't use `install` for all headers, but\ninstead uses `cp`. This patch adds `-p` switch to `cp` invocation in\nthese files, to make it preserve timestamps. Combined with the\naforementioned install flag, it allows a developer to hack on both\npostgres and a third-party extension at the same time, without the\nunneeded recompilation.\n\n\n--\nAlexander Kuzmenkov\nTimescale", "msg_date": "Tue, 12 Oct 2021 13:22:50 +0300", "msg_from": "Alexander Kuzmenkov <akuzmenkov@timescale.com>", "msg_from_op": true, "msg_subject": "preserve timestamps when installing headers" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nPersonally, I don't often encounter the problem that Alexander is describing, but I agree that there are cases when the simplest way to debug a tricky bug is to make a modification to the core. 
In fact, I used this technique to diagnose [1].\r\n\r\nUnless anyone can think of the scenario when the proposed change will break something, I would suggest merging it.\r\n\r\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=98ec35b0\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 02 Dec 2021 11:31:23 +0000", "msg_from": "Aleksander Alekseev <afiskon@gmail.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On Tue, Oct 12, 2021 at 01:22:50PM +0300, Alexander Kuzmenkov wrote:\n> I noticed that `make install` updates modification time for all\n> installed headers. This leads to recompilation of all dependent\n> objects, which is inconvenient for example when working on a\n> third-party extension. A way to solve this would be to pass\n> `INSTALL=\"install -p\"` to `configure`, to make `install` preserve the\n> timestamp. After this, a new problem arises -- the\n> `src/include/Makefile` doesn't use `install` for all headers, but\n> instead uses `cp`. This patch adds `-p` switch to `cp` invocation in\n> these files, to make it preserve timestamps. Combined with the\n> aforementioned install flag, it allows a developer to hack on both\n> postgres and a third-party extension at the same time, without the\n> unneeded recompilation.\n\nThe use of cp instead of $(INSTALL_DATA) for the installation of the\nheaders comes from a703269, back from 2005. 
How do numbers compare\ntoday, 16 years later?\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 15:38:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Oct 12, 2021 at 01:22:50PM +0300, Alexander Kuzmenkov wrote:\n>> This patch adds `-p` switch to `cp` invocation in\n>> these files, to make it preserve timestamps.\n\n> The use of cp instead of $(INSTALL_DATA) for the installation of the\n> headers comes from a703269, back from 2005. How do numbers compare\n> today, 16 years later?\n\nAccording to a nearby copy of POSIX, \"cp -p\" does a lot more than\npreserve timestamps. It also specifies preserving file ownership,\nwhich seems absolutely catastrophic for the standard use-case of\n\"build as some ordinary user, then install as root\".\n\nTBH, I am not convinced that the complained-of case is enough of a\nproblem to justify any change in our build rules, even if there\nweren't any semantic issues. If you are worried about build times,\nyou should be using ccache, and IME builds using ccache are not\nterribly impacted by file timestamp changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Dec 2021 01:51:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On Mon, Dec 06, 2021 at 01:51:39AM -0500, Tom Lane wrote:\n> TBH, I am not convinced that the complained-of case is enough of a\n> problem to justify any change in our build rules, even if there\n> weren't any semantic issues. 
If you are worried about build times,\n> you should be using ccache, and IME builds using ccache are not\n> terribly impacted by file timestamp changes.\n\nFWIW, I am not on board with changing build semantics or any\nassumptions the header installation relies on either, but I could see\na point in switching back to INSTALL_DATA instead of cp to be\nconsistent with the rest of the build, iff the argument made back in\n2005 about the performance of this code path does not hold anymore.\nIf we do that, it would then be possible to feed a custom INSTALL\ncommand to ./configure.\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 20:15:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "\nOn 06.12.21 07:51, Tom Lane wrote:\n> TBH, I am not convinced that the complained-of case is enough of a\n> problem to justify any change in our build rules, even if there\n> weren't any semantic issues. If you are worried about build times,\n> you should be using ccache, and IME builds using ccache are not\n> terribly impacted by file timestamp changes.\n\nI have never heard of a dependency-based build system taking into \naccount the timestamps of files outside of the source (or build) tree. \nIt does make sense to some degree, but it seems very unusual, and \nbasically nothing works like that. I'm also not sure how packaging \nsystems preserve file timestamps. 
Maybe it's a thing now, but I'd like \nto see a more comprehensive analysis before we commit to this.\n\n\n", "msg_date": "Mon, 6 Dec 2021 12:39:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On 06.12.21 12:15, Michael Paquier wrote:\n> FWIW, I am not on board with changing build semantics or any\n> assumptions the header installation relies on either, but I could see\n> a point in switching back to INSTALL_DATA instead of cp to be\n> consistent with the rest of the build, iff the argument made back in\n> 2005 about the performance of this code path does not hold anymore.\n\nI think you will find that it is still very slow.\n\n\n", "msg_date": "Mon, 6 Dec 2021 12:41:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 06.12.21 12:15, Michael Paquier wrote:\n>> FWIW, I am not on board with changing build semantics or any\n>> assumptions the header installation relies on either, but I could see\n>> a point in switching back to INSTALL_DATA instead of cp to be\n>> consistent with the rest of the build, iff the argument made back in\n>> 2005 about the performance of this code path does not hold anymore.\n\n> I think you will find that it is still very slow.\n\nThat would likely depend on whether configure had found a suitable\n\"install\" program or decided to fall back on config/install-sh.\nThe latter will definitely be horribly slow, but C-coded install\nutilities are probably no slower than \"cp\".\n\nHowever, there's another problem with using INSTALL_DATA as a solution\nto this issue: why would you expect that to preserve timestamps?\ninstall-sh won't. 
I see that /usr/bin/install (which configure picks\non my RHEL box) won't preserve them by default, but it has a -p\noption to do so. I would not bet on that being portable to all of\nthe myriad of foo-install programs that configure will accept, though.\n\nOn the whole, I think we should just reject this proposal and move on.\nThe portability hazards seem significant, and it's really unclear\nto me what the advantages are (per Peter's earlier comment).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Jan 2022 16:21:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On 04.01.22 22:21, Tom Lane wrote:\n> However, there's another problem with using INSTALL_DATA as a solution\n> to this issue: why would you expect that to preserve timestamps?\n> install-sh won't. I see that /usr/bin/install (which configure picks\n> on my RHEL box) won't preserve them by default, but it has a -p\n> option to do so. I would not bet on that being portable to all of\n> the myriad of foo-install programs that configure will accept, though.\n\nI don't think preserving timestamps should be the default behavior, but \nI would support organizing things so that additional options can be \npassed to \"install\" to make it do whatever the user prefers. 
But that \nwon't work if some installations don't go through install.\n\nWe could have some mode where \"install\" is used instead of \"cp\", if \nsomeone wants to figure out exactly how to make that determination.\n\nBtw., a quick test of make -C src/include/ install:\n\ncp (current code): 0.5 s\nGNU install: 0.6 s\ninstall-sh: 12.5 s\n\n\n", "msg_date": "Fri, 7 Jan 2022 08:44:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I don't think preserving timestamps should be the default behavior, but \n> I would support organizing things so that additional options can be \n> passed to \"install\" to make it do whatever the user prefers. But that \n> won't work if some installations don't go through install.\n\nCheck, but ...\n\n> Btw., a quick test of make -C src/include/ install:\n> cp (current code): 0.5 s\n> GNU install: 0.6 s\n> install-sh: 12.5 s\n\nSo this says that there's only a performance issue with install-sh;\nbut that's used by just a tiny minority of systems anymore. Scraping\nthe buildfarm's configure results, I find this many animals reporting\neach of these choices:\n\n 4 /bin/install -c\n 8 config/install-sh -c\n 2 /opt/packages/coreutils-8.6/inst/bin/install -c\n 1 /usr/bin/ginstall -c\n 100 /usr/bin/install -c\n 1 /usr/gnu/bin/install -c\n\nThe 8 holdouts are\n\ngaur\nhaddock\nhake\nhornet\nhoverfly\nmandrill\nsungazer\ntern\n\nie, ancient HPUX, OpenIndiana (Solaris), and AIX, none of which\nare likely development platforms anymore --- and if somebody\ndid care about this, there's nothing stopping them from\ninstalling GNU install on their machine.\n\nSo I fear we're optimizing for a case that stopped being mainstream\na decade or more back. 
I could get behind switching the code back\nto using $(INSTALL) for this, and then offering some way to inject\nuser-selected switches into the $(INSTALL) invocations. That\nwouldn't need much more than another gmake macro. (Does there\nneed to be a way to inject such switches only into header\ninstallations, or is it OK to do it across the board?)\n\n[ wanders away wondering how this'd affect the meson conversion\nproject ]\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Jan 2022 17:03:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On 11/01/2022 00:03, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I don't think preserving timestamps should be the default behavior, but\n>> I would support organizing things so that additional options can be\n>> passed to \"install\" to make it do whatever the user prefers. But that\n>> won't work if some installations don't go through install.\n\n+1. We just bumped into this with Neon, where we have a build script \nthat generates Rust bindings from the PostgreSQL header files. The build \nscript runs \"make install\", and because that changes the mtime even if \nthere were no changes to the headers, the bindings are also regenerated \nevery time.\n\n> So I fear we're optimizing for a case that stopped being mainstream\n> a decade or more back. I could get behind switching the code back\n> to using $(INSTALL) for this, and then offering some way to inject\n> user-selected switches into the $(INSTALL) invocations. That\n> wouldn't need much more than another gmake macro. (Does there\n> need to be a way to inject such switches only into header\n> installations, or is it OK to do it across the board?)\n\nHere's a patch to switch back to $(INSTALL). 
With that, you can do:\n\n./configure INSTALL=\"/usr/bin/install -C\"\n\n> [ wanders away wondering how this'd affect the meson conversion\n> project ]\n\nIf anything, I guess this will help, by making the Makefile a bit less \nspecial.\n\n- Heikki", "msg_date": "Fri, 9 Sep 2022 22:23:57 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On Fri, Sep 09, 2022 at 10:23:57PM +0300, Heikki Linnakangas wrote:\n> On 11/01/2022 00:03, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > I don't think preserving timestamps should be the default behavior, but\n> > > I would support organizing things so that additional options can be\n> > > passed to \"install\" to make it do whatever the user prefers. But that\n> > > won't work if some installations don't go through install.\n\n> Here's a patch to switch back to $(INSTALL). With that, you can do:\n> \n> ./configure INSTALL=\"/usr/bin/install -C\"\n\n+1, I recently looked for a way to do that while trying to accelerate\nCygwin/Mingw.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 10 Sep 2022 09:38:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On 09.09.22 21:23, Heikki Linnakangas wrote:\n>> So I fear we're optimizing for a case that stopped being mainstream\n>> a decade or more back.  I could get behind switching the code back\n>> to using $(INSTALL) for this, and then offering some way to inject\n>> user-selected switches into the $(INSTALL) invocations.  That\n>> wouldn't need much more than another gmake macro.  
(Does there\n>> need to be a way to inject such switches only into header\n>> installations, or is it OK to do it across the board?)\n> \n> Here's a patch to switch back to $(INSTALL).\n\nI'm content to go ahead with this.\n\n\n\n", "msg_date": "Mon, 12 Sep 2022 17:49:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" }, { "msg_contents": "On 12/09/2022 18:49, Peter Eisentraut wrote:\n> On 09.09.22 21:23, Heikki Linnakangas wrote:\n>>> So I fear we're optimizing for a case that stopped being mainstream\n>>> a decade or more back.  I could get behind switching the code back\n>>> to using $(INSTALL) for this, and then offering some way to inject\n>>> user-selected switches into the $(INSTALL) invocations.  That\n>>> wouldn't need much more than another gmake macro.  (Does there\n>>> need to be a way to inject such switches only into header\n>>> installations, or is it OK to do it across the board?)\n>>\n>> Here's a patch to switch back to $(INSTALL).\n> \n> I'm content to go ahead with this.\n\nCommitted.\n\n- Heikki\n\n\n", "msg_date": "Mon, 12 Sep 2022 23:04:10 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: preserve timestamps when installing headers" } ]
[ { "msg_contents": "While working on the issue [1], I realize that if a subtransaction\nhasn't done any catalog change then we don't add this in the commit\nxid list even if we are building a full snapshot [2]. That means when\nwe will convert this to the MVCC snapshot we will add this to a\nrunning xid list. If my understanding is correct then how visibility\nwill work? Because if I look at the code in XidInMVCCSnapshot(), then\nif the suboverflowed is not set then first we are going to look into\nthe snapshot->subxip array and if we don't find it there then we look\ninto the snapshot->xip array, and now we will be finding even\ncommitted subxips in the snapshot->xip array. Am I missing something?\n\n[2]\n/*\n* Add subtransaction to base snapshot if catalog modifying, we don't\n* distinguish to toplevel transactions there.\n*/\nif (ReorderBufferXidHasCatalogChanges(builder->reorder, subxid))\n{\nsub_needs_timetravel = true;\nneeds_snapshot = true;\n\n[3]\n\n[1] https://www.postgresql.org/message-id/CAFiTN-tqopqpfS6HHug2nnOGieJJ_nm-Nvy0WBZ%3DZpo-LqtSJA%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 18:21:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Question about building an exportable snapshop" }, { "msg_contents": "On Tue, Oct 12, 2021 at 6:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> While working on the issue [1], I realize that if a subtransaction\n> hasn't done any catalog change then we don't add this in the commit\n> xid list even if we are building a full snapshot [2].\n>\n\nI think this is true only if we have reached SNAPBUILD_CONSISTENT\nstate otherwise, we are adding subtransactions in the committed xip\narray by setting 'needs_timetravel' to true. And if we have already\nreached a consistent state before it then we anyway don't need to add\nthis. 
If this is true, do you still see any problem?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Oct 2021 17:06:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about building an exportable snapshop" }, { "msg_contents": "On Wed, Oct 20, 2021 at 5:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 6:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > While working on the issue [1], I realize that if a subtransaction\n> > hasn't done any catalog change then we don't add this in the commit\n> > xid list even if we are building a full snapshot [2].\n> >\n>\n> I think this is true only if we have reached SNAPBUILD_CONSISTENT\n> state otherwise, we are adding subtransactions in the committed xip\n> array by setting 'needs_timetravel' to true. And if we have already\n> reached a consistent state before it then we anyway don't need to add\n> this. If this is true, do you still see any problem?\n\nYeah, you are right.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Oct 2021 17:11:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about building an exportable snapshop" } ]
[ { "msg_contents": "I am very new to this list, so I don’t know whether this is the right place.\n\nMicrosoft SQL and MySQL both use the @ sign at the beginning of their \nvariables. The most obvious benefit of this is that it is very easy to \ndistinguish between variable names and column names.\n\nI’m not asking for a change in how PostgreSQL manages variables, but \nwhether it’s possible to allow the @ sign, and possibly the $ sign to \nstart a variable name. I am aware that the _ can start a variable name, \nbut the other characters are a little more eye-catching.\n\nDoes that make sense?\n\n\n-- \n\n\n Mark Simon\n\nManngo Net Pty Ltd\n\nmobile:0411 246 672\n\nemail:mark@manngo.net <mailto:mark@comparity.net>\nweb:http://www.manngo.net\n\nResume:http://mark.manngo.net", "msg_date": "Wed, 13 Oct 2021 12:57:24 +1100", "msg_from": "Mark Simon <mark@manngo.net>", "msg_from_op": true, "msg_subject": "Feature Request: Allow additional special characters at the beginning\n of the name."
}, { "msg_contents": "Mark Simon <mark@manngo.net> writes:\n> I’m not asking for a change in how PostgreSQL manages variables, but \n> whether it’s possible to allow the @ sign, and possibly the $ sign to \n> start a variable name.\n\n@ is allowed in operator names, and indeed is used in (mumble select\ncount(*) ...) 59 built-in operators. So we could not support that\nwithout breaking a lot of applications. Is \"a<@b\" to be parsed as\n\"a <@ b\" or \"a < @b\"? For that matter, is \"@a\" a name or an invocation\nof the built-in prefix operator \"@\" on variable \"a\"?\n\nAs for allowing $ to start a name, there are also issues:\n\n* It'd be rather ambiguous with the $id$ string delimiter syntax [1],\nwhich is a Postgres-ism for sure, but a lot of people use it.\n\n* It'd not be entirely clear whether $1 is a variable name\nor a parameter reference.\n\n* I think there are client interfaces that allow $name to be\na parameter symbol, so we'd also be breaking anything that\nworks that way.\n\nMaybe we could have done this twenty years ago, but I think\ncompatibility considerations preclude it now.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING\n\n\n", "msg_date": "Tue, 12 Oct 2021 22:49:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature Request: Allow additional special characters at the\n beginning of the name." } ]
[ { "msg_contents": "In a blog post (https://about.gitlab.com/blog/2021/09/29/why-we-spent-the-last-month-eliminating-postgresql-subtransactions/),\nI described how PostgreSQL can enter into a suboverflow condition on\nthe replica under a number of conditions:\n\n1. A long transaction starts.\n2. A single SAVEPOINT is issued.\n3. Many rows are updated on the primary, and the same rows are read\nfrom the replica.\n\nThis can cause a significant performance degradation with a replica\ndue to SubtransSLRU wait events since the replica needs to perform a\nparent lookup on an ever-growing range of XIDs. Full details on how to\nreplicate this: https://gitlab.com/-/snippets/2187338.\n\nThe main two lines of code that cause the replica to enter in the\nsuboverflowed state are here\n(https://github.com/postgres/postgres/blob/317632f3073fc06047a42075eb5e28a9577a4f96/src/backend/storage/ipc/procarray.c#L2431-L2432):\n\nif (TransactionIdPrecedesOrEquals(xmin, procArray->lastOverflowedXid))\n suboverflowed = true;\n\nI noticed that lastOverflowedXid doesn't get cleared even after all\nsubtransactions have been completed. On a replica, it only seems to be\nupdated via a XLOG_XACT_ASSIGNMENT, but no such message will be sent\nif subtransactions halt. If the XID wraps around again and a long\ntransaction starts before lastOverflowedXid, the replica might\nunnecessarily enter in the suboverflow condition again.\n\nI've validated this by issuing a SAVEPOINT, running the read/write\ntest, logging lastOverflowedXid to stderr, and then using pg_bench to\nadvance XID with SELECT txid_current(). After many hours, I validated\nthat lastOverflowedXid remained the same, and I could induce a high\ndegree of SubtransSLRU wait events without issuing a new SAVEPOINT.\n\nI'm wondering a few things:\n\n1. Should lastOverflowedXid be reset to 0 at some point? I'm not sure\nif there's a good way at the moment for the replica to know that all\nsubtransactions have completed.\n2. 
Alternatively, should the epoch number be used to compare xmin and\nlastOverflowedXid?\n\nTo mitigate this issue, we've considered:\n\n1. Restarting the replicas. This isn't great, and if another SAVEPOINT\ncomes along, we'd have to do this again. It would be nice to be able\nto monitor the exact value of lastOverflowedXid.\n2. Raise the NUM_SUBTRANS_BUFFERS as a workaround until the scalable\nSLRU patches are available\n(https://commitfest.postgresql.org/34/2627/).\n3. Issue SAVEPOINTs periodically to \"run away\" from this wraparound issue.\n\n\n", "msg_date": "Tue, 12 Oct 2021 21:53:22 -0700", "msg_from": "Stan Hu <stanhu@gmail.com>", "msg_from_op": true, "msg_subject": "lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "> On Tue, Oct 12, 2021 at 09:53:22PM -0700, Stan Hu wrote:\n>\n> I described how PostgreSQL can enter into a suboverflow condition on\n> the replica under a number of conditions:\n>\n> 1. A long transaction starts.\n> 2. A single SAVEPOINT is issued.\n> 3. Many rows are updated on the primary, and the same rows are read\n> from the replica.\n>\n> I noticed that lastOverflowedXid doesn't get cleared even after all\n> subtransactions have been completed. On a replica, it only seems to be\n> updated via a XLOG_XACT_ASSIGNMENT, but no such message will be sent\n> if subtransactions halt. If the XID wraps around again and a long\n> transaction starts before lastOverflowedXid, the replica might\n> unnecessarily enter in the suboverflow condition again.\n\nHi,\n\nthat's an interesting finding, thanks for the investigation. I didn't\nreproduce it fully (haven't checked the wraparound part), but indeed\nlastOverflowedXid is not changing that often, only every\nPGPROC_MAX_CACHED_SUBXIDS subtransactions. 
I wonder what would be side\neffects of clearing it when the snapshot is not suboverflowed anymore?\n\n\n", "msg_date": "Sun, 17 Oct 2021 18:55:21 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "\n\n> On 17 Oct 2021, at 21:55, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> I wonder what would be side\n> effects of clearing it when the snapshot is not suboverflowed anymore?\n\nI think we should just invalidate lastOverflowedXid on every XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not to do so.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 20 Oct 2021 16:00:35 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "> On Wed, Oct 20, 2021 at 04:00:35PM +0500, Andrey Borodin wrote:\n> > On 17 Oct 2021, at 21:55, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > I wonder what would be side\n> > effects of clearing it when the snapshot is not suboverflowed anymore?\n>\n> I think we should just invalidate lastOverflowedXid on every XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not to do so.\n\nFrom what I understand that was actually the case, lastOverflowedXid was\nset to InvalidTransactionId in ProcArrayApplyRecoveryInfo if\nsubxid_overflow wasn't set. Looks like 10b7c686e52a6d1bb has changed it,\nto what I didn't pay attention originally.\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:48:33 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Wed, Oct 20, 2021 at 4:00 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n>\n>\n> > On 17 Oct
2021, at 21:55, Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n> > I wonder what would be side\n> > effects of clearing it when the snapshot is not suboverflowed anymore?\n>\n> I think we should just invalidate lastOverflowedXid on every\n> XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not\n> to do so.\n>\n>\nOn a replica, I think it's possible for lastOverflowedXid to be set even if\nsubxid_overflow is false on the primary and secondary (\nhttps://github.com/postgres/postgres/blob/dc899146dbf0e1d23fb24155a5155826ddce34c9/src/backend/storage/ipc/procarray.c#L1327).\nI thought subxid_overflow only gets set if there are more than\nPGPROC_MAX_CACHED_SUBXIDS (64) used in a given transaction.\n\nShould the replica be invalidating lastOverflowedXid if subxcnt goes to\nzero in XLOG_RUNNING_XACTS? But if there's an outstanding snapshot with an\nxmin that precedes lastOverflowedXid we might violate MVCC if we invalidate\nthis, so I wonder if we also need to check the snapshot with the lowest\nxmin?", "msg_date": "Wed, 20 Oct 2021 08:55:12 -0700", "msg_from": "Stan Hu <stanhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "At Wed, 20 Oct 2021 13:48:33 +0200, Dmitry Dolgov <9erthalion6@gmail.com> wrote in \n> > On Wed, Oct 20, 2021 at 04:00:35PM +0500, Andrey Borodin wrote:\n> > > On 17 Oct 2021, at 21:55, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > > I wonder what would be side\n> > > effects of clearing it when the snapshot is not suboverflowed anymore?\n> >\n> > I think we should just invalidate lastOverflowedXid on every XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not to do so.\n> \n> From what I understand that was actually the case, lastOverflowedXid was\n> set to InvalidTransactionId in ProcArrayApplyRecoveryInfo if\n> subxid_overflow wasn't set. Looks like 10b7c686e52a6d1bb has changed it,\n> to what I didn't pay attention originally.\n\nUnfortunately(?), that doesn't happen once standbyState reaches\nSTANDBY_SNAPSHOT_READY.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 21 Oct 2021 13:01:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "At Wed, 20 Oct 2021 08:55:12 -0700, Stan Hu <stanhu@gmail.com> wrote in \n> On Wed, Oct 20, 2021 at 4:00 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> >\n> >\n> > > On 17 Oct
2021, at 21:55, Dmitry Dolgov <9erthalion6@gmail.com>\n> > wrote:\n> > > I wonder what would be side\n> > > effects of clearing it when the snapshot is not suboverflowed anymore?\n> >\n> > I think we should just invalidate lastOverflowedXid on every\n> > XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not\n> > to do so.\n> >\n> >\n> On a replica, I think it's possible for lastOverflowedXid to be set even if\n> subxid_overflow is false on the primary and secondary (\n> https://github.com/postgres/postgres/blob/dc899146dbf0e1d23fb24155a5155826ddce34c9/src/backend/storage/ipc/procarray.c#L1327).\n> I thought subxid_overflow only gets set if there are more than\n> PGPROC_MAX_CACHED_SUBXIDS (64) used in a given transaction.\n> \n> Should the replica be invalidating lastOverflowedXid if subxcnt goes to\n> zero in XLOG_RUNNING_XACTS? But if there's an outstanding snapshot with an\n> xmin that precedes lastOverflowedXid we might violate MVCC if we invalidate\n> this, so I wonder if we also need to check the snapshot with the lowest\n> xmin?\n\nlastOverflowedXid is the smallest subxid that possibly exists but\npossibly not known to the standby.
We are sure that all\npossiblly-overflown subtransactions are gone as well if the oldest xid\nis newer than the first overflowed subtransaction.\n\nAs a cross check, the following existing code in GetSnapshotData means\nthat no overflow is not happening if the smallest xid in the known\nassigned list is larger than lastOverflowedXid, which agrees to the\nconsideration above.\n\nprocaray.c:2428\n>\t\tsubcount = KnownAssignedXidsGetAndSetXmin(snapshot->subxip, &xmin,\n>\t\t\t\t\t\t\t\t\t\t\t\t xmax);\n>\n>\t\tif (TransactionIdPrecedesOrEquals(xmin, procArray->lastOverflowedXid))\n>\t\t\tsuboverflowed = true;\n\n\nIf the discussion so far is correct, the following diff will fix the\nissue.\n\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex bd3c7a47fe..19682b73ec 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -4463,6 +4463,12 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)\n {\n LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n KnownAssignedXidsRemovePreceding(xid);\n+ /*\n+ * reset lastOverflowedXid if we know transactions that have been possiblly\n+ * running are being gone.\n+ */\n+ if (TransactionIdPrecedes(procArray->lastOverflowedXid, xid))\n+ procArray->lastOverflowedXid = InvalidTransactionId;\n LWLockRelease(ProcArrayLock);\n }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 21 Oct 2021 13:01:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Wed, Oct 20, 2021 at 9:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> lastOverflowedXid is the smallest subxid that possibly exists but\n> possiblly not known to the standby. 
So if all top-level transactions\n> older than lastOverflowedXid end, that means that all the\n> subtransactions in doubt are known to have been ended.\n\nThanks for the patch! I verified that it appears to reset\nlastOverflowedXid properly.\n\nI may not be understanding\nhttps://github.com/postgres/postgres/blob/dc899146dbf0e1d23fb24155a5155826ddce34c9/src/backend/storage/ipc/procarray.c#L1326-L1327\ncorrectly, but isn't lastOverflowedXid the last subxid for a given\ntop-level XID, so isn't it actually the largest subxid that possibly\nexists?\n\n\n", "msg_date": "Thu, 21 Oct 2021 07:20:57 -0700", "msg_from": "Stan Hu <stanhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Thu, Oct 21, 2021 at 07:21 Stan Hu <stanhu@gmail.com> wrote:\n\n> On Wed, Oct 20, 2021 at 9:01 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > lastOverflowedXid is the smallest subxid that possibly exists but\n> > possibly not known to the standby.
Basic ideas:\n- expose to the user how many pages are currently used (especially useful\nif SLRU sizes will be configurable, see\nhttps://commitfest.postgresql.org/34/2627/)\n- Andrew Borodin also expressed the idea to extend pageinspect to allow\nseeing the content of SLRUs\n- a more specific thing: allow seeing lastOverflowedXid somehow (via SQL or\nin logs) - we see how important it for standbys health, but we cannot see\nit now.\n\nAny ideas in the direction of observability?\n\nNik\n", "msg_date": "Mon, 25 Oct 2021 11:41:03 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:41 AM Nikolay Samokhvalov <samokhvalov@gmail.com>\nwrote:\n\n> On Thu, Oct 21, 2021 at 07:21 Stan Hu <stanhu@gmail.com> wrote:\n>\n>> On Wed, Oct 20, 2021 at 9:01 PM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> >\n>> > lastOverflowedXid is the smallest subxid that possibly exists but\n>> > possiblly not known to the standby. So if all top-level transactions\n>> > older than lastOverflowedXid end, that means that all the\n>> > subtransactions in doubt are known to have been ended.\n>>\n>> Thanks for the patch! I verified that it appears to reset\n>> lastOverflowedXid properly.\n>\n> ...\n\n> Any ideas in the direction of observability?\n>\n\nPerhaps, anything additional should be considered separately.\n\nThe behavior discussed here looks like a bug.\n\nI also have tested the patch. It works fully as expected, details of\ntesting – below.\n\nI think this is a serious bug hitting heavily loaded Postgres setups with\nhot standbys\n and propose fixing it in all supported major versions ASAP since the fix\nlooks simple.\n\nAny standby in heavily loaded systems (10k+ TPS) where subtransactions are\nused\nmay experience huge performance degradation on standbys [1]. This is what\nhappened\nrecently with GitLab [2]. 
While a full solution to this problem is\nsomething more complex, probably\nrequiring changes in SLRU [3], the problem discussed here definitely feels\nlike a serious bug\n– if we fully get rid of subtransactions, since 32-bit lastOverflowedXid is\nnot reset, in new\nXID epoch standbys start experience SubtransControlLock/SubtransSLRU again\n–\nwithout any subtransactions. This problem is extremely difficult to\ndiagnose on one hand,\nand it may fully make standbys irresponsible while a long-lasting\ntransaction last on the primary\n(\"long\" here may be a matter of minutes or even dozens of seconds – it\ndepends on the\nTPS level). It is especially hard to diagnose in PG 12 or older – because\nit doesn't have\npg_stat_slru yet, so one cannot easily notice Subtrans reads.)\n\nThe only current solution to this problem is to restart standby Postgres.\n\nHow I tested the patch. First, I reproduced the problem:\n- current 15devel Postgres, installed on 2 x c5ad.2xlarge on AWS (8 vCPUs,\n16 GiB), working as\nprimary + standby\n- follow the steps described in [3] to initiate SubtransSLRU on the standby\n- at some point, stop using SAVEPOINTs on the primary - use regular UPDATEs\ninstead, wait.\n\nUsing the following, observe procArray->lastOverflowedXid:\n\ndiff --git a/src/backend/storage/ipc/procarray.c\nb/src/backend/storage/ipc/procarray.c\nindex\nbd3c7a47fe21949ba63da26f0d692b2ee618f885..ccf3274344d7ba52a6f28a10b08dbfc310cf97e9\n100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2428,6 +2428,9 @@ GetSnapshotData(Snapshot snapshot)\n subcount = KnownAssignedXidsGetAndSetXmin(snapshot->subxip, &xmin,\n xmax);\n\n+ if (random() % 100000 == 0)\n+ elog(WARNING, \"procArray->lastOverflowedXid: %u\",\nprocArray->lastOverflowedXid);\n+\n if (TransactionIdPrecedesOrEquals(xmin, procArray->lastOverflowedXid))\n suboverflowed = true;\n }\n\nOnce we stop using SAVEPOINTs on the primary, the\nvalue procArray->lastOverflowedXid 
stop\n changing, as expected.\n\nWithout the patch applied, lastOverflowedXid remains constant forever –\ntill the server restart.\nAnd as I mentioned, we start experiencing SubtransSLRU and pg_subtrans\nreads.\n\nWith the patch, lastOverflowedXid is reset to 0, as expected, shortly after\nan ongoing \"long\"\nthe transaction ends on the primary.\n\nThis solves the bug – we don't have SubtransSLRU on standby without actual\nuse of subtransactions\non the primary.\n\n[1]\nhttps://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful\n[2]\nhttps://about.gitlab.com/blog/2021/09/29/why-we-spent-the-last-month-eliminating-postgresql-subtransactions/\n[3]\nhttps://www.postgresql.org/message-id/flat/494C5E7F-E410-48FA-A93E-F7723D859561%40yandex-team.ru#18c79477bf7fc44a3ac3d1ce55e4c169\n[4]\nhttps://gitlab.com/postgres-ai/postgresql-consulting/tests-and-benchmarks/-/issues/21", "msg_date": "Mon, 
1 Nov 2021 23:47:08 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nThe fix is trivial and works as expected, solving the problem\r\n\r\nTested, described details of the testing in the email thread.", "msg_date": "Tue, 02 Nov 2021 06:54:31 +0000", "msg_from": "Nikolay Samokhvalov <nikolay@samokhvalov.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Mon, Nov 1, 2021 at 11:55 PM Nikolay Samokhvalov <nikolay@samokhvalov.com>\nwrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n\n\nPlease ignore this – I didn't understand the UI.\n", "msg_date": "Tue, 2 Nov 2021 00:01:28 -0700", "msg_from": "Nikolay Samokhvalov <nikolay@samokhvalov.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "\n\n> 21 окт. 
2021 г., в 09:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> написал(а):\n> \n> If the discussion so far is correct, the following diff will fix the\n> issue.\n> \n> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n> index bd3c7a47fe..19682b73ec 100644\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> @@ -4463,6 +4463,12 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)\n> {\n> LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> KnownAssignedXidsRemovePreceding(xid);\n> + /*\n> + * reset lastOverflowedXid if we know transactions that have been possiblly\n> + * running are being gone.\n> + */\n> + if (TransactionIdPrecedes(procArray->lastOverflowedXid, xid))\n> + procArray->lastOverflowedXid = InvalidTransactionId;\n> LWLockRelease(ProcArrayLock);\n> }\n\nThe patch seems correct bugfix to me. The only question I have: is it right place from modularity standpoint? procArray->lastOverflowedXid is not a part of KnownAssignedTransactionIds?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Nov 2021 13:44:49 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "( a.On Wed, Nov 3, 2021 at 11:44 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 21 окт. 
2021 г., в 09:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> написал(а):\n> >\n> > If the discussion so far is correct, the following diff will fix the\n> > issue.\n> >\n> > diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n> > index bd3c7a47fe..19682b73ec 100644\n> > --- a/src/backend/storage/ipc/procarray.c\n> > +++ b/src/backend/storage/ipc/procarray.c\n> > @@ -4463,6 +4463,12 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)\n> > {\n> > LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > KnownAssignedXidsRemovePreceding(xid);\n> > + /*\n> > + * reset lastOverflowedXid if we know transactions that have been possiblly\n> > + * running are being gone.\n> > + */\n> > + if (TransactionIdPrecedes(procArray->lastOverflowedXid, xid))\n> > + procArray->lastOverflowedXid = InvalidTransactionId;\n> > LWLockRelease(ProcArrayLock);\n> > }\n>\n> The patch seems correct bugfix to me. The only question I have: is it right place from modularity standpoint? procArray->lastOverflowedXid is not a part of KnownAssignedTransactionIds?\n\nIt seems the right place because we take ProcArrayLock here. It would\nbe undesirable to take it twice. We could give a better name for\nExpireOldKnownAssignedTransactionIds() indicating that it could modify\nlastOverflowedXid as well. Any ideas?\n\nShould ExpireAllKnownAssignedTransactionIds() be also involved here?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 3 Nov 2021 12:08:52 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "\n\n> 3 нояб. 2021 г., в 14:08, Alexander Korotkov <aekorotkov@gmail.com> написал(а):\n> \n> ( a.On Wed, Nov 3, 2021 at 11:44 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> 21 окт. 
2021 г., в 09:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> написал(а):\n>>> \n>>> If the discussion so far is correct, the following diff will fix the\n>>> issue.\n>>> \n>>> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n>>> index bd3c7a47fe..19682b73ec 100644\n>>> --- a/src/backend/storage/ipc/procarray.c\n>>> +++ b/src/backend/storage/ipc/procarray.c\n>>> @@ -4463,6 +4463,12 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)\n>>> {\n>>> LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n>>> KnownAssignedXidsRemovePreceding(xid);\n>>> + /*\n>>> + * reset lastOverflowedXid if we know transactions that have been possiblly\n>>> + * running are being gone.\n>>> + */\n>>> + if (TransactionIdPrecedes(procArray->lastOverflowedXid, xid))\n>>> + procArray->lastOverflowedXid = InvalidTransactionId;\n>>> LWLockRelease(ProcArrayLock);\n>>> }\n>> \n>> The patch seems correct bugfix to me. The only question I have: is it right place from modularity standpoint? procArray->lastOverflowedXid is not a part of KnownAssignedTransactionIds?\n> \n> It seems the right place because we take ProcArrayLock here.\nOh.. I see. ProcArrayApplyRecoveryInfo() is taking ProcArrayLock in so many places.\n\n> It would\n> be undesirable to take it twice. We could give a better name for\n> ExpireOldKnownAssignedTransactionIds() indicating that it could modify\n> lastOverflowedXid as well. Any ideas?\nLooking more I think the name is OK. 
KnownAssignedXidsReset() and KnownAssignedXidsRemovePreceding() interferes with procArray a lot.\n\n> Should ExpireAllKnownAssignedTransactionIds() be also involved here?\nI think it's good for unification, but I do not see how procArray->lastOverflowedXid can be used after ExpireAllKnownAssignedTransactionIds().\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Nov 2021 14:32:51 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Thu, 21 Oct 2021 at 05:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 20 Oct 2021 08:55:12 -0700, Stan Hu <stanhu@gmail.com> wrote in\n> > On Wed, Oct 20, 2021 at 4:00 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > >\n> > >\n> > > > 17 окт. 2021 г., в 21:55, Dmitry Dolgov <9erthalion6@gmail.com>\n> > > написал(а):\n> > > > I wonder what would be side\n> > > > effects of clearing it when the snapshot is not suboverfloved anymore?\n> > >\n> > > I think we should just invalidate lastOverflowedXid on every\n> > > XLOG_RUNNING_XACTS if subxid_overflow == false. I can't find a reason not\n> > > to do so.\n\nI believe that to be an incorrect fix, but so very nearly correct.\nThere is a documented race condition in the generation of a\nXLOG_RUNNING_XACTS that means there could be a new overflow event\nafter the snapshot was taken but before it was logged.\n\n> > On a replica, I think it's possible for lastOverflowedXid to be set even if\n> > subxid_overflow is false on the primary and secondary (\n> > https://github.com/postgres/postgres/blob/dc899146dbf0e1d23fb24155a5155826ddce34c9/src/backend/storage/ipc/procarray.c#L1327).\n> > I thought subxid_overflow only gets set if there are more than\n> > PGPROC_MAX_CACHED_SUBXIDS (64) used in a given transaction.\n> >\n> > Should the replica be invalidating lastOverflowedXid if subxcnt goes to\n> > zero in XLOG_RUNNING_XACTS? 
But if there's an outstanding snapshot with an\n> > xmin that precedes lastOverflowedXid we might violate MVCC if we invalidate\n> > this, so I wonder if we also need to check the snapshot with the lowest\n> > xmin?\n>\n> lastOverflowedXid is the smallest subxid that possibly exists but\n> possiblly not known to the standby. So if all top-level transactions\n> older than lastOverflowedXid end, that means that all the\n> subtransactions in doubt are known to have been ended.\n\nAgreed\n\n> XLOG_RUNNING_XACTS reports oldestRunningXid, which is the oldest\n> running top-transaction. Standby expires xids in KnownAssignedXids\n> array that precede to the oldestRunningXid. We are sure that all\n> possiblly-overflown subtransactions are gone as well if the oldest xid\n> is newer than the first overflowed subtransaction.\n\nAgreed\n\n> As a cross check, the following existing code in GetSnapshotData means\n> that no overflow is not happening if the smallest xid in the known\n> assigned list is larger than lastOverflowedXid, which agrees to the\n> consideration above.\n>\n> procaray.c:2428\n> > subcount = KnownAssignedXidsGetAndSetXmin(snapshot->subxip, &xmin,\n> > xmax);\n> >\n> > if (TransactionIdPrecedesOrEquals(xmin, procArray->lastOverflowedXid))\n> > suboverflowed = true;\n>\n>\n> If the discussion so far is correct, the following diff will fix the\n> issue.\n>\n> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n> index bd3c7a47fe..19682b73ec 100644\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> @@ -4463,6 +4463,12 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)\n> {\n> LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> KnownAssignedXidsRemovePreceding(xid);\n> + /*\n> + * reset lastOverflowedXid if we know transactions that have been possiblly\n> + * running are being gone.\n> + */\n> + if (TransactionIdPrecedes(procArray->lastOverflowedXid, xid))\n> + 
procArray->lastOverflowedXid = InvalidTransactionId;\n> LWLockRelease(ProcArrayLock);\n> }\n\nSo I agree with this fix.\n\nIt is however, an undocumented modularity violation. I think that is\nacceptable because of the ProcArrayLock traffic, but needs to have a\ncomment to explain this at the call to\nExpireOldKnownAssignedTransactionIds() i.e. \" and potentially reset\nlastOverflowedXid\", as well as a comment on the\nExpireOldKnownAssignedTransactionIds() function.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 3 Nov 2021 17:50:54 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "Hi!\n\nOn Wed, Nov 3, 2021 at 8:51 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> It is however, an undocumented modularity violation. I think that is\n> acceptable because of the ProcArrayLock traffic, but needs to have a\n> comment to explain this at the call to\n> ExpireOldKnownAssignedTransactionIds() i.e. \" and potentially reset\n> lastOverflowedXid\", as well as a comment on the\n> ExpireOldKnownAssignedTransactionIds() function.\n\nThank you for your feedback. Please find the revised patch attached.\nIt incorporates this function comment changes altogether with minor\neditings and commit message. Let me know if you have further\nsuggestions.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 4 Nov 2021 01:07:05 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "Good catch on doing this in ExpireAllKnownAssignedTransactionIds() as well.\nThanks. 
Looks good to me!\n\nAs Nikolay mentioned, I think this is an important bug that we are seeing\nin production and would appreciate a backport to v12 if possible.\n\nOn Wed, Nov 3, 2021 at 3:07 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi!\n>\n> On Wed, Nov 3, 2021 at 8:51 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> > It is however, an undocumented modularity violation. I think that is\n> > acceptable because of the ProcArrayLock traffic, but needs to have a\n> > comment to explain this at the call to\n> > ExpireOldKnownAssignedTransactionIds() i.e. \" and potentially reset\n> > lastOverflowedXid\", as well as a comment on the\n> > ExpireOldKnownAssignedTransactionIds() function.\n>\n> Thank you for your feedback. Please find the revised patch attached.\n> It incorporates this function comment changes altogether with minor\n> editings and commit message. Let me know if you have further\n> suggestions.\n>\n> I'm going to push this if no objections.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n", "msg_date": "Wed, 3 Nov 2021 16:27:35 -0700", "msg_from": "Stan Hu <stanhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "On Wed, 3 Nov 2021 at 22:07, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi!\n>\n> On Wed, Nov 3, 2021 at 8:51 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > It is however, an undocumented modularity violation. I think that is\n> > acceptable because of the ProcArrayLock traffic, but needs to have a\n> > comment to explain this at the call to\n> > ExpireOldKnownAssignedTransactionIds() i.e. \" and potentially reset\n> > lastOverflowedXid\", as well as a comment on the\n> > ExpireOldKnownAssignedTransactionIds() function.\n>\n> Thank you for your feedback. Please find the revised patch attached.\n> It incorporates this function comment changes altogether with minor\n> editings and commit message. Let me know if you have further\n> suggestions.\n>\n> I'm going to push this if no objections.\n\nLooks good, go for it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 4 Nov 2021 11:45:23 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "At Thu, 4 Nov 2021 01:07:05 +0300, Alexander Korotkov <aekorotkov@gmail.com> wrote in \n> Hi!\n> \n> On Wed, Nov 3, 2021 at 8:51 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > It is however, an undocumented modularity violation. I think that is\n> > acceptable because of the ProcArrayLock traffic, but needs to have a\n> > comment to explain this at the call to\n> > ExpireOldKnownAssignedTransactionIds() i.e. 
\" and potentially reset\n> > lastOverflowedXid\", as well as a comment on the\n> > ExpireOldKnownAssignedTransactionIds() function.\n> \n> Thank you for your feedback. Please find the revised patch attached.\n> It incorporates this function comment changes altogether with minor\n> editings and commit message. Let me know if you have further\n> suggestions.\n> \n> I'm going to push this if no objections.\n\nThanks for taking a look on and refining this, Simon and Alex! (while\nI was sick in bed X:)\n\nIt looks good to me except the commit Author doesn't contain the name\nof Alexander Korotkov?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 05 Nov 2021 16:31:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "Hi!\n\nOn Fri, Nov 5, 2021 at 10:31 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 4 Nov 2021 01:07:05 +0300, Alexander Korotkov <aekorotkov@gmail.com> wrote in\n> > On Wed, Nov 3, 2021 at 8:51 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > It is however, an undocumented modularity violation. I think that is\n> > > acceptable because of the ProcArrayLock traffic, but needs to have a\n> > > comment to explain this at the call to\n> > > ExpireOldKnownAssignedTransactionIds() i.e. \" and potentially reset\n> > > lastOverflowedXid\", as well as a comment on the\n> > > ExpireOldKnownAssignedTransactionIds() function.\n> >\n> > Thank you for your feedback. Please find the revised patch attached.\n> > It incorporates this function comment changes altogether with minor\n> > editings and commit message. Let me know if you have further\n> > suggestions.\n> >\n> > I'm going to push this if no objections.\n>\n> Thanks for taking a look on and refining this, Simon and Alex! 
(while\n> I was sick in bed X:)\n>\n> It looks good to me except the commit Author doesn't contain the name\n> of Alexander Korotkov?\n\nThank you for the suggestion. And thanks to everybody for the feedback.\nPushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 6 Nov 2021 19:16:09 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" }, { "msg_contents": "At Sat, 6 Nov 2021 19:16:09 +0300, Alexander Korotkov <aekorotkov@gmail.com> wrote in \n> Pushed!\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Nov 2021 10:28:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lastOverflowedXid does not handle transaction ID wraparound" } ]
[ { "msg_contents": "Hi,\n\nI see that the recoveryStopsAfter() doesn't have any \"recovery\nstopping after XXXX\" sort of log for RECOVERY_TARGET_TIME recovery\ntarget type. It has similar logs for other recoveryTarget types\nthough. Is there any specific reason for not having it?\n\nI see that we have \"starting point-in-time recovery to XXXX\" sorts of\nlogs for all the recovery target types and also recoveryStopsBefore()\nhas a log (by setting stopsHere) for RECOVERY_TARGET_TIME.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:56:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Missing log message in recoveryStopsAfter() for RECOVERY_TARGET_TIME\n recovery target type" }, { "msg_contents": "At Wed, 13 Oct 2021 19:56:17 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> I see that the recoveryStopsAfter() doesn't have any \"recovery\n> stopping after XXXX\" sort of log for RECOVERY_TARGET_TIME recovery\n> target type. It has similar logs for other recoveryTarget types\n> though. 
Is there any specific reason for not having it?\n>\n> I see that we have \"starting point-in-time recovery to XXXX\" sorts of\n> logs for all the recovery target types and also recoveryStopsBefore()\n> has a log (by setting stopsHere) for RECOVERY_TARGET_TIME.\n\nSo you should have seen the following comment there.\n>\t/*\n>\t * There can be many transactions that share the same commit time, so\n>\t * we stop after the last one, if we are inclusive, or stop at the\n>\t * first one if we are exclusive\n>\t */\n\nSince both inclusive and exclusive cases are processed in\nrecoveryStopsBefore(), recoveryStopsAfter() has nothing to do for the\ntarget type.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:35:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing log message in recoveryStopsAfter() for\n RECOVERY_TARGET_TIME recovery target type" }, { "msg_contents": "On Thu, Oct 14, 2021 at 7:05 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 13 Oct 2021 19:56:17 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Hi,\n> >\n> > I see that the recoveryStopsAfter() doesn't have any \"recovery\n> > stopping after XXXX\" sort of log for RECOVERY_TARGET_TIME recovery\n> > target type. It has similar logs for other recoveryTarget types\n> > though. 
Is there any specific reason for not having it?\n> >\n> > I see that we have \"starting point-in-time recovery to XXXX\" sorts of\n> > logs for all the recovery target types and also recoveryStopsBefore()\n> > has a log (by setting stopsHere) for RECOVERY_TARGET_TIME.\n>\n> So you should have seen the following comment there.\n> > /*\n> > * There can be many transactions that share the same commit time, so\n> > * we stop after the last one, if we are inclusive, or stop at the\n> > * first one if we are exclusive\n> > */\n>\n> Since both inclusive and exclusive cases are processed in\n> recoveryStopsBefore(), recoveryStopsAfter() has nothing to do for the\n> target type.\n\nIIUC, the recoveryStopsBefore handles the target type\nRECOVERY_TARGET_TIME and recoveryStopsAfter has nothing to do with the\ntarget type RECOVERY_TARGET_TIME when the actual recovery ends. Am I\ncorrect? If yes, can we have a comment in recoveryStopsBefore or\nrecoveryStopsAfter?\n\nI have another question: do recoveryStopsAfter and recoveryStopsBefore\never do useful work when ArchiveRecoveryRequested is true and\nrecoveryTarget is RECOVERY_TARGET_UNSET? With Assert(recoveryTarget !=\nRECOVERY_TARGET_UNSET); added in those two functions, the regression tests\nfail. May I know what is the recovery scenario (crash recovery or\nrecovery with specified target or recovery with unspecified target)\nthat makes the startup process call recoveryStopsAfter and\nrecoveryStopsBefore when ArchiveRecoveryRequested is true?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 16 Oct 2021 17:51:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing log message in recoveryStopsAfter() for\n RECOVERY_TARGET_TIME recovery target type" } ]
[ { "msg_contents": "As you might have seen from my email in another thread, thanks to\nStephen and Cybertec staff, I am back working on cluster file\nencryption/TDE.\n\nStephen was going to research if XTS cipher mode would be a good fit for\nthis since it was determined that CTR cipher mode was too vulnerable to\nIV reuse, and the LSN provides insufficient uniqueness. Stephen\nreported having trouble finding a definitive answer, so I figured I\nwould research it myself.\n\nOf course, I found the same lack of information that Stephen did. ;-)\nNone of my classic cryptographic books cover XTS or the XEX encryption\nmode it is based on, since XTS was only standardized in 2007 and\nrecommended in 2010. (Yeah, don't get me started on poor cryptographic\ndocumentation.)\n\nTherefore, I decided to go backward and look at CTR and CBC to see how\nthe nonce is used there, and if XTS fixes problems with nonce reuse.\n\nFirst, I originally chose CTR mode since it was a streaming cipher, and\nwe therefore could skip certain page fields like the LSN. However, CTR\nis very sensitive to LSN reuse since the input bits generate encrypted\nbits in exactly the same locations on the page. (It uses a simple XOR\nagainst a cipher). Since sometimes pages with different page contents\nare encrypted with the same LSN, especially on replicas, this method\nfailed.\n\nSecond is CBC mode, which is a block cipher. I thought that meant that\nyou could only encrypt 16-byte chunks, meaning you couldn't skip\nencryption of certain page fields unless they are 16-byte chunks. \nHowever, there is something called ciphertext stealing\n(https://en.wikipedia.org/wiki/Ciphertext_stealing#CBC_ciphertext_stealing)\nwhich allows that. 
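Backing up to the CTR weakness described above: when two different page images are encrypted under the same key and IV, the keystream cancels out, so the XOR of the two ciphertexts equals the XOR of the two plaintexts. A minimal illustration — the keystream here is simulated with a hash rather than real AES-CTR, but the XOR property being demonstrated is identical:

```python
import hashlib

def fake_keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Stand-in for an AES-CTR keystream: deterministic bytes derived
    # from (key, IV). Real CTR would use AES; the XOR structure is the same.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ctr_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    ks = fake_keystream(key, iv, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# Two different page images encrypted under the SAME key and IV
# (e.g. the same LSN reused for different contents on a replica):
p1 = b"hint bits: 0x00 " * 4
p2 = b"hint bits: 0x01 " * 4
c1 = ctr_encrypt(b"key", b"same-iv", p1)
c2 = ctr_encrypt(b"key", b"same-iv", p2)

# The keystream cancels out: c1 XOR c2 == p1 XOR p2, so an observer
# sees exactly which byte positions differ between the two plaintexts.
xor = bytes(a ^ b for a, b in zip(c1, c2))
leaked_positions = [i for i, b in enumerate(xor) if b != 0]
print(leaked_positions)  # -> [14, 30, 46, 62]
```

This is exactly why IV (LSN) reuse across different page contents was considered disqualifying for CTR.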
I am not sure if OpenSSL supports this, but looking\nat my OpenSSL 1.1.1d manual entry for EVP_aes, cipher stealing is only\nmentioned for XTS.\n\nAnyway, CBC mode still needs a nonce for the first 16-byte block, and\nthen feeds the encrypted output of the first block as an IV to the second\nblock, etc. This gives us the same problem with finding a nonce per\npage. However, since it is a block cipher, the bits don't output in the\nsame locations they have on input, so that is less of a problem. There\nis also the problem that the encrypted output from one 16-byte block\ncould repeat, causing leakage.\n\nSo, let's look at how XTS is designed. First, it uses two keys. If you\nare using AES128, you need _two_ 128-bit keys. If using AES256, you\nneed two 256-bit keys. The first of the two keys is used like normal,\nto encrypt the data. The second key, which is also secret, is used to\nencrypt the values used for the IV for the first 16-byte block (in our\ncase dboid, relfilenode, blocknum, maybe LSN). This is most clearly\nexplained here:\n\n\thttps://www.kingston.com/unitedstates/en/solutions/data-security/xts-encryption\n\nThat IV is XOR'ed against both the input value and the encryption output\nvalue, as explained here as key tweaking:\n\n\thttps://crossbowerbt.github.io/xts_mode_tweaking.html\n\nThe purpose of using it before and after encryption is explained here:\n\n\thttps://crypto.stackexchange.com/questions/24431/what-is-the-benefit-of-applying-the-tweak-a-second-time-using-xts\n\nThe second 16-byte block gets an IV that is the multiplication of the\nfirst IV and an alpha value raised to the second power but mapped to a\nfinite field (Galois field, modulus a prime). This effectively means an\nattacker has _no_ idea what the IV is since it involves a secret key,\nand each 16-byte block uses a different, unpredictable IV value. 
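The tweak progression just described can be sketched concretely. In XTS as standardized (IEEE P1619), block j's tweak is the encrypted block address multiplied by alpha^j in GF(2^128), where multiplication by alpha is a one-bit shift with reduction by the polynomial x^128 + x^7 + x^2 + x + 1 (the 0x87 feedback byte). The initial tweak — which would come from AES-encrypting the block address under the second key — is simply assumed as a given 16-byte value in this sketch:

```python
def xts_mult_alpha(tweak: bytes) -> bytes:
    # One GF(2^128) multiplication by alpha: derives block j+1's tweak
    # from block j's. Byte-array little-endian shift, reduced by
    # x^128 + x^7 + x^2 + x + 1 (0x87), per IEEE P1619 XTS-AES.
    out = bytearray(16)
    carry = 0
    for j in range(16):
        out[j] = ((tweak[j] << 1) | carry) & 0xFF
        carry = tweak[j] >> 7
    if carry:
        out[0] ^= 0x87
    return bytes(out)

# The initial tweak T0 would be AES(key2, block_address); here we just
# take an arbitrary 16-byte value and derive the next few block tweaks.
t = bytes.fromhex("00000000000000000000000000000080")
tweaks = [t]
for _ in range(3):
    t = xts_mult_alpha(t)
    tweaks.append(t)
print([tw.hex() for tw in tweaks])
```

Because every 16-byte block's tweak is a secret-key-derived value run through this chain, no two blocks within a page share a tweak.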
XTS\nalso supports ciphertext stealing by default so we can use the LSN if we\nwant, but we aren't sure we need to.\n\nFinally, there is an interesting web page about when not to use XTS:\n\n\thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n\nBasically, what XTS does is to make the IV unknown to attackers and\nnon-repeating except for multiple writes to a specific 16-byte block\n(with no LSN change). What isn't clear is if repeated encryption of\ndifferent data in the same 16-byte block can leak data.\n\nThis probably needs more research and maybe we need to write something\nup like the above and let security researchers review it since there\ndoesn't seem to be enough documentation for us to decide ourselves.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 18:26:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> As you might have seen from my email in another thread, thanks to\n> Stephen and Cybertec staff, I am back working on cluster file\n> encryption/TDE.\n> \n> Stephen was going to research if XTS cipher mode would be a good fit for\n> this since it was determined that CTR cipher mode was too vulnerable to\n> IV reuse, and the LSN provides insufficient uniqueness. Stephen\n> reported having trouble finding a definitive answer, so I figured I\n> would research it myself.\n> \n> Of course, I found the same lack of information that Stephen did. ;-)\n> None of my classic cryptographic books cover XTS or the XEX encryption\n> mode it is based on, since XTS was only standardized in 2007 and\n> recommended in 2010. 
(Yeah, don't get me started on poor cryptographic\n> documentation.)\n> \n> Therefore, I decide to go backward and look at CTR and CBC to see how\n> the nonce is used there, and if XTS fixes problems with nonce reuse.\n> \n> First, I originally chose CTR mode since it was a streaming cipher, and\n> we therefore could skip certain page fields like the LSN. However, CTR\n> is very sensitive to LSN reuse since the input bits generate encrypted\n> bits in exactly the same locations on the page. (It uses a simple XOR\n> against a cipher). Since sometimes pages with different page contents\n> are encrypted with the same LSN, especially on replicas, this method\n> failed.\n> \n> Second is CBC mode. which is a block cipher. I thought that meant that\n> you could only encrypt 16-byte chunks, meaning you couldn't skip\n> encryption of certain page fields unless they are 16-byte chunks. \n> However, there is something called ciphertext stealing\n> (https://en.wikipedia.org/wiki/Ciphertext_stealing#CBC_ciphertext_stealing)\n> which allows that. I am not sure if OpenSSL supports this, but looking\n> at my OpenSSL 1.1.1d manual entry for EVP_aes, cipher stealing is only\n> mentioned for XTS.\n> \n> Anyway, CBC mode still needs a nonce for the first 16-byte block, and\n> then feeds the encrypted output of the first block as a IV to the second\n> block, etc. This gives us the same problem with finding a nonce per\n> page. However, since it is a block cipher, the bits don't output in the\n> same locations they have on input, so that is less of a problem. There\n> is also the problem that the encrypted output from one 16-byte block\n> could repeat, causing leakage.\n> \n> So, let's look how XTS is designed. First, it uses two keys. If you\n> are using AES128, you need _two_ 128-bit keys. If using AES256, you\n> need two 256-bit keys. The first of the two keys is used like normal,\n> to encrypt the data. 
The second key, which is also secret, is used to\n> encrypt the values used for the IV for the first 16-byte block (in our\n> case dboid, relfilenode, blocknum, maybe LSN). This is most clearly\n> explained here:\n> \n> \thttps://www.kingston.com/unitedstates/en/solutions/data-security/xts-encryption\n> \n> That IV is XOR'ed against both the input value and the encryption output\n> value, as explained here as key tweaking:\n> \n> \thttps://crossbowerbt.github.io/xts_mode_tweaking.html\n> \n> The purpose of using it before and after encryption is explained here:\n> \n> \thttps://crypto.stackexchange.com/questions/24431/what-is-the-benefit-of-applying-the-tweak-a-second-time-using-xts\n> \n> The second 16-byte block gets an IV that is the multiplication of the\n> first IV and an alpha value raised to the second power but mapped to a\n> finite field (Galois field, modulus a prime). This effectively means an\n> attacker has _no_ idea what the IV is since it involves a secret key,\n> and each 16-byte block uses a different, unpredictable IV value. XTS\n> also supports ciphertext stealing by default so we can use the LSN if we\n> want, but we aren't sure we need to.\n\nYeah, this all seems to be about where I got to too.\n\n> Finally, there is an interesting web page about when not to use XTS:\n> \n> \thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n\nThis particular article always struck me as more of a reason for us, at\nleast, to use XTS than to not- in particular the very first comment it\nmakes, which seems to be pretty well supported, is: \"XTS is the de-facto\nstandard disk encryption mode.\" Much of the rest of it is the well\ntrodden discussion we've had about how FDE (or TDE in our case) doesn't\nprotect against all the attack vectors that sometimes people think it\ndoes. 
Another point is that XTS isn't authenticated- something else we\nknow quite well around here and isn't news.\n\n> Basically, what XTS does is to make the IV unknown to attackers and\n> non-repeating except for multiple writes to a specific 16-byte block\n> (with no LSN change). What isn't clear is if repeated encryption of\n> different data in the same 16-byte block can leak data.\n\nAny time a subset of the data is changed but the rest of it isn't,\nthere's a leak of information. This is a really good example of exactly\nwhat that looks like:\n\nhttps://github.com/robertdavidgraham/ecb-penguin\n\nIn our case, if/when this happens (no LSN change, repeated encryption\nof the same block), someone might be able to deduce that hint bits were\nbeing updated/changed, and where some of those are in the block.\n\nThat said, I don't think that's really a huge issue or something that's\na show stopper or a reason to hold off on using XTS. Note that what\nthose bits actually *are* isn't leaked, just that they changed in some\nfashion inside of that 16-byte cipher text block. 
That they're directly\nleaked with CTR is why there was concern raised about using that method,\nas discussed above and previously.\n\n> This probably needs more research and maybe we need to write something\n> up like the above and let security researchers review it since there\n> doesn't seem to be enough documentation for us to decide ourselves.\n\nThe one issue identified here is hopefully answered above and given that\nwhat you've found matches what I found, I'd argue that moving forward\nwith XTS makes sense.\n\nThe other bit of research that I wanted to do, and thanks for sending\nthis and prodding me to go do so, was to look at other implementations\nand see what they do for the IV when it comes to using XTS, and this is\nwhat I found:\n\nhttps://wiki.gentoo.org/wiki/Dm-crypt_full_disk_encryption\n\nSpecifically: The default cipher for LUKS is nowadays aes-xts-plain64\n\nand then this:\n\nhttps://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n\nwhere plain64 is defined as:\n\nplain64: the initial vector is the 64-bit little-endian version of the\nsector number, padded with zeros if necessary\n\nThat is, the default for LUKS is AES, XTS, with a simple IV. That\nstrikes me as a pretty ringing endorsement.\n\nNow, to address the concern around re-encrypting a block with the same\nkey+IV but different data and leaking what parts of the page changed, I\ndo think we should use the LSN and have it change regularly (including\nunlogged tables) but that's just because it's relatively easy for us to\ndo and means an attacker wouldn't be able to tell what part of the page\nchanged when the LSN was also changed. 
That was also recommended by\nNIST and that's a pretty strong endorsement also.\n\nI'm all for getting security folks and whomever else to come and review\nthis thread and chime in with their thoughts, but I don't think it's a\nreason to hold off on moving forward with the approach that we've been\nconverging towards.\n\nThanks!\n\nStephen", "msg_date": "Fri, 15 Oct 2021 15:22:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/15/21 21:22, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n>> As you might have seen from my email in another thread, thanks to\n>> Stephen and Cybertec staff, I am back working on cluster file\n>> encryption/TDE.\n>>\n>> Stephen was going to research if XTS cipher mode would be a good fit for\n>> this since it was determined that CTR cipher mode was too vulnerable to\n>> IV reuse, and the LSN provides insufficient uniqueness. Stephen\n>> reported having trouble finding a definitive answer, so I figured I\n>> would research it myself.\n>>\n>> Of course, I found the same lack of information that Stephen did. ;-)\n>> None of my classic cryptographic books cover XTS or the XEX encryption\n>> mode it is based on, since XTS was only standardized in 2007 and\n>> recommended in 2010. (Yeah, don't get me started on poor cryptographic\n>> documentation.)\n>>\n>> Therefore, I decide to go backward and look at CTR and CBC to see how\n>> the nonce is used there, and if XTS fixes problems with nonce reuse.\n>>\n>> First, I originally chose CTR mode since it was a streaming cipher, and\n>> we therefore could skip certain page fields like the LSN. However, CTR\n>> is very sensitive to LSN reuse since the input bits generate encrypted\n>> bits in exactly the same locations on the page. (It uses a simple XOR\n>> against a cipher). 
Since sometimes pages with different page contents\n>> are encrypted with the same LSN, especially on replicas, this method\n>> failed.\n>>\n>> Second is CBC mode. which is a block cipher. I thought that meant that\n>> you could only encrypt 16-byte chunks, meaning you couldn't skip\n>> encryption of certain page fields unless they are 16-byte chunks.\n>> However, there is something called ciphertext stealing\n>> (https://en.wikipedia.org/wiki/Ciphertext_stealing#CBC_ciphertext_stealing)\n>> which allows that. I am not sure if OpenSSL supports this, but looking\n>> at my OpenSSL 1.1.1d manual entry for EVP_aes, cipher stealing is only\n>> mentioned for XTS.\n>>\n>> Anyway, CBC mode still needs a nonce for the first 16-byte block, and\n>> then feeds the encrypted output of the first block as a IV to the second\n>> block, etc. This gives us the same problem with finding a nonce per\n>> page. However, since it is a block cipher, the bits don't output in the\n>> same locations they have on input, so that is less of a problem. There\n>> is also the problem that the encrypted output from one 16-byte block\n>> could repeat, causing leakage.\n>>\n>> So, let's look how XTS is designed. First, it uses two keys. If you\n>> are using AES128, you need _two_ 128-bit keys. If using AES256, you\n>> need two 256-bit keys. The first of the two keys is used like normal,\n>> to encrypt the data. The second key, which is also secret, is used to\n>> encrypt the values used for the IV for the first 16-byte block (in our\n>> case dboid, relfilenode, blocknum, maybe LSN). 
This is most clearly\n>> explained here:\n>>\n>> \thttps://www.kingston.com/unitedstates/en/solutions/data-security/xts-encryption\n>>\n>> That IV is XOR'ed against both the input value and the encryption output\n>> value, as explained here as key tweaking:\n>>\n>> \thttps://crossbowerbt.github.io/xts_mode_tweaking.html\n>>\n>> The purpose of using it before and after encryption is explained here:\n>>\n>> \thttps://crypto.stackexchange.com/questions/24431/what-is-the-benefit-of-applying-the-tweak-a-second-time-using-xts\n>>\n>> The second 16-byte block gets an IV that is the multiplication of the\n>> first IV and an alpha value raised to the second power but mapped to a\n>> finite field (Galois field, modulus a prime). This effectively means an\n>> attacker has _no_ idea what the IV is since it involves a secret key,\n>> and each 16-byte block uses a different, unpredictable IV value. XTS\n>> also supports ciphertext stealing by default so we can use the LSN if we\n>> want, but we aren't sure we need to.\n> \n> Yeah, this all seems to be about where I got to too.\n> \n>> Finally, there is an interesting web page about when not to use XTS:\n>>\n>> \thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n> \n> This particular article always struck me as more of a reason for us, at\n> least, to use XTS than to not- in particular the very first comment it\n> makes, which seems to be pretty well supported, is: \"XTS is the de-facto\n> standard disk encryption mode.\" Much of the rest of it is the well\n> trodden discussion we've had about how FDE (or TDE in our case) doesn't\n> protect against all the attack vectors that sometimes people think it\n> does. Another point is that XTS isn't authenticated- something else we\n> know quite well around here and isn't news.\n> \n>> Basically, what XTS does is to make the IV unknown to attackers and\n>> non-repeating except for multiple writes to a specific 16-byte block\n>> (with no LSN change). 
What isn't clear is if repeated encryption of\n>> different data in the same 16-byte block can leak data.\n> \n> Any time a subset of the data is changed but the rest of it isn't,\n> there's a leak of information. This is a really good example of exactly\n> what that looks like:\n> \n> https://github.com/robertdavidgraham/ecb-penguin\n> \n> In our case, if/when this happens (no LSN change, repeated encryption\n> of the same block), someone might be able to deduce that hint bits were\n> being updated/changed, and where some of those are in the block.\n> \n> That said, I don't think that's really a huge issue or something that's\n> a show stopper or a reason to hold off on using XTS. Note that what\n> those bits actually *are* isn't leaked, just that they changed in some\n> fashion inside of that 16-byte cipher text block. That they're directly\n> leaked with CTR is why there was concern raised about using that method,\n> as discussed above and previously.\n> \n\nYeah. With CTR you pretty much learn where the hint bits are exactly, while \nwith XTS the whole ciphertext changes.\n\nThis also means CTR is much more malleable, i.e. you can tweak the \nciphertext bits to flip the plaintext, while with XTS that's not really \npossible - it's pretty much guaranteed to break the block structure. 
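That malleability is easy to demonstrate with any XOR-style stream construction (a hash-based stand-in keystream below, not real AES-CTR): flipping a ciphertext bit flips exactly the corresponding plaintext bit, with no knowledge of the key. Under XTS, the same flip would instead scramble the entire 16-byte block containing it:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in keystream cipher with the same XOR structure as CTR.
    ks = b""
    n = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + n.to_bytes(8, "big")).digest()
        n += 1
    return bytes(d ^ k for d, k in zip(data, ks))

plaintext = b"transfer $0010 to account B"
ciphertext = xor_stream(b"secret", plaintext)

# Malleability: flip ciphertext bits to flip the same plaintext bits.
# Turn the '1' (0x31) at offset 12 into '9' (0x39) without the key:
tampered = bytearray(ciphertext)
tampered[12] ^= 0x31 ^ 0x39
print(xor_stream(b"secret", bytes(tampered)))  # -> b'transfer $0090 to account B'
```

With a 16-byte block cipher mode such as XTS, this kind of targeted one-bit edit is not possible; any ciphertext change garbles the whole block on decryption.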
Not \nsure if that's an issue for our use case, but if it is then neither of \nthe two modes is a solution.\n\n>> This probably needs more research and maybe we need to write something\n>> up like the above and let security researchers review it since there\n>> doesn't seem to be enough documentation for us to decide ourselves.\n> \n> The one issue identified here is hopefully answered above and given that\n> what you've found matches what I found, I'd argue that moving forward\n> with XTS makes sense.\n> \n\n+1\n\n> The other bit of research that I wanted to do, and thanks for sending\n> this and prodding me to go do so, was to look at other implementations\n> and see what they do for the IV when it comes to using XTS, and this is\n> what I found:\n> \n> https://wiki.gentoo.org/wiki/Dm-crypt_full_disk_encryption\n> \n> Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n> \n> and then this:\n> \n> https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n> \n> where plain64 is defined as:\n> \n> plain64: the initial vector is the 64-bit little-endian version of the\n> sector number, padded with zeros if necessary\n> \n> That is, the default for LUKS is AES, XTS, with a simple IV. That\n> strikes me as a pretty ringing endorsement.\n> \n\nSeems reasonable, on the assumption the threat models are the same.\n\n> Now, to address the concern around re-encrypting a block with the same\n> key+IV but different data and leaking what parts of the page changed, I\n> do think we should use the LSN and have it change regularly (including\n> unlogged tables) but that's just because it's relatively easy for us to\n> do and means an attacker wouldn't be able to tell what part of the page\n> changed when the LSN was also changed. That was also recommended by\n> NIST and that's a pretty strong endorsement also.\n> \n\nNot sure - it seems a bit weird to force LSN change even in cases that \ndon't generate any WAL. 
I was not following the encryption thread and \nmaybe it was discussed/rejected there, but I've always imagined we'd \nhave a global nonce generator (similar to a sequence) and we'd store it \nat the end of each block, or something like that.\n\n> I'm all for getting security folks and whomever else to come and review\n> this thread and chime in with their thoughts, but I don't think it's a\n> reason to hold off on moving forward with the approach that we've been\n> converging towards.\n> \n\n+1\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Oct 2021 22:57:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n>\n> and then this:\n>\n> https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n>\n> where plain64 is defined as:\n>\n> plain64: the initial vector is the 64-bit little-endian version of the\n> sector number, padded with zeros if necessary\n>\n> That is, the default for LUKS is AES, XTS, with a simple IV. That\n> strikes me as a pretty ringing endorsement.\n\nYes, that sounds promising. It might not hurt to check for other\nprecedents as well, but that seems like a pretty good one.\n\nI'm not very convinced that using the LSN for any of this is a good\nidea. 
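For reference, the plain64 IV quoted above is trivial to construct; a sketch of the same convention applied to a block number (the 16-byte width matching the AES block size):

```python
import struct

def plain64_iv(block_number: int) -> bytes:
    # dm-crypt "plain64": 64-bit little-endian sector/block number,
    # zero-padded out to the 16-byte AES IV width.
    return struct.pack("<Q", block_number) + b"\x00" * 8

print(plain64_iv(1).hex())  # -> '01000000000000000000000000000000'
print(plain64_iv(0x1234).hex())
```

Under XTS this simple, predictable value is acceptable as the tweak input because it is then encrypted under the secret second key before being applied.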
Something that changes most of the time but not all the time\nseems more like it could hurt by masking fuzzy thinking more than it\nhelps anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Oct 2021 17:02:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Hi,\n\nOn 2021-10-15 15:22:48 -0400, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > Finally, there is an interesting web page about when not to use XTS:\n> > \n> > \thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n> \n> This particular article always struck me as more of a reason for us, at\n> least, to use XTS than to not- in particular the very first comment it\n> makes, which seems to be pretty well supported, is: \"XTS is the de-facto\n> standard disk encryption mode.\"\n\nI don't find that line of argument *that* convincing. The reason XTS is the\nde-facto standard for generic block layer encryption is that you can't\nadd additional data for each block without very significant overhead\n(basically needing journaling to ensure that the data doesn't get out of\nsync). But we don't really face the same situation - we *can* add additional\ndata.\n\nWith something like AES-GCM-SIV we can use the additional data to get IV reuse\nresistance *and* authentication. And while perhaps we are ok with the IV reuse\nguarantees XTS has, it seems pretty clear that we'll want guaranteed\nauthenticity at some point. And then we'll need extra data anyway.\n\nThus, to me, it doesn't seem worth going down the XTS route, just to\ntemporarily save a bit of implementation effort. 
We'll have to endure that\npain anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Oct 2021 14:21:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/15/21 23:02, Robert Haas wrote:\n> On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n>>\n>> and then this:\n>>\n>> https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n>>\n>> where plain64 is defined as:\n>>\n>> plain64: the initial vector is the 64-bit little-endian version of the\n>> sector number, padded with zeros if necessary\n>>\n>> That is, the default for LUKS is AES, XTS, with a simple IV. That\n>> strikes me as a pretty ringing endorsement.\n> \n> Yes, that sounds promising. It might not hurt to check for other\n> precedents as well, but that seems like a pretty good one.\n> \n\nTrueCrypt/VeraCrypt uses XTS too, I think. There's an overview of other \nFDE products at [1], and some of them use XTS, but I would take that \nwith a grain of salt - some of the products are somewhat obscure, very \nold, or both.\n\nWhat is probably more interesting is that there's an IEEE standard [2] \ndealing with encrypted shared storage, and that uses XTS too. I'd bet \nthere's a bunch of smart cryptographers involved.\n\n\n[1] https://en.wikipedia.org/wiki/Comparison_of_disk_encryption_software\n\n[2] https://en.wikipedia.org/wiki/IEEE_P1619\n\n> I'm not very convinced that using the LSN for any of this is a good\n> idea. 
Something that changes most of the time but not all the time\n> seems more like it could hurt by masking fuzzy thinking more than it\n> helps anything.\n> \n\nI haven't been following the discussion about using LSN, but I agree \nthat while using it seems convenient, the consequences of some changes \nnot incrementing LSN seem potentially disastrous, depending on the \nencryption mode.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Oct 2021 23:26:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Fri, Oct 15, 2021 at 10:57:03PM +0200, Tomas Vondra wrote:\n> > That said, I don't think that's really a huge issue or something that's\n> > a show stopper or a reason to hold off on using XTS. Note that what\n> > those bits actually *are* isn't leaked, just that they changed in some\n> > fashion inside of that 16-byte cipher text block. That they're directly\n> > leaked with CTR is why there was concern raised about using that method,\n> > as discussed above and previously.\n> > \n> \n> Yeah. With CTR you pretty much learn where the hint bits are exactly, while with\n> XTS the whole ciphertext changes.\n> \n> This also means CTR is much more malleable, i.e. you can tweak the\n> ciphertext bits to flip the plaintext, while with XTS that's not really\n> possible - it's pretty much guaranteed to break the block structure. Not\n> sure if that's an issue for our use case, but if it is then neither of the\n> two modes is a solution.\n\nYes, this is a very good point. Let's look at the impact of _not_ using\nthe LSN. For CTR (already rejected) bit changes would be visible by\ncomparing old/new page contents. For CBC (also not under consideration)\nthe first 16-byte block would show a change, and all later 16-byte\nblocks would show a change. 
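The per-16-byte-block change visibility being compared here reduces to a simple block-wise diff of the old and new ciphertext images of a page (illustrative only):

```python
def changed_blocks(before: bytes, after: bytes, block: int = 16) -> list:
    # Which 16-byte ciphertext blocks differ between two images of the
    # same page. Under XTS this is all an observer learns; under CTR the
    # exact differing bit positions would be visible instead.
    assert len(before) == len(after)
    return [i // block
            for i in range(0, len(before), block)
            if before[i:i + block] != after[i:i + block]]

old = bytes(64)
new = bytearray(old)
new[5] ^= 0xFF    # a change somewhere in block 0
new[50] ^= 0x01   # a change somewhere in block 3
print(changed_blocks(old, bytes(new)))  # -> [0, 3]
```

The observer learns that blocks 0 and 3 changed, but nothing about how many bits changed within them or where.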
For CBC, you see the 16-byte blocks change,\nbut you have no idea how many bits were changed, and in what locations\nin the 16-byte block (AES uses substitution and diffusion). For XTS,\nbecause earlier blocks don't change the IV used by later blocks like\nCBC, you would be able to see each 16-byte block that changed in the 8k\npage. Again, you would not know the number of bits changed or their\nlocations.\n\nDo we think knowing which 16-byte blocks on an 8k page change would leak\nuseful information? If so, we should use the LSN and just accept that\nsome cases might leak as described above. If we don't care, then we can\nskip the use of the LSN and simplify the patch.\n\n> Not sure - it seems a bit weird to force LSN change even in cases that don't\n> generate any WAL. I was not following the encryption thread and maybe it was\n> discussed/rejected there, but I've always imagined we'd have a global nonce\n> generator (similar to a sequence) and we'd store it at the end of each\n> block, or something like that.\n\nStoring the nonce in the page means more code complexity, possible\nperformance impact, and the inability to create standbys via binary\nreplication that use cluster file encryption.\n\nAs a final comment to Andres's email, adding a GCM has the problems\nabove, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\ncould also affect the integrity of the data. 
Someone could also restore\nan old copy of a page to revert a change, and that would not be\ndetected even by GCM.\n\nI consider this a checkbox feature and making it too complex will cause\nit to be rightly rejected.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 16 Oct 2021 10:16:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Hi,\n\nOn 2021-10-16 10:16:25 -0400, Bruce Momjian wrote:\n> As a final comment to Andres's email, adding a GCM has the problems\n> above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n> could also affect the integrity of the data. Someone could also restore\n> an old copy of a page to revert a change, and that would not be\n> detected even by GCM.\n\n> I consider this a checkbox feature and making it too complex will cause\n> it to be rightly rejected.\n\nYou're just deferring / hiding the complexity. For one, we'll need integrity\nbefore long if we add encryption support. Then we'll deal with a more complex\non-disk format because there will be two different ways of encrypting. 
For\nanother, you're spreading out the security analysis to a lot of places in the\ncode and more importantly to future changes affecting on-disk data.\n\nIf it's really just a checkbox feature without a real use case, then we should\njust reject requests for it and use our energy for useful things.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 16 Oct 2021 09:15:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Sat, Oct 16, 2021 at 09:15:05AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2021-10-16 10:16:25 -0400, Bruce Momjian wrote:\n> > As a final comment to Andres's email, adding a GCM has the problems\n> > above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n> > could also affect the integrity of the data. Someone could also restore\n> > and old copy of a patch to revert a change, and that would not be\n> > detected even by GCM.\n> \n> > I consider this a checkbox feature and making it too complex will cause\n> > it to be rightly rejected.\n> \n> You're just deferring / hiding the complexity. For one, we'll need integrity\n> before long if we add encryption support. Then we'll deal with a more complex\n> on-disk format because there will be two different ways of encrypting. For\n> another, you're spreading out the security analysis to a lot of places in the\n> code and more importantly to future changes affecting on-disk data.\n> \n> If it's really just a checkbox feature without a real use case, then we should\n> just reject requests for it and use our energy for useful things.\n\nAgreed. 
That is the conclusion I came to in May:\n\n\thttps://www.postgresql.org/message-id/20210526210201.GZ3048%40momjian.us\n\thttps://www.postgresql.org/message-id/20210527160003.GF5646%40momjian.us\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 16 Oct 2021 12:28:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/16/21 16:16, Bruce Momjian wrote:\n> On Fri, Oct 15, 2021 at 10:57:03PM +0200, Tomas Vondra wrote:\n>>> That said, I don't think that's really a huge issue or something that's\n>>> a show stopper or a reason to hold off on using XTS. Note that what\n>>> those bits actually *are* isn't leaked, just that they changed in some\n>>> fashion inside of that 16-byte cipher text block. That they're directly\n>>> leaked with CTR is why there was concern raised about using that method,\n>>> as discussed above and previously.\n>>>\n>>\n>> Yeah. With CTR you pretty learn where the hint bits are exactly, while with\n>> XTS the whole ciphertext changes.\n>>\n>> This also means CTR is much more malleable, i.e. you can tweak the\n>> ciphertext bits to flip the plaintext, while with XTS that's not really\n>> possible - it's pretty much guaranteed to break the block structure. Not\n>> sure if that's an issue for our use case, but if it is then neither of the\n>> two modes is a solution.\n> \n> Yes, this is a vary good point. Let's look at the impact of _not_ using\n> the LSN. For CTR (already rejected) bit changes would be visible by\n> comparing old/new page contents. For CBC (also not under consideration)\n> the first 16-byte block would show a change, and all later 16-byte\n> blocks would show a change. 
For CBC, you see the 16-byte blocks change,\n> but you have no idea how many bits were changed, and in what locations\n> in the 16-byte block (AES uses substitution and diffusion). For XTS,\n> because earlier blocks don't change the IV used by later blocks like\n> CBC, you would be able to see each 16-byte block that changed in the 8k\n> page. Again, you would not know the number of bits changed or their\n> locations.\n> \n> Do we think knowing which 16-byte blocks on an 8k page change would leak\n> useful information? If so, we should use the LSN and just accept that\n> some cases might leak as described above. If we don't care, then we can\n> skip the use of the LSN and simplify the patch.\n> \n>> Not sure - it seems a bit weird to force LSN change even in cases that don't\n>> generate any WAL. I was not following the encryption thread and maybe it was\n>> discussed/rejected there, but I've always imagined we'd have a global nonce\n>> generator (similar to a sequence) and we'd store it at the end of each\n>> block, or something like that.\n> \n> Storing the nonce in the page means more code complexity, possible\n> performance impact, and the inability to create standbys via binary\n> replication that use cluster file encryption.\n> \n\nWould it really be that complex? Reserving a bunch of bytes at the end \nof each encrypted page (a bit like the \"special\" space, but after \nencryption) seems fairly straightforward. And I don't quite see why \nwould this have a measurable impact, given the nonce is 16B at most. 
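To put rough numbers on that — a hedged back-of-the-envelope sketch (8kB is PostgreSQL's default page size; the 16-byte nonce and 16-byte authentication tag sizes are assumptions for illustration, not anything PostgreSQL stores today):

```python
# Space cost of reserving per-page crypto metadata at the end of each page.
# BLCKSZ is PostgreSQL's default page size; nonce/tag sizes are assumed.
BLCKSZ = 8192
NONCE_BYTES = 16   # hypothetical per-page nonce
TAG_BYTES = 16     # hypothetical per-page authentication tag

def usable_bytes(with_tag: bool) -> int:
    """Bytes left for page content after carving out crypto metadata."""
    return BLCKSZ - NONCE_BYTES - (TAG_BYTES if with_tag else 0)

print(f"nonce only: {1 - usable_bytes(False) / BLCKSZ:.2%} overhead")  # 0.20%
print(f"nonce+tag:  {1 - usable_bytes(True) / BLCKSZ:.2%} overhead")   # 0.39%
```

Either way the fixed overhead is a fraction of a percent of the page, which supports the point that the encryption itself dominates the cost.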
The \nencryption is likely way more expensive.\n\nMoreover, it seems fairly reasonable to trade a bit of code complexity \nfor something LSN-based which seems simpler but apparently has various \nweak points and is much harder to reason about.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:11:49 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 10/16/21 18:28, Bruce Momjian wrote:\n> On Sat, Oct 16, 2021 at 09:15:05AM -0700, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-10-16 10:16:25 -0400, Bruce Momjian wrote:\n>>> As a final comment to Andres's email, adding a GCM has the problems\n>>> above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n>>> could also affect the integrity of the data. Someone could also restore\n>>> and old copy of a patch to revert a change, and that would not be\n>>> detected even by GCM.\n>>\n>>> I consider this a checkbox feature and making it too complex will cause\n>>> it to be rightly rejected.\n>>\n>> You're just deferring / hiding the complexity. For one, we'll need integrity\n>> before long if we add encryption support. Then we'll deal with a more complex\n>> on-disk format because there will be two different ways of encrypting. For\n>> another, you're spreading out the security analysis to a lot of places in the\n>> code and more importantly to future changes affecting on-disk data.\n>>\n\nI've argued for storing the nonce, but I don't quite see why would we \nneed integrity guarantees?\n\nAFAICS the threat model the patch aims to address is an attacker who can \nobserve the data (e.g. a low-privileged OS user), but can't modify the \nfiles. 
Which seems like a reasonable model for shared environments.\n\nIMO extending this to cases where the attacker can modify the data moves \nthe goalposts quite significantly. And it's quite possible authenticated \nencryption would not be enough to prevent that, because that still works \nonly at block level, and you can probably do a lot of harm with replay \nattacks (e.g. replacing blocks with older versions). And if you can \nmodify the data directory / config files, what are the chances you can't \njust get access to the database, trace the processes or whatever?\n\nWe already have a way to check integrity by storing page checksum, but \nI'm not sure if that's good enough (there are a lot of subtle issues with \nbuilding a proper AEAD scheme).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:23:49 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Just a mention. The HMAC (or AE/AD) can be disabled in AES-GCM. HMAC in \r\nAES-GCM is an encrypt-then-hash MAC.\r\n\r\nCRC-32 is not a crypto-safe hash (technically CRC-32 is not a hash \r\nfunction). Cryptographers may be unhappy with CRC-32.\r\n\r\nI think CRC or SHA is not so important. 
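(An illustrative aside on the CRC-32 point: CRC-32 is affine over GF(2), so the effect of any chosen bit-flip on the checksum is fully predictable with no secret involved — which is exactly why it offers no integrity protection against an active attacker, unlike a keyed MAC. A stdlib-only Python sketch; the toy 16-byte "pages" are invented for the example:)

```python
import hashlib
import hmac
import zlib

a = b"page contents v1"          # toy 16-byte "page"
b = b"page contents v2"          # attacker's target contents
delta = bytes(x ^ y for x, y in zip(a, b))
zeros = bytes(len(a))

# CRC-32 is affine for fixed-length input:
#   crc(a XOR b) == crc(a) XOR crc(b) XOR crc(00...0)
# so an attacker can always "fix up" the checksum of a tampered page.
assert zlib.crc32(delta) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zeros)

# A keyed MAC has no such exploitable structure without the key.
key = b"\x42" * 32
assert hmac.new(key, a, hashlib.sha256).digest() != \
       hmac.new(key, b, hashlib.sha256).digest()
```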
If IV can be stored, I believe \r\nthere should be enough space to store HMAC.\r\n\r\nOn 2021/10/18 05:23, Tomas Vondra wrote:\r\n> \r\n> I've argued for storing the nonce, but I don't quite see why would we \r\n> need integrity guarantees?", "msg_date": "Mon, 18 Oct 2021 10:19:48 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/16 04:57, Tomas Vondra wrote:\r\n >\r\n > Seems reasonable, on the assumption the threat models are the same.\r\n\r\nOn 2021/10/16 03:22, Stephen Frost wrote:\r\n> plain64: the initial vector is the 64-bit little-endian version of the\r\n> sector number, padded with zeros if necessary\r\n> \r\n> That is, the default for LUKS is AES, XTS, with a simple IV. That\r\n> strikes me as a pretty ringing endorsement\r\nOn 2021/10/18 05:23, Tomas Vondra wrote:\r\n >\r\n > AFAICS the threat model the patch aims to address is an attacker who can\r\n > observe the data (e.g. a low-privileged OS user), but can't modify the\r\n > files. 
Which seems like a reasonable model for shared environments.\r\n\r\nI agree with this threat model.\r\n\r\nAnd if PostgreSQL is using XTS, there is no difference compared with dm-encrypt.\r\nThe user can use dm-encrypt directly.", "msg_date": "Mon, 18 Oct 2021 10:35:48 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/15/21 21:22, Stephen Frost wrote:\n> >Now, to address the concern around re-encrypting a block with the same\n> >key+IV but different data and leaking what parts of the page changed, I\n> >do think we should use the LSN and have it change regularly (including\n> >unlogged tables) but that's just because it's relatively easy for us to\n> >do and means an attacker wouldn't be able to tell what part of the page\n> >changed when the LSN was also changed. That was also recommended by\n> >NIST and that's a pretty strong endorsement also.\n> \n> Not sure - it seems a bit weird to force LSN change even in cases that don't\n> generate any WAL. I was not following the encryption thread and maybe it was\n> discussed/rejected there, but I've always imagined we'd have a global nonce\n> generator (similar to a sequence) and we'd store it at the end of each\n> block, or something like that.\n\nThe 'LSN' being referred to here isn't the regular LSN that is\nassociated with the WAL but rather the separate FakeLSN counter which we\nalready have. 
I wasn't suggesting having the regular LSN change in\ncases that don't generate WAL.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n> >\n> > and then this:\n> >\n> > https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n> >\n> > where plain64 is defined as:\n> >\n> > plain64: the initial vector is the 64-bit little-endian version of the\n> > sector number, padded with zeros if necessary\n> >\n> > That is, the default for LUKS is AES, XTS, with a simple IV. That\n> > strikes me as a pretty ringing endorsement.\n> \n> Yes, that sounds promising. It might not hurt to check for other\n> precedents as well, but that seems like a pretty good one.\n> \n> I'm not very convinced that using the LSN for any of this is a good\n> idea. Something that changes most of the time but not all the time\n> seems more like it could hurt by masking fuzzy thinking more than it\n> helps anything.\n\nThis argument doesn't come across as very strong at all to me,\nparticularly when we have explicit recommendations from NIST that having\nthe IV vary more is beneficial. While this would be using the LSN, the\nfact that the LSN changes most of the time but not all of the time isn't\nnew and is something we already have to deal with. 
I'd think we'd\naddress the concern about mis-thinking around how this works by\nproviding a README and/or an appropriate set of comments around what's\nbeing done and why.\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-10-15 15:22:48 -0400, Stephen Frost wrote:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> > > Finally, there is an interesting web page about when not to use XTS:\n> > > \n> > > \thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n> > \n> > This particular article always struck me as more of a reason for us, at\n> > least, to use XTS than to not- in particular the very first comment it\n> > makes, which seems to be pretty well supported, is: \"XTS is the de-facto\n> > standard disk encryption mode.\"\n> \n> I don't find that line of argument *that* convincing. The reason XTS is the\n> de-facto standard is that for generic block layer encryption is that you can't\n> add additional data for each block without very significant overhead\n> (basically needing journaling to ensure that the data doesn't get out of\n> sync). But we don't really face the same situation - we *can* add additional\n> data.\n\nNo, we can't always add additional data, and that's part of the\nconsideration for an XTS option- there are things we can do if we use\nXTS that we can't with GCM or another solution. Specifically, being\nable to perform physical replication from an unencrypted cluster to an\nencrypted one is a worthwhile use-case that we shouldn't be just tossing\nout.\n\n> With something like AES-GCM-SIV we can use the additional data to get IV reuse\n> resistance *and* authentication. And while perhaps we are ok with the IV reuse\n> guarantees XTS has, it seems pretty clear that we'll want want guaranteed\n> authenticity at some point. 
And then we'll need extra data anyway.\n\nI agree that it'd be useful to have an authenticated encryption option.\nImplementing XTS doesn't preclude us from adding that capability down\nthe road and it's simpler with fewer dependencies. These all strike me\nas good reasons to add XTS first.\n\n> Thus, to me, it doesn't seem worth going down the XTS route, just to\n> temporarily save a bit of implementation effort. We'll have to endure that\n> pain anyway.\n\nThis isn't a valid argument as it isn't just about implementation but\nabout the capabilities we will have once it's done.\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/15/21 23:02, Robert Haas wrote:\n> >On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >>That is, the default for LUKS is AES, XTS, with a simple IV. That\n> >>strikes me as a pretty ringing endorsement.\n> >\n> >Yes, that sounds promising. It might not hurt to check for other\n> >precedents as well, but that seems like a pretty good one.\n> \n> TrueCrypt/VeraCrypt uses XTS too, I think. There's an overview of other FDE\n> products at [1], and some of them use XTS, but I would take that with a\n> grain of salt - some of the products are somewhat obscure, very old, or\n> both.\n> \n> What is probably more interesting is that there's an IEEE standard [2]\n> dealing with encrypted shared storage, and that uses XTS too. I'd bet\n> there's a bunch of smart cryptographers involved.\n\nThanks for finding those and linking to them, that's helpful.\n\n> >I'm not very convinced that using the LSN for any of this is a good\n> >idea. 
Something that changes most of the time but not all the time\n> >seems more like it could hurt by masking fuzzy thinking more than it\n> >helps anything.\n> \n> I haven't been following the discussion about using LSN, but I agree that\n> while using it seems convenient, the consequences of some changes not\n> incrementing LSN seem potentially disastrous, depending on the encryption\n> mode.\n\nYes, this depends on the encryption mode, and is why we are specifically\ntalking about XTS here as it's an encryption mode that doesn't suffer\nfrom this risk and therefore it's perfectly fine to use the LSN/FakeLSN\nwith XTS (and would also be alright for AES-GCM-SIV as it's specifically\ndesigned to be resistant to IV reuse).\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Oct 15, 2021 at 10:57:03PM +0200, Tomas Vondra wrote:\n> > > That said, I don't think that's really a huge issue or something that's\n> > > a show stopper or a reason to hold off on using XTS. Note that what\n> > > those bits actually *are* isn't leaked, just that they changed in some\n> > > fashion inside of that 16-byte cipher text block. That they're directly\n> > > leaked with CTR is why there was concern raised about using that method,\n> > > as discussed above and previously.\n> > \n> > Yeah. With CTR you pretty learn where the hint bits are exactly, while with\n> > XTS the whole ciphertext changes.\n> > \n> > This also means CTR is much more malleable, i.e. you can tweak the\n> > ciphertext bits to flip the plaintext, while with XTS that's not really\n> > possible - it's pretty much guaranteed to break the block structure. Not\n> > sure if that's an issue for our use case, but if it is then neither of the\n> > two modes is a solution.\n> \n> Yes, this is a vary good point. Let's look at the impact of _not_ using\n> the LSN. For CTR (already rejected) bit changes would be visible by\n> comparing old/new page contents. 
For CBC (also not under consideration)\n> the first 16-byte block would show a change, and all later 16-byte\n> blocks would show a change. For CBC, you see the 16-byte blocks change,\n> but you have no idea how many bits were changed, and in what locations\n> in the 16-byte block (AES uses substitution and diffusion). For XTS,\n> because earlier blocks don't change the IV used by later blocks like\n> CBC, you would be able to see each 16-byte block that changed in the 8k\n> page. Again, you would not know the number of bits changed or their\n> locations.\n> \n> Do we think knowing which 16-byte blocks on an 8k page change would leak\n> useful information? If so, we should use the LSN and just accept that\n> some cases might leak as described above. If we don't care, then we can\n> skip the use of the LSN and simplify the patch.\n\nWhile there may not be an active attack against PG that leverages such a\nleak, I have a hard time seeing why we would intentionally design this\nin when we have a great option that's directly available to us and\ndoesn't cause such a leak with nearly such regularity as not using the\nLSN would, and also follows recommendations of using XTS from NIST.\nFurther, not using the LSN wouldn't really be an option if we did\neventually implement AES-GCM-SIV, so why not have the two cases be\nconsistent?\n\n> > Not sure - it seems a bit weird to force LSN change even in cases that don't\n> > generate any WAL. 
I was not following the encryption thread and maybe it was\n> > discussed/rejected there, but I've always imagined we'd have a global nonce\n> > generator (similar to a sequence) and we'd store it at the end of each\n> > block, or something like that.\n> \n> Storing the nonce in the page means more code complexity, possible\n> performance impact, and the inability to create standbys via binary\n> replication that use cluster file encryption.\n\nRight- the point of XTS is that we can do things that we can't with GCM\nand that it's simpler.\n\n> As a final comment to Andres's email, adding a GCM has the problems\n> above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n> could also affect the integrity of the data. Someone could also restore\n> and old copy of a patch to revert a change, and that would not be\n> detected even by GCM.\n\nThat's an argument based on how things stand today. I appreciate that\nit's no small thing to consider changes to those other systems but I\nwould argue that having authentication of the heap is still better than\nnot (but also agree that XTS is simpler to implement and therefore makes\nsense to do first and see how things stand after that's done). Surely,\nwe would want progress made here to be done so incrementally as a patch\nthat attempted to change all of those other systems to be encrypted and\nauthenticated with AES-GCM-SIV would be far too large to consider in one\nshot anyway.\n\n> I consider this a checkbox feature and making it too complex will cause\n> it to be rightly rejected.\n\nPresuming that 'checkbox feature' here means \"we need it to please\n$someone but no one will ever use it\" or something along those lines,\nthis is very clearly not the case and therefore we shouldn't be\ndescribing it or treating it as such. 
Even if the meaning here is\n\"there's other ways people could get this capability\" the reality is\nthat those other methods are simply not always available and in those\ncases, people will choose to not use PostgreSQL. Nearly every other\ndatabase system which we might compare ourselves to has a solution in\nthis area and people actively use those solutions in a lot of\ndeployments.\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-10-16 10:16:25 -0400, Bruce Momjian wrote:\n> > As a final comment to Andres's email, adding a GCM has the problems\n> > above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n> > could also affect the integrity of the data. Someone could also restore\n> > and old copy of a patch to revert a change, and that would not be\n> > detected even by GCM.\n> \n> > I consider this a checkbox feature and making it too complex will cause\n> > it to be rightly rejected.\n> \n> You're just deferring / hiding the complexity. For one, we'll need integrity\n> before long if we add encryption support. Then we'll deal with a more complex\n> on-disk format because there will be two different ways of encrypting. For\n> another, you're spreading out the security analysis to a lot of places in the\n> code and more importantly to future changes affecting on-disk data.\n\nI don't follow this argument. The XTS approach is explicitly the same\non-disk format as what we have unencrypted today, just encrypted, and\nthat's the whole point of going with that approach. 
If we were to\nimplement AES-GCM-SIV, then that would introduce a new on-disk format\nand then we'd have two- one which has space on each page for the\nauthentication information, and one which doesn't.\n\n> If it's really just a checkbox feature without a real use case, then we should\n> just reject requests for it and use our energy for useful things.\n\nThis capability certainly has a real use-case and it's one that a lot of\norganizations are looking for PG to provide a solution for. That we\ndon't today is keeping those organizations from using PG in at least\nsome cases, and for some organizations, it prevents them from using PG\nat all, as they understandably would rather not deal with a hybrid of\nusing PG for some things and having to use another solution for other\nthings.\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/16/21 16:16, Bruce Momjian wrote:\n> >Storing the nonce in the page means more code complexity, possible\n> >performance impact, and the inability to create standbys via binary\n> >replication that use cluster file encryption.\n> \n> Would it really be that complex? Reserving a bunch of bytes at the end of\n> each encrypted page (a bit like the \"special\" space, but after encryption)\n> seems fairly straightforward. And I don't quite see why would this have a\n> measurable impact, given the nonce is 16B at most. The encryption is likely\n> way more expensive.\n\nThere's a patch to do exactly this- make space available towards the end\nof the page. If we go down the route of using a different page format\nthen we lose the ability to do physical replication between an\nunencrypted cluster and an encrypted one. 
That's certainly a nice\ncapability to have and it will help people migrate to an encrypted PG\ninstance, plus it's overall simpler to work with, which is also an\nadvantage.\n\n> Moreover, it seems fairly reasonable to trade a bit of code complexity for\n> something LSN-based which seems simpler but apparently has various weak\n> points and is much harder to reason about.\n\nThis isn't just about code complexity but is also about the resulting\ncapabilities from these different approaches.\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/16/21 18:28, Bruce Momjian wrote:\n> >On Sat, Oct 16, 2021 at 09:15:05AM -0700, Andres Freund wrote:\n> >>On 2021-10-16 10:16:25 -0400, Bruce Momjian wrote:\n> >>>As a final comment to Andres's email, adding a GCM has the problems\n> >>>above, plus it wouldn't detect changes to pg_xact, fsm, vm, etc, which\n> >>>could also affect the integrity of the data. Someone could also restore\n> >>>and old copy of a patch to revert a change, and that would not be\n> >>>detected even by GCM.\n> >>\n> >>>I consider this a checkbox feature and making it too complex will cause\n> >>>it to be rightly rejected.\n> >>\n> >>You're just deferring / hiding the complexity. For one, we'll need integrity\n> >>before long if we add encryption support. Then we'll deal with a more complex\n> >>on-disk format because there will be two different ways of encrypting. For\n> >>another, you're spreading out the security analysis to a lot of places in the\n> >>code and more importantly to future changes affecting on-disk data.\n> \n> I've argued for storing the nonce, but I don't quite see why would we need\n> integrity guarantees?\n> \n> AFAICS the threat model the patch aims to address is an attacker who can\n> observe the data (e.g. a low-privileged OS user), but can't modify the\n> files. 
Which seems like a reasonable model for shared environments.\n\nThere are multiple threat models which we should be considering and\nthat's why we may want to eventually add integrity.\n\n> IMO extending this to cases where the attacker can modify the data moves the\n> goalposts quite significantly. And it's quite possible authenticated\n> encryption would not be enough to prevent that, because that still works\n> only at block level, and you can probably do a lot of harm with replay\n> attacks (e.g. replacing blocks with older versions). And if you can modify\n> the data directory / config files, what are the chances you can't just get\n> access to the database, trace the processes or whatever?\n\nI agree that working towards an authenticated solution is a larger task.\nI don't agree that we should throw out the possibility that we may want\nto implement it eventually as there are certainly threat models where an\nattacker might have access to the storage but not to the database or the\nsystem on which the database is running. Implementing a system to\naddress such an attack vector would take more consideration than just\nhaving authenticated encryption provided by PG, but it certainly\ncouldn't be done without that either.\n\n> We already have a way to check integrity by storing page checksum, but I'm\n> not sure if that's good enough (there's a lot of subtle issues with building\n> proper AEAD scheme).\n\nNo, it isn't good enough.\n\n* Sasasu (i@sasa.su) wrote:\n> Just a mention. the HMAC (or AE/AD) can be disabled in AES-GCM. HMAC in\n> AES-GCM is an encrypt-then-hash MAC.\n\nNot sure why you would though.\n\n> CRC-32 is not a crypto-safe hash (technically CRC-32 is not a hash\n> function). Cryptographers may unhappy with CRC-32.\n\nYes, that's correct (and it isn't even CRC-32 that we have, heh).\n\n> I think CRC or SHA is not such important. If IV can be stored, I believe\n> there should have enough space to store HMAC.\n\nThis would be the case, yes. 
If we can find a way to make room for an\nIV then we could make room to store the tag too, and we certainly should\n(and we should include a variety of additional data in the AEAD- block\nnumber, relfileno, etc).\n\n* Sasasu (i@sasa.su) wrote:\n> On 2021/10/16 04:57, Tomas Vondra wrote:\n> > Seems reasonable, on the assumption the threat models are the same.\n> \n> On 2021/10/16 03:22, Stephen Frost wrote:\n> >plain64: the initial vector is the 64-bit little-endian version of the\n> >sector number, padded with zeros if necessary\n> >\n> >That is, the default for LUKS is AES, XTS, with a simple IV. That\n> >strikes me as a pretty ringing endorsement\n> On 2021/10/18 05:23, Tomas Vondra wrote:\n> > AFAICS the threat model the patch aims to address is an attacker who can\n> > observe the data (e.g. a low-privileged OS user), but can't modify the\n> > files. Which seems like a reasonable model for shared environments.\n> \n> I agree this threat model.\n> \n> And if PostgreSQL is using XTS, there is no different with dm-encrypt.\n> The user can use dm-encrypt directly.\n\ndm-encrypt is not always an option and it doesn't actually address the\nthreat-model that Tomas brought up here anyway, as it would be below the\nlevel that the low-privileged OS user would be looking at. That's not\nthe only threat model to consider, but it is one which could potentially\nbe addressed by either XTS or AES-GCM-SIV. 
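(To make the "additional data" point above concrete — a hedged sketch, with HMAC standing in for a real AEAD tag and an invented field layout, of how binding the relfilenode and block number into the tag catches a ciphertext block transplanted or replayed at a different location:)

```python
import hashlib
import hmac
import struct

def block_tag(key: bytes, relfilenode: int, blockno: int,
              ciphertext: bytes) -> bytes:
    # The packed location fields play the role of AEAD "associated data":
    # authenticated, but not encrypted. The layout is purely illustrative.
    aad = struct.pack("<IQ", relfilenode, blockno)
    return hmac.new(key, aad + ciphertext, hashlib.sha256).digest()

key = b"\x01" * 32
ciphertext = b"\xaa" * 64          # stand-in for an encrypted page image

tag = block_tag(key, relfilenode=16384, blockno=7, ciphertext=ciphertext)

# The same ciphertext replayed at another block number fails verification:
replayed = block_tag(key, relfilenode=16384, blockno=8, ciphertext=ciphertext)
assert not hmac.compare_digest(tag, replayed)
```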
There are threat models\nwhich dm-crypt would address, of course, such as data-at-rest (hard\ndrive theft, improper disposal of storage media, backups which don't\nhave their own encryption, etc), but, again, dm-crypt isn't always an\noption that is available and so I don't agree that we should throw this\nout just because dm-crypt exists and may be useable in some cases.\n\nThanks,\n\nStephen", "msg_date": "Mon, 18 Oct 2021 11:56:03 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Fri, Oct 15, 2021 at 5:21 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't find that line of argument *that* convincing. The reason XTS is the\n> de-facto standard is that for generic block layer encryption is that you can't\n> add additional data for each block without very significant overhead\n> (basically needing journaling to ensure that the data doesn't get out of\n> sync). But we don't really face the same situation - we *can* add additional\n> data.\n\nYes. The downside is that there is some code complexity, and also some\nruntime overhead even for cases that don't use encryption, because\nsome things that now are compile time constants might need to be\ncomputed at runtime. That can probably be made pretty small, though.\n\n> With something like AES-GCM-SIV we can use the additional data to get IV reuse\n> resistance *and* authentication. And while perhaps we are ok with the IV reuse\n> guarantees XTS has, it seems pretty clear that we'll want want guaranteed\n> authenticity at some point. And then we'll need extra data anyway.\n>\n> Thus, to me, it doesn't seem worth going down the XTS route, just to\n> temporarily save a bit of implementation effort. 
We'll have to endure that\n> pain anyway.\n\nI agree up to a point, but I do also kind of feel like we should be\nleaving it up to whoever is working on an implementation to decide\nwhat they want to implement. I don't particularly like this discussion\nwhere it feels like people are trying to tell other people what they\nhave to do because \"the community has decided X.\" It's pretty clear\nthat there are multiple opinions here, and I don't really see any of\nthem to be without merit, nor do I see why Bruce or Stephen or you or\nanyone else should get to say \"what the community has decided\" in the\nabsence of a clear consensus.\n\nI do really like the idea of using AES-GCM-SIV not because I know\nanything about it, but because the integrity checking seems cool, and\nstoring the nonce seems like it would improve security. However, based\non what I know now, I would not vote to reject an XTS-based patch and,\nas Stephen and Bruce have said, maybe with the right design it permits\nupgrades from non-encrypted clusters to encrypted clusters. I'm\nactually kind of doubtful about that, because that seems to require\nsome pretty specific and somewhat painful implementation decisions.\nFor example, I think if your solution for rotating keys is to shut\ndown the standby, re-encrypt it with a new key, start it up again, and\nfail over to it, then you probably ever can't do key rotation in any\nother way. The keys now have to be non-WAL-logged so that the standby\ncan be different, which means you can't add a new key on the master\nand run around re-encrypting everything with it, WAL-logging those\nchanges as you go. Now I realize that implementing that is really\nchallenging anyway so maybe some people wouldn't like to go that way,\nbut then maybe other people would. Another thing you probably can't do\nin this model is encrypt different parts of the database with\ndifferent keys, because how would you keep track of that? 
Certainly\nnot in the system catalogs, if none of that can show up in the WAL\nstream.\n\nBut, you know, still: if somebody showed up with a fully-working XTS\npatch with everything in good working order, I don't see that we have\nenough evidence to reject it just because it's XTS. And I would hope\nthat the people favoring XTS would not vote to reject a fully working\nGCM patch just because it's GCM. I think what we ought to be doing at\nthis point is combining our efforts to try to get some things\ncommitted which make future work in this area committed - like that\npatch to preserve relfilenode and database OID, or maybe some patches\nto drive all of our I/O through a smaller number of code paths instead\nof having every different type of temporary file we write reinvent the\nwheel. These discussions about what encryption type we ought to use\nare useful for ruling out options that we know are bad, but beyond\nthat I'm not sure they have much value. AES in any mode could seem\nlike a much less safe choice by the time we get a committed feature\nhere than it does today - even if somehow that were to happen for v15.\nI expect there are people out there trying to break it even as I write\nthese words, and it seems likely that they will eventually succeed,\nbut as to when, who can say?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:37:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/18/21 04:19, Sasasu wrote:\n> Just a mention. the HMAC (or AE/AD) can be disabled in AES-GCM. HMAC in \n> AES-GCM is an encrypt-then-hash MAC.\n> \n> CRC-32 is not a crypto-safe hash (technically CRC-32 is not a hash \n> function). Cryptographers may unhappy with CRC-32.\n> \n\nTrue. If you can flip enough bits in the page, it probably is not very \nhard to generate a page with the desired checksum. 
It's probably harder \nwith XTS, but likely not much more.\n\n> I think CRC or SHA is not such important. If IV can be stored, I believe \n> there should have enough space to store HMAC.\n> \n\nRight, I agree.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 18 Oct 2021 21:02:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/18/21 17:56, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> On 10/15/21 21:22, Stephen Frost wrote:\n>>> Now, to address the concern around re-encrypting a block with the same\n>>> key+IV but different data and leaking what parts of the page changed, I\n>>> do think we should use the LSN and have it change regularly (including\n>>> unlogged tables) but that's just because it's relatively easy for us to\n>>> do and means an attacker wouldn't be able to tell what part of the page\n>>> changed when the LSN was also changed. That was also recommended by\n>>> NIST and that's a pretty strong endorsement also.\n>>\n>> Not sure - it seems a bit weird to force LSN change even in cases that don't\n>> generate any WAL. I was not following the encryption thread and maybe it was\n>> discussed/rejected there, but I've always imagined we'd have a global nonce\n>> generator (similar to a sequence) and we'd store it at the end of each\n>> block, or something like that.\n> \n> The 'LSN' being referred to here isn't the regular LSN that is\n> associated with the WAL but rather the separate FakeLSN counter which we\n> already have. I wasn't suggesting having the regular LSN change in\n> cases that don't generate WAL.\n> \n\nI'm not very familiar with FakeLSN, but isn't that just about unlogged \ntables? 
How does that help cases like setting hint bits, which may not \ngenerate WAL?\n\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n>>> Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n>>>\n>>> and then this:\n>>>\n>>> https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n>>>\n>>> where plain64 is defined as:\n>>>\n>>> plain64: the initial vector is the 64-bit little-endian version of the\n>>> sector number, padded with zeros if necessary\n>>>\n>>> That is, the default for LUKS is AES, XTS, with a simple IV. That\n>>> strikes me as a pretty ringing endorsement.\n>>\n>> Yes, that sounds promising. It might not hurt to check for other\n>> precedents as well, but that seems like a pretty good one.\n>>\n>> I'm not very convinced that using the LSN for any of this is a good\n>> idea. Something that changes most of the time but not all the time\n>> seems more like it could hurt by masking fuzzy thinking more than it\n>> helps anything.\n> \n> This argument doesn't come across as very strong at all to me,\n> particularly when we have explicit recommendations from NIST that having\n> the IV vary more is beneficial. While this would be using the LSN, the\n> fact that the LSN changes most of the time but not all of the time isn't\n> new and is something we already have to deal with. I'd think we'd\n> address the concern about mis-thinking around how this works by\n> providing a README and/or an appropriate set of comments around what's\n> being done and why.\n> \n\nI don't think anyone objects to varying IV more, as recommended by NIST. \nAFAICS the issue at hand is exactly the opposite - maybe not varying it \nenough, in some cases. It might be enough for MVCC purposes yet it might \nresult in fatal failure of the encryption scheme. 
That's my concern, at \nleast, and I assume it's what Robert meant by \"fuzzy thinking\" too.\n\nFWIW I think we seem to be mixing nonces, IVs and tweak values. Although \nvarious encryption schemes place different requirements on those anyway.\n\n\n> * Andres Freund (andres@anarazel.de) wrote:\n>> On 2021-10-15 15:22:48 -0400, Stephen Frost wrote:\n>>> * Bruce Momjian (bruce@momjian.us) wrote:\n>>>> Finally, there is an interesting web page about when not to use XTS:\n>>>>\n>>>> \thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n>>>\n>>> This particular article always struck me as more of a reason for us, at\n>>> least, to use XTS than to not- in particular the very first comment it\n>>> makes, which seems to be pretty well supported, is: \"XTS is the de-facto\n>>> standard disk encryption mode.\"\n>>\n>> I don't find that line of argument *that* convincing. The reason XTS is the\n>> de-facto standard for generic block layer encryption is that you can't\n>> add additional data for each block without very significant overhead\n>> (basically needing journaling to ensure that the data doesn't get out of\n>> sync). But we don't really face the same situation - we *can* add additional\n>> data.\n> \n> No, we can't always add additional data, and that's part of the\n> consideration for an XTS option- there are things we can do if we use\n> XTS that we can't with GCM or another solution. Specifically, being\n> able to perform physical replication from an unencrypted cluster to an\n> encrypted one is a worthwhile use-case that we shouldn't be just tossing\n> out.\n> \n\nYeah, XTS seems like a reasonable first step, both because it doesn't \nrequire storing extra data and its widespread use in FDE software (of \ncourse, there's a link between those). 
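As an aside, the often-mentioned leak of \"which 16-byte blocks changed\" under a reused key+tweak can be sketched with a toy model. Here a hash stands in for a real tweakable block cipher - this is not AES-XTS and not invertible, it only illustrates the before/after comparison an attacker with ongoing read access could do:

```python
import hashlib
import os

def toy_block_encrypt(key: bytes, tweak: bytes, page: bytes) -> bytes:
    # Toy stand-in for a tweakable block cipher: each 16-byte block is
    # scrambled independently under (key, tweak, block index). Not real
    # crypto - just deterministic per-block transformation.
    out = bytearray()
    for i in range(0, len(page), 16):
        out += hashlib.sha256(
            key + tweak + i.to_bytes(4, "little") + page[i:i + 16]
        ).digest()[:16]
    return bytes(out)

key, tweak = os.urandom(32), os.urandom(16)
page_old = bytearray(os.urandom(8192))
page_new = bytearray(page_old)
page_new[100] ^= 0x01  # flip a single hint-bit-like bit

ct_old = toy_block_encrypt(key, tweak, bytes(page_old))
ct_new = toy_block_encrypt(key, tweak, bytes(page_new))
changed = [i for i in range(0, 8192, 16) if ct_old[i:i + 16] != ct_new[i:i + 16]]
print(changed)  # only the 16-byte block containing offset 100
```

Unchanged blocks encrypt identically under the same key and tweak, so a diff of two ciphertext versions localizes the change to one 16-byte block - though, unlike CTR, not to specific bits.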
And I suspect replication between \nencrypted and unencrypted clusters is going to be a huge can of worms, \neven with XTS.\n\nIt's probably a silly / ugly idea, but can't we simply store a special \n\"page format\" flag in controldata - when set to 'true' during initdb, \neach page would have a bit of space (at the end) reserved for additional \nencryption data. Say, ~64B should be enough. On the encrypted cluster \nthis would store the nonce/IV/... and on the unencrypted cluster it'd be \nsimply unused. 64B seems like a negligible amount of data. And when set \nto 'false' the cluster would not allow encryption.\n\n\n>> With something like AES-GCM-SIV we can use the additional data to get IV reuse\n>> resistance *and* authentication. And while perhaps we are ok with the IV reuse\n>> guarantees XTS has, it seems pretty clear that we'll want guaranteed\n>> authenticity at some point. And then we'll need extra data anyway.\n> \n> I agree that it'd be useful to have an authenticated encryption option.\n> Implementing XTS doesn't preclude us from adding that capability down\n> the road and it's simpler with fewer dependencies. These all strike me\n> as good reasons to add XTS first.\n> \n\nTrue. If XTS addresses the threat model we aimed to solve ...\n\n>> Thus, to me, it doesn't seem worth going down the XTS route, just to\n>> temporarily save a bit of implementation effort. We'll have to endure that\n>> pain anyway.\n> \n> This isn't a valid argument as it isn't just about implementation but\n> about the capabilities we will have once it's done.\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> On 10/15/21 23:02, Robert Haas wrote:\n>>> On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n>>>> That is, the default for LUKS is AES, XTS, with a simple IV. That\n>>>> strikes me as a pretty ringing endorsement.\n>>>\n>>> Yes, that sounds promising. 
It might not hurt to check for other\n>>> precedents as well, but that seems like a pretty good one.\n>>\n>> TrueCrypt/VeraCrypt uses XTS too, I think. There's an overview of other FDE\n>> products at [1], and some of them use XTS, but I would take that with a\n>> grain of salt - some of the products are somewhat obscure, very old, or\n>> both.\n>>\n>> What is probably more interesting is that there's an IEEE standard [2]\n>> dealing with encrypted shared storage, and that uses XTS too. I'd bet\n>> there's a bunch of smart cryptographers involved.\n> \n> Thanks for finding those and linking to them, that's helpful.\n> \n>>> I'm not very convinced that using the LSN for any of this is a good\n>>> idea. Something that changes most of the time but not all the time\n>>> seems more like it could hurt by masking fuzzy thinking more than it\n>>> helps anything.\n>>\n>> I haven't been following the discussion about using LSN, but I agree that\n>> while using it seems convenient, the consequences of some changes not\n>> incrementing LSN seem potentially disastrous, depending on the encryption\n>> mode.\n> \n> Yes, this depends on the encryption mode, and is why we are specifically\n> talking about XTS here as it's an encryption mode that doesn't suffer\n> from this risk and therefore it's perfectly fine to use the LSN/FakeLSN\n> with XTS (and would also be alright for AES-GCM-SIV as it's specifically\n> designed to be resistant to IV reuse).\n> \n\nI'm not quite sure about the \"perfectly fine\" bit, as it's making XTS \nvulnerable to traffic analysis attacks (comparing multiple copies of an \nencrypted block). It may be a reasonable trade-off, of course.\n\n> * Bruce Momjian (bruce@momjian.us) wrote:\n>> On Fri, Oct 15, 2021 at 10:57:03PM +0200, Tomas Vondra wrote:\n>>>> That said, I don't think that's really a huge issue or something that's\n>>>> a show stopper or a reason to hold off on using XTS. 
Note that what\n>>>>those bits actually *are* isn't leaked, just that they changed in some\n>>>>fashion inside of that 16-byte cipher text block. That they're directly\n>>>>leaked with CTR is why there was concern raised about using that method,\n>>>>as discussed above and previously.\n>>>\n>>>Yeah. With CTR you pretty much learn where the hint bits are exactly, while with\n>>>XTS the whole ciphertext changes.\n>>>\n>>>This also means CTR is much more malleable, i.e. you can tweak the\n>>>ciphertext bits to flip the plaintext, while with XTS that's not really\n>>>possible - it's pretty much guaranteed to break the block structure. Not\n>>>sure if that's an issue for our use case, but if it is then neither of the\n>>>two modes is a solution.\n>>\n>>Yes, this is a very good point. Let's look at the impact of _not_ using\n>>the LSN. For CTR (already rejected) bit changes would be visible by\n>>comparing old/new page contents. For CBC (also not under consideration)\n>>the first 16-byte block would show a change, and all later 16-byte\n>>blocks would show a change. For CBC, you see the 16-byte blocks change,\n>>but you have no idea how many bits were changed, and in what locations\n>>in the 16-byte block (AES uses substitution and diffusion). For XTS,\n>>because earlier blocks don't change the IV used by later blocks like\n>>CBC, you would be able to see each 16-byte block that changed in the 8k\n>>page. Again, you would not know the number of bits changed or their\n>>locations.\n>>\n>>Do we think knowing which 16-byte blocks on an 8k page change would leak\n>>useful information? If so, we should use the LSN and just accept that\n>>some cases might leak as described above. 
If we don't care, then we can\n>> skip the use of the LSN and simplify the patch.\n> \n> While there may not be an active attack against PG that leverages such a\n> leak, I have a hard time seeing why we would intentionally design this\n> in when we have a great option that's directly available to us and\n> doesn't cause such a leak with nearly such regularity as not using the\n> LSN would, and also follows recommendations of using XTS from NIST.\n> Further, not using the LSN wouldn't really be an option if we did\n> eventually implement AES-GCM-SIV, so why not have the two cases be\n> consistent?\n> \n\nI'm a bit confused, because the question was what happens if we encrypt \nthe page twice with the same LSN or any tweak value in general. It \ncertainly does not matter when it comes to malleability or replay \nattacks, because in that case the attacker is the one who modifies the \nblock (and obviously won't change the LSN).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Oct 2021 01:54:58 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 10/18/21 17:56, Stephen Frost wrote:\n >> ...\n>> I've argued for storing the nonce, but I don't quite see why would we need\n>> integrity guarantees?\n>>\n>> AFAICS the threat model the patch aims to address is an attacker who can\n>> observe the data (e.g. a low-privileged OS user), but can't modify the\n>> files. Which seems like a reasonable model for shared environments.\n> \n> There are multiple threat models which we should be considering and\n> that's why we may want to eventually add integrity.\n> \n\nSo what are these threat models? 
If we should be considering them it'd \nbe nice to have a description, explaining what capabilities must the \nattacker have ...\n\nMy (perhaps naive) understanding is that the authentication / integrity \nprovides (partial) protection against attackers that may modify instance \ndata - modify files, etc. But I'd guess an attacker with such capability \ncan do various other (simpler) things to extract data. Say, modify the \nconfig to load an extension that dumps keys from memory, or whatever.\n\nSo what's a plausible / practical threat model that would be mitigated \nby the authenticated encryption?\n\nIt'd be a bit silly to add complexity to allow AEAD, only to find out \nthere are ways around it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Oct 2021 02:07:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/19 00:37, Robert Haas wrote:\r\n> I think what we ought to be doing at\r\n> this point is combining our efforts to try to get some things\r\n> committed which make future work in this area possible - like that\r\n> patch to preserve relfilenode and database OID, or maybe some patches\r\n> to drive all of our I/O through a smaller number of code paths instead\r\n> of having every different type of temporary file we write reinvent the\r\n> wheel.\r\n\r\nA unified block-based I/O API sounds great. Has anyone tried to do this \r\nbefore? It would be nice if the front-end tools could also use this API.\r\n\r\nAs there are so many threat models, I propose to do the TDE feature by a \r\nset of hooks. Those hooks sit on the critical path of I/O operations and add \r\nthe ability to let an extension replace the I/O API, 
and also load extensions \r\nat initdb time, in single-user mode, and in the front-end tools.\r\nThis sounds like using $LD_PRELOAD to replace pread(2) and pwrite(2), \r\nwhich is widely used in folder-based encryption, but the hook would pass \r\nmore context (filenode, table OID, block size, and more) to the lower \r\nlayer; this hook API would look like object_access_hook.\r\nThen implement the simplest AES-XTS and put it in contrib, and provide a \r\ntool to deactivate AES-XTS to keep PostgreSQL upgradeable.\r\n\r\nI think this is the most peaceful method. GCM people will not reject \r\nthis just because of XTS, and XTS people will (maybe?) be satisfied with the \r\ncomplexity. For performance, it is just one more long jump compared with \r\nthe current AES-XTS code.", "msg_date": "Tue, 19 Oct 2021 23:46:12 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Oct 19, 2021 at 11:46 AM Sasasu <i@sasa.su> wrote:\n> As there are so many threat models, I propose to do the TDE feature by a\n> set of hooks.\n\nThis is too invasive to do using hooks. 
We are inevitably going to\nneed to make significant changes in core.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 12:35:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/18/21 17:56, Stephen Frost wrote:\n> >* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> >>On 10/15/21 21:22, Stephen Frost wrote:\n> >>>Now, to address the concern around re-encrypting a block with the same\n> >>>key+IV but different data and leaking what parts of the page changed, I\n> >>>do think we should use the LSN and have it change regularly (including\n> >>>unlogged tables) but that's just because it's relatively easy for us to\n> >>>do and means an attacker wouldn't be able to tell what part of the page\n> >>>changed when the LSN was also changed. That was also recommended by\n> >>>NIST and that's a pretty strong endorsement also.\n> >>\n> >>Not sure - it seems a bit weird to force LSN change even in cases that don't\n> >>generate any WAL. I was not following the encryption thread and maybe it was\n> >>discussed/rejected there, but I've always imagined we'd have a global nonce\n> >>generator (similar to a sequence) and we'd store it at the end of each\n> >>block, or something like that.\n> >\n> >The 'LSN' being referred to here isn't the regular LSN that is\n> >associated with the WAL but rather the separate FakeLSN counter which we\n> >already have. I wasn't suggesting having the regular LSN change in\n> >cases that don't generate WAL.\n> \n> I'm not very familiar with FakeLSN, but isn't that just about unlogged\n> tables? 
How does that help cases like setting hint bits, which may not\n> generate WAL?\n\nErrr, there seems to have been some confusion in this thread between the\ndifferent ideas being tossed around.\n\nThe point of using FakeLSN for unlogged tables consistently is to\nprovide variability in the value used as the IV. I wasn't suggesting to\nuse FakeLSN to provide a uniqueness guarantee- the reason we're talking\nabout using XTS is specifically because, unlike CTR, unique IVs are not\nrequired. Further, we don't need to find any additional space on the\npage if we use XTS, meaning we can do things like go from an unencrypted\nto an encrypted system with a basebackup+physical replication.\n\n> >* Robert Haas (robertmhaas@gmail.com) wrote:\n> >>On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >>>Specifically: The default cipher for LUKS is nowadays aes-xts-plain64\n> >>>\n> >>>and then this:\n> >>>\n> >>>https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt\n> >>>\n> >>>where plain64 is defined as:\n> >>>\n> >>>plain64: the initial vector is the 64-bit little-endian version of the\n> >>>sector number, padded with zeros if necessary\n> >>>\n> >>>That is, the default for LUKS is AES, XTS, with a simple IV. That\n> >>>strikes me as a pretty ringing endorsement.\n> >>\n> >>Yes, that sounds promising. It might not hurt to check for other\n> >>precedents as well, but that seems like a pretty good one.\n> >>\n> >>I'm not very convinced that using the LSN for any of this is a good\n> >>idea. Something that changes most of the time but not all the time\n> >>seems more like it could hurt by masking fuzzy thinking more than it\n> >>helps anything.\n> >\n> >This argument doesn't come across as very strong at all to me,\n> >particularly when we have explicit recommendations from NIST that having\n> >the IV vary more is beneficial. 
While this would be using the LSN, the\n> >fact that the LSN changes most of the time but not all of the time isn't\n> >new and is something we already have to deal with. I'd think we'd\n> >address the concern about mis-thinking around how this works by\n> >providing a README and/or an appropriate set of comments around what's\n> >being done and why.\n> \n> I don't think anyone objects to varying IV more, as recommended by NIST.\n\nGreat.\n\n> AFAICS the issue at hand is exactly the opposite - maybe not varying it\n> enough, in some cases. It might be enough for MVCC purposes yet it might\n> result in fatal failure of the encryption scheme. That's my concern, at\n> least, and I assume it's what Robert meant by \"fuzzy thinking\" too.\n\nXTS does not require the IV to be unique for every invocation, and\nindeed other systems like dm-crypt don't use a unique IV for XTS. We\nreally can't divorce the encryption methodology from the parameters that\nare being used. CTR and GCM are the methods that require a unique IV\n(or, at least, a very very very likely unique one if you can't actually\nimplement a proper counter) but that's *not* what we're talking about\nhere. The methods being discussed are XTS and GCM-SIV, the former\nexplicitly doesn't require a unique IV and the latter is specifically\ndesigned to reduce the impact of an IV being re-used. Both are, as NIST\npoints out, better off with a varying IV, but having the IV be reused\nfrom time to time in either XTS or GCM-SIV does not result in a fatal\nfailure of the encryption scheme.\n\n> FWIW I think we seem to be mixing nonces, IVs and tweak values. Although\n> various encryption schemes place different requirements on those anyway.\n\nThe differences between those are pretty subtle and it gets a bit\nconfusing when things like OpenSSL accept an IV but then use it as the\n'tweak', such as with XTS. 
In general though, most methods accept a\nkey, a nonce/IV/tweak, and then produce ciphertext and possibly a tag,\nso I don't think we've been incorrect in usage but rather perhaps a bit\nsloppy by not using the right term for the specific methodology.\n\n> >* Andres Freund (andres@anarazel.de) wrote:\n> >>On 2021-10-15 15:22:48 -0400, Stephen Frost wrote:\n> >>>* Bruce Momjian (bruce@momjian.us) wrote:\n> >>>>Finally, there is an interesting web page about when not to use XTS:\n> >>>>\n> >>>>\thttps://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/\n> >>>\n> >>>This particular article always struck me as more of a reason for us, at\n> >>>least, to use XTS than to not- in particular the very first comment it\n> >>>makes, which seems to be pretty well supported, is: \"XTS is the de-facto\n> >>>standard disk encryption mode.\"\n> >>\n> >>I don't find that line of argument *that* convincing. The reason XTS is the\n> >>de-facto standard for generic block layer encryption is that you can't\n> >>add additional data for each block without very significant overhead\n> >>(basically needing journaling to ensure that the data doesn't get out of\n> >>sync). But we don't really face the same situation - we *can* add additional\n> >>data.\n> \n> No, we can't always add additional data, and that's part of the\n> consideration for an XTS option- there are things we can do if we use\n> XTS that we can't with GCM or another solution. Specifically, being\n> able to perform physical replication from an unencrypted cluster to an\n> encrypted one is a worthwhile use-case that we shouldn't be just tossing\n> out.\n> \n> Yeah, XTS seems like a reasonable first step, both because it doesn't\n> require storing extra data and its widespread use in FDE software (of\n> course, there's a link between those). 
And I suspect replication between\n> encrypted and unencrypted clusters is going to be a huge can of worms, even\n> with XTS.\n\nGlad you agree with XTS being a reasonable first step. XTS will at\nleast make physical replication possible- other methods (such as below)\nsimply won't work.\n\n> It's probably a silly / ugly idea, but can't we simply store a special \"page\n> format\" flag in controldata - when set to 'true' during initdb, each page\n> would have a bit of space (at the end) reserved for additional encryption\n> data. Say, ~64B should be enough. On the encrypted cluster this would store\n> the nonce/IV/... and on the unencrypted cluster it'd be simply unused. 64B\n> seems like a negligible amount of data. And when set to 'false' the cluster\n> would not allow encryption.\n\nThis is essentially what the patch that was posted does, but the problem\nis that you can't do physical replication when 64B have been stolen off\nof a page because the page in the unencrypted database might be entirely\nfull and not able to physically fit those extra 64B, and then what?\n\n> >>With something like AES-GCM-SIV we can use the additional data to get IV reuse\n> >>resistance *and* authentication. And while perhaps we are ok with the IV reuse\n> >>guarantees XTS has, it seems pretty clear that we'll want guaranteed\n> >>authenticity at some point. And then we'll need extra data anyway.\n> >\n> >I agree that it'd be useful to have an authenticated encryption option.\n> >Implementing XTS doesn't preclude us from adding that capability down\n> >the road and it's simpler with fewer dependencies. These all strike me\n> >as good reasons to add XTS first.\n> \n> True. If XTS addresses the threat model we aimed to solve ...\n\nIt addresses a valid threat model that people are interested in PG\nhaving a solution for, yes.\n\n> >>Thus, to me, it doesn't seem worth going down the XTS route, just to\n> >>temporarily save a bit of implementation effort. 
We'll have to endure that\n> >>pain anyway.\n> >\n> >This isn't a valid argument as it isn't just about implementation but\n> >about the capabilities we will have once it's done.\n> >\n> >* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> >>On 10/15/21 23:02, Robert Haas wrote:\n> >>>On Fri, Oct 15, 2021 at 3:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >>>>That is, the default for LUKS is AES, XTS, with a simple IV. That\n> >>>>strikes me as a pretty ringing endorsement.\n> >>>\n> >>>Yes, that sounds promising. It might not hurt to check for other\n> >>>precedents as well, but that seems like a pretty good one.\n> >>\n> >>TrueCrypt/VeraCrypt uses XTS too, I think. There's an overview of other FDE\n> >>products at [1], and some of them use XTS, but I would take that with a\n> >>grain of salt - some of the products are somewhat obscure, very old, or\n> >>both.\n> >>\n> >>What is probably more interesting is that there's an IEEE standard [2]\n> >>dealing with encrypted shared storage, and that uses XTS too. I'd bet\n> >>there's a bunch of smart cryptographers involved.\n> >\n> >Thanks for finding those and linking to them, that's helpful.\n> >\n> >>>I'm not very convinced that using the LSN for any of this is a good\n> >>>idea. 
Something that changes most of the time but not all the time\n> >>>seems more like it could hurt by masking fuzzy thinking more than it\n> >>>helps anything.\n> >>\n> >>I haven't been following the discussion about using LSN, but I agree that\n> >>while using it seems convenient, the consequences of some changes not\n> >>incrementing LSN seem potentially disastrous, depending on the encryption\n> >>mode.\n> >\n> >Yes, this depends on the encryption mode, and is why we are specifically\n> >talking about XTS here as it's an encryption mode that doesn't suffer\n> >from this risk and therefore it's perfectly fine to use the LSN/FakeLSN\n> >with XTS (and would also be alright for AES-GCM-SIV as it's specifically\n> >designed to be resistant to IV reuse).\n> \n> I'm not quite sure about the \"perfectly fine\" bit, as it's making XTS\n> vulnerable to traffic analysis attacks (comparing multiple copies of an\n> encrypted block). It may be a reasonable trade-off, of course.\n\nConsidering the default usage in dmcrypt isn't even varying the IV as\nmuch as what we're talking about here, I'd say that, yes, it's quite\nreasonable for our use-case and allows us to vary the IV quite a bit\nwhich reduces the attack surface in a meaningful way. That dmcrypt has\nthis risk and it isn't considered enough of an issue for them to use\nsomething other than plain64 as the default makes it certainly seem\nreasonable to me.\n\n> >* Bruce Momjian (bruce@momjian.us) wrote:\n> >>On Fri, Oct 15, 2021 at 10:57:03PM +0200, Tomas Vondra wrote:\n> >>>>That said, I don't think that's really a huge issue or something that's\n> >>>>a show stopper or a reason to hold off on using XTS. Note that what\n> >>>>those bits actually *are* isn't leaked, just that they changed in some\n> >>>>fashion inside of that 16-byte cipher text block. That they're directly\n> >>>>leaked with CTR is why there was concern raised about using that method,\n> >>>>as discussed above and previously.\n> >>>\n> >>>Yeah. 
With CTR you pretty much learn where the hint bits are exactly, while with\n> >>>XTS the whole ciphertext changes.\n> >>>\n> >>>This also means CTR is much more malleable, i.e. you can tweak the\n> >>>ciphertext bits to flip the plaintext, while with XTS that's not really\n> >>>possible - it's pretty much guaranteed to break the block structure. Not\n> >>>sure if that's an issue for our use case, but if it is then neither of the\n> >>>two modes is a solution.\n> >>\n> >>Yes, this is a very good point. Let's look at the impact of _not_ using\n> >>the LSN. For CTR (already rejected) bit changes would be visible by\n> >>comparing old/new page contents. For CBC (also not under consideration)\n> >>the first 16-byte block would show a change, and all later 16-byte\n> >>blocks would show a change. For CBC, you see the 16-byte blocks change,\n> >>but you have no idea how many bits were changed, and in what locations\n> >>in the 16-byte block (AES uses substitution and diffusion). For XTS,\n> >>because earlier blocks don't change the IV used by later blocks like\n> >>CBC, you would be able to see each 16-byte block that changed in the 8k\n> >>page. Again, you would not know the number of bits changed or their\n> >>locations.\n> >>\n> >>Do we think knowing which 16-byte blocks on an 8k page change would leak\n> >>useful information? If so, we should use the LSN and just accept that\n> >>some cases might leak as described above. 
If we don't care, then we can\n> >>skip the use of the LSN and simplify the patch.\n> >\n> >While there may not be an active attack against PG that leverages such a\n> >leak, I have a hard time seeing why we would intentionally design this\n> >in when we have a great option that's directly available to us and\n> >doesn't cause such a leak with nearly such regularity as not using the\n> >LSN would, and also follows recommendations of using XTS from NIST.\n> >Further, not using the LSN wouldn't really be an option if we did\n> >eventually implement AES-GCM-SIV, so why not have the two cases be\n> >consistent?\n> \n> I'm a bit confused, because the question was what happens if we encrypt the\n> page twice with the same LSN or any tweak value in general. It certainly\n> does not matter when it comes to malleability or replay attacks, because in\n> that case the attacker is the one who modifies the block (and obviously\n> won't change the LSN).\n\nThe question that I was responding to above was specifically about whether\nknowing which 16-byte blocks on an 8K page changed was an issue or not\nand that's what I was addressing. As for if we encrypt the page twice\nwith the same LSN/tweak/IV or not- that depends on the specific\nencryption methodology being used as to how much of an issue that is, as\ndiscussed above.\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/18/21 17:56, Stephen Frost wrote:\n> >> ...\n> >>I've argued for storing the nonce, but I don't quite see why would we need\n> >>integrity guarantees?\n> >>\n> >>AFAICS the threat model the patch aims to address is an attacker who can\n> >>observe the data (e.g. a low-privileged OS user), but can't modify the\n> >>files. Which seems like a reasonable model for shared environments.\n> >\n> >There are multiple threat models which we should be considering and\n> >that's why we may want to eventually add integrity.\n> \n> So what are these threat models? 
If we should be considering them it'd be\n> nice to have a description, explaining what capabilities must the attacker\n> have ...\n\nThe first and simplest to consider is the basic \"data at rest\" threat\nmodel, where a hard drive is not properly wiped or a backup isn't\nproperly encrypted, or a laptop or other removable media is stolen, etc.\nThese are well addressed through XTS as in those cases what is needed is\nconfidentiality, not integrity, as the attacker isn't able to modify the\nexisting system in such cases.\n\nAnother threat model to consider is if the attacker has read-only access\nto the data directory through, say, unix group read privileges or maybe\nthe ability to monitor the traffic on the SAN, or the ability to\nread-only mount the LUN on to another system. This might be obtained by\nattacking a backup process where the system was configured to run\nphysical backups using an unprivileged OS user who only has group read\naccess to the cluster (and the necessary but non-superuser privileges in\nthe database system to start/stop the backup), or various potential\nattacks at the storage layer. This is similar to the \"data at rest\"\ncase above in that XTS works well to address this, but because the\nattacker would have ongoing access (rather than just one-time, such as\nin the first case), information such as which blocks are being changed\ninside of a given 8k page might be able to be determined and that could\nbe useful information, though a point here: they would already be able\nto see clearly which 8k pages are being changed and which aren't, and\nthere's not really any way for us to prevent that reasonably. As such,\nI'd argue that using XTS is reasonable and we can mitigate some of this\nconcern by using the LSN in the tweak instead of just the block number\nas the 'plain64' option in dmcrypt does. 
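To sketch what such a tweak might look like (the field layout here is purely hypothetical, offered for illustration rather than as a settled design): pack the page LSN alongside the block number into the 16 bytes XTS consumes, so a page rewritten under a new LSN encrypts to an unrelated ciphertext.

```python
import struct

def page_tweak(lsn: int, block_no: int) -> bytes:
    # Hypothetical layout: 64-bit LSN plus 32-bit block number,
    # zero-padded to the 16-byte XTS tweak width. A page rewritten
    # under a new LSN gets a different tweak, so its ciphertext no
    # longer lines up block-by-block with the previous version.
    return struct.pack("<QI", lsn, block_no) + bytes(4)

t1 = page_tweak(lsn=0x01000000, block_no=7)
t2 = page_tweak(lsn=0x01000A28, block_no=7)  # same block, newer LSN
assert len(t1) == 16 and t1 != t2
print(t1.hex(), t2.hex())
```

The same construction would also serve as the nonce input for AES-GCM-SIV, which is the consistency argument made above.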
That doing so would mean that\nwe'd also be able to reasonably use the same IV for both XTS and\nAES-GCM-SIV, should we choose to implement that at some point down the\nroad, is a nice perk to this approach.\n\n> My (perhaps naive) understanding is that the authentication / integrity\n> provides (partial) protection against attackers that may modify instance\n> data - modify files, etc. But I'd guess an attacker with such capability can\n> do various other (simpler) things to extract data. Say, modify the config to\n> load an extension that dumps keys from memory, or whatever.\n\nAnother attack vector to consider is an attacker who is able to actively\nmodify files on the system in some way. The specific files they are\nable to modify matter in such a case. There's no doubt that it's more\ndifficult to address such an attack vector and it's unlikely we'd be\nable to provide a 100% solution in all cases, but that doesn't mean we\nshould throw out the idea entirely.\n\n> So what's a plausible / practical threat model that would be mitigated by\n> the authenticated encryption?\n\nIn a similar vein to above- consider a storage-level attacker who is\nable to gain read/write access to a particular volume. If that volume\nis used only for a tablespace, a TDE implementation which uses\nAES-GCM-SIV would go a long way towards protecting the system. We may\neven go so far as to encourage users to ensure that their 'main' PG data\ndirectory, where configuration, transaction log, et al, are stored, be\nkept on a more secure (perhaps local) volume. 
This isn't a 100%\nsolution, of course, due to the visibility map and the free space map\nbeing stored with the tables in the tablespaces, but attacks on those\nwould be much more coarse and difficult to mount effectively.\n\nWe should also be thinking about ways to address those other subsystems\nand improve the situation around them (and not just for encryption but\nalso for detection of corruption) but trying to get all of that done in\none patch or even one major release would make this a massive change\nthat would almost certainly be rejected as too destabilizing.\n\n> It'd be a bit silly to add complexity to allow AEAD, only to find out there\n> are ways around it.\n\nThere are ways around it. There likely always will be. We need to be\nclear about what it provides and what it doesn't. We need to stop\ntelling ourselves that the only answer is a 100% solution and therefore\nit's impossible to do. Users who care about these capabilities will\nunderstand that it's not 100% and they will still happily use it because\nit's better than 0% which is where we are today and is why they are\ngoing with other solutions. 
Yes, if it's trivial to get around then\nperhaps it's not much better than 0% and if that's the case then it\ndoesn't make sense to do it, but none of what has been discussed here\nthus far has made me feel like either the XTS or the GCM-SIV approaches\nwould be trivial to circumvent for the threat models they're intended\nto address, though it certainly takes more care and more thought when\nwe're trying to address someone who has write access to part of the\nsystem and that we need to be clear what is addressed and what isn't in\nall of these cases.\n\nThanks,\n\nStephen", "msg_date": "Tue, 19 Oct 2021 14:44:26 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Sasasu (i@sasa.su) wrote:\n> On 2021/10/19 00:37, Robert Haas wrote:\n> >I think what we ought to be doing at\n> >this point is combining our efforts to try to get some things\n> >committed which make future work in this area committed - like that\n> >patch to preserve relfilenode and database OID, or maybe some patches\n> >to drive all of our I/O through a smaller number of code paths instead\n> >of having every different type of temporary file we write reinvent the\n> >wheel.\n> \n> A unified block-based I/O API sounds great. Has anyone tried to do this\n> before? It would be nice if the front-end tools could also use these API.\n\nThe TDE patch from Cybertec did go down this route, but the API ended up\nbeing rather different which meant a lot of changes in other parts of\nthe system. 
If we can get a block-based temporary file method that\nmaintains more-or-less the same API, that'd be great, but I'm not sure\nthat we can really do so and I am not entirely convinced that we should\nmake the TDE effort depend on an otherwise quite independent effort of\nmaking all temp files usage be block based.\n\n> As there are so many threat models, I propose to do the TDE feature by a set\n> of hooks. those hooks are on the critical path of IO operation, add the\n> ability to let extension replace the IO API. and also load extension when\n> initdb, single-mode, and in front-end tools.\n> This sounds Like using $LD_PRELOAD to replace pread(2) and pwrite(2), which\n> widely used in folder based encryption. but the hook will pass more context\n> (filenode, tableoid, blocksize, and many) to the under layer, this hook API\n> will look like object_access_hook.\n> then implement the simplest AES-XTS. and put it to contrib. provide a tool\n> to deactivate AES-XTS to make PostgreSQL upgradeable.\n> \n> I think this is the most peaceful method. GCM people will not reject this\n> just because XTS. and XTS people will satisfied(maybe?) with the complexity.\n> for performance, just one more long-jump compare with current AES-XTS code.\n\nI agree with Robert- using hooks for this really isn't realistic. 
Where\nwould you store the tag for GCM without changes in core, for starters..?\nCertainly wouldn't make sense to provide GCM only to throw the tag away.\nEven between XTS and GCM, to say nothing of other possible methods,\nthere's going to be some serious differences that a single hook-based\nAPI wouldn't be able to address.\n\nThanks,\n\nStephen", "msg_date": "Tue, 19 Oct 2021 14:54:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/20 02:54, Stephen Frost wrote:\r\n > I agree with Robert- using hooks for this really isn't realistic.\r\n\r\nOK, I agree use hooks is too invasive. Just a whim, never mind.\r\n\r\nBut If PG has a clear block-based IO API, TDE is much easier to \r\nunderstand. security people may lack database knowledge but they can \r\nunderstand block IO.\r\nThis will allow more people to join PG community.\r\n\r\nOn 2021/10/20 02:54, Stephen Frost wrote:\r\n > Where would you store the tag for GCM without changes in core?\r\n\r\nIf can add 32bit reserve field (in CGM is 28bits) will be best.\r\ndata file size will increase 0.048% (if BLCKSZ = 8KiB), I think it is \r\nacceptable even for the user who does not use TDE. but need ondisk \r\nformat change.\r\nIf without of modify anything in core and doing GCM, the under-layer can \r\nwrite out a key fork, fsync(2) key fork with the same strategy for main \r\nfork. this is crash-safe. The consistency is ensured by WAL. (means \r\nwal_log_hints need set to on)\r\nOr the underlayer can re-struct the IO request. insert one IV block per \r\n2730(=BLKSZ/IV_SIZE) data blocks. this method like the _mdfd_getseg() in \r\nmd.c which split file by 1GiB. No perception in the upper layers.\r\nBoth of them can use cache to reduce performance downgrade.\r\n\r\nfor WAL encryption, the CybertecDB implement is correct. 
we can not \r\nwrite any extra data without adding a reserved field in core. because \r\ncan not guarantee consistency. If use GCM for WAL encryption must \r\ndisable HMAC verification.\r\n\r\n* only shows the possibility, not mean anyone should implement TDE in \r\nthat way.\r\n* blahblah", "msg_date": "Wed, 20 Oct 2021 15:05:14 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Sasasu (i@sasa.su) wrote:\n> But If PG has a clear block-based IO API, TDE is much easier to understand.\n\nPG does have a block-based IO API, it's just not exposed as hooks. In\nparticular, take a look at md.c, though perhaps you'd be more interested\nin the higher level bufmgr.c routines. For the specific places where\nencryption may hook in, looking at the DataChecksumsEnabled() call sites\nmay be informative (there aren't that many of them).\n\n> security people may lack database knowledge but they can understand block\n> IO.\n> This will allow more people to join PG community.\n\nWe'd certainly welcome them. I don't think we're going to try to\nredesign our entire IO subsystem in the hopes that they'll show up\nthough.\n\n> On 2021/10/20 02:54, Stephen Frost wrote:\n> > Where would you store the tag for GCM without changes in core?\n> \n> If can add 32bit reserve field (in CGM is 28bits) will be best.\n\nThat's the idea that's been discussed, but the approach put forward is\nto do it in a manner that allows the same binaries to work with a\nTDE-enabled cluster and a non-TDE cluster which means two different\nformats on disk. This is still a pretty big deal and would require\nlogical replication or pg_dump/restore to go from unencrypted to\nencrypted.\n\n> data file size will increase 0.048% (if BLCKSZ = 8KiB), I think it is\n> acceptable even for the user who does not use TDE. 
but need ondisk format\n> change.\n\nBreaking our ondisk format explicitly means that pg_upgrade won't work\nany longer and folks won't be able to do in-place upgrades. That's a\npretty huge deal and it's something we've not done in over a decade.\nI doubt that's going to fly.\n\n> If without of modify anything in core and doing GCM, the under-layer can\n> write out a key fork, fsync(2) key fork with the same strategy for main\n> fork. this is crash-safe. The consistency is ensured by WAL. (means\n> wal_log_hints need set to on)\n> Or the underlayer can re-struct the IO request. insert one IV block per\n> 2730(=BLKSZ/IV_SIZE) data blocks. this method like the _mdfd_getseg() in\n> md.c which split file by 1GiB. No perception in the upper layers.\n> Both of them can use cache to reduce performance downgrade.\n\nYes, using another fork for this is something that's been considered but\nit's not without its own drawbacks, in particular having to do another\nwrite and later fsync when a page changes.\n\nFurther, none of this is necessary for XTS, but only for GCM. This is\nwhy it was put forward that GCM involves a lot more changes to the\nsystem and means that we won't be able to do things like binary\nreplication to switch from an unencrypted to encrypted cluster. Those\nare good reasons to consider an XTS implementation first and then later,\nperhaps, implement GCM.\n\n> for WAL encryption, the CybertecDB implement is correct. we can not write\n> any extra data without adding a reserved field in core. because can not\n> guarantee consistency. If use GCM for WAL encryption must disable HMAC\n> verification.\n\nWhat's the point of using GCM if we aren't going to actually verify the\ntag? 
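To make the tradeoff concrete, here is a sketch of what storing and then checking a per-page tag in a reserved field might look like (illustrative only — HMAC-SHA256 stands in for the GCM tag, and the page layout, sizes, and function names are assumptions, not anything from a patch):

```python
import hashlib
import hmac

PAGE_SIZE = 8192
TAG_SIZE = 16   # a full GCM tag is 16 bytes; a reserved field would hold it

def seal_page(key: bytes, page_no: int, payload: bytes) -> bytes:
    """Append an authentication tag to the page payload:
    page = payload || tag. HMAC-SHA256 (truncated) stands in for GCM."""
    assert len(payload) == PAGE_SIZE - TAG_SIZE
    msg = page_no.to_bytes(4, "big") + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()[:TAG_SIZE]
    return payload + tag

def open_page(key: bytes, page_no: int, page: bytes) -> bytes:
    """Verify the stored tag on read; any modification is detected."""
    payload, tag = page[:-TAG_SIZE], page[-TAG_SIZE:]
    msg = page_no.to_bytes(4, "big") + payload
    expect = hmac.new(key, msg, hashlib.sha256).digest()[:TAG_SIZE]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("page failed integrity check")
    return payload

key = b"k" * 32
payload = b"\x00" * (PAGE_SIZE - TAG_SIZE)
page = seal_page(key, 7, payload)
assert open_page(key, 7, page) == payload

# A single flipped bit in the stored page is detected on read:
tampered = bytes([page[0] ^ 1]) + page[1:]
try:
    open_page(key, 7, tampered)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```

Binding the page number into the tag (as above) also catches a page being copied to a different block location, which is the kind of write-attack detection a stored-but-unverified tag cannot provide.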
Also, the Cybertec patch didn't add an extra reserved field to the\npage format, and it used CTR anyway..\n\nThanks,\n\nStephen", "msg_date": "Wed, 20 Oct 2021 08:24:08 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/20 20:24, Stephen Frost wrote:\r\n > PG does have a block-based IO API, it's just not exposed as hooks. In\r\n > particular, take a look at md.c, though perhaps you'd be more interested\r\n > in the higher level bufmgr.c routines. For the specific places where\r\n > encryption may hook in, looking at the DataChecksumsEnabled() call sites\r\n > may be informative (there aren't that many of them).\r\n\r\nmd.c is great, easy to understand. but PG does not have a unified API. \r\nThere has many unexpected pread()/pwrite() in many corners. md.c only \r\nfor heap table, bufmgr.c only for a buffered heap table.\r\n\r\neg: XLogWrite() looks like a block API, but is a range write. equivalent \r\nto the append(2)\r\neg: ALTER DATABASE SET TABLESPACE , the movedb() call. use copy_file() \r\non heap table. which is just pread() pwrite() with 8*BLCKSZ.\r\neg: all front-end tools use pread() to read heap table. in particular, \r\npg_rewind write heap table by offset.\r\neg: in contrib, pg_standby use system(\"cp\") to copy WAL.\r\n\r\nOn 2021/10/20 20:24, Stephen Frost wrote:\r\n > Breaking our ondisk format explicitly means that pg_upgrade won't work\r\n > any longer and folks won't be able to do in-place upgrades. 
That's a\r\n > pretty huge deal and it's something we've not done in over a decade.\r\n > I doubt that's going to fly.\r\n\r\nI completely agree.\r\n\r\nOn 2021/10/20 20:24, Stephen Frost wrote:\r\n > Yes, using another fork for this is something that's been considered but\r\n > it's not without its own drawbacks, in particular having to do another\r\n > write and later fsync when a page changes.\r\n >\r\n > Further, none of this is necessary for XTS, but only for GCM. This is\r\n > why it was put forward that GCM involves a lot more changes to the\r\n > system and means that we won't be able to do things like binary\r\n > replication to switch from an unencrypted to encrypted cluster. Those\r\n > are good reasons to consider an XTS implementation first and then later,\r\n > perhaps, implement GCM.\r\n\r\nsame as Robert Haas. I wish PG can do some infrastructure first. add \r\nmore abstract layers like md.c (maybe a block-based API with ondisk \r\nformat version field). so people can dive in without understanding the \r\nthings which isolated by the abstract layer.\r\n\r\nOn 2021/10/20 20:24, Stephen Frost wrote:\r\n > What's the point of using GCM if we aren't going to actually verify the\r\n > tag? Also, the Cybertec patch didn't add an extra reserved field to the\r\n > page format, and it used CTR anyway..\r\n\r\nOh, I am wrong, Cybertec patch can not use XTS, because WAL may not be \r\naligned to 16bytes. for WAL need a stream cipher. The CTR implement is \r\nstill correct.\r\n\r\nCTR with hash(offset) as IV is basically equal to XTS. if use another \r\nAES key to encrypt the hash(offset), and block size is 16bytes it is XTS.\r\nThe point is that can not save random IV for WAL without adding a \r\nreserved field, no matter use GCM or CTR.\r\n\r\nBecause WAL only does append to the end, using CTR for WAL is safer than \r\nusing XTS for heap table. 
If you want more security for WAL encryption, \r\nadd HKDF[1].\r\n\r\n[1]: https://en.wikipedia.org/wiki/HKDF", "msg_date": "Thu, 21 Oct 2021 12:18:59 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Sasasu (i@sasa.su) wrote:\n> On 2021/10/20 20:24, Stephen Frost wrote:\n> > PG does have a block-based IO API, it's just not exposed as hooks. In\n> > particular, take a look at md.c, though perhaps you'd be more interested\n> > in the higher level bufmgr.c routines. For the specific places where\n> > encryption may hook in, looking at the DataChecksumsEnabled() call sites\n> > may be informative (there aren't that many of them).\n> \n> md.c is great, easy to understand. but PG does not have a unified API. There\n> has many unexpected pread()/pwrite() in many corners. md.c only for heap\n> table, bufmgr.c only for a buffered heap table.\n\nThere's certainly other calls out there, yes.\n\n> eg: XLogWrite() looks like a block API, but is a range write. equivalent to\n> the append(2)\n\nXLog is certainly another thing that has to be dealt with, of course,\nbut I don't see us trying to shoehorn that into using md.c somehow.\n\n> eg: ALTER DATABASE SET TABLESPACE , the movedb() call. use copy_file() on\n> heap table. which is just pread() pwrite() with 8*BLCKSZ.\n> eg: all front-end tools use pread() to read heap table. in particular,\n> pg_rewind write heap table by offset.\n> eg: in contrib, pg_standby use system(\"cp\") to copy WAL.\n\nNone of these are actually working with or changing the data though,\nthey're just copying it. I don't think we'd actually want these to\ndecrypt and reencrypt the data.\n\n> On 2021/10/20 20:24, Stephen Frost wrote:\n> > Breaking our ondisk format explicitly means that pg_upgrade won't work\n> > any longer and folks won't be able to do in-place upgrades. 
That's a\n> > pretty huge deal and it's something we've not done in over a decade.\n> > I doubt that's going to fly.\n> \n> I completely agree.\n\nGreat.\n\n> On 2021/10/20 20:24, Stephen Frost wrote:\n> > Yes, using another fork for this is something that's been considered but\n> > it's not without its own drawbacks, in particular having to do another\n> > write and later fsync when a page changes.\n> >\n> > Further, none of this is necessary for XTS, but only for GCM. This is\n> > why it was put forward that GCM involves a lot more changes to the\n> > system and means that we won't be able to do things like binary\n> > replication to switch from an unencrypted to encrypted cluster. Those\n> > are good reasons to consider an XTS implementation first and then later,\n> > perhaps, implement GCM.\n> \n> same as Robert Haas. I wish PG can do some infrastructure first. add more\n> abstract layers like md.c (maybe a block-based API with ondisk format\n> version field). so people can dive in without understanding the things which\n> isolated by the abstract layer.\n\nI really don't think this is necessary. Similar to PageSetChecksumCopy\nand PageSetChecksumInplace, I'm sure we would have functions which are\ncalled in the appropriate spots to do encryption (such as 'encrypt_page'\nand 'encrypt_block' in the Cybertec patch) and folks could review those\nin relative isolation to the rest. Dealing with blocks in PG is already\npretty well handled, the infrastructure that needs to be added is around\nhandling temporary files and is being actively worked on ... if we could\nmove past this debate around if we should be adding support for XTS or\nif only GCM-SIV would be accepted.\n\n> On 2021/10/20 20:24, Stephen Frost wrote:\n> > What's the point of using GCM if we aren't going to actually verify the\n> > tag? 
Also, the Cybertec patch didn't add an extra reserved field to the\n> > page format, and it used CTR anyway..\n> \n> Oh, I am wrong, Cybertec patch can not use XTS, because WAL may not be\n> aligned to 16bytes. for WAL need a stream cipher. The CTR implement is still\n> correct.\n\nNo, the CTR approach isn't great because, as has been discussed quite a\nbit already, using the LSN as the IV means that different data might be\nre-encrypted with the same LSN and that's not an acceptable thing to\nhave happen with CTR.\n\n> CTR with hash(offset) as IV is basically equal to XTS. if use another AES\n> key to encrypt the hash(offset), and block size is 16bytes it is XTS.\n\nI don't understand why we're talking about CTR+other-stuff. Maybe if\nyou use CTR and then do other things then it's equivalent in some\nfashion to XTS ... but then it's not CTR anymore and we shouldn't be\ncalling it that. Saying that we should do \"CTR+other stuff\" (which\nhappens to make it equivalent to XTS) instead of just saying we should\nuse \"XTS\" is very confusing and further means that we're starting down\nthe path of trying to come up with our own hack on existing encryption\nschemes, and regardless of who shows up on this list to claim that doing\nso makes sense and is \"right\", I'm going to be extremely skeptical.\n\n> The point is that can not save random IV for WAL without adding a reserved\n> field, no matter use GCM or CTR.\n\nYes, it's correct that we can't use a random IV for the WAL without\nfiguring out how to keep track of that random IV. Thankfully, for WAL\n(unlike heap and index blocks) we don't really have that issue- we\nhopefully aren't going to write different WAL records at the same LSN\nand so using the LSN there should be alright. 
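Why IV reuse is fatal in a counter mode is easy to demonstrate with a toy keystream (SHA-256 standing in for AES-CTR here — not a real cipher, and purely to show the XOR property):

```python
import hashlib

def toy_keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Stand-in for an AES-CTR keystream: SHA-256(key || iv || counter).
    NOT a real cipher -- used only to illustrate the IV-reuse hazard."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ctr_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ks = toy_keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, iv = b"k" * 32, b"same-lsn-iv"
p1 = b"original page contents.."
p2 = b"rewritten page contents."
c1, c2 = ctr_encrypt(key, iv, p1), ctr_encrypt(key, iv, p2)

# With a reused key+IV the keystream cancels out: the XOR of the two
# ciphertexts equals the XOR of the two plaintexts, leaking structure.
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(p1, p2))
```

This holds for any stream mode, which is why an LSN-derived IV is only tolerable where a given LSN is never paired with two different plaintexts — something append-only WAL writes provide but in-place heap page rewrites do not.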
There's some odd edge\ncases around this too (split-brain situations in particular where one of\nthe systems does crash recovery and doesn't actually promote and\ntherefore stays on the same timeline), but that kind of thing ends up\nbreaking other things too (WAL archiving, as an example) and isn't\nreally something we can support. Further, I figure we'll tell people to\nuse different keys for different systems anyway to avoid this risk.\n\n> Because WAL only does append to the end, using CTR for WAL is safer than\n> using XTS for heap table. If you want more security for WAL encryption, add\n> HKDF[1].\n\nI don't follow how you're making these comparisons as to what is \"safer\"\nfor which when talking about two different encryption methodologies and\ntwo very different systems (the heap vs. WAL) or if you're actually\nsuggesting something here.\n\nWe've discussed at length how using CTR for heap isn't a good idea even\nif we're using the LSN for the IV, while if we use XTS then we don't\nhave the issues that CTR has with IV re-use and using the LSN (plus\nblock number and perhaps other things). Nothing in what has been\ndiscussed here has really changed anything there that I can see and so\nit's unclear to me why we continue to go round and round with it.\n\nThanks,\n\nStephen", "msg_date": "Thu, 21 Oct 2021 13:28:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/22 01:28, Stephen Frost wrote:\r\n> None of these are actually working with or changing the data though,\r\n> they're just copying it. 
I don't think we'd actually want these to\r\n> decrypt and reencrypt the data.\r\n\r\nOK, but why ALTER TABLE SET TABLESPACE use smgrread() and smgrextend() \r\ninstead of copy_file().\r\nTDE needs to modify these code paths, and make the patch bigger.\r\n\r\nOn 2021/10/22 01:28, Stephen Frost wrote:\r\n > No, the CTR approach isn't great because, as has been discussed quite a\r\n > bit already, using the LSN as the IV means that different data might be\r\n > re-encrypted with the same LSN and that's not an acceptable thing to\r\n > have happen with CTR.\r\nOn 2021/10/22 01:28, Stephen Frost wrote:\r\n > Thankfully, for WAL\r\n > (unlike heap and index blocks) we don't really have that issue- we\r\n > hopefully aren't going to write different WAL records at the same LSN\r\n > and so using the LSN there should be alright.\r\nOn 2021/10/22 01:28, Stephen Frost wrote:\r\n > We've discussed at length how using CTR for heap isn't a good idea even\r\n > if we're using the LSN for the IV, while if we use XTS then we don't\r\n > have the issues that CTR has with IV re-use and using the LSN (plus\r\n > block number and perhaps other things). Nothing in what has been\r\n > discussed here has really changed anything there that I can see and so\r\n > it's unclear to me why we continue to go round and round with it.\r\n\r\nI am not re-discuss using CTR for heap table. I mean use some CTR-like \r\nalgorithm *only* for WAL encryption. My idea is exactly the same when \r\nyou are typing \"we hopefully aren't going to write different WAL records \r\nat the same LSN and so using the LSN there should be alright.\"\r\n\r\nThe point of disagreement between you and me is only on the block-based API.\r\n\r\nOn 2021/10/22 01:28, Stephen Frost wrote:\r\n > it's unclear to me why we continue to go round and round with it.\r\n\r\nsame to me. 
I am monitoring this thread about 9 months, watching you \r\ndiscuss key management/CBC/CTR/GCM round and round.", "msg_date": "Fri, 22 Oct 2021 11:35:38 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Sasasu (i@sasa.su) wrote:\n> On 2021/10/22 01:28, Stephen Frost wrote:\n> >None of these are actually working with or changing the data though,\n> >they're just copying it. I don't think we'd actually want these to\n> >decrypt and reencrypt the data.\n> \n> OK, but why ALTER TABLE SET TABLESPACE use smgrread() and smgrextend()\n> instead of copy_file().\n> TDE needs to modify these code paths, and make the patch bigger.\n\nTables and databases are handled differently, yes.\n\nWith ALTER TABLE SET TABLESPACE, we're allocating a new relfilenode and\nWAL'ing the table as FPIs. What happens with databases is fundamentally\ndifferent- no one is allowed to be connected to the database being moved\nand we write a single 'database changed tablespace' record in the WAL\nfor this case. 
When it comes to TDE, this probably is actually helpful\nas we're going to likely want the relfilenode to be included as part of\nthe IV.\n\n> On 2021/10/22 01:28, Stephen Frost wrote:\n> > No, the CTR approach isn't great because, as has been discussed quite a\n> > bit already, using the LSN as the IV means that different data might be\n> > re-encrypted with the same LSN and that's not an acceptable thing to\n> > have happen with CTR.\n> On 2021/10/22 01:28, Stephen Frost wrote:\n> > Thankfully, for WAL\n> > (unlike heap and index blocks) we don't really have that issue- we\n> > hopefully aren't going to write different WAL records at the same LSN\n> > and so using the LSN there should be alright.\n> On 2021/10/22 01:28, Stephen Frost wrote:\n> > We've discussed at length how using CTR for heap isn't a good idea even\n> > if we're using the LSN for the IV, while if we use XTS then we don't\n> > have the issues that CTR has with IV re-use and using the LSN (plus\n> > block number and perhaps other things). Nothing in what has been\n> > discussed here has really changed anything there that I can see and so\n> > it's unclear to me why we continue to go round and round with it.\n> \n> I am not re-discuss using CTR for heap table. I mean use some CTR-like\n> algorithm *only* for WAL encryption. My idea is exactly the same when you\n> are typing \"we hopefully aren't going to write different WAL records at the\n> same LSN and so using the LSN there should be alright.\"\n\nI don't like the idea of \"CTR-like\". What's wrong with using CTR for\nWAL encryption? 
Based on the available information, that seems like the\nexact use-case for CTR.\n\n> The point of disagreement between you and me is only on the block-based API.\n\nI'm glad to hear that, at least.\n\nThanks,\n\nStephen", "msg_date": "Fri, 22 Oct 2021 11:36:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Oct 18, 2021 at 11:56:03AM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> > On 10/15/21 21:22, Stephen Frost wrote:\n> > >Now, to address the concern around re-encrypting a block with the same\n> > >key+IV but different data and leaking what parts of the page changed, I\n> > >do think we should use the LSN and have it change regularly (including\n> > >unlogged tables) but that's just because it's relatively easy for us to\n> > >do and means an attacker wouldn't be able to tell what part of the page\n> > >changed when the LSN was also changed. That was also recommended by\n> > >NIST and that's a pretty strong endorsement also.\n> > \n> > Not sure - it seems a bit weird to force LSN change even in cases that don't\n> > generate any WAL. I was not following the encryption thread and maybe it was\n> > discussed/rejected there, but I've always imagined we'd have a global nonce\n> > generator (similar to a sequence) and we'd store it at the end of each\n> > block, or something like that.\n> \n> The 'LSN' being referred to here isn't the regular LSN that is\n> associated with the WAL but rather the separate FakeLSN counter which we\n> already have. I wasn't suggesting having the regular LSN change in\n> cases that don't generate WAL.\n\nYes, my original patch created dummy WAL records for dummy LSNs but that\nis no longer needed with XTS.\n\n> > I'm not very convinced that using the LSN for any of this is a good\n> > idea. 
Something that changes most of the time but not all the time\n> > seems more like it could hurt by masking fuzzy thinking more than it\n> > helps anything.\n> \n> This argument doesn't come across as very strong at all to me,\n> particularly when we have explicit recommendations from NIST that having\n> the IV vary more is beneficial. While this would be using the LSN, the\n> fact that the LSN changes most of the time but not all of the time isn't\n> new and is something we already have to deal with. I'd think we'd\n> address the concern about mis-thinking around how this works by\n> providing a README and/or an appropriate set of comments around what's\n> being done and why.\n\nAgreed. I think we would need to document when we reencrypt a page\nwith the same LSN, and of course write-based attacks.\n\n> > Do we think knowing which 16-byte blocks on an 8k page change would leak\n> > useful information? If so, we should use the LSN and just accept that\n> > some cases might leak as described above. If we don't care, then we can\n> > skip the use of the LSN and simplify the patch.\n> \n> While there may not be an active attack against PG that leverages such a\n> leak, I have a hard time seeing why we would intentionally design this\n> in when we have a great option that's directly available to us and\n> doesn't cause such a leak with nearly such regularity as not using the\n> LSN would, and also follows recommendations of using XTS from NIST.\n\nAgreed.\n\n> > I consider this a checkbox feature and making it too complex will cause\n> > it to be rightly rejected.\n> \n> Presuming that 'checkbox feature' here means \"we need it to please\n> $someone but no one will ever use it\" or something along those lines,\n> this is very clearly not the case and therefore we shouldn't be\n> describing it or treating it as such. 
Even if the meaning here is\n> \"there's other ways people could get this capability\" the reality is\n> that those other methods are simply not always available and in those\n> cases, people will choose to not use PostgreSQL. Nearly every other\n> database system which we might compare ourselves to has a solution in\n> this area and people actively use those solutions in a lot of\n> deployments.\n\nI think people will use this feature, but I called it a 'checkbox\nfeature' because they usually are not looking for a complex or flexible\nfeature, but rather something that is simple to setup and effective.\n\n> > And if PostgreSQL is using XTS, there is no different with dm-encrypt.\n> > The user can use dm-encrypt directly.\n> \n> dm-encrypt is not always an option and it doesn't actually address the\n> threat-model that Tomas brought up here anyway, as it would be below the\n> level that the low-privileged OS user would be looking at. That's not\n> the only threat model to consider, but it is one which could potentially\n> be addressed by either XTS or AES-GCM-SIV. 
There are threat models\n> which dm-crypt would address, of course, such as data-at-rest (hard\n> drive theft, improper disposal of storage media, backups which don't\n> have their own encryption, etc), but, again, dm-crypt isn't always an\n> option that is available and so I don't agree that we should throw this\n> out just because dm-crypt exists and may be useable in some cases.\n\nI actually think a Postgres integrity-check feature would need to create\nan abstraction layer on top of all writes to PGDATA and tablespaces so\nthe filesystem would look unencrypted to Postgres.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:51:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Oct 19, 2021 at 02:44:26PM -0400, Stephen Frost wrote:\n> There are ways around it. There likely always will be. We need to be\n> clear about what it provides and what it doesn't. We need to stop\n> telling ourselves that the only answer is a 100% solution and therefore\n> it's impossible to do. Users who care about these capabilities will\n> understand that it's not 100% and they will still happily use it because\n> it's better than 0% which is where we are today and is why they are\n> going with other solutions. 
Yes, if it's trivial to get around then\n> perhaps it's not much better than 0% and if that's the case then it\n> doesn't make sense to do it, but none of what has been discussed here\n> thus far has made me feel like either the XTS or the GCM-SIV approaches\n> would be trivial to to circumvent for the threat models they're intended\n> to address, though it certainly takes more care and more thought when\n> we're trying to address someone who has write access to part of the\n> system and that we need to be clear what is addressed and what isn't in\n> all of these cases.\n\nStephen, your emails on this thread have been very helpful and on-topic.\nI think the distinction above is that it is useful to fully protect\nagainst some attack types, even if we don't protect against all attack\ntypes. For example, if we protect 100% against read attacks, it doesn't\nmean that gets reduced to 50% because we don't protect against write\nattacks --- we are still 100% read-protected and 0% write protected.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:57:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Oct 18, 2021 at 12:37:39PM -0400, Robert Haas wrote:\n> > Thus, to me, it doesn't seem worth going down the XTS route, just to\n> > temporarily save a bit of implementation effort. We'll have to endure that\n> > pain anyway.\n> \n> I agree up to a point, but I do also kind of feel like we should be\n> leaving it up to whoever is working on an implementation to decide\n> what they want to implement. 
I don't particularly like this discussion\n\nUh, our TODO has this list:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\nI think we have to agree on Desirability and Design before anyone starts\nwork since it is more likely a patch will be rejected without this.\n\n> where it feels like people are trying to tell other people what they\n> have to do because \"the community has decided X.\" It's pretty clear\n> that there are multiple opinions here, and I don't really see any of\n> them to be without merit, nor do I see why Bruce or Stephen or you or\n> anyone else should get to say \"what the community has decided\" in the\n> absence of a clear consensus.\n\nI don't see anyone saying we have agreed on anything, but I do hear\npeople say they are willing to work on some things, and not others.\n\n> I do really like the idea of using AES-GCM-SIV not because I know\n> anything about it, but because the integrity checking seems cool, and\n> storing the nonce seems like it would improve security. However, based\n> on what I know now, I would not vote to reject an XTS-based patch and,\n> as Stephen and Bruce have said, maybe with the right design it permits\n> upgrades from non-encrypted clusters to encrypted clusters. I'm\n> actually kind of doubtful about that, because that seems to require\n> some pretty specific and somewhat painful implementation decisions.\n> For example, I think if your solution for rotating keys is to shut\n> down the standby, re-encrypt it with a new key, start it up again, and\n> fail over to it, then you probably ever can't do key rotation in any\n> other way. The keys now have to be non-WAL-logged so that the standby\n> can be different, which means you can't add a new key on the master\n> and run around re-encrypting everything with it, WAL-logging those\n> changes as you go. 
Now I realize that implementing that is really\n> challenging anyway so maybe some people wouldn't like to go that way,\n> but then maybe other people would. Another thing you probably can't do\n> in this model is encrypt different parts of the database with\n> different keys, because how would you keep track of that? Certainly\n> not in the system catalogs, if none of that can show up in the WAL\n> stream.\n\nThe design is to have a heap/index key and a WAL key. You create a\nbinary replica that uses a different heap/index key but the same WAL\nkey, switch-over to it, and then change the WAL key.\n\n> But, you know, still: if somebody showed up with a fully-working XTS\n> patch with everything in good working order, I don't see that we have\n> enough evidence to reject it just because it's XTS. And I would hope\n> that the people favoring XTS would not vote to reject a fully working\n> GCM patch just because it's GCM. I think what we ought to be doing at\n\nI don't think that would happen, but I do think patch size, code\nmaintenance, and upgradability would be reasonable considerations.\n\n> this point is combining our efforts to try to get some things\n> committed which make future work in this area possible - like that\n> patch to preserve relfilenode and database OID, or maybe some patches\n> to drive all of our I/O through a smaller number of code paths instead\n> of having every different type of temporary file we write reinvent the\n> wheel. These discussions about what encryption type we ought to use\n> are useful for ruling out options that we know are bad, but beyond\n> that I'm not sure they have much value. 
AES in any mode could seem\n> like a much less safe choice by the time we get a committed feature\n> here than it does today - even if somehow that were to happen for v15.\n> I expect there are people out there trying to break it even as I write\n> these words, and it seems likely that they will eventually succeed,\n> but as to when, who can say?\n\nYes, we should start on things we know we need, but we will have to have\nthese discussions until we have desirability and design most people\nagree on.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 20:04:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Fri, Oct 22, 2021 at 11:36:37AM -0400, Stephen Frost wrote:\n> > I am not re-discuss using CTR for heap table. I mean use some CTR-like\n> > algorithm *only* for WAL encryption. My idea is exactly the same when you\n> > are typing \"we hopefully aren't going to write different WAL records at the\n> > same LSN and so using the LSN there should be alright.\"\n> \n> I don't like the idea of \"CTR-like\". What's wrong with using CTR for\n> WAL encryption? 
Based on the available information, that seems like the\n> exact use-case for CTR.\n\nAgreed.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 20:06:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Oct 19, 2021 at 02:44:26PM -0400, Stephen Frost wrote:\n> Another threat model to consider is if the attacker has read-only access\n> to the data directory through, say, unix group read privileges or maybe\n> the ability to monitor the traffic on the SAN, or the ability to\n> read-only mount the LUN on to another system. This might be obtained by\n> attacking a backup process where the system was configured to run\n> physical backups using an unprivileged OS user who only has group read\n> access to the cluster (and the necessary but non-superuser privileges in\n> the database system to start/stop the backup), or various potential\n> attacks at the storage layer. This is similar to the \"data at rest\"\n> case above in that XTS works well to address this, but because the\n> attacker would have ongoing access (rather than just one-time, such as\n> in the first case), information such as which blocks are being changed\n> inside of a given 8k page might be able to be determined and that could\n> be useful information, though a point here: they would already be able\n> to see clearly which 8k pages are being changed and which aren't, and\n> there's not really any way for us to prevent that reasonably. As such,\n> I'd argue that using XTS is reasonable and we can mitigate some of this\n> concern by using the LSN in the tweak instead of just the block number\n> as the 'plain64' option in dmcrypt does. 
That doing so would mean that\n\nThat is an excellent point, and something we should mention in our\ndocumentation --- the fact that a change of 8k granularity will be\nvisible, and in certain specified cases, 16-byte change granularity will\nalso be visible.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 23 Oct 2021 11:29:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Oct 18, 2021 at 12:37:39PM -0400, Robert Haas wrote:\n> I do really like the idea of using AES-GCM-SIV not because I know\n> anything about it, but because the integrity checking seems cool, and\n                                                                ----------\n> storing the nonce seems like it would improve security. However, based\n\nFrankly, I think we need to be cautious about doing anything related to\nsecurity for \"cool\" motivations.  (This might be how OpenSSL became such\na mess.)  For non-security features, you can often add a few lines of\ncode to enable some cool use-case.  For security features, you have to\nblock its targeted attack methods fully or it is useless.  (It doesn't\nneed to block all attack methods.)  To fully block attack methods,\nsecurity features must be thoroughly designed and all potential\ninteractions must be researched.\n\nWhen adding non-security Postgres features, cool features can be more\neasily implemented because they are built on the solid foundation of\nPostgres. 
For security features, you have to assume that attacks can\ncome from anywhere, so the foundation is unclear and caution is wise.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 23 Oct 2021 11:49:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Oct 19, 2021 at 02:54:56PM -0400, Stephen Frost wrote:\n> * Sasasu (i@sasa.su) wrote:\n> > A unified block-based I/O API sounds great. Has anyone tried to do this\n> > before? It would be nice if the front-end tools could also use these API.\n> \n> The TDE patch from Cybertec did go down this route, but the API ended up\n> being rather different which meant a lot of changes in other parts of\n> the system. If we can get a block-based temporary file method that\n> maintains more-or-less the same API, that'd be great, but I'm not sure\n> that we can really do so and I am not entirely convinced that we should\n> make the TDE effort depend on an otherwise quite independent effort of\n> making all temp files usage be block based.\n\nUh, I thought people felt the Cybertec patch was too large and that a\nunified API for temporary file I/O-encryption was a requirement. Would\na CTR-streaming-encryption API for temporary tables be easier to\nimplement?\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 23 Oct 2021 12:03:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "В Чт, 21/10/2021 в 13:28 -0400, Stephen Frost пишет:\n> Greetings,\n> \n> I really don't think this is necessary. 
Similar to PageSetChecksumCopy\n> and PageSetChecksumInplace, I'm sure we would have functions which are\n> called in the appropriate spots to do encryption (such as 'encrypt_page'\n> and 'encrypt_block' in the Cybertec patch) and folks could review those\n> in relative isolation to the rest. Dealing with blocks in PG is already\n> pretty well handled, the infrastructure that needs to be added is around\n> handling temporary files and is being actively worked on ... if we could\n> move past this debate around if we should be adding support for XTS or\n> if only GCM-SIV would be accepted.\n> \n> .....\n> \n> No, the CTR approach isn't great because, as has been discussed quite a\n> bit already, using the LSN as the IV means that different data might be\n> re-encrypted with the same LSN and that's not an acceptable thing to\n> have happen with CTR.\n> \n> .....\n> \n> We've discussed at length how using CTR for heap isn't a good idea even\n> if we're using the LSN for the IV, while if we use XTS then we don't\n> have the issues that CTR has with IV re-use and using the LSN (plus\n> block number and perhaps other things). Nothing in what has been\n> discussed here has really changed anything there that I can see and so\n> it's unclear to me why we continue to go round and round with it.\n> \n\nInstead of debating XTS vs GCM-SIV I'd suggest Google's Adiantum [1][2]\n[3][4].\n\nIt is explicitly created to solve the large-block encryption issue - disk\nencryption. 
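For anyone unfamiliar with it: Adiantum is a wide-block design, meaning every bit of the ciphertext depends on every bit of the plaintext and tweak. The shape of the construction can be sketched in a few lines (an illustration only - SHA-256 stands in for the real NH/Poly1305 hash and the XChaCha stream, the 16-byte AES step is reduced to an XOR, and nothing here is secure or actual Adiantum):

```python
import hashlib

def prf(key: bytes, label: bytes, data: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a toy PRF/keystream (a stand-in only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + label + ctr.to_bytes(4, "big") + data).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_wideblock_encrypt(key: bytes, tweak: bytes, page: bytes) -> bytes:
    # HBSH shape: hash the bulk into the 16-byte tail, stream-cipher the
    # bulk keyed off that tail, then hash the new bulk back into the tail.
    left, right = page[:-16], page[-16:]
    mid = xor(right, prf(key, b"hash", tweak + left, 16))
    new_left = xor(left, prf(key, b"stream", mid, len(left)))
    return new_left + xor(mid, prf(key, b"hash", tweak + new_left, 16))

# This toy construction happens to be an involution, so decryption is the
# same operation with the same key and tweak.
toy_wideblock_decrypt = toy_wideblock_encrypt
```

Flipping any single plaintext bit changes both hashes and the whole keystream, so essentially every ciphertext byte changes; that is the property that makes IV reuse far less damaging here than under CTR.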
It is used to encrypt 4kb as a whole, but in fact has no\n(practical) limit on block size: it is near-zero modified to encrypt 1kb\nor 8kb or 32kb.\n\nIt has benefits of both XTS and GCM-SIV:\n- like GCM-SIV every bit of cipher text depends on every bit of plain text\n- therefore like GCM-SIV it is resistant to IV reuse: it is safe to reuse\n  LSN+reloid+blocknumber tuple as IV even for hint-bit changes since every\n  block's bit will change.\n- like XTS it doesn't need to change plain text format and doesn't need an\n  additional Nonce/Auth Code.\n\nAdiantum stands on \"giant's shoulders\": AES, Chacha and Poly1305.\nIt has been included in the Linux kernel since 5.0.\n\nAdiantum/HPolyC approach (hash+cipher+stream-cipher+hash) could be used\nwith other primitives as well. For example, Chacha12 could be replaced\nwith AES-GCM or AES-XTS with IV derived from hash+cipher.\n\n[1] https://security.googleblog.com/2019/02/introducing-adiantum-encryption-for.html\n[2] https://en.wikipedia.org/wiki/Adiantum_(cipher)\n[3] https://tosc.iacr.org/index.php/ToSC/article/view/7360\n[4] https://github.com/google/adiantum\n[5] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=059c2a4d8e164dccc3078e49e7f286023b019a98\n\n-------\n\nregards\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 09:59:17 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Tue, Oct 19, 2021 at 02:54:56PM -0400, Stephen Frost wrote:\n> > * Sasasu (i@sasa.su) wrote:\n> > > A unified block-based I/O API sounds great. Has anyone tried to do this\n> > > before? 
It would be nice if the front-end tools could also use these API.\n> > \n> > The TDE patch from Cybertec did go down this route, but the API ended up\n> > being rather different which menat a lot of changes in other parts of\n> > the system. If we can get a block-based temporary file method that\n> > maintains more-or-less the same API, that'd be great, but I'm not sure\n> > that we can really do so and I am not entirely convinced that we should\n> > make the TDE effort depend on an otherwise quite independent effort of\n> > making all temp files usage be block based.\n> \n> Uh, I thought people felt the Cybertec patch was too large and that a\n> unified API for temporary file I/O-encryption was a requirement. Would\n> a CTR-steaming-encryption API for temporary tables be easier to\n> implement?\n\nHaving a unified API for temporary file I/O (that could then be extended\nto provide encryption) would definitely help with reducing the size of a\nTDE patch. The approach used in the Cybertec patch was to make\ntemporary file access block based, but the way that was implemented was\nwith an API different from pread/pwrite and that meant changing pretty\nmuch all of the call sites for temporary file access, which naturally\nresulted in changes in a lot of otherwise unrelated code.\n\nThere was an argument put forth that a block-based API for temporary\nfile access would generally be good as it would mean fewer syscalls. If\nwe can get behind that and make it happen in (relatively) short order\nthen we'd certainly be better off when it comes to implementing TDE\nwhich also deals with temporary files. I'm a bit concerned about that\napproach due to the changes needed but I'm also not against it. I do\nthink that an API which was more-or-less the same as what's used today\nwould be a smaller change and therefore might be easier to get in and\nthat it'd also make a TDE patch smaller. 
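To make that shape concrete, here is a toy sketch of such an API (all names are made up, os.pread/os.pwrite assume a Unix-like system, and the XOR-with-SHA-256 'cipher' is only a placeholder where a real block cipher such as XTS would go - a fixed XOR keystream would be unsafe for exactly the reuse reasons discussed in this thread):

```python
import hashlib
import os

BLCKSZ = 8192

class EncryptedBlockFile:
    """Keeps a pread/pwrite-shaped API while doing all real I/O in whole
    encrypted blocks underneath.  Toy sketch for illustration only."""

    def __init__(self, path: str, key: bytes):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        self.key = key

    def _keystream(self, blkno: int) -> bytes:
        # Per-block toy keystream derived from (key, block number).
        return b"".join(
            hashlib.sha256(self.key + blkno.to_bytes(8, "big")
                           + i.to_bytes(4, "big")).digest()
            for i in range(BLCKSZ // 32))

    def _crypt(self, data: bytes, blkno: int) -> bytes:
        # XOR is its own inverse, so this both encrypts and decrypts.
        return bytes(a ^ b for a, b in zip(data, self._keystream(blkno)))

    def _read_block(self, blkno: int) -> bytes:
        raw = os.pread(self.fd, BLCKSZ, blkno * BLCKSZ).ljust(BLCKSZ, b"\0")
        return self._crypt(raw, blkno)

    def _write_block(self, blkno: int, plain: bytes) -> None:
        os.pwrite(self.fd, self._crypt(plain, blkno), blkno * BLCKSZ)

    def pwrite(self, data: bytes, offset: int) -> None:
        # An unaligned write becomes read-modify-write on whole blocks.
        while data:
            blkno, off = divmod(offset, BLCKSZ)
            n = min(BLCKSZ - off, len(data))
            block = bytearray(self._read_block(blkno))
            block[off:off + n] = data[:n]
            self._write_block(blkno, bytes(block))
            data, offset = data[n:], offset + n

    def pread(self, nbytes: int, offset: int) -> bytes:
        out = b""
        while nbytes > 0:
            blkno, off = divmod(offset, BLCKSZ)
            n = min(BLCKSZ - off, nbytes)
            out += self._read_block(blkno)[off:off + n]
            nbytes, offset = nbytes - n, offset + n
        return out
```

The caller keeps issuing byte-offset reads and writes, while underneath everything is read and written in whole 8k blocks, which is exactly the property block-mode encryption needs.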
Perhaps both could be\naccomplished (an API that's similar to today, but the actual file access\nbeing block-based).\n\nEither way, we should get that unification done in the core code first\nas an independent effort. That's what I hope the Cybertec folks have\nhad a chance to work on.\n\nAs for the specific encryption method to use, using CTR would be simpler\nas it doesn't require access to be block-based, though we would need to\nmake sure to not re-use the IV across any of the temporary files being\ncreated (potentially concurrently). Probably not that hard to do but\njust something to make sure we do. Of course, if we arrange for\nblock-based access then we could use XTS or perhaps GCM/GCM-SIV if we\nwanted to.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 11:58:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Yura Sokolov (y.sokolov@postgrespro.ru) wrote:\n> В Чт, 21/10/2021 в 13:28 -0400, Stephen Frost пишет:\n> > I really don't think this is necessary. Similar to PageSetChecksumCopy\n> > and PageSetChecksumInplace, I'm sure we would have functions which are\n> > called in the appropriate spots to do encryption (such as 'encrypt_page'\n> > and 'encrypt_block' in the Cybertec patch) and folks could review those\n> > in relative isolation to the rest. Dealing with blocks in PG is already\n> > pretty well handled, the infrastructure that needs to be added is around\n> > handling temporary files and is being actively worked on ... 
if we could\n> > move past this debate around if we should be adding support for XTS or\n> > if only GCM-SIV would be accepted.\n> > \n> > .....\n> > \n> > No, the CTR approach isn't great because, as has been discussed quite a\n> > bit already, using the LSN as the IV means that different data might be\n> > re-encrypted with the same LSN and that's not an acceptable thing to\n> > have happen with CTR.\n> > \n> > .....\n> > \n> > We've discussed at length how using CTR for heap isn't a good idea even\n> > if we're using the LSN for the IV, while if we use XTS then we don't\n> > have the issues that CTR has with IV re-use and using the LSN (plus\n> > block number and perhaps other things). Nothing in what has been\n> > discussed here has really changed anything there that I can see and so\n> > it's unclear to me why we continue to go round and round with it.\n> > \n> \n> Instead of debatting XTS vs GCM-SIV I'd suggest Google's Adiantum [1][2]\n> [3][4].\n\nThat sounds like a great thing to think about adding ... after we get\nsomething in that's based on XTS.\n\n> It is explicitely created to solve large block encryption issue - disk\n> encryption. It is used to encrypt 4kb at whole, but in fact has no\n> (practical) limit on block size: it is near-zero modified to encrypt 1kb\n> or 8kb or 32kb.\n> \n> It has benefits of both XTS and GCM-SIV:\n> - like GCM-SIV every bit of cipher text depends on every bit of plain text\n> - therefore like GCM-SIV it is resistant to IV reuse: it is safe to reuse\n> LSN+reloid+blocknumber tuple as IV even for hint-bit changes since every\n> block's bit will change.\n\nThe advantage of GCM-SIV is that it provides integrity as well as\nconfidentiality.\n\n> - like XTS it doesn't need to change plain text format and doesn't need in\n> additional Nonce/Auth Code.\n\nSure, in which case it's something that could potentially be added later\nas another option in the future. 
I don't think we'll always have just\none encryption method and it's good to generally think about what it\nmight look like to have others but I don't think it makes sense to try\nand get everything in all at once.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 12:12:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:58:14AM -0400, Stephen Frost wrote:\n> As for the specific encryption method to use, using CTR would be simpler\n> as it doesn't require access to be block-based, though we would need to\n> make sure to not re-use the IV across any of the temporary files being\n> created (potentially concurrently). Probably not that hard to do but\n> just something to make sure we do. Of course, if we arrange for\n> block-based access then we could use XTS or perhaps GCM/GCM-SIV if we\n> wanted to.\n\nAgreed on all points.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:06:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "В Пн, 25/10/2021 в 12:12 -0400, Stephen Frost пишет:\n> Greetings,\n> \n> * Yura Sokolov (y.sokolov@postgrespro.ru) wrote:\n> > В Чт, 21/10/2021 в 13:28 -0400, Stephen Frost пишет:\n> > > I really don't think this is necessary. Similar to PageSetChecksumCopy\n> > > and PageSetChecksumInplace, I'm sure we would have functions which are\n> > > called in the appropriate spots to do encryption (such as 'encrypt_page'\n> > > and 'encrypt_block' in the Cybertec patch) and folks could review those\n> > > in relative isolation to the rest. 
Dealing with blocks in PG is already\n> > > pretty well handled, the infrastructure that needs to be added is around\n> > > handling temporary files and is being actively worked on ... if we could\n> > > move past this debate around if we should be adding support for XTS or\n> > > if only GCM-SIV would be accepted.\n> > > \n> > > .....\n> > > \n> > > No, the CTR approach isn't great because, as has been discussed quite a\n> > > bit already, using the LSN as the IV means that different data might be\n> > > re-encrypted with the same LSN and that's not an acceptable thing to\n> > > have happen with CTR.\n> > > \n> > > .....\n> > > \n> > > We've discussed at length how using CTR for heap isn't a good idea even\n> > > if we're using the LSN for the IV, while if we use XTS then we don't\n> > > have the issues that CTR has with IV re-use and using the LSN (plus\n> > > block number and perhaps other things). Nothing in what has been\n> > > discussed here has really changed anything there that I can see and so\n> > > it's unclear to me why we continue to go round and round with it.\n> > > \n> > \n> > Instead of debatting XTS vs GCM-SIV I'd suggest Google's Adiantum [1][2]\n> > [3][4].\n> \n> That sounds like a great thing to think about adding ... after we get\n> something in that's based on XTS.\n\nWhy? I see no points to do it after. Why not XTS after Adiantum?\n\nOk, I see one: XTS is standartized.\n\n> > It is explicitely created to solve large block encryption issue - disk\n> > encryption. 
It is used to encrypt 4kb at whole, but in fact has no\n> > (practical) limit on block size: it is near-zero modified to encrypt 1kb\n> > or 8kb or 32kb.\n> > \n> > It has benefits of both XTS and GCM-SIV:\n> > - like GCM-SIV every bit of cipher text depends on every bit of plain text\n> > - therefore like GCM-SIV it is resistant to IV reuse: it is safe to reuse\n> > LSN+reloid+blocknumber tuple as IV even for hint-bit changes since every\n> > block's bit will change.\n> \n> The advantage of GCM-SIV is that it provides integrity as well as\n> confidentiality.\n\nIntegrity could be based on simple non-cryptographic checksum, and it could\nbe checked after decryption. It would be imposible to intentionally change\nencrypted page in a way it will pass checksum after decription. \n\nCurrently we have 16bit checksum, and it is very small. But having larger\nchecksum is orthogonal (ie doesn't bound) to having encryption.\n\nIn fact, Adiantum is easily made close to SIV construction:\n- just leave last 8/16 bytes zero. If after decription they are zero,\n then integrity check passed.\nThat is because SIV and Adiantum are very similar in its structure:\n- SIV:\n-- hash\n-- then stream cipher\n- Adiantum:\n-- hash (except last 16bytes)\n-- then encrypt last 16bytes with hash,\n-- then stream cipher\n-- then hash.\nIf last N (N>16) bytes is nonce + zero bytes, then \"hash, then encrypt last\n16bytes with hash\" become equivalent to just \"hash\", and Adiantum became\nlogical equivalent to SIV.\n\n> > - like XTS it doesn't need to change plain text format and doesn't need in\n> > additional Nonce/Auth Code.\n> \n> Sure, in which case it's something that could potentially be added later\n> as another option in the future. 
I don't think we'll always have just\n> one encryption method and it's good to generally think about what it\n> might look like to have others but I don't think it makes sense to try\n> and get everything in all at once.\n\nAnd among others Adiantum looks best: it is fast even without hardware\nacceleration, it provides whole block encryption (ie every bit depends\non every bit) and it doesn't bound to plain-text format.\n\n> Thanks,\n> \n> Stephen\n\nregards,\n\nYura\n\nPS. Construction beside SIV and Adiantum could be used with XTS as well.\nI.e. instead of AES-GCM-SIV it could be AES-XTS-SIV.\nAnd same way AES-XTS could be used instead of Chacha12 in Adiantum.\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 23:32:43 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/26 04:32, Yura Sokolov wrote:\r\n> And among others Adiantum looks best: it is fast even without hardware\r\n> acceleration,\r\n\r\nNo, AES is fast on modern high-end hardware.\r\n\r\non X86 AMD 3700X\r\ntype 1024 bytes 8192 bytes 16384 bytes\r\naes-128-ctr 8963982.50k 11124613.88k 11509149.42k\r\naes-128-gcm 3978860.44k 4669417.10k 4732070.64k\r\naes-128-xts 7776628.39k 9073664.63k 9264617.74k\r\nchacha20-poly1305 2043729.73k 2131296.36k 2141002.10k\r\n\r\non ARM RK3399, A53 middle-end with AES-NI\r\ntype 1024 bytes 8192 bytes 16384 bytes\r\naes-128-ctr 1663857.66k 1860930.22k 1872991.57k\r\naes-128-xts 685086.38k 712906.07k 716073.64k\r\naes-128-gcm 985578.84k 1054818.30k 1056768.00k\r\nchacha20-poly1305 309012.82k 318889.98k 319711.91k\r\n\r\nI think the baseline is the speed when using read(2) syscall on \r\n/dev/zero (which is 3.6GiB/s, on ARM is 980MiB/s)\r\nchacha is fast on the low-end arm, but I haven't seen any HTTPS sites \r\nusing chacha, including Cloudflare and Google.\r\n\r\nOn 2021/10/26 04:32, Yura Sokolov wrote:\r\n >> That sounds like a great thing to think 
about adding ... after we get\r\n >> something in that's based on XTS.\r\n > Why? I see no points to do it after. Why not XTS after Adiantum?\r\n >\r\n > Ok, I see one: XTS is standartized.\r\n:>\r\nPostgreSQL even not discuss single-table key rotation or remote KMS.\r\nI think it's too hard to use an encryption algorithm which openssl \r\ndoesn't implement.", "msg_date": "Tue, 26 Oct 2021 11:08:38 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "В Вт, 26/10/2021 в 11:08 +0800, Sasasu пишет:\n> On 2021/10/26 04:32, Yura Sokolov wrote:\n> > And among others Adiantum looks best: it is fast even without hardware\n> > acceleration,\n> \n> No, AES is fast on modern high-end hardware.\n> \n> on X86 AMD 3700X\n> type 1024 bytes 8192 bytes 16384 bytes\n> aes-128-ctr 8963982.50k 11124613.88k 11509149.42k\n> aes-128-gcm 3978860.44k 4669417.10k 4732070.64k\n> aes-128-xts 7776628.39k 9073664.63k 9264617.74k\n> chacha20-poly1305 2043729.73k 2131296.36k 2141002.10k\n> \n> on ARM RK3399, A53 middle-end with AES-NI\n> type 1024 bytes 8192 bytes 16384 bytes\n> aes-128-ctr 1663857.66k 1860930.22k 1872991.57k\n> aes-128-xts 685086.38k 712906.07k 716073.64k\n> aes-128-gcm 985578.84k 1054818.30k 1056768.00k\n> chacha20-poly1305 309012.82k 318889.98k 319711.91k\n> \n> I think the baseline is the speed when using read(2) syscall on \n> /dev/zero (which is 3.6GiB/s, on ARM is 980MiB/s)\n> chacha is fast on the low-end arm, but I haven't seen any HTTPS sites \n> using chacha, including Cloudflare and Google.\n\n1. Chacha20-poly1305 includes authentication code (poly1305),\n aes-gcm also includes (GCM).\n But aes-128-(ctr,xts) doesn't.\n Therefore, Chacha should be compared with ctr,xts, not Chacha-Poly1305.\n2. 
Chacha20 has security margin x2.8: only 7 rounds from 20 are broken.\n AES-128 has security margin x1.4: broken 7 rounds from 10.\n That is why Adiantum uses Chacha12: it is still \"more secure\" than AES-128.\n\nYes, AES with AES-NI is fastest. But not so much.\n\nAnd, AES-CTR could be easily used instead of ChaCha12 in Adiantum.\nAdiantum uses ChaCha12 as a stream cipher, and any other stream cipher will\nbe ok as well with minor modifications to algorithm. \n\n> \n> On 2021/10/26 04:32, Yura Sokolov wrote:\n> >> That sounds like a great thing to think about adding ... after we get\n> >> something in that's based on XTS.\n> > Why? I see no points to do it after. Why not XTS after Adiantum?\n> >\n> > Ok, I see one: XTS is standartized.\n> :>\n> PostgreSQL even not discuss single-table key rotation or remote KMS.\n> I think it's too hard to use an encryption algorithm which openssl \n> doesn't implement.\n\nThat is argument. But, again, openssl could be used for primitives:\nAES + AES-CTR + Poly/GCM. And Adiantum like construction could be\ncomposed from them quite easily.\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 12:33:47 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/10/26 17:33, Yura Sokolov wrote:\r\n> Yes, AES with AES-NI is fastest. But not so much.\r\n> \r\n> And, AES-CTR could be easily used instead of ChaCha12 in Adiantum.\r\n> Adiantum uses ChaCha12 as a stream cipher, and any other stream cipher will\r\n> be ok as well with minor modifications to algorithm.\r\n\r\nnot so much ~= half speed.\r\n\r\nI prefer to use AES on all devices because AES is faster and more power \r\nefficient. 
using chacha only on low-end arm devices running complex \r\nprogram which most people do not have this device.\r\n\r\nI reserve my opinion on this point, but I agree with you on the rest.\r\nAnd I also agree and think it should add more algorithms. The current \r\nimplementation does not have any reserved fields, which makes any \r\nupgrade like adding a new algorithm unfeasible.\r\n\r\nOn 2021/10/26 17:33, Yura Sokolov wrote:\r\n > That is argument. But, again, openssl could be used for primitives:\r\n > AES + AES-CTR + Poly/GCM. And Adiantum like construction could be\r\n > composed from them quite easily.\r\n\r\nimplement Adiantum is a small problem (which doesn't look good, lack \r\nsecurity audits). the real problem is there are too many code path can \r\ntrigger disk IO.\r\nTDE need modify them. each code path has different behavior (align or \r\nunaligned, once or more than once). and front-end tools even not use VF \r\nlayer, they use pread with offset. TDE need fix them all. at the same \r\ntime, keep the patch small enough.\r\n\r\nI still think there need a unified IO API, not only modify \r\nBufFileDumpBuffer() and BufFileLoadBuffer(). the front-end tools also \r\nuse this API will be great. with that API, TDE can focus on block IO. \r\nthen give out a small patch.\r\n\r\nOther works can also benefit from this API, like PostgreSQL with AIO, \r\nPostgreSQL on S3 (BLKSZ=4M), PostgreSQL on PMEM(no FS) and many.", "msg_date": "Tue, 26 Oct 2021 23:47:10 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Yura Sokolov (y.sokolov@postgrespro.ru) wrote:\n> В Пн, 25/10/2021 в 12:12 -0400, Stephen Frost пишет:\n> > * Yura Sokolov (y.sokolov@postgrespro.ru) wrote:\n> > > В Чт, 21/10/2021 в 13:28 -0400, Stephen Frost пишет:\n> > > > I really don't think this is necessary. 
Similar to PageSetChecksumCopy\n> > > > and PageSetChecksumInplace, I'm sure we would have functions which are\n> > > > called in the appropriate spots to do encryption (such as 'encrypt_page'\n> > > > and 'encrypt_block' in the Cybertec patch) and folks could review those\n> > > > in relative isolation to the rest. Dealing with blocks in PG is already\n> > > > pretty well handled, the infrastructure that needs to be added is around\n> > > > handling temporary files and is being actively worked on ... if we could\n> > > > move past this debate around if we should be adding support for XTS or\n> > > > if only GCM-SIV would be accepted.\n> > > > \n> > > > .....\n> > > > \n> > > > No, the CTR approach isn't great because, as has been discussed quite a\n> > > > bit already, using the LSN as the IV means that different data might be\n> > > > re-encrypted with the same LSN and that's not an acceptable thing to\n> > > > have happen with CTR.\n> > > > \n> > > > .....\n> > > > \n> > > > We've discussed at length how using CTR for heap isn't a good idea even\n> > > > if we're using the LSN for the IV, while if we use XTS then we don't\n> > > > have the issues that CTR has with IV re-use and using the LSN (plus\n> > > > block number and perhaps other things). Nothing in what has been\n> > > > discussed here has really changed anything there that I can see and so\n> > > > it's unclear to me why we continue to go round and round with it.\n> > > > \n> > > \n> > > Instead of debatting XTS vs GCM-SIV I'd suggest Google's Adiantum [1][2]\n> > > [3][4].\n> > \n> > That sounds like a great thing to think about adding ... after we get\n> > something in that's based on XTS.\n> \n> Why? I see no points to do it after. Why not XTS after Adiantum?\n> \n> Ok, I see one: XTS is standartized.\n\nThat's certainly one aspect of it. 
It's also more easily available to\nus, and frankly the people working on this and writing the patches have\na better understanding of XTS from having looked into it. That it's\nalso more widely used is another reason.\n\n> > > It is explicitly created to solve the large-block encryption issue - disk\n> > > encryption. It is used to encrypt 4kb as a whole, but in fact has no\n> > > (practical) limit on block size: it needs near-zero modification to encrypt 1kb\n> > > or 8kb or 32kb.\n> > > \n> > > It has benefits of both XTS and GCM-SIV:\n> > > - like GCM-SIV every bit of cipher text depends on every bit of plain text\n> > > - therefore like GCM-SIV it is resistant to IV reuse: it is safe to reuse the\n> > > LSN+reloid+blocknumber tuple as IV even for hint-bit changes since every\n> > > bit of the block will change.\n> > \n> > The advantage of GCM-SIV is that it provides integrity as well as\n> > confidentiality.\n> \n> Integrity could be based on a simple non-cryptographic checksum, and it could\n> be checked after decryption. It would be impossible to intentionally change an\n> encrypted page in a way that it will pass the checksum after decryption. \n\nNo, it wouldn't be impossible when we're talking about non-cryptographic\nchecksums. That is, in fact, why you'd call them that. If it were\nimpossible (or at least utterly impractical) then you'd be able to claim\nthat it's cryptographic-level integrity validation.\n\n> Currently we have a 16bit checksum, and it is very small. But having a larger\n> checksum is orthogonal (ie doesn't bound) to having encryption.\n\nSure, but that would also require a page-format change. We've pointed\nout the downsides of that and what it would prevent in terms of\nuse-cases. That's still something that might happen but it would be a\ndifferent effort from this.\n\n> In fact, Adiantum is easily made close to a SIV construction:\n> - just leave the last 8/16 bytes zero. 
If after decryption they are zero,\n> then the integrity check passed.\n> That is because SIV and Adiantum are very similar in structure:\n> - SIV:\n> -- hash\n> -- then stream cipher\n> - Adiantum:\n> -- hash (except last 16 bytes)\n> -- then encrypt last 16 bytes with the hash,\n> -- then stream cipher\n> -- then hash.\n> If the last N (N>16) bytes are nonce + zero bytes, then \"hash, then encrypt last\n> 16 bytes with the hash\" becomes equivalent to just \"hash\", and Adiantum becomes\n> a logical equivalent of SIV.\n\nWhile I appreciate your interest in this, I don't think it makes sense\nfor us to try and implement something of our own - we're not\ncryptographers. Best is to look at published guidance and what other\nprojects have had success doing, and that's what this thread has been\nabout.\n\n> > > - like XTS it doesn't need to change the plain text format and doesn't need an\n> > > additional Nonce/Auth Code.\n> > \n> > Sure, in which case it's something that could potentially be added later\n> > as another option in the future. I don't think we'll always have just\n> > one encryption method and it's good to generally think about what it\n> > might look like to have others but I don't think it makes sense to try\n> > and get everything in all at once.\n> \n> And among others Adiantum looks best: it is fast even without hardware\n> acceleration, it provides whole-block encryption (ie every bit depends\n> on every bit) and it isn't bound to the plain-text format.\n\nAnd it could still be added later as another option if folks really want\nit to be. I've outlined why it makes sense to go with XTS first but I\ndon't mean that to imply that we'll only ever have that. Indeed, once\nwe've actually got something, adding other methods will almost certainly\nbe simpler. 
Trying to do everything from the start will make this very\ndifficult to accomplish though.\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 Oct 2021 15:43:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "\n\nOn 10/26/21 21:43, Stephen Frost wrote:\n> Greetings,\n> \n> * Yura Sokolov (y.sokolov@postgrespro.ru) wrote:\n>> ... >>\n>> Integrity could be based on a simple non-cryptographic checksum, and it could\n>> be checked after decryption. It would be impossible to intentionally change an\n>> encrypted page in a way that it will pass the checksum after decryption.\n> \n> No, it wouldn't be impossible when we're talking about non-cryptographic\n> checksums. That is, in fact, why you'd call them that. If it were\n> impossible (or at least utterly impractical) then you'd be able to claim\n> that it's cryptographic-level integrity validation.\n>\n\nYeah, our checksums are probabilistic protection against rare and random \nbitflips caused by hardware, not against an attacker in the crypto sense.\n\nTo explain why it's not enough, consider that our checksum is a uint16, i.e. \nthere are only 64k possible values. In other words, you can keep flipping \nbits in the encrypted page, and after generating about 64k variants you can \nexpect at least one collision. Yes, it's harder to get a collision with \nthe existing checksum, and compression methods that diffuse bits better \nmake it harder to get a valid page after decryption, but it's simply \nnot the same thing as crypto-level integrity.\n\nLet's not try inventing something custom, there have been enough crypto \nfailures due to smart custom stuff in the past already.\n\nBTW I'm not sure what the existing patches do, but I wonder if we should \ncalculate the checksum before or after encryption. 
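(To make the 64k point above concrete, here is a stdlib-only Python sketch — the 16-bit checksum used here is a CRC-derived toy, not PostgreSQL's actual page-checksum algorithm — that forges a random corruption which still passes such a check:

```python
import random
import zlib

def checksum16(page: bytes) -> int:
    """Toy 16-bit checksum (a stand-in for a page checksum, not PG's real one)."""
    return zlib.crc32(page) & 0xFFFF

def forge_collision(page: bytes, seed: int = 42, max_tries: int = 2_000_000) -> int:
    """Randomly corrupt `page` until the corrupted copy matches the original
    16-bit checksum; returns the number of tries (expected around 64k)."""
    rng = random.Random(seed)
    target = checksum16(page)
    for tries in range(1, max_tries + 1):
        corrupted = bytearray(page)
        for _ in range(16):                       # attacker scribbles 16 bytes
            corrupted[rng.randrange(len(corrupted))] = rng.randrange(256)
        if bytes(corrupted) != page and checksum16(bytes(corrupted)) == target:
            return tries
    raise RuntimeError("no collision found")

if __name__ == "__main__":
    page = random.Random(0).randbytes(8192)       # toy 8kB "page"
    print("collision after", forge_collision(page), "tries")
```

With only 65536 possible checksum values, a forged page typically shows up after a few tens of thousands of random corruptions — cheap for anyone with write access, which is why this is not crypto-grade integrity.)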
I'd say it should be \nafter encryption, because checksums were meant as a protection against \nissues at the storage level, so the checksum should be on what's written \nto storage, and it'd also allow offline verification of checksums etc. \n(Of course, that'd make the whole idea of relying on our checksums even \nmore futile.)\n\nNote: Maybe there are reasons why the checksum needs to be calculated \nbefore encryption, not sure.\n\n>> Currently we have a 16bit checksum, and it is very small. But having a larger\n>> checksum is orthogonal (ie doesn't bound) to having encryption.\n> \n> Sure, but that would also require a page-format change. We've pointed\n> out the downsides of that and what it would prevent in terms of\n> use-cases. That's still something that might happen but it would be a\n> different effort from this.\n> \n\n... and if such a page-format change ends up happening, it'd be fairly easy to \njust add some extra crypto data into the page header and not rely on the \ndata checksums at all.\n\n>> In fact, Adiantum is easily made close to a SIV construction:\n>> - just leave the last 8/16 bytes zero. If after decryption they are zero,\n>> then the integrity check passed.\n>> That is because SIV and Adiantum are very similar in structure:\n>> - SIV:\n>> -- hash\n>> -- then stream cipher\n>> - Adiantum:\n>> -- hash (except last 16 bytes)\n>> -- then encrypt last 16 bytes with the hash,\n>> -- then stream cipher\n>> -- then hash.\n>> If the last N (N>16) bytes are nonce + zero bytes, then \"hash, then encrypt last\n>> 16 bytes with the hash\" becomes equivalent to just \"hash\", and Adiantum becomes\n>> a logical equivalent of SIV.\n> \n> While I appreciate your interest in this, I don't think it makes sense\n> for us to try and implement something of our own - we're not\n> cryptographers. 
Best is to look at published guidance and what other\n> projects have had success doing, and that's what this thread has been\n> about.\n> \n\nYeah, I personally don't see much difference between XTS and Adiantum.\n\nThere are a bunch of benefits, but the main reason why Google developed \nit seems to be performance on low-end ARM machines (i.e. phones). Which \nis nice, but it's probably not hugely important - very few people run Pg \non such machines, especially in a performance-sensitive context.\n\nIt's true Adiantum is probably more resilient to IV reuse etc. but it's \nnot like XTS is suddenly obsolete, and it certainly doesn't solve the \nintegrity issue etc.\n\n>>>> - like XTS it doesn't need to change the plain text format and doesn't need an\n>>>> additional Nonce/Auth Code.\n>>>\n>>> Sure, in which case it's something that could potentially be added later\n>>> as another option in the future. I don't think we'll always have just\n>>> one encryption method and it's good to generally think about what it\n>>> might look like to have others but I don't think it makes sense to try\n>>> and get everything in all at once.\n>>\n>> And among others Adiantum looks best: it is fast even without hardware\n>> acceleration, it provides whole-block encryption (ie every bit depends\n>> on every bit) and it isn't bound to the plain-text format.\n>\n> And it could still be added later as another option if folks really want\n> it to be. I've outlined why it makes sense to go with XTS first but I\n> don't mean that to imply that we'll only ever have that. Indeed, once\n> we've actually got something, adding other methods will almost certainly\n> be simpler. 
That's generally a good engineering practice, as it ensures \nthings are not coupled too much. And it's not like the encryption \nmethods are expected to be super difficult.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 26 Oct 2021 23:11:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Oct 26, 2021 at 11:11:39PM +0200, Tomas Vondra wrote:\n> BTW I'm not sure what the existing patches do, but I wonder if we should\n> calculate the checksum before or after encryption. I'd say it should be\n> after encryption, because checksums were meant as a protection against\n> issues at the storage level, so the checksum should be on what's written to\n> storage, and it'd also allow offline verification of checksums etc. (Of\n> course, that'd make the whole idea of relying on our checksums even more\n> futile.)\n> \n> Note: Maybe there are reasons why the checksum needs to be calculated before\n> encryption, not sure.\n\nYes, these are the tradeoffs --- allowing offline checksum checking\nwithout requiring the key vs. giving _some_ integrity checking and\nrequiring the key.\n\n> Yeah, I personally don't see much difference between XTS and Adiantum.\n> \n> There are a bunch of benefits, but the main reason why Google developed it\n> seems to be performance on low-end ARM machines (i.e. phones). Which is\n> nice, but it's probably not hugely important - very few people run Pg on\n> such machines, especially in performance-sensitive context.\n> \n> It's true Adiantum is probably more resilient to IV reuse etc. 
but it's not\n> like XTS is suddenly obsolete, and it certainly doesn't solve the integrity\n> issue etc.\n> \n> > > > > - like XTS it doesn't need to change plain text format and doesn't need in\n> > > > > additional Nonce/Auth Code.\n> > > > \n> > > > Sure, in which case it's something that could potentially be added later\n> > > > as another option in the future. I don't think we'll always have just\n> > > > one encryption method and it's good to generally think about what it\n> > > > might look like to have others but I don't think it makes sense to try\n> > > > and get everything in all at once.\n> > > \n> > > And among others Adiantum looks best: it is fast even without hardware\n> > > acceleration, it provides whole block encryption (ie every bit depends\n> > > on every bit) and it doesn't bound to plain-text format.\n> > \n> > And it could still be added later as another option if folks really want\n> > it to be. I've outlined why it makes sense to go with XTS first but I\n> > don't mean that to imply that we'll only ever have that. Indeed, once\n> > we've actually got something, adding other methods will almost certainly\n> > be simpler. Trying to do everything from the start will make this very\n> > difficult to accomplish though.\n> > \n> \n...\n> So maybe the best thing is simply to roll with both - design the whole\n> feature in a way that allows selecting the encryption scheme, with two\n> options. That's generally a good engineering practice, as it ensures things\n> are not coupled too much. And it's not like the encryption methods are\n> expected to be super difficult.\n\nI am not in favor of adding additional options to this feature unless we\ncan explain why users should choose one over the other. 
There is also\nthe problem of OpenSSL not supporting Adiantum.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 17:39:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Tue, Oct 26, 2021 at 11:11:39PM +0200, Tomas Vondra wrote:\n> > BTW I'm not sure what the existing patches do, but I wonder if we should\n> > calculate the checksum before or after encryption. I'd say it should be\n> > after encryption, because checksums were meant as a protection against\n> > issues at the storage level, so the checksum should be on what's written to\n> > storage, and it'd also allow offline verification of checksums etc. (Of\n> > course, that'd make the whole idea of relying on our checksums even more\n> > futile.)\n> > \n> > Note: Maybe there are reasons why the checksum needs to be calculated before\n> > encryption, not sure.\n> \n> Yes, these are the tradeoffs --- allowing offline checksum checking\n> without requiring the key vs. giving _some_ integrity checking and\n> requiring the key.\n\nI'm in favor of calculating the checksum before encrypting as that will\nstill catch the storage level bit-flips that it was implemented to\naddress in the first place and will also make it so that we're very\nlikely to realize we have an incorrect key before trying to do anything\nwith the page. 
That it might also serve as a deterrent against\nattackers trying to randomly flip bits in a page has perhaps some value\nbut without a cryptographic-level hash it isn't really enough to prevent\nagainst an attacker who has write access to a page.\n\nAny tools which include checking the checksum on pages already have to\ndeal with clusters where checksums aren't enabled anyway and I wouldn't\nexpect it to generally be an issue for tools which want to validate\nchecksums on an encrypted cluster to be able to have the appropriate\nkey(s) necessary for doing so and to be able to perform the decryption\nto do the check. We can certainly make pg_checksums do this and I don't\nsee it as an issue for pgbackrest, as two examples that I've\nspecifically thought about.\n\n> > > > > > - like XTS it doesn't need to change plain text format and doesn't need in\n> > > > > > additional Nonce/Auth Code.\n> > > > > \n> > > > > Sure, in which case it's something that could potentially be added later\n> > > > > as another option in the future. I don't think we'll always have just\n> > > > > one encryption method and it's good to generally think about what it\n> > > > > might look like to have others but I don't think it makes sense to try\n> > > > > and get everything in all at once.\n> > > > \n> > > > And among others Adiantum looks best: it is fast even without hardware\n> > > > acceleration, it provides whole block encryption (ie every bit depends\n> > > > on every bit) and it doesn't bound to plain-text format.\n> > > \n> > > And it could still be added later as another option if folks really want\n> > > it to be. I've outlined why it makes sense to go with XTS first but I\n> > > don't mean that to imply that we'll only ever have that. Indeed, once\n> > > we've actually got something, adding other methods will almost certainly\n> > > be simpler. 
Trying to do everything from the start will make this very\n> > > difficult to accomplish though.\n> ...\n> > So maybe the best thing is simply to roll with both - design the whole\n> > feature in a way that allows selecting the encryption scheme, with two\n> > options. That's generally a good engineering practice, as it ensures things\n> > are not coupled too much. And it's not like the encryption methods are\n> > expected to be super difficult.\n> \n> I am not in favor of adding additional options to this feature unless we\n> can explain why users should choose one over the other. There is also\n> the problem of OpenSSL not supporting Adiantum.\n\nI can understand the general idea that we should be sure to engineer\nthis in a way that multiple methods can be used, as surely one day folks\nwill say that AES128 isn't acceptable any more. In terms of what we'll\ndo from the start, I would think providing the options of AES128 and\nAES256 would be good to ensure that we have the bits covered to support\nmultiple methods and I don't think that would put us into a situation of\nhaving to really explain which to use to users (we don't for pgcrypto\nanyway, as an example). 
I agree that we shouldn't be looking at adding\nin a whole new crypto library for this though, that's a large and\nindependent effort (see the work on NSS happening nearby).\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 14:24:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2021/11/2 02:24, Stephen Frost wrote:\r\n> I can understand the general idea that we should be sure to engineer\r\n> this in a way that multiple methods can be used, as surely one day folks\r\n> will say that AES128 isn't acceptable any more.\r\nCheers!", "msg_date": "Tue, 2 Nov 2021 14:22:39 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Nov 1, 2021 at 02:24:36PM -0400, Stephen Frost wrote:\n> I can understand the general idea that we should be sure to engineer\n> this in a way that multiple methods can be used, as surely one day folks\n> will say that AES128 isn't acceptable any more. 
In terms of what we'll\n> do from the start, I would think providing the options of AES128 and\n> AES256 would be good to ensure that we have the bits covered to support\n> multiple methods and I don't think that would put us into a situation of\n\nMy patch supports AES128, AES192, and AES256.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 2 Nov 2021 17:49:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Nov 1, 2021 at 02:24:36PM -0400, Stephen Frost wrote:\n> > I can understand the general idea that we should be sure to engineer\n> > this in a way that multiple methods can be used, as surely one day folks\n> > will say that AES128 isn't acceptable any more. In terms of what we'll\n> > do from the start, I would think providing the options of AES128 and\n> > AES256 would be good to ensure that we have the bits covered to support\n> > multiple methods and I don't think that would put us into a situation of\n> \n> My patch supports AES128, AES192, and AES256.\n\nRight, so we're already showing that it's flexible to allow for multiple\nencryption methods. 
If folks want more then it's on them to research\nhow they'd work exactly and explain why they'd be useful to add and how\nusers might make an informed choice (though, again, I don't think we\nneed to go *too* deep into that as we don't for, eg, pgcrypto, and I\ndon't believe we've ever heard people complain about that).\n\nThanks,\n\nStephen", "msg_date": "Wed, 3 Nov 2021 14:45:22 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Nov 1, 2021 at 02:24:36PM -0400, Stephen Frost wrote:\n> I can understand the general idea that we should be sure to engineer\n> this in a way that multiple methods can be used, as surely one day folks\n> will say that AES128 isn't acceptable any more. In terms of what we'll\n> do from the start, I would think providing the options of AES128 and\n> AES256 would be good to ensure that we have the bits covered to support\n> multiple methods and I don't think that would put us into a situation of\n> having to really explain which to use to users (we don't for pgcrypto\n> anyway, as an example). 
I agree that we shouldn't be looking at adding\n> in a whole new crypto library for this though, that's a large and\n> independent effort (see the work on NSS happening nearby).\n\nSince it has been two weeks since the last activity on this thread, I\nhave updated the TDE wiki to reflect the conclusions and discussions:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 12 Nov 2021 13:13:07 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Tue, Oct 19, 2021 at 02:54:56PM -0400, Stephen Frost wrote:\n> > > * Sasasu (i@sasa.su) wrote:\n> > > > A unified block-based I/O API sounds great. Has anyone tried to do this\n> > > > before? It would be nice if the front-end tools could also use these API.\n> > > \n> > > The TDE patch from Cybertec did go down this route, but the API ended up\n> > > being rather different which menat a lot of changes in other parts of\n> > > the system. If we can get a block-based temporary file method that\n> > > maintains more-or-less the same API, that'd be great, but I'm not sure\n> > > that we can really do so and I am not entirely convinced that we should\n> > > make the TDE effort depend on an otherwise quite independent effort of\n> > > making all temp files usage be block based.\n> > \n> > Uh, I thought people felt the Cybertec patch was too large and that a\n> > unified API for temporary file I/O-encryption was a requirement. 
Would\n> > a CTR-streaming-encryption API for temporary tables be easier to\n> > implement?\n> \n> Having a unified API for temporary file I/O (that could then be extended\n> to provide encryption) would definitely help with reducing the size of a\n> TDE patch. The approach used in the Cybertec patch was to make\n> temporary file access block based, but the way that was implemented was\n> with an API different from pread/pwrite and that meant changing pretty\n> much all of the call sites for temporary file access, which naturally\n> resulted in changes in a lot of otherwise unrelated code.\n\nThe changes to buffile.c are not trivial, but we haven't really changed the\nAPI, as long as you mean BufFileCreateTemp(), BufFileWrite(), BufFileRead().\n\nWhat our patch affects on the caller side is that BufFileOpenTransient(),\nBufFileCloseTransient(), BufFileWriteTransient() and BufFileReadTransient()\nreplace OpenTransientFile(), CloseTransientFile(), write()/fwrite() and\nread()/fread() respectively in reorderbuffer.c and in pgstat.c. 
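As an illustration of the shape such a unified block-based API could take (the Python names below are hypothetical stand-ins, not the actual C API of any patch):

```python
import os
import tempfile

BLCKSZ = 8192

class BufFile:
    """Hypothetical sketch of a block-based buffered temp-file wrapper
    with an encryption hook; names are illustrative only."""

    def __init__(self, encrypt=None, decrypt=None):
        fd, self.path = tempfile.mkstemp()
        self.fd = fd
        # identity transforms when no cipher is configured
        self.encrypt = encrypt or (lambda blkno, data: data)
        self.decrypt = decrypt or (lambda blkno, data: data)

    def write_block(self, blkno: int, data: bytes) -> None:
        assert len(data) == BLCKSZ
        os.pwrite(self.fd, self.encrypt(blkno, data), blkno * BLCKSZ)

    def read_block(self, blkno: int) -> bytes:
        raw = os.pread(self.fd, BLCKSZ, blkno * BLCKSZ)
        return self.decrypt(blkno, raw)

    def close(self) -> None:
        os.close(self.fd)
        os.unlink(self.path)

# Toy XOR "cipher" keyed by block number, purely to show where a real
# cipher (e.g. AES-XTS with blkno folded into the tweak) would plug in.
xor = lambda blkno, data: bytes(b ^ (blkno & 0xFF) for b in data)

f = BufFile(encrypt=xor, decrypt=xor)
f.write_block(0, b"a" * BLCKSZ)
f.write_block(7, b"b" * BLCKSZ)
assert f.read_block(0) == b"a" * BLCKSZ
assert f.read_block(7) == b"b" * BLCKSZ
f.close()
```

The point is that callers deal only in block numbers, so encryption can be plugged in at exactly one place instead of at every read()/write() call site.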
These changes\nbecome a little bit less invasive in TDE 1.1 than they were in 1.0, see [1],\nsee the diffs attached.\n\n(I expect that [2] will get committed someday so that the TDE feature won't\naffect pgstat.c in the future at all.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n[1] https://github.com/cybertec-postgresql/postgres/tree/PG_14_TDE_1_1\n\n[2] https://commitfest.postgresql.org/34/1708/", "msg_date": "Mon, 29 Nov 2021 08:37:31 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Mon, Nov 29, 2021 at 08:37:31AM +0100, Antonin Houska wrote:\n> The changes to buffile.c are not trivial, but we haven't really changed the\n> API, as long as you mean BufFileCreateTemp(), BufFileWrite(), BufFileRead().\n> \n> What our patch affects on the caller side is that BufFileOpenTransient(),\n> BufFileCloseTransient(), BufFileWriteTransient() and BufFileReadTransient()\n> replace OpenTransientFile(), CloseTransientFile(), write()/fwrite() and\n> read()/fread() respectively in reorderbuffer.c and in pgstat.c. These changes\n> become a little bit less invasive in TDE 1.1 than they were in 1.0, see [1],\n> see the diffs attached.\n\nWith pg_upgrade modified to preserve the relfilenode, tablespace oid, and\ndatabase oid, we are now closer to implementing cluster file encryption\nusing XTS. I think we have a few steps left:\n\n1. modify temporary file I/O to use a more centralized API\n2. modify the existing cluster file encryption patch to use XTS with a\n IV that uses more than the LSN\n3. add XTS regression test code like CTR\n4. 
create WAL encryption code using CTR\n\nIf we can do #1 in PG 15 I think I can have #2 ready for PG 16 in July.\nThe feature wiki page is:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\nDo people want to advance this feature forward?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 17:57:18 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Nov 29, 2021 at 08:37:31AM +0100, Antonin Houska wrote:\n> > The changes to buffile.c are not trivial, but we haven't really changed the\n> > API, as long as you mean BufFileCreateTemp(), BufFileWrite(), BufFileRead().\n> > \n> > What our patch affects on the caller side is that BufFileOpenTransient(),\n> > BufFileCloseTransient(), BufFileWriteTransient() and BufFileReadTransient()\n> > replace OpenTransientFile(), CloseTransientFile(), write()/fwrite() and\n> > read()/fread() respectively in reorderbuffer.c and in pgstat.c. These changes\n> > become a little bit less invasive in TDE 1.1 than they were in 1.0, see [1],\n> > see the diffs attached.\n> \n> With pg_upgrade modified to preserve the relfilenode, tablespace oid, and\n> database oid, we are now closer to implementing cluster file encryption\n> using XTS. I think we have a few steps left:\n> \n> 1. modify temporary file I/O to use a more centralized API\n> 2. modify the existing cluster file encryption patch to use XTS with a\n> IV that uses more than the LSN\n> 3. add XTS regression test code like CTR\n> 4. 
create WAL encryption code using CTR\n> \n> If we can do #1 in PG 15 I think I can have #2 ready for PG 16 in July.\n> The feature wiki page is:\n> \n> \thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n> \n> Do people want to advance this feature forward?\n\nI confirm that we (Cybertec) do and that we're ready to spend more time on the\ncommunity implementation.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 01 Feb 2022 07:45:06 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Feb 1, 2022 at 07:45:06AM +0100, Antonin Houska wrote:\n> > With pg_upgrade modified to preserve the relfilenode, tablespace oid, and\n> > database oid, we are now closer to implementing cluster file encryption\n> > using XTS. I think we have a few steps left:\n> > \n> > 1. modify temporary file I/O to use a more centralized API\n> > 2. modify the existing cluster file encryption patch to use XTS with a\n> > IV that uses more than the LSN\n> > 3. add XTS regression test code like CTR\n> > 4. 
create WAL encryption code using CTR\n> > \n> > If we can do #1 in PG 15 I think I can have #2 ready for PG 16 in July.\n> > The feature wiki page is:\n> > \n> > \thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n> > \n> > Do people want to advance this feature forward?\n> \n> I confirm that we (Cybertec) do and that we're ready to spend more time on the\n> community implementation.\n\nWell, I sent an email a week ago asking if people want to advance this\nfeature forward, and so far you are the only person to reply, which I\nthink means there isn't enough interest in this feature to advance it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 1 Feb 2022 12:50:46 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Greetings,\n\nOn Tue, Feb 1, 2022 at 12:50 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Feb 1, 2022 at 07:45:06AM +0100, Antonin Houska wrote:\n> > > With pg_upgrade modified to preserve the relfilenode, tablespace oid,\n> and\n> > > database oid, we are now closer to implementing cluster file encryption\n> > > using XTS. I think we have a few steps left:\n> > >\n> > > 1. modify temporary file I/O to use a more centralized API\n> > > 2. modify the existing cluster file encryption patch to use XTS with a\n> > > IV that uses more than the LSN\n> > > 3. add XTS regression test code like CTR\n> > > 4. 
create WAL encryption code using CTR\n> > >\n> > > If we can do #1 in PG 15 I think I can have #2 ready for PG 16 in July.\n> > > The feature wiki page is:\n> > >\n> > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n> > >\n> > > Do people want to advance this feature forward?\n> >\n> > I confirm that we (Cybertec) do and that we're ready to spend more time\n> on the\n> > community implementation.\n>\n> Well, I sent an email a week ago asking if people want to advance this\n> feature forward, and so far you are the only person to reply, which I\n> think means there isn't enough interest in this feature to advance it.\n\n\nThis confuses me. Clearly there’s plenty of interest, but asking on hackers\nin a deep old sub thread isn’t a terribly good way to judge that. Yet even\nwhen there is an active positive response, you argue that there isn’t\nenough.\n\nIn general, I agree that the items you laid out are what the next steps\nare. There are patches for some of those items already too and some of\nthem, such as consolidating the temporary file access, are beneficial even\nwithout the potential to use them for encryption.\n\nInstead of again asking if people want this feature (many, many, many do),\nI’d encourage Antonin to start a new thread with the patch to do the\ntemporary file access consolidation which then provides a buffered access\nand reduces the number of syscalls and work towards getting that committed,\nideally as part of this release.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Feb 2022 13:07:36 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On Tue, Feb 1, 2022 at 01:07:36PM -0500, Stephen Frost wrote:\n> Well, I sent an email a week ago asking if people want to advance this\n> feature forward, and so far you are the only person to reply, which I\n> think means there isn't enough interest in this feature to advance it.\n> \n> This confuses me. Clearly there’s plenty of interest, but asking on hackers in\n> a deep old sub thread isn’t a terribly good way to judge that. Yet even when\n> there is an active positive response, you argue that there isn’t enough.\n\nUh, I have been lead down the path of disinterest/confusion on this\nfeature enough that I am looking for positive feedback on every new step\nso I don't get stuck out in front with insufficient support. Yes, only\none person replying is enough for me to say there isn't interest. I\nguess I now have two. My email was short and ended with a question so I\nthought the people interested in the steps I suggested would give some\nkind of feedback --- I certainly try to reply to all emails on this\ntopic.\n\n> In general, I agree that the items you laid out are what the next steps are.\n> There are patches for some of those items already too and some of them, such as\n> consolidating the temporary file access, are beneficial even without the\n> potential to use them for encryption.\n\nGreat.
I can update my patch for July consideration.\n\n> Instead of again asking if people want this feature (many, many, many do), I’d\n> encourage Antonin to start a new thread with the patch to do the temporary file\n> access consolidation which then provides a buffered access and reduces the\n> number of syscalls and work towards getting that committed, ideally as part of\n> this release.\n\nYes, agreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 1 Feb 2022 13:27:03 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Hi,\n\nOn 2022-02-01 13:27:03 -0500, Bruce Momjian wrote:\n> On Tue, Feb 1, 2022 at 01:07:36PM -0500, Stephen Frost wrote:\n> > Well, I sent an email a week ago asking if people want to advance this\n> > feature forward, and so far you are the only person to reply, which I\n> > think means there isn't enough interest in this feature to advance it.\n> > \n> > This confuses me. Clearly there’s plenty of interest, but asking on hackers in\n> > a deep old sub thread isn’t a terribly good way to judge that.  Yet even when\n> > there is an active positive response, you argue that there isn’t enough.\n> \n> Uh, I have been lead down the path of disinterest/confusion on this\n> feature enough that I am looking for positive feedback on every new step\n> so I don't get stuck out in front with insufficient support. Yes, only\n> one person replying is enough for me to say there isn't interest. I\n> guess I now have two. My email was short and ended with a question so I\n> thought the people interested in the steps I suggested would give some\n> kind of feedback --- I certainly try to reply to all emails on this\n> topic.\n\nPersonally I can't keep up with all threads on -hackers all the\ntime. 
Especially not long and at times very busy threads. So I agree with\nStephen that it's not saying much whether / not people react to an email deep\nin a thread.\n\n\n> > Instead of again asking if people want this feature (many, many, many do), I’d\n> > encourage Antonin to start a new thread with the patch to do the temporary file\n> > access consolidation which then provides a buffered access and reduces the\n> > number of syscalls and work towards getting that committed, ideally as part of\n> > this release.\n\nI think it is quite unlikely that patches of that invasiveness can be merged\nthis release.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 1 Feb 2022 10:36:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 01, 2022 at 01:07:36PM -0500, Stephen Frost wrote:\n> On Tue, Feb 1, 2022 at 12:50 Bruce Momjian <bruce@momjian.us> wrote:\n> > On Tue, Feb 1, 2022 at 07:45:06AM +0100, Antonin Houska wrote:\n> > > > With pg_upgrade modified to preserve the relfilenode, tablespace\n> > > > oid, and database oid, we are now closer to implementing cluster\n> > > > file encryption using XTS. I think we have a few steps left:\n> > > >\n> > > > 1. modify temporary file I/O to use a more centralized API\n> > > > 2. modify the existing cluster file encryption patch to use XTS with a\n> > > > IV that uses more than the LSN\n> > > > 3. add XTS regression test code like CTR\n> > > > 4. 
create WAL encryption code using CTR\n> > > >\n> > > > If we can do #1 in PG 15 I think I can have #2 ready for PG 16 in July.\n> > > > The feature wiki page is:\n> > > >\n> > > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n> > > >\n> > > > Do people want to advance this feature forward?\n> > >\n> > > I confirm that we (Cybertec) do and that we're ready to spend more\n> > > time on the community implementation.\n> >\n> > Well, I sent an email a week ago asking if people want to advance this\n> > feature forward, and so far you are the only person to reply, which I\n> > think means there isn't enough interest in this feature to advance it.\n> \n> This confuses me. Clearly there’s plenty of interest, but asking on hackers\n> in a deep old sub thread isn’t a terribly good way to judge that. Yet even\n> when there is an active positive response, you argue that there isn’t\n> enough.\n\nEven more so because not Antonin not only replied as an individual, but\nin the name of a whole company developing Postgres in general and TDE in\nparticular.\n\n\nMichael\n\n-- \nMichael Banck\nTeamleiter PostgreSQL-Team\nProjektleiter\nTel.: +49 2166 9901-171\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Geoff Richardson, Peter Lilley\n\nOur handling of personal data is subject to the\nfollowing provisions: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Tue, 1 Feb 2022 23:44:41 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" }, { "msg_contents": "On 2022/2/2 01:50, Bruce Momjian wrote:\n> Well, I sent an email a week ago asking if people want to advance this\n> feature forward, and so far you are the only person to reply, which I\n> think means there isn't enough interest in this feature to advance it.\n\nI am still focused on this thread.\n\nAnd I have a small patch to solve the current buffile problem.\nhttps://www.postgresql.org/message-id/a859a753-70f2-bb17-6830-19dbcad11c17%40sasa.su", "msg_date": "Thu, 3 Feb 2022 14:50:22 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": false, "msg_subject": "Re: XTS cipher mode for cluster file encryption" } ]
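[Editor's note between threads: the step list above pairs XTS for relation files with CTR for WAL, and the discussion stresses that a CTR IV must never repeat under one key, which is why an IV built only from the LSN is not considered enough. The sketch below is a toy illustration of that property. It uses SHA-256 as a stand-in keystream PRF so it needs only the Python standard library; it is not AES-CTR and not PostgreSQL code, and every name in it is hypothetical.]

```python
import hashlib

def ctr_keystream(key: bytes, iv: bytes, nbytes: int) -> bytes:
    # Toy CTR mode: keystream block i = PRF(key || iv || counter_i).
    # A real implementation would use AES-CTR; SHA-256 stands in here
    # so the sketch is standard-library only.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_crypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation in CTR mode.
    keystream = ctr_keystream(key, iv, len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

key = b"k" * 32
# If two different writes ever reuse the same (key, IV) pair, the XOR of
# the two ciphertexts equals the XOR of the two plaintexts, leaking
# plaintext structure:
rec_a, rec_b = b"first record ", b"second record"
ct_a = xor_crypt(key, b"same-iv", rec_a)
ct_b = xor_crypt(key, b"same-iv", rec_b)
leak = bytes(x ^ y for x, y in zip(ct_a, ct_b))
assert leak == bytes(x ^ y for x, y in zip(rec_a, rec_b))
# The round trip still works, of course:
assert xor_crypt(key, b"same-iv", ct_a) == rec_a
```

The design point the thread keeps returning to is that the keystream position must be unique for every byte ever encrypted under the WAL key, which is what the "IV that uses more than the LSN" item is about.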
[ { "msg_contents": "Hi, all\nI have some doubts about the request xlog streaming startpoint in WaitForWALToBecomeAvailable(). In this function, RecPtr is the endpoint lsn we are waiting for, tliRecPtr is the position of the WAL record we are interested in, I want to know why use RecPtr rather than tliRecPtr as the startpoint when call RequestXLogStreaming, although the start position will be set as the beginning of the corresponding segment in RequestXLogStreaming.\n\nThanks & Best Regard", "msg_date": "Thu, 14 Oct 2021 18:03:06 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?U29tZSBkb3VidHMgYWJvdXQgc3RyZWFtaW5nIHN0YXJ0cG9pbnQgaW4gV2FpdEZvcldBTFRv?=\n =?UTF-8?B?QmVjb21lQXZhaWxhYmxlKCk=?=" } ]
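[Editor's note after the thread: part of the answer is already visible in the question itself, since RequestXLogStreaming rounds whatever start LSN it receives down to the beginning of the containing WAL segment. Whenever RecPtr and tliRecPtr fall inside the same segment, which is the common case, both choices therefore produce the identical streaming request. A standard-library sketch of that rounding; the constant and function names are illustrative, not the server's code.]

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # PostgreSQL's default 16 MB WAL segment size

def segment_start(lsn: int, seg_size: int = WAL_SEGMENT_SIZE) -> int:
    # RequestXLogStreaming-style alignment: always stream from the start
    # of the segment so the receiver writes only whole segment files.
    return lsn - (lsn % seg_size)

# Two nearby positions, e.g. the record of interest and the endpoint being
# waited for, land in the same segment and round to the same request:
rec_ptr = 0x1_0040_0028      # endpoint LSN being waited for
tli_rec_ptr = 0x1_0040_0010  # LSN of the record of interest
assert segment_start(rec_ptr) == segment_start(tli_rec_ptr) == 0x1_0000_0000
```

This only sketches the alignment; the timeline bookkeeping in the real WaitForWALToBecomeAvailable() involves more state than shown here.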
[ { "msg_contents": "Hi All,\n\nPublisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n\"SQL, DMY\", the logical replication is not working...\n\nFrom Publisher:\npostgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\nINSERT 0 2\n\nGetting below error in the subscriber log file,\n2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\nof range: \"07/18/1036\"\n2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\ndifferent \"datestyle\" setting.\n\nIs this an expected behavior?\n\nThanks & Regards\nSadhuPrasad\nhttp://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 14 Oct 2021 15:48:22 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, Oct 14, 2021 at 3:48 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>\n> Hi All,\n>\n> Publisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n> \"SQL, DMY\", the logical replication is not working...\n>\n> From Publisher:\n> postgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\n> INSERT 0 2\n>\n> Getting below error in the subscriber log file,\n> 2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\n> of range: \"07/18/1036\"\n> 2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\n> different \"datestyle\" setting.\n>\n> Is this an expected behavior?\n\nLooks like a problem to me. I think for fixing this, on the logical\nreplication connection always set the subscriber's DateStyle; with that\nthe walsender will always send the data in the same DateStyle that the\nworker understands, and then we are good.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 17:19:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical
Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, 14 Oct 2021 at 19:49, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Thu, Oct 14, 2021 at 3:48 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>>\n>> Hi All,\n>>\n>> Publisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n>> \"SQL, DMY\", the logical replication is not working...\n>>\n>> From Publisher:\n>> postgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\n>> INSERT 0 2\n>>\n>> Getting below error in the subscriber log file,\n>> 2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\n>> of range: \"07/18/1036\"\n>> 2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\n>> different \"datestyle\" setting.\n>>\n>> Is this an expected behavior?\n>\n> Looks like a problem to me, I think for fixing this, on logical\n> replication connection always set subscriber's DateStlyle, with that\n> the walsender will always send the data in the same DateStyle that\n> worker understands and then we are good.\n\nRight! 
Attached fix it.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sat, 16 Oct 2021 22:42:02 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Sat, 16 Oct 2021 at 22:42, Japin Li <japinli@hotmail.com> wrote:\n> On Thu, 14 Oct 2021 at 19:49, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> On Thu, Oct 14, 2021 at 3:48 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>>>\n>>> Hi All,\n>>>\n>>> Publisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n>>> \"SQL, DMY\", the logical replication is not working...\n>>>\n>>> From Publisher:\n>>> postgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\n>>> INSERT 0 2\n>>>\n>>> Getting below error in the subscriber log file,\n>>> 2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\n>>> of range: \"07/18/1036\"\n>>> 2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\n>>> different \"datestyle\" setting.\n>>>\n>>> Is this an expected behavior?\n>>\n>> Looks like a problem to me, I think for fixing this, on logical\n>> replication connection always set subscriber's DateStlyle, with that\n>> the walsender will always send the data in the same DateStyle that\n>> worker understands and then we are good.\n>\n> Right! Attached fix it.\n\nAdd a test case in subscription/t/100_bugs.pl. Please consider the v2 patch\nfor review.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sun, 17 Oct 2021 08:37:39 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "> Add a test case in subscription/t/100_bugs.pl. 
Please consider the v2 patch\n> for review.\n>\n\nReviewed and tested the patch, it works fine... There are some\ntrailing spaces present in the newly added code lines, which needs to\nbe corrected...\nDoing some further testing with different datestyles, will update further...\n\nThanks & Regards\nSadhuPrasad\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Oct 2021 08:44:24 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, Oct 14, 2021 at 8:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 14, 2021 at 3:48 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> >\n> > Hi All,\n> >\n> > Publisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n> > \"SQL, DMY\", the logical replication is not working...\n> >\n> > From Publisher:\n> > postgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\n> > INSERT 0 2\n> >\n> > Getting below error in the subscriber log file,\n> > 2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\n> > of range: \"07/18/1036\"\n> > 2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\n> > different \"datestyle\" setting.\n> >\n> > Is this an expected behavior?\n>\n> Looks like a problem to me, I think for fixing this, on logical\n> replication connection always set subscriber's DateStlyle, with that\n> the walsender will always send the data in the same DateStyle that\n> worker understands and then we are good.\n\n+1\n\nProbably the same is true for IntervalStyle? 
If the publisher sets\n'sql_standard', the subscriber sets 'postgres', and an interval value\n'-1 11:22:33' is inserted, these two nodes have different data:\n\n* Publisher\n=# set intervalstyle to 'postgres'; select * from test;\n i\n-------------------\n -1 days -11:22:33\n(1 row)\n\n* Subscriber\n=# set intervalstyle to 'postgres'; select * from test;\n i\n-------------------\n -1 days +11:22:33\n(1 row)\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:26:48 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> On Thu, Oct 14, 2021 at 8:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> Looks like a problem to me, I think for fixing this, on logical\n>> replication connection always set subscriber's DateStlyle, with that\n>> the walsender will always send the data in the same DateStyle that\n>> worker understands and then we are good.\n\n> +1\n\nAn alternative that wouldn't require a network round trip is for the\npublisher to set its own datestyle to ISO/YMD. I'm pretty sure that\nwill be interpreted correctly regardless of the receiver's datestyle.\n\n> Probably the same is true for IntervalStyle?\n\nNot sure if an equivalent solution applies to intervals ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:35:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "I wrote:\n> An alternative that wouldn't require a network round trip is for the\n> publisher to set its own datestyle to ISO/YMD. I'm pretty sure that\n> will be interpreted correctly regardless of the receiver's datestyle.\n\nAh ... 
see postgres_fdw's set_transmission_modes(). I think we want\nto copy that logic not invent some other way to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:41:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Sun, Oct 17, 2021 at 11:41:35PM -0400, Tom Lane wrote:\n> Ah ... see postgres_fdw's set_transmission_modes(). I think we want\n> to copy that logic not invent some other way to do it.\n\ndblink.c has something similar as of applyRemoteGucs(), except that it\ndoes not do extra_float_digits. It would be nice to avoid more\nduplication for those things, at least on HEAD. On the top of my\nhead, don't we have something similar for parallel workers when\npassing down GUCs from the leader?\n--\nMichael", "msg_date": "Mon, 18 Oct 2021 12:59:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Mon, 18 Oct 2021 at 11:14, Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>> Add a test case in subscription/t/100_bugs.pl. Please consider the v2 patch\n>> for review.\n>>\n>\n> Reviewed and tested the patch, it works fine... There are some\n> trailing spaces present in the newly added code lines, which needs to\n> be corrected...\n> Doing some further testing with different datestyles, will update further...\n>\n\nThanks for your review and test! 
As Tom Lane said, postgres_fdw has similar\nlogic; I will update the patch later.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:03:53 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Mon, 18 Oct 2021 at 11:59, Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Oct 17, 2021 at 11:41:35PM -0400, Tom Lane wrote:\n>> Ah ... see postgres_fdw's set_transmission_modes(). I think we want\n>> to copy that logic not invent some other way to do it.\n>\n> dblink.c has something similar as of applyRemoteGucs(), except that it\n> does not do extra_float_digits. It would be nice to avoid more\n> duplication for those things, at least on HEAD. On the top of my\n> head, don't we have something similar for parallel workers when\n> passing down GUCs from the leader?\n\nSince it will be used in more than one place, IMO we can implement it in core.\nAny thoughts?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:10:34 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Mon, 18 Oct 2021 at 11:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n>> An alternative that wouldn't require a network round trip is for the\n>> publisher to set its own datestyle to ISO/YMD. I'm pretty sure that\n>> will be interpreted correctly regardless of the receiver's datestyle.\n>\n> Ah ... see postgres_fdw's set_transmission_modes(). I think we want\n> to copy that logic not invent some other way to do it.\n>\n\nThanks for the reminder. As Michael Paquier said, dblink also uses\nsimilar logic.
I will read them and then update the patch.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:13:37 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Mon, 18 Oct 2021 at 11:59, Michael Paquier <michael@paquier.xyz> wrote:\n>> dblink.c has something similar as of applyRemoteGucs(), except that it\n>> does not do extra_float_digits.
It would be nice to avoid more\n>>> duplication for those things, at least on HEAD. On the top of my\n>>> head, don't we have something similar for parallel workers when\n>>> passing down GUCs from the leader?\n>\n>> Since it will be used in more than one places. IMO, we can implement it in core.\n>> Any thoughts?\n>\n> It's not going to be the same code everywhere. A logrep sender won't\n> have a need to save-and-restore the settings like postgres_fdw does,\n\nThanks for your explanation. Yeah, we do not need reset the settings in\nlogical replication.\n\n> AFAICS. Also, now that I look at it, dblink is doing the opposite\n> thing of absorbing the sender's values.\n>\n\nSorry I misunderstand. You are right, the dblink applies the remote\nserver's settings to local server.\n\n\nAttached v3 patch modify the settings on sender as you suggest.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Mon, 18 Oct 2021 16:03:26 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Mon, 18 Oct 2021 at 11:26, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Oct 14, 2021 at 8:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Thu, Oct 14, 2021 at 3:48 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>> >\n>> > Hi All,\n>> >\n>> > Publisher 'DateStyle' is set as \"SQL, MDY\", whereas in Subscriber as\n>> > \"SQL, DMY\", the logical replication is not working...\n>> >\n>> > From Publisher:\n>> > postgres=# INSERT INTO calendar VALUES ('07-18-1036', '1'), ('05-15-1135', '1');\n>> > INSERT 0 2\n>> >\n>> > Getting below error in the subscriber log file,\n>> > 2021-10-14 00:59:23.067 PDT [38262] ERROR: date/time field value out\n>> > of range: \"07/18/1036\"\n>> > 2021-10-14 00:59:23.067 PDT [38262] HINT: Perhaps you need a\n>> > different \"datestyle\" setting.\n>> 
>\n>> > Is this an expected behavior?\n>>\n>> Looks like a problem to me, I think for fixing this, on logical\n>> replication connection always set subscriber's DateStlyle, with that\n>> the walsender will always send the data in the same DateStyle that\n>> worker understands and then we are good.\n>\n> +1\n>\n> Probably the same is true for IntervalStyle? If the publisher sets\n> 'sql_standard', the subscriber sets 'postgres', and an interval value\n> '-1 11:22:33' is inserted, these two nodes have different data:\n>\n> * Publisher\n> =# set intervalstyle to 'postgres'; select * from test;\n> i\n> -------------------\n> -1 days -11:22:33\n> (1 row)\n>\n> * Subscriber\n> =# set intervalstyle to 'postgres'; select * from test;\n> i\n> -------------------\n> -1 days +11:22:33\n> (1 row)\n>\n\nI attached v3 patch that set IntervalStyle to 'postgres' when the\nserver backend is walsender, and this problem has gone.\n\nI test that set IntervalStyle to 'sql_standard' on publisher and\n'iso_8601' on subscriber, it works fine.\n\nPlease try v3 patch and let me know if they work as unexpected.\nThanks in advance.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 18 Oct 2021 16:11:05 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n\n> I attached v3 patch that set IntervalStyle to 'postgres' when the\n> server backend is walsender, and this problem has gone.\n\n> I test that set IntervalStyle to 'sql_standard' on publisher and\n> 'iso_8601' on subscriber, it works fine.\n\n> Please try v3 patch and let me know if they work as unexpected.\n> Thanks in advance.\n\nI think the idea of setting the standard DateStyle and the\nIntervalStyle on the walsender process looks fine to me. 
As this will\navoid extra network round trips as Tom mentioned.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Oct 2021 14:57:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>> I attached v3 patch that set IntervalStyle to 'postgres' when the\n>> server backend is walsender, and this problem has gone.\n>\n>> I test that set IntervalStyle to 'sql_standard' on publisher and\n>> 'iso_8601' on subscriber, it works fine.\n>\n>> Please try v3 patch and let me know if they work as unexpected.\n>> Thanks in advance.\n>\n> I think the idea of setting the standard DateStyle and the\n> IntervalStyle on the walsender process looks fine to me. 
As this will\n> avoid extra network round trips as Tom mentioned.\n\nAfter some testing, I find we should also set extra_float_digits to avoid\nlosing precision.\n\nPlease consider the v4 patch for review.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Wed, 20 Oct 2021 19:12:04 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n> >> server backend is walsender, and this problem has gone.\n> >\n> >> I test that set IntervalStyle to 'sql_standard' on publisher and\n> >> 'iso_8601' on subscriber, it works fine.\n> >\n> >> Please try v3 patch and let me know if they work as unexpected.\n> >> Thanks in advance.\n> >\n> > I think the idea of setting the standard DateStyle and the\n> > IntervalStyle on the walsender process looks fine to me.
As this will\n> > avoid extra network round trips as Tom mentioned.\n>\n> After some test, I find we also should set the extra_float_digits to avoid\n> precision lossing.\n\nThank you for the patch!\n\n--- a/src/backend/postmaster/postmaster.c\n+++ b/src/backend/postmaster/postmaster.c\n@@ -2223,6 +2223,24 @@ retry1:\n {\n am_walsender = true;\n am_db_walsender = true;\n+\n+ /*\n+ * Force assorted GUC parameters to settings that ensure\n+ * that we'll output data values in a form that is\n+ * unambiguous to the walreceiver.\n+ */\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"datestyle\"));\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"ISO\"));\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"intervalstyle\"));\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"postgres\"));\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"extra_float_digits\"));\n+ port->guc_options = lappend(port->guc_options, pstrdup(\"3\"));\n }\n\nI'm concerned that it sets parameters too early since wal senders end\nup setting the parameters regardless of logical decoding plugins. It\nmight be better to force the parameters within the plugin for logical\nreplication, pgoutput, in order to avoid affecting other plugins? On\nthe other hand, if we do so, we will need to handle table sync worker\ncases separately since they copy data via COPY executed by the wal\nsender process.
For example, we can have table sync workers set the\nparameters.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 21 Oct 2021 14:45:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, Oct 21, 2021 at 11:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n> > >\n> > >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n> > >> server backend is walsender, and this problem has gone.\n> > >\n> > >> I test that set IntervalStyle to 'sql_standard' on publisher and\n> > >> 'iso_8601' on subscriber, it works fine.\n> > >\n> > >> Please try v3 patch and let me know if they work as unexpected.\n> > >> Thanks in advance.\n> > >\n> > > I think the idea of setting the standard DateStyle and the\n> > > IntervalStyle on the walsender process looks fine to me. 
As this will\n> > > avoid extra network round trips as Tom mentioned.\n> >\n> > After some test, I find we also should set the extra_float_digits to avoid\n> > precision lossing.\n>\n> Thank you for the patch!\n>\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -2223,6 +2223,24 @@ retry1:\n> {\n> am_walsender = true;\n> am_db_walsender = true;\n> +\n> + /*\n> + * Force assorted GUC\n> parameters to settings that ensure\n> + * that we'll output data\n> values in a form that is\n> + * unambiguous to the walreceiver.\n> + */\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"datestyle\"));\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"ISO\"));\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"intervalstyle\"));\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"postgres\"));\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"extra_float_digits\"));\n> + port->guc_options =\n> lappend(port->guc_options,\n> +\n> pstrdup(\"3\"));\n> }\n>\n> I'm concerned that it sets parameters too early since wal senders end\n> up setting the parameters regardless of logical decoding plugins. It\n> might be better to force the parameters within the plugin for logical\n> replication, pgoutput, in order to avoid affecting other plugins? On\n> the other hand, if we do so, we will need to handle table sync worker\n> cases separately since they copy data via COPY executed by the wal\n> sender process. For example, we can have table sync workers set the\n> parameters.\n\nYou mean table sync worker to set over the replication connection\nright? 
I think that was the first solution where normal workers, as\nwell as table sync workers, were setting over the replication\nconnection, but Tom suggested that setting on the walsender is a\nbetter option as we can avoid the network round trip.\n\nIf we want to set it over the replication connection then do it for\nboth as Japin's first patch is doing, otherwise, I am not seeing any\nbig issue in setting it early in the walsender also. I think it is\ngood to let walsender always send in the standard format which can be\nunderstood by other node, no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 11:34:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Thu, 21 Oct 2021 at 14:04, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Thu, Oct 21, 2021 at 11:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n>> >\n>> >\n>> > On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n>> > >\n>> > >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n>> > >> server backend is walsender, and this problem has gone.\n>> > >\n>> > >> I test that set IntervalStyle to 'sql_standard' on publisher and\n>> > >> 'iso_8601' on subscriber, it works fine.\n>> > >\n>> > >> Please try v3 patch and let me know if they work as unexpected.\n>> > >> Thanks in advance.\n>> > >\n>> > > I think the idea of setting the standard DateStyle and the\n>> > > IntervalStyle on the walsender process looks fine to me. 
As this will\n>> > > avoid extra network round trips as Tom mentioned.\n>> >\n>> > After some test, I find we also should set the extra_float_digits to avoid\n>> > precision lossing.\n>>\n>> Thank you for the patch!\n>>\n>> --- a/src/backend/postmaster/postmaster.c\n>> +++ b/src/backend/postmaster/postmaster.c\n>> @@ -2223,6 +2223,24 @@ retry1:\n>> {\n>> am_walsender = true;\n>> am_db_walsender = true;\n>> +\n>> + /*\n>> + * Force assorted GUC\n>> parameters to settings that ensure\n>> + * that we'll output data\n>> values in a form that is\n>> + * unambiguous to the walreceiver.\n>> + */\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"datestyle\"));\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"ISO\"));\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"intervalstyle\"));\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"postgres\"));\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"extra_float_digits\"));\n>> + port->guc_options =\n>> lappend(port->guc_options,\n>> +\n>> pstrdup(\"3\"));\n>> }\n>>\n>> I'm concerned that it sets parameters too early since wal senders end\n>> up setting the parameters regardless of logical decoding plugins. It\n>> might be better to force the parameters within the plugin for logical\n>> replication, pgoutput, in order to avoid affecting other plugins? On\n>> the other hand, if we do so, we will need to handle table sync worker\n>> cases separately since they copy data via COPY executed by the wal\n>> sender process. For example, we can have table sync workers set the\n>> parameters.\n>\n> You mean table sync worker to set over the replication connection\n> right? 
I think that was the first solution where normal workers, as\n> well as table sync workers, were setting over the replication\n> connection, but Tom suggested that setting on the walsender is a\n> better option as we can avoid the network round trip.\n>\n> If we want to set it over the replication connection then do it for\n> both as Japin's first patch is doing, otherwise, I am not seeing any\n> big issue in setting it early in the walsender also. I think it is\n> good to let walsender always send in the standard format which can be\n> understood by other node, no?\n\n+1\n\nI'm inclined to let the walsender set the parameters.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 21 Oct 2021 19:09:13 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, Oct 21, 2021 at 3:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 21, 2021 at 11:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n> > >\n> > >\n> > > On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n> > > >\n> > > >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n> > > >> server backend is walsender, and this problem has gone.\n> > > >\n> > > >> I test that set IntervalStyle to 'sql_standard' on publisher and\n> > > >> 'iso_8601' on subscriber, it works fine.\n> > > >\n> > > >> Please try v3 patch and let me know if they work as unexpected.\n> > > >> Thanks in advance.\n> > > >\n> > > > I think the idea of setting the standard DateStyle and the\n> > > > IntervalStyle on the walsender process looks fine to me. 
As this will\n> > > > avoid extra network round trips as Tom mentioned.\n> > >\n> > > After some test, I find we also should set the extra_float_digits to avoid\n> > > precision lossing.\n> >\n> > Thank you for the patch!\n> >\n> > --- a/src/backend/postmaster/postmaster.c\n> > +++ b/src/backend/postmaster/postmaster.c\n> > @@ -2223,6 +2223,24 @@ retry1:\n> > {\n> > am_walsender = true;\n> > am_db_walsender = true;\n> > +\n> > + /*\n> > + * Force assorted GUC\n> > parameters to settings that ensure\n> > + * that we'll output data\n> > values in a form that is\n> > + * unambiguous to the walreceiver.\n> > + */\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"datestyle\"));\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"ISO\"));\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"intervalstyle\"));\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"postgres\"));\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"extra_float_digits\"));\n> > + port->guc_options =\n> > lappend(port->guc_options,\n> > +\n> > pstrdup(\"3\"));\n> > }\n> >\n> > I'm concerned that it sets parameters too early since wal senders end\n> > up setting the parameters regardless of logical decoding plugins. It\n> > might be better to force the parameters within the plugin for logical\n> > replication, pgoutput, in order to avoid affecting other plugins? On\n> > the other hand, if we do so, we will need to handle table sync worker\n> > cases separately since they copy data via COPY executed by the wal\n> > sender process. For example, we can have table sync workers set the\n> > parameters.\n>\n> You mean table sync worker to set over the replication connection\n> right? 
I think that was the first solution where normal workers, as\n> well as table sync workers, were setting over the replication\n> connection, but Tom suggested that setting on the walsender is a\n> better option as we can avoid the network round trip.\n\nRight.\n\nBTW I think we can set the parameters from the subscriber side without\nadditional network round trips by specifying the \"options\" parameter\nin the connection string, no?\n\n> If we want to set it over the replication connection then do it for\n> both as Japin's first patch is doing, otherwise, I am not seeing any\n> big issue in setting it early in the walsender also. I think it is\n> good to let walsender always send in the standard format which can be\n> understood by other node, no?\n\nYeah, probably the change on HEAD is fine but I'm a bit concerned\nabout possible issues on back branches like if the user expects to get\ndate data in the style of DateStyle setting on the server via\npg_recvlogical, this change could break it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 21 Oct 2021 20:54:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Thu, 21 Oct 2021 at 19:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Oct 21, 2021 at 3:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Thu, Oct 21, 2021 at 11:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >\n>> > On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n>> > >\n>> > >\n>> > > On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > > > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n>> > > >\n>> > > >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n>> > > >> server backend is walsender, and this problem has 
gone.\n>> > > >\n>> > > >> I test that set IntervalStyle to 'sql_standard' on publisher and\n>> > > >> 'iso_8601' on subscriber, it works fine.\n>> > > >\n>> > > >> Please try v3 patch and let me know if they work as unexpected.\n>> > > >> Thanks in advance.\n>> > > >\n>> > > > I think the idea of setting the standard DateStyle and the\n>> > > > IntervalStyle on the walsender process looks fine to me. As this will\n>> > > > avoid extra network round trips as Tom mentioned.\n>> > >\n>> > > After some test, I find we also should set the extra_float_digits to avoid\n>> > > precision lossing.\n>> >\n>> > I'm concerned that it sets parameters too early since wal senders end\n>> > up setting the parameters regardless of logical decoding plugins. It\n>> > might be better to force the parameters within the plugin for logical\n>> > replication, pgoutput, in order to avoid affecting other plugins? On\n>> > the other hand, if we do so, we will need to handle table sync worker\n>> > cases separately since they copy data via COPY executed by the wal\n>> > sender process. For example, we can have table sync workers set the\n>> > parameters.\n>>\n>> You mean table sync worker to set over the replication connection\n>> right? I think that was the first solution where normal workers, as\n>> well as table sync workers, were setting over the replication\n>> connection, but Tom suggested that setting on the walsender is a\n>> better option as we can avoid the network round trip.\n>\n> Right.\n>\n> BTW I think we can set the parameters from the subscriber side without\n> additional network round trips by specifying the \"options\" parameter\n> in the connection string, no?\n>\n\nYes, we can. However, each client should be concerned the style for\ndatestyle, IMO it is boring.\n\n>> If we want to set it over the replication connection then do it for\n>> both as Japin's first patch is doing, otherwise, I am not seeing any\n>> big issue in setting it early in the walsender also. 
I think it is\n>> good to let walsender always send in the standard format which can be\n>> understood by other node, no?\n>\n> Yeah, probably the change on HEAD is fine but I'm a bit concerned\n> about possible issues on back branches like if the user expects to get\n> date data in the style of DateStyle setting on the server via\n> pg_recvlogical, this change could break it.\n>\n\nHow it breaks? The user also can specify the \"options\" to get date data\nin the style which they are wanted. Right?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 21 Oct 2021 22:17:55 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 21 Oct 2021 at 19:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> BTW I think we can set the parameters from the subscriber side without\n>> additional network round trips by specifying the \"options\" parameter\n>> in the connection string, no?\n\n> Yes, we can. However, each client should be concerned the style for\n> datestyle, IMO it is boring.\n\nThere's another issue here: the subscriber can run user-defined code\n(in triggers), while AFAIK the sender cannot. People might be surprised\nif their triggers run with a datestyle setting different from the\ndatabase's prevailing setting. So while I think it should be okay\nto set-and-forget the datestyle on the sender side, we could not get\naway with that in the subscriber. We'd have to set and unset for\neach row, much as (e.g.) 
postgres_fdw has to do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 10:46:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Thu, 21 Oct 2021 at 22:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Thu, 21 Oct 2021 at 19:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>> BTW I think we can set the parameters from the subscriber side without\n>>> additional network round trips by specifying the \"options\" parameter\n>>> in the connection string, no?\n>\n>> Yes, we can. However, each client should be concerned the style for\n>> datestyle, IMO it is boring.\n>\n> There's another issue here: the subscriber can run user-defined code\n> (in triggers), while AFAIK the sender cannot.\n\nSorry, I'm not sure about this. Could you give me an example?\n\n> People might be surprised\n> if their triggers run with a datestyle setting different from the\n> database's prevailing setting. So while I think it should be okay\n> to set-and-forget the datestyle on the sender side, we could not get\n> away with that in the subscriber. We'd have to set and unset for\n> each row, much as (e.g.) postgres_fdw has to do.\n>\n\nYeah! 
As Masahiko said, we can avoid the network round trips by specifying\nthe \"options\" parameter in the connection string.\n\nIf this approach is acceptable, I'll update the patch later.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 21 Oct 2021 23:04:12 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 21 Oct 2021 at 22:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There's another issue here: the subscriber can run user-defined code\n>> (in triggers), while AFAIK the sender cannot.\n\n> Sorry, I'm not sure about this. Could you give me an example?\n\nIf you're doing logical replication into a table that has triggers,\nthe replication worker has to execute those triggers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 11:10:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Thu, 21 Oct 2021 at 23:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Thu, 21 Oct 2021 at 22:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> There's another issue here: the subscriber can run user-defined code\n>>> (in triggers), while AFAIK the sender cannot.\n>\n>> Sorry, I'm not sure about this. Could you give me an example?\n>\n> If you're doing logical replication into a table that has triggers,\n> the replication worker has to execute those triggers.\n>\n\nDoes that mean we should use the subscriber's settings to set the\nreplication parameters (e.g. datestyle)? 
If we do this, it might\nlose precision (for example: extra_float_digits on publisher is 3\nand on subscriber is -4), is this acceptable?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 21 Oct 2021 23:32:46 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Thu, Oct 21, 2021 at 11:18 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Thu, 21 Oct 2021 at 19:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Oct 21, 2021 at 3:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> On Thu, Oct 21, 2021 at 11:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >> >\n> >> > On Wed, Oct 20, 2021 at 8:12 PM Japin Li <japinli@hotmail.com> wrote:\n> >> > >\n> >> > >\n> >> > > On Mon, 18 Oct 2021 at 17:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> > > > On Mon, Oct 18, 2021 at 1:41 PM Japin Li <japinli@hotmail.com> wrote:\n> >> > > >\n> >> > > >> I attached v3 patch that set IntervalStyle to 'postgres' when the\n> >> > > >> server backend is walsender, and this problem has gone.\n> >> > > >\n> >> > > >> I test that set IntervalStyle to 'sql_standard' on publisher and\n> >> > > >> 'iso_8601' on subscriber, it works fine.\n> >> > > >\n> >> > > >> Please try v3 patch and let me know if they work as unexpected.\n> >> > > >> Thanks in advance.\n> >> > > >\n> >> > > > I think the idea of setting the standard DateStyle and the\n> >> > > > IntervalStyle on the walsender process looks fine to me. 
As this will\n> >> > > > avoid extra network round trips as Tom mentioned.\n> >> > >\n> >> > > After some test, I find we also should set the extra_float_digits to avoid\n> >> > > precision lossing.\n> >> >\n> >> > I'm concerned that it sets parameters too early since wal senders end\n> >> > up setting the parameters regardless of logical decoding plugins. It\n> >> > might be better to force the parameters within the plugin for logical\n> >> > replication, pgoutput, in order to avoid affecting other plugins? On\n> >> > the other hand, if we do so, we will need to handle table sync worker\n> >> > cases separately since they copy data via COPY executed by the wal\n> >> > sender process. For example, we can have table sync workers set the\n> >> > parameters.\n> >>\n> >> You mean table sync worker to set over the replication connection\n> >> right? I think that was the first solution where normal workers, as\n> >> well as table sync workers, were setting over the replication\n> >> connection, but Tom suggested that setting on the walsender is a\n> >> better option as we can avoid the network round trip.\n> >\n> > Right.\n> >\n> > BTW I think we can set the parameters from the subscriber side without\n> > additional network round trips by specifying the \"options\" parameter\n> > in the connection string, no?\n> >\n>\n> Yes, we can. However, each client should be concerned the style for\n> datestyle, IMO it is boring.\n>\n> >> If we want to set it over the replication connection then do it for\n> >> both as Japin's first patch is doing, otherwise, I am not seeing any\n> >> big issue in setting it early in the walsender also. 
I think it is\n> >> good to let walsender always send in the standard format which can be\n> >> understood by other node, no?\n> >\n> > Yeah, probably the change on HEAD is fine but I'm a bit concerned\n> > about possible issues on back branches like if the user expects to get\n> > date data in the style of DateStyle setting on the server via\n> > pg_recvlogical, this change could break it.\n> >\n>\n> How it breaks?\n\nI don't know the real case but for example, if an application gets\nchanges via pg_recvlogical with a decoding plugin (say wal2json) from\nthe database whose DateStyle setting is \"SQL, MDY\", it expects that\nthe date values in the streamed data are in the style of \"ISO, MDY\".\nBut with this change, it will get date values in the style of \"ISO\"\nwhich could lead to a parse error in the application.\n\n> The user also can specify the \"options\" to get date data\n> in the style which they are wanted. Right?\n\nRight. But doesn't it mean breaking the compatibility?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 22 Oct 2021 09:26:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Fri, 22 Oct 2021 at 08:26, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Oct 21, 2021 at 11:18 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> How it breaks?\n>\n> I don't know the real case but for example, if an application gets\n> changes via pg_recvlogical with a decoding plugin (say wal2json) from\n> the database whose DateStyle setting is \"SQL, MDY\", it expects that\n> the date values in the streamed data are in the style of \"ISO, MDY\".\n> But with this change, it will get date values in the style of \"ISO\"\n> which could lead to a parse error in the application.\n>\n>> The user also can specify the \"options\" to get date 
data\n>> in the style which they are wanted. Right?\n>\n> Right. But doesn't it mean breaking the compatibility?\n>\n\nYeah, it might break the compatibility.\n\nIn conclusion, there are two ways to fix this bug.\n\n1. Set the parameters on the publisher; this might break the compatibility.\n2. Set the parameters on the subscriber. In my first patch, I tried to set the\n   parameters after establishing the connection, which leads to more network\n   round trips. We can set the parameters when connecting to the walsender\n   using \"options\".\n\nFor the second way, should we set the parameters the same as the subscriber's, or\nuse the parameters (e.g. datestyle = \"ISO\") like postgres_fdw's\nset_transmission_modes()?\n\nAny thoughts?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:37:28 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Fri, Oct 22, 2021 at 8:07 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Fri, 22 Oct 2021 at 08:26, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Oct 21, 2021 at 11:18 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> How it breaks?\n> >\n> > I don't know the real case but for example, if an application gets\n> > changes via pg_recvlogical with a decoding plugin (say wal2json) from\n> > the database whose DateStyle setting is \"SQL, MDY\", it expects that\n> > the date values in the streamed data are in the style of \"ISO, MDY\".\n> > But with this change, it will get date values in the style of \"ISO\"\n> > which could lead to a parse error in the application.\n> >\n> >> The user also can specify the \"options\" to get date data\n> >> in the style which they are wanted. Right?\n> >\n> > Right. 
But doesn't it mean breaking the compatibility?\n> >\n>\n> Yeah, it might break the compatibility.\n>\n> In conclusion, there are two ways to fix this bug.\n>\n> 1. Set the parameters on the publisher; this might break the compatibility.\n\nIs it not possible to set the parameter on publisher as \"ISO, MDY\" or\n\"ISO, YMD\", instead of only \"ISO\"?\nDateStyle includes both, so we may set the parameter with the date format...\n\n> 2. Set the parameters on the subscriber. In my first patch, I tried to set the\n> parameters after establishing the connection, which leads to more network\n> round trips. We can set the parameters when connecting to the walsender\n> using \"options\".\n>\n> For the second way, should we set the parameters the same as the subscriber's, or\n> use the parameters (e.g. datestyle = \"ISO\") like postgres_fdw's\n> set_transmission_modes()?\n>\n> Any thoughts?\n\nIMO, setting the parameter value the same as the subscriber is better. It\nis always possible that we can set any datestyle in the plugins\nitself...\n\n\nThanks & Regards\nSadhuPrasad\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Oct 2021 12:30:12 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Fri, 22 Oct 2021 at 15:00, Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> On Fri, Oct 22, 2021 at 8:07 AM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Fri, 22 Oct 2021 at 08:26, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> > On Thu, Oct 21, 2021 at 11:18 PM Japin Li <japinli@hotmail.com> wrote:\n>> >>\n>> >> How it breaks?\n>> >\n>> > I don't know the real case but for example, if an application gets\n>> > changes via pg_recvlogical with a decoding plugin (say wal2json) from\n>> > the database whose DateStyle setting is \"SQL, MDY\", it expects that\n>> > the date values 
in the streamed data are in the style of \"ISO, MDY\".\n>> > But with this change, it will get date values in the style of \"ISO\"\n>> > which could lead to a parse error in the application.\n>> >\n>> >> The user also can specify the \"options\" to get date data\n>> >> in the style which they are wanted. Right?\n>> >\n>> > Right. But doesn't it mean breaking the compatibility?\n>> >\n>>\n>> Yeah, it might break the compatibility.\n>>\n>> In conclusion, there are two ways to fix this bug.\n>>\n>> 1. Set the parameters on the publisher; this might break the compatibility.\n>\n> Is it not possible to set the parameter on publisher as \"ISO, MDY\" or\n> \"ISO, YMD\", instead of only \"ISO\"?\n> DateStyle includes both, so we may set the parameter with the date format...\n>\n>> 2. Set the parameters on the subscriber. In my first patch, I tried to set the\n>> parameters after establishing the connection, which leads to more network\n>> round trips. We can set the parameters when connecting to the walsender\n>> using \"options\".\n>>\n>> For the second way, should we set the parameters the same as the subscriber's, or\n>> use the parameters (e.g. datestyle = \"ISO\") like postgres_fdw's\n>> set_transmission_modes()?\n>>\n>> Any thoughts?\n>\n> IMO, setting the parameter value the same as the subscriber is better. It\n> is always possible that we can set any datestyle in the plugins\n> itself...\n>\n\nAttach v5 patch. This patch set the datestyle, intervalstyle and\nextra_float_digits parameters when we connect to publisher, this can\navoid the network round trips (compare with the first patch).\n\nOTOH, the patch uses the subscriber's parameters as connecting parameters,\nwhich is more complex. 
If we use the parameters likes postgres_fdw\nset_transmission_mode(), the code will be easier [1].\n\n\n[1]\ndiff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\nindex 5c6e56a5b2..0d03edd39f 100644\n--- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n+++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n@@ -128,8 +128,8 @@ libpqrcv_connect(const char *conninfo, bool logical, const char *appname,\n {\n \tWalReceiverConn *conn;\n \tPostgresPollingStatusType status;\n-\tconst char *keys[5];\n-\tconst char *vals[5];\n+\tconst char *keys[6];\n+\tconst char *vals[6];\n \tint\t\t\ti = 0;\n\n \t/*\n@@ -155,6 +155,8 @@ libpqrcv_connect(const char *conninfo, bool logical, const char *appname,\n \t{\n \t\tkeys[++i] = \"client_encoding\";\n \t\tvals[i] = GetDatabaseEncodingName();\n+\t\tkeys[++i] = \"options\";\n+\t\tvals[i] = \"-c datestyle=ISO,\\\\ YMD -c intervalstyle=postgres extra_float_digits=3\";\n \t}\n \tkeys[++i] = NULL;\n \tvals[i] = NULL;\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sat, 23 Oct 2021 00:40:42 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Attach v5 patch. This patch set the datestyle, intervalstyle and\n> extra_float_digits parameters when we connect to publisher, this can\n> avoid the network round trips (compare with the first patch).\n\nYou could make it a little less confusing by not insisting on a\nspace in the datestyle. This should work fine:\n\n\t\tvals[i] = \"-c datestyle=ISO,YMD -c intervalstyle=postgres extra_float_digits=3\";\n\nAlso, I think some comments would be appropriate.\n\nI don't see any value whatsoever in the more complicated version\nof the patch. 
It's just more code to maintain and more things\nto go wrong. And not only at our level, but the DBA's too.\nWhat if the subscriber and publisher are of different PG versions\nand have different ideas of the valid values of these settings?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 12:55:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "On Sat, 23 Oct 2021 at 00:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Attach v5 patch. This patch set the datestyle, intervalstyle and\n>> extra_float_digits parameters when we connect to publisher, this can\n>> avoid the network round trips (compare with the first patch).\n>\n> You could make it a little less confusing by not insisting on a\n> space in the datestyle. This should work fine:\n>\n> \t\tvals[i] = \"-c datestyle=ISO,YMD -c intervalstyle=postgres extra_float_digits=3\";\n>\n\nOh. My apologies. I tried this style before, but found that \"ISO,\" was not valid,\nso I added a backslash, but it seems my environment was not cleaned up.\n\nFixed.\n\n> Also, I think some comments would be appropriate.\n>\n\nAdded comments for it.\n\n> I don't see any value whatsoever in the more complicated version\n> of the patch. It's just more code to maintain and more things\n> to go wrong. And not only at our level, but the DBA's too.\n\nAgreed.\n\n> What if the subscriber and publisher are of different PG versions\n> and have different ideas of the valid values of these settings?\n>\n\nSorry, I'm a bit confused.
Do you mean we should provide a choice for the user\nto set those parameters when establishing logical replication?\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sat, 23 Oct 2021 01:37:49 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Sat, 23 Oct 2021 at 00:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What if the subscriber and publisher are of different PG versions\n>> and have different ideas of the valid values of these settings?\n\n> Sorry, I'm a bit confused. Do you mean we should provide a choice for the user\n> to set those parameters when establishing logical replication?\n\nNo, I'm just pointing out that pushing the subscriber's settings\nto the publisher wouldn't be guaranteed to work. As long as we\nuse curated values that we know do what we want on all versions,\nI think we're okay.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 14:00:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "\nOn Sat, 23 Oct 2021 at 02:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Sat, 23 Oct 2021 at 00:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> What if the subscriber and publisher are of different PG versions\n>>> and have different ideas of the valid values of these settings?\n>\n>> Sorry, I'm a bit confused. Do you mean we should provide a choice for the user\n>> to set those parameters when establishing logical replication?\n>\n> No, I'm just pointing out that pushing the subscriber's settings\n> to the publisher wouldn't be guaranteed to work.
As long as we\n> use curated values that we know do what we want on all versions,\n> I think we're okay.\n>\n\nThanks for your clarification.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 27 Oct 2021 09:40:09 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" }, { "msg_contents": "Pushed with some adjustment of the comments. I also simplified the\ndatestyle setting to just \"ISO\", because that's sufficient: that\nDateStyle doesn't care about DateOrder. Since the settings are\nsupposed to match what pg_dump uses, it's just confusing if they don't.\n\nAlso, I didn't commit the test case. It was useful for development,\nbut it seemed entirely too expensive to keep forevermore compared to its\nlikely future value. It increased the runtime of 100_bugs.pl by about\na third, and I'm afraid the likely future value is nil. The most likely\nbug in this area would be introducing some new GUC that we need to set\nand forgetting to do so here; but this test case could not expose that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Nov 2021 14:36:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Bug] Logical Replication failing if the DateStyle is different\n in Publisher & Subscriber" } ]
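For readers skimming the thread above: the committed fix pins the publisher connection to the same fixed transmission settings that pg_dump relies on, so the text streamed to the subscriber parses the same way regardless of either server's defaults. A minimal SQL sketch of the equivalent session-level settings — illustrative only; the actual change lives in libpqwalreceiver's connection `options`:

```sql
-- Session-level equivalent of the settings pinned on the publisher
-- connection (matching pg_dump's conventions):
SET datestyle = ISO;            -- ISO output ignores DateOrder (MDY/DMY/YMD)
SET intervalstyle = postgres;
SET extra_float_digits = 3;

-- With DateStyle = ISO, date output is unambiguous for the
-- subscriber's input routines, whatever the server default is:
SELECT date '2021-10-22';       -- rendered as 2021-10-22
```

As noted in the final message, the bare "ISO" value suffices because that DateStyle does not depend on DateOrder.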
[ { "msg_contents": "Hi,\n\n\nHere is a proposal to implement HIDDEN columns feature in PostgreSQL.\n\nThe user defined columns are always visible in the PostgreSQL. If user\nwants to hide some column(s) from a SELECT * returned values then the\nhidden columns feature is useful. Hidden column can always be used and\nreturned by explicitly referring it in the query.\n\nI agree that views are done for that or that using a SELECT * is a bad \npractice\nbut sometime we could need to \"technically\" prevent some columns to be part\nof a star expansion and nbot be forced to use view+rules. For example when\nupgrading a database schema where a column have been added to a table,\nthis will break any old version of the application that is using a \nSELECT * on\nthis table. Being able to \"hide\" this column to such query will make \nmigration\neasier.\n\nAn other common use case for this feature is to implements temporal tables\nor row versionning. On my side I see a direct interest in Oracle to \nPostgreSQL\nmigration to emulate the ROWID system column without the hassle of creating\nviews, it will save lot of time.\n\nThe other advantage over views is that the hidden column can still be used\nin JOIN, WHERE, ORDER BY or GROUP BY clause which is not possible otherwise.\nI don't talk about writing to complex view which would require a RULE.\n\nHidden column is not part of the SQL standard but is implemented in all \nother\nRDBMS which is also called invisible columns [1] [2] [3] [4]. 
In all \nthese RDBMS\nthe feature is quite the same.\n\n   [1] https://www.ibm.com/docs/en/db2/10.5?topic=concepts-hidden-columns\n   [2] https://oracle-base.com/articles/12c/invisible-columns-12cr1\n   [3] \nhttps://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-ver15\n   [4] https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html\n\n\nHere is the full description of the proposal with a patch attached that \nimplements\nthe feature:\n\n   1) Creating hidden columns:\n\n      A column visibility attribute is added to the column definition\n      of CREATE TABLE and ALTER TABLE statements. For example:\n\n          CREATE TABLE htest1 (a bigserial HIDDEN, b text);\n\n          ALTER TABLE htest1 ADD COLUMN c integer HIDDEN;\n\n      Columns are visible by default.\n\n   2) Altering column visibility attribute:\n\n      The ALTER TABLE statement can be used to change hidden columns to not\n      hidden and the opposite. Example:\n\n          ALTER TABLE htest1 ALTER COLUMN c DROP HIDDEN;\n\n   3) Insert and hidden columns:\n\n      If the column list of INSERT or COPY statements is empty\n      then while expanding column list hidden columns are NOT\n      included. DEFAULT or NULL values are inserted for hidden\n      columns in this case. 
Hidden column should be explicitly\n      referenced in the column list of INSERT and COPY statement\n      to insert a value.\n\n      Example:\n\n        -- Value 'one' is stored in column b and 1 in hidden column.\n        INSERT INTO t1 VALUES ('one');\n\n        -- Value 2 is stored in hidden column and 'two' in b.\n        INSERT INTO htest1 (a, b) VALUES (2, 'two');\n\n   4) Star expansion for SELECT * statements:\n\n      Hidden columns are not included in a column list while\n      expanding wild card '*' in the SELECT statement.\n\n      Example:\n\n          SELECT * FROM htest1;\n            b\n          ------\n           one\n           two\n\n       Hidden columns are accessible when explicitly referenced\n       in the query.\n\n       Example:\n          SELECT f1, f2 FROM t1;\n             a  |  b\n          ------+------\n            1   | one\n            2   | two\n\n   5) psql extended describe lists hidden columns.\n\n       postgres=# \\d+ htest1\n                                       Table \"public.htest1\"\n        Column |  Type  | Collation | Nullable |  Default   | Visible | ...\n--------+--------+-----------+----------+------------+---------+ ...\n        a      | bigint |           | not null | nextval... | hidden  | ...\n        b      | text   |           |          | |         | ...\n\n   6) When a column is flagged as hidden the attishidden column value of\n      table pg_attribute is set to true.\n\n   7) For hidden attributes, column is_hidden of table \ninformation_schema.columns\n      is set to YES. By default the column is visible and the value is 'NO'.\n\nFor a complete description of the feature, see chapter \"Hidden columns\" in\nfile doc/src/sgml/ddl.sgml after applying the patch.\n\n\nThe patch is a full implementation of this feture except that I sill have to\nprevent a ALTER ... SET HIDDEN to be applied of there is no more visible\ncolumns in the table after the change. 
I will do that when I will recover\nmore time.\n\nI have choose HIDDEN vs INVISIBLE but this could be a minor change or\nwe could use NOT EXPANDABLE. Personnaly I prefer the HIDDEN attribute.\n\n\nAny though and interest in this feature?\n\n-- \nGilles Darold\nhttp://www.migops.com/", "msg_date": "Thu, 14 Oct 2021 13:16:45 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "[PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi Gilles,\n\n> Any though and interest in this feature?\n\nPersonally, I wouldn't call this feature particularly useful. `SELECT\n*` is intended for people who are working with DBMS directly e.g. via\npsql and want to see ALL columns. The applications should never use\n`SELECT *`. So I can't see any real benefits of adding this feature to\nPostgreSQL. It will only make the existing code and the existing user\ninterface even more complicated than they are now.\n\nAlso, every yet another feature is x N corner cases when this feature\nworks with other N features of PostgreSQL. How should it work with\npartitioned or inherited tables? Or with logical replication? With\npg_dump? With COPY?\n\nSo all in all, -1. This being said, I very much appreciate your\nattempt to improve PostgreSQL. However next time before writing the\ncode I suggest submitting an RFC first.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:47:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi again,\n\n> So all in all, -1. [...]\n\nHere is something I would like to add:\n\n1. As far as I know, \"all the rest of DBMS have this\" was never a good\nargument in the PostgreSQL community. Generally, using it will turn people\nagainst you.\n2. I recall there was a proposal of making the SQL syntax itself\nextendable. 
To my knowledge, this is still a wanted feature [1]. In theory,\nthat would allow you to implement the feature you want in an extension.\n\n[1]: https://wiki.postgresql.org/wiki/Todo#Exotic_Features\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 14 Oct 2021 15:09:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 10/14/21 1:47 PM, Aleksander Alekseev wrote:\n> Hi Gilles,\n> \n>> Any though and interest in this feature?\n> \n> Personally, I wouldn't call this feature particularly useful. `SELECT\n> *` is intended for people who are working with DBMS directly e.g. via\n> psql and want to see ALL columns.\n\nI disagree strongly with this. It is really annoying when working\ninteractively with psql on a table that has a PostGIS geometry column,\nor any other large blobby type column.\n\nI have not looked at the patch, but +1 for the feature.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:13:00 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi Vik,\n\n> I have not looked at the patch, but +1 for the feature.\n\nMaybe you could describe your use case in a little more detail? How\ndid you end up working with PostGIS geometry via psql on regular\nbasis?
What exactly do you find of annoyance? How will the proposed\npatch help?\n\nI find it great that we have people with polar opinions in the\ndiscussion. But to reach any consensus you should make the opponent\nunderstand your situation. Also, please don't simply discard the\ndisadvantages stated above. If you don't believe these are significant\ndisadvantages, please explain why do you think so.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 14 Oct 2021 15:21:42 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "čt 14. 10. 2021 v 14:13 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 10/14/21 1:47 PM, Aleksander Alekseev wrote:\n> > Hi Gilles,\n> >\n> >> Any though and interest in this feature?\n> >\n> > Personally, I wouldn't call this feature particularly useful. `SELECT\n> > *` is intended for people who are working with DBMS directly e.g. via\n> > psql and want to see ALL columns.\n>\n> I disagree strongly with this. It is really annoying when working\n> interactively with psql on a table that has a PostGIS geometry column,\n> or any other large blobby type column.\n>\n> I have not looked at the patch, but +1 for the feature.\n>\n\nCannot be better to redefine some strategies for output for some types.\n\nI can agree so sometimes in some environments proposed features can be\nnice, but it can be a strong footgun too.\n\nMaybe some strange data can be filtered in psql and it can be better\nsolution. I agree, so usually print long geometry in psql is useless.\n\nRegards\n\nPavel\n\n\n\n-- \n> Vik Fearing\n>\n>\n>\n", "msg_date": "Thu, 14 Oct 2021 14:28:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, 14 Oct 2021 at 07:16, Gilles Darold <gilles@migops.com> wrote:\n\n\n> The user defined columns are always visible in the PostgreSQL. If user\n> wants to hide some column(s) from a SELECT * returned values then the\n> hidden columns feature is useful. Hidden column can always be used and\n> returned by explicitly referring it in the query.\n>\n\nIt seems to me we've gone in the reverse direction recently. It used to be\nthat the oid columns of the system tables were hidden (hardcoded, as far as\nI know), but as of Postgres 12 I believe there are no more hidden columns:\nSELECT * from a table always gives all the columns.\n\nI think a \"select all columns except …\" would be more useful; or another\napproach would be to use a display tool that defaults to displaying only a\nportion of large fields.\n", "msg_date": "Thu, 14 Oct 2021 08:28:48 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 13:47, Aleksander Alekseev a écrit :\n> Hi Gilles,\n>\n>> Any though and interest in this feature?\n> Personally, I wouldn't call this feature particularly useful. `SELECT\n> *` is intended for people who are working with DBMS directly e.g. via\n> psql and want to see ALL columns. The applications should never use\n> `SELECT *`. So I can't see any real benefits of adding this feature to\n> PostgreSQL. It will only make the existing code and the existing user\n> interface even more complicated than they are now.\n\n\nThanks for your comments Aleksander. This was also my thougth at \nbegining but unfortunately there is cases where things are not so simple \nand just relying on SELECT * is dirty or forbidden.  The hidden column \nare not only useful for SELECT * but also for INSERT without column \nlist, but INSERT without column list is also a bad practice.\n\n\n> Also, every yet another feature is x N corner cases when this feature\n> works with other N features of PostgreSQL. How should it work with\n> partitioned or inherited tables? Or with logical replication? With\n> pg_dump? With COPY?\n\n\nI recommand you to have look to my patch because the partitioned and \ninherited case are covered, you can have a .
For logical replication I \nguess that any change in pg_attribute is also replicated so I I would \nsaid that it is fully supported. But obviously I may miss something. \npg_dump and COPY are also supported.\n\n\nActually the patch only prevent an hidden column to be part of a star \nexpansion for the returned column, I don't think there is corner case \nwith the other part of the code outside that we need to prevent a table \nto have all columns hidden. But I could miss something, I agree.\n\n\n> So all in all, -1. This being said, I very much appreciate your\n> attempt to improve PostgreSQL. However next time before writing the\n> code I suggest submitting an RFC first.\n\n\nDon't worry about my time spent for the PG community, this patch is a \ndust in my contribution to open source :-) If I have provided the patch \nto show the concept and how it can be easily implemented.  Also it can \nbe used in some PostgreSQL forks if one is interested by this feature.\n\n\n-- \n\nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:52:39 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 14:09, Aleksander Alekseev a écrit :\n> Hi again,\n>\n> > So all in all, -1. [...]\n>\n> Here is something I would like to add:\n>\n> 1. As far as I know, \"all the rest of DBMS have this\" was never a good \n> argument in the PostgreSQL community. Generally, using it will turn \n> people against you.\n\n\nI have cited the implementation in the other RDBMS because it helps to \nunderstand the feature, it shows the state of the art on it and \nillustrates my needs. If making references to other implementation turns \npeople against me I think that they have the wrong approach on this \nproposal and if we refuse feature because they are implemented in other \nRDBMS this is even worst. I'm not agree with this comment.\n\n\n> 2. 
I recall there was a proposal of making the SQL syntax itself \n> extendable. To my knowledge, this is still a wanted feature [1]. In \n> theory, that would allow you to implement the feature you want in an \n> extension.\n\n\nFor what I've read in this thread \nhttps://www.postgresql.org/message-id/flat/20210501072458.adqjoaqnmhg4l34l%40nol \nthere is no real consensus in how implementing this feature should be \ndone. But I agree that if the implementation through an extension was \npossible I would not write a patch to core but an extension, this is my \ncommon behavior.\n\n\nBest regards,\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 15:19:19 +0200", "msg_from": "Gilles Darold <gillesdarold@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 14:28, Pavel Stehule a écrit :\n>\n>\n> čt 14. 10. 2021 v 14:13 odesílatel Vik Fearing \n> <vik@postgresfriends.org <mailto:vik@postgresfriends.org>> napsal:\n>\n> On 10/14/21 1:47 PM, Aleksander Alekseev wrote:\n> > Hi Gilles,\n> >\n> >> Any though and interest in this feature?\n> >\n> > Personally, I wouldn't call this feature particularly useful.\n> `SELECT\n> > *` is intended for people who are working with DBMS directly\n> e.g. via\n> > psql and want to see ALL columns.\n>\n> I disagree strongly with this.  It is really annoying when working\n> interactively with psql on a table that has a PostGIS geometry column,\n> or any other large blobby type column.\n>\n> I have not looked at the patch, but +1 for the feature.\n>\n>\n> Cannot be better to redefine some strategies for output for some types.\n>\n> I can agree so sometimes in some environments proposed features can be \n> nice, but it can be a strong footgun too.\n>\n> Maybe some strange data can be filtered in psql and it can be better \n> solution. 
I agree, so usually print long geometry in psql is useless.\n\n\nPavel this doesn't concern only output but input too, think about the \nINSERT or COPY without a column list. We can add such filter in psql but \nhow about other clients? They all have to implement their own filtering \nmethod. I think the HIDDEN attribute provide a common and basic way to \nimplement that in all client application.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n", "msg_date": "Thu, 14 Oct 2021 15:32:00 +0200", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, Oct 14, 2021 at 2:32 PM Gilles Darold <gilles@darold.net> wrote:\n\n> Le 14/10/2021 à 14:28, Pavel Stehule a écrit :\n>\n>\n>\n> čt 14. 10. 2021 v 14:13 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n>\n>> On 10/14/21 1:47 PM, Aleksander Alekseev wrote:\n>> > Hi Gilles,\n>> >\n>> >> Any though and interest in this feature?\n>> >\n>> > Personally, I wouldn't call this feature particularly useful. `SELECT\n>> > *` is intended for people who are working with DBMS directly e.g. via\n>> > psql and want to see ALL columns.\n>>\n>> I disagree strongly with this. It is really annoying when working\n>> interactively with psql on a table that has a PostGIS geometry column,\n>> or any other large blobby type column.\n>>\n>> I have not looked at the patch, but +1 for the feature.\n>>\n>\n> Cannot be better to redefine some strategies for output for some types.\n>\n> I can agree so sometimes in some environments proposed features can be\n> nice, but it can be a strong footgun too.\n>\n> Maybe some strange data can be filtered in psql and it can be better\n> solution. I agree, so usually print long geometry in psql is useless.\n>\n>\n> Pavel this doesn't concern only output but input too, think about the\n> INSERT or COPY without a column list. We can add such filter in psql but\n> how about other clients? They all have to implement their own filtering\n> method. 
I think the HIDDEN attribute provide a common and basic way to\n> implement that in all client application.\n>\n\nI like the idea - being able to hide computed columns such as tsvectors\nfrom CRUD queries by default seems like it would be very nice for example.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n", "msg_date": "Thu, 14 Oct 2021 14:38:13 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, 14 Oct 2021 at 07:16, Gilles Darold <gilles@migops.com> wrote:\n\n\n> The user defined columns are always visible in the PostgreSQL. If user\n> wants to hide some column(s) from a SELECT * returned values then the\n> hidden columns feature is useful. Hidden column can always be used and\n> returned by explicitly referring it in the query.\n>\n\nThe behaviour of SELECT * is well defined and consistent across many\ndatabases, so I don't like changing the behaviour of it.\n\nI would be in favour of a different symbol which expands to a more\nselective column set. Perhaps by default it picks up short textish columns;\nskip bytea or long text fields for example but can be adjusted with HIDDEN.\nPerhaps \"SELECT +\"?\n\n\n-- \nRod Taylor\n", "msg_date": "Thu, 14 Oct 2021 10:41:53 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, Oct 14, 2021 at 01:16:45PM +0200, Gilles Darold wrote:\n> Hi,\n> \n> \n> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n> \n\nGreat! Actually I found this very useful, especially for those people\nusing big fields (geometry, files, large texts).\n\n> The user defined columns are always visible in the PostgreSQL. If user\n> wants to hide some column(s) from a SELECT * returned values then the\n> hidden columns feature is useful. Hidden column can always be used and\n> returned by explicitly referring it in the query.\n> \n> I agree that views are done for that or that using a SELECT * is a bad\n> practice\n\nAn a common one, even if we want to think otherwise. I have found that\nin almost every customer I have the bad luck to get to see code or\nSELECTs.\n\nNot counting that sometimes we have columns for optimization like Dave\nsaved about hidden a ts_vector column.\n\nAnother use case I can think of is not covered in this patch, but it\ncould be (I hope!) or even if not I would like opinions on this idea. \nWhat about a boolean GUC log_hidden_column that throws a LOG message when \na hidden column is used directly?\n\nThe intention is to mark a to-be-deleted column as HIDDEN and then check\nthe logs to understand if is still being used somewhere. 
 I know systems\nwhere they carry the baggage of deprecated columns only because they\ndon't know if some system is still using them.\n\nI know this would be extending your original proposal, and understand if\nyou decide it is not first-patch material. \n\nAnyway, a +1 to your proposal. \n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:38:56 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 17:38, Jaime Casanova a écrit :\n> On Thu, Oct 14, 2021 at 01:16:45PM +0200, Gilles Darold wrote:\n>> Hi,\n>>\n>>\n>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>>\n> Great! Actually I found this very useful, especially for those people\n> using big fields (geometry, files, large texts).\n>\n>> The user defined columns are always visible in the PostgreSQL. If user\n>> wants to hide some column(s) from a SELECT * returned values then the\n>> hidden columns feature is useful. Hidden column can always be used and\n>> returned by explicitly referring it in the query.\n>>\n>> I agree that views are done for that or that using a SELECT * is a bad\n>> practice\n> And a common one, even if we want to think otherwise. I have found that\n> in almost every customer I have the bad luck to get to see code or\n> SELECTs.\n>\n> Not counting that sometimes we have columns for optimization, like the\n> tsvector column Dave mentioned hiding.\n>\n> Another use case I can think of is not covered in this patch, but it\n> could be (I hope!) or even if not I would like opinions on this idea.\n> What about a boolean GUC log_hidden_column that throws a LOG message when\n> a hidden column is used directly?\n>\n> The intention is to mark a to-be-deleted column as HIDDEN and then check\n> the logs to understand if it is still being used somewhere. 
 I know systems\n> where they carry the baggage of deprecated columns only because they\n> don't know if some system is still using them.\n>\n> I know this would be extending your original proposal, and understand if\n> you decide it is not first-patch material.\n\n\nWhy not, I will add it if there is a consensus about logging hidden \ncolumn use, this is not a lot of work.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 18:02:15 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> Le 14/10/2021 à 17:38, Jaime Casanova a écrit :\n>> On Thu, Oct 14, 2021 at 01:16:45PM +0200, Gilles Darold wrote:\n>>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n\n>> Another use case I can think of is not covered in this patch, but it\n>> could be (I hope!) or even if not I would like opinions on this idea.\n>> What about a boolean GUC log_hidden_column that throws a LOG message when\n>> a hidden column is used directly?\n\n> Why not, I will add it if there is a consensus about logging hidden \n> column use, this is not a lot of work.\n\nThis seems like a completely orthogonal idea. If you are trying\nto figure out whether you have any applications that depend on\ncolumn X (without breaking anything), you should absolutely not\nstart by marking the column \"hidden\", because that'll break the\ncase where the apps are expecting \"SELECT *\" to return the column.\nBut if you're okay with breaking things, you might as well just\ndrop the column, or else revoke SELECT privilege on it, and see\nwhat happens.\n\nI'm not sure about the utility of logging explicit references to a\nspecific column --- seems like grepping the results of \"log_statement\"\nwould serve. 
But in any case I think it is not a good idea to tie\nit to this proposal.\n\nAs for the proposal itself, I'm kind of allergic to the terminology\nyou've suggested, because the column is in no way hidden. It's\nstill visible in the catalogs, you can still select it explicitly,\netc. Anybody who thinks this is useful from a security standpoint\nis mistaken, but these words suggest that it is. Perhaps some\nterminology like \"not expanded\" or \"unexpanded\" would serve better\nto indicate that \"SELECT *\" doesn't expand to include the column.\nOr STAR versus NO STAR, maybe.\n\nI also do not care for the syntax you propose: AFAICS the only reason\nyou've gotten away with making HIDDEN not fully reserved is that you\nrequire it to be the last attribute of a column, which is something\nthat will trip users up all the time. Plus, it does not scale to the\nnext thing we might want to add. So if you can't make it a regular,\nposition-independent element of the ColQualList you shouldn't do it\nat all.\n\nWhat I think is actually important is the ALTER COLUMN syntax.\nWe could easily get away with having that be the only syntax for\nthis --- compare the precedent of ALTER COLUMN SET STATISTICS.\n\nBTW, you do NOT get to add an information_schema column for\nthis. The information_schema is defined in the SQL standard.\nYes, I'm aware that mysql feels free to \"extend\" the standard\nin that area; but our policy is that the only point of having the\ninformation_schema views at all is if they're standard-compliant.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 13:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "čt 14. 10. 
2021 v 13:17 odesílatel Gilles Darold <gilles@migops.com> napsal:\n>\n> Hi,\n>\n>\n> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>\n> The user defined columns are always visible in the PostgreSQL. If user\n> wants to hide some column(s) from a SELECT * returned values then the\n> hidden columns feature is useful. Hidden column can always be used and\n> returned by explicitly referring it in the query.\n>\n> I agree that views are done for that or that using a SELECT * is a bad\n> practice\n> but sometime we could need to \"technically\" prevent some columns to be part\n> of a star expansion and nbot be forced to use view+rules.\n\nJust to remind here, there was recently a proposal to handle this\nproblem another way - provide a list of columns to skip for \"star\nselection\" aka \"SELECT * EXCEPT col1...\".\n\nhttps://postgrespro.com/list/id/d51371a2-f221-1cf3-4a7d-b2242d4dafdb@gmail.com\n\n> For example when\n> upgrading a database schema where a column have been added to a table,\n> this will break any old version of the application that is using a\n> SELECT * on\n> this table. Being able to \"hide\" this column to such query will make\n> migration\n> easier.\n>\n> An other common use case for this feature is to implements temporal tables\n> or row versionning. On my side I see a direct interest in Oracle to\n> PostgreSQL\n> migration to emulate the ROWID system column without the hassle of creating\n> views, it will save lot of time.\n>\n> The other advantage over views is that the hidden column can still be used\n> in JOIN, WHERE, ORDER BY or GROUP BY clause which is not possible otherwise.\n> I don't talk about writing to complex view which would require a RULE.\n>\n> Hidden column is not part of the SQL standard but is implemented in all\n> other\n> RDBMS which is also called invisible columns [1] [2] [3] [4]. 
In all\n> these RDBMS\n> the feature is quite the same.\n>\n> [1] https://www.ibm.com/docs/en/db2/10.5?topic=concepts-hidden-columns\n> [2] https://oracle-base.com/articles/12c/invisible-columns-12cr1\n> [3]\n> https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-ver15\n> [4] https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html\n>\n>\n> Here is the full description of the proposal with a patch attached that\n> implements\n> the feature:\n>\n> 1) Creating hidden columns:\n>\n> A column visibility attribute is added to the column definition\n> of CREATE TABLE and ALTER TABLE statements. For example:\n>\n> CREATE TABLE htest1 (a bigserial HIDDEN, b text);\n>\n> ALTER TABLE htest1 ADD COLUMN c integer HIDDEN;\n>\n> Columns are visible by default.\n>\n> 2) Altering column visibility attribute:\n>\n> The ALTER TABLE statement can be used to change hidden columns to not\n> hidden and the opposite. Example:\n>\n> ALTER TABLE htest1 ALTER COLUMN c DROP HIDDEN;\n>\n> 3) Insert and hidden columns:\n>\n> If the column list of INSERT or COPY statements is empty\n> then while expanding column list hidden columns are NOT\n> included. DEFAULT or NULL values are inserted for hidden\n> columns in this case. 
Hidden column should be explicitly\n> referenced in the column list of INSERT and COPY statement\n> to insert a value.\n>\n> Example:\n>\n> -- Value 'one' is stored in column b and 1 in hidden column.\n> INSERT INTO t1 VALUES ('one');\n>\n> -- Value 2 is stored in hidden column and 'two' in b.\n> INSERT INTO htest1 (a, b) VALUES (2, 'two');\n>\n> 4) Star expansion for SELECT * statements:\n>\n> Hidden columns are not included in a column list while\n> expanding wild card '*' in the SELECT statement.\n>\n> Example:\n>\n> SELECT * FROM htest1;\n> b\n> ------\n> one\n> two\n>\n> Hidden columns are accessible when explicitly referenced\n> in the query.\n>\n> Example:\n> SELECT f1, f2 FROM t1;\n> a | b\n> ------+------\n> 1 | one\n> 2 | two\n>\n> 5) psql extended describe lists hidden columns.\n>\n> postgres=# \\d+ htest1\n> Table \"public.htest1\"\n> Column | Type | Collation | Nullable | Default | Visible | ...\n> --------+--------+-----------+----------+------------+---------+ ...\n> a | bigint | | not null | nextval... | hidden | ...\n> b | text | | | | | ...\n>\n> 6) When a column is flagged as hidden the attishidden column value of\n> table pg_attribute is set to true.\n>\n> 7) For hidden attributes, column is_hidden of table\n> information_schema.columns\n> is set to YES. By default the column is visible and the value is 'NO'.\n>\n> For a complete description of the feature, see chapter \"Hidden columns\" in\n> file doc/src/sgml/ddl.sgml after applying the patch.\n>\n>\n> The patch is a full implementation of this feture except that I sill have to\n> prevent a ALTER ... SET HIDDEN to be applied of there is no more visible\n> columns in the table after the change. I will do that when I will recover\n> more time.\n>\n> I have choose HIDDEN vs INVISIBLE but this could be a minor change or\n> we could use NOT EXPANDABLE. 
Personally I prefer the HIDDEN attribute.\n>\n>\n> Any thoughts and interest in this feature?\n>\n> --\n> Gilles Darold\n> http://www.migops.com/\n>\n\n\n", "msg_date": "Thu, 14 Oct 2021 20:01:55 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thursday, October 14, 2021, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Gilles Darold <gilles@migops.com> writes:\n> > Le 14/10/2021 à 17:38, Jaime Casanova a écrit :\n> >> On Thu, Oct 14, 2021 at 01:16:45PM +0200, Gilles Darold wrote:\n>\n> > Why not, I will add it if there is a consensus about logging hidden\n> > column use, this is not a lot of work.\n>\n> This seems like a completely orthogonal idea.\n\n\n>\n+1\n\n\n> As for the proposal itself, I'm kind of allergic to the terminology\n> you've suggested, because the column is in no way hidden. It's\n> still visible in the catalogs, you can still select it explicitly,\n> etc. Anybody who thinks this is useful from a security standpoint\n> is mistaken, but these words suggest that it is. Perhaps some\n> terminology like \"not expanded\" or \"unexpanded\" would serve better\n> to indicate that \"SELECT *\" doesn't expand to include the column.\n> Or STAR versus NO STAR, maybe.\n\n\nTaking this a bit further, I dislike tying the suppression of the column\nfrom the select-list star to the behavior of insert without a column list\nprovided. I’m not fully on board with having an attribute that is not\nfundamental to the data model but rather an instruction about how that\ncolumn interacts with SQL; separating the two aspects, though, would help.\nI accept the desire to avoid star expansion much more than default columns\nfor insert. 
Especially since the most compelling example of the latter, not\nhaving to specify generated columns on insert, would directly conflict with\nthe fact that it is those generated columns that are most likely to be\nuseful to display when specifying a star in the select query.\n\n\n\n> What I think is actually important is the ALTER COLUMN syntax.\n> We could easily get away with having that be the only syntax for\n> this --- compare the precedent of ALTER COLUMN SET STATISTICS.\n\n\n+1\n\n\n>\n> BTW, you do NOT get to add an information_schema column for\n> this.\n\n\nFWIW, +1, though the project policy reminder does stand on its own.\n\nDavid J.", "msg_date": "Thu, 14 Oct 2021 11:08:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Taking this a bit further, I dislike tying the suppression of the column\n> from the select-list star to the behavior of insert without a column list\n> provided.  I’m not fully on board with having an attribute that is not\n> fundamental to the data model but rather an instruction about how that\n> column interacts with SQL; separating the two aspects, though, would help.\n> I accept the desire to avoid star expansion much more than default columns\n> for insert.\n\nYeah, me too.  I think it would add a lot of clarity if we defined this\nas \"this affects the behavior of SELECT * and nothing else\" ... 
although\neven then, there are squishy questions about how much it affects the\nbehavior of composite datums that are using the column's rowtype.\nBut as soon as you want it to bleed into INSERT, you start having a\nlot of questions about what else it should bleed into, as Aleksander\nalready mentioned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:26:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 19:44, Tom Lane a écrit :\n> As for the proposal itself, I'm kind of allergic to the terminology\n> you've suggested, because the column is in no way hidden. It's\n> still visible in the catalogs, you can still select it explicitly,\n> etc. Anybody who thinks this is useful from a security standpoint\n> is mistaken, but these words suggest that it is. Perhaps some\n> terminology like \"not expanded\" or \"unexpanded\" would serve better\n> to indicate that \"SELECT *\" doesn't expand to include the column.\n> Or STAR versus NO STAR, maybe.\n\n\nAgreed, I also had this feeling. I decided to use HIDDEN like in DB2 just \nbecause UNEXPANDED looks difficult for users to understand, and because \nhidden or invisible columns are well known. This is a kind of \"vendor \nstandard\" now. But I agree that it can confuse uninformed people and \ndoesn't reflect the real feature. I will rename the keyword to \n\"UNEXPANDED\".\n\n\n> I also do not care for the syntax you propose: AFAICS the only reason\n> you've gotten away with making HIDDEN not fully reserved is that you\n> require it to be the last attribute of a column, which is something\n> that will trip users up all the time. Plus, it does not scale to the\n> next thing we might want to add. 
So if you can't make it a regular,\n> position-independent element of the ColQualList you shouldn't do it\n> at all.\n\n\nYes I have also noted that and wanted to improve this later if the \nproposal was accepted.\n\n\n> What I think is actually important is the ALTER COLUMN syntax.\n> We could easily get away with having that be the only syntax for\n> this --- compare the precedent of ALTER COLUMN SET STATISTICS.\n\n\nOk great, I'm fine with that, especially for the previous point :-) I \nwill remove it from the CREATE TABLE syntax except in the INCLUDING like \noption.\n\n\n> BTW, you do NOT get to add an information_schema column for\n> this. The information_schema is defined in the SQL standard.\n> Yes, I'm aware that mysql feels free to \"extend\" the standard\n> in that area; but our policy is that the only point of having the\n> information_schema views at all is if they're standard-compliant.\n\nOk, I will remove it.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 20:26:39 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "I wrote:\n> Yeah, me too. I think it would add a lot of clarity if we defined this\n> as \"this affects the behavior of SELECT * and nothing else\" ... although\n> even then, there are squishy questions about how much it affects the\n> behavior of composite datums that are using the column's rowtype.\n\nRe-reading that, I realize I probably left way too much unstated,\nso let me spell it out.\n\nShould this feature affect\n\tSELECT * FROM my_table t;\n? Yes, absolutely.\n\nHow about\n\tSELECT t.* FROM my_table t;\n? Yup, one would think so.\n\nNow how about\n\tSELECT row_to_json(t.*) FROM my_table t;\n? 
All of a sudden, I'm a lot less sure --- not least because we *can't*\nsimply omit some columns, without the composite datum suddenly not being\nof the table's rowtype anymore, which could have unexpected effects on\nquery semantics. In particular, if we have a user-defined function\nthat's defined to accept composite type my_table, I don't think we can\nsuppress columns in\n\tSELECT myfunction(t.*) FROM my_table t;\n\nAnd don't forget that these can also be spelled like\n\tSELECT row_to_json(t) FROM my_table t;\nwithout any star visible anywhere.\n\nSo the more I think about this, the squishier it gets. I'm now sharing\nthe fears expressed upthread about whether it's even possible to define\nthis in a way that won't have a lot of gotchas.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:43:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 20:43, Tom Lane a écrit :\n> Re-reading that, I realize I probably left way too much unstated,\n> so let me spell it out.\n>\n> Should this feature affect\n> \tSELECT * FROM my_table t;\n> ? Yes, absolutely.\n>\n> How about\n> \tSELECT t.* FROM my_table t;\n> ? Yup, one would think so.\n>\n> Now how about\n> \tSELECT row_to_json(t.*) FROM my_table t;\n> ? All of a sudden, I'm a lot less sure --- not least because we *can't*\n> simply omit some columns, without the composite datum suddenly not being\n> of the table's rowtype anymore, which could have unexpected effects on\n> query semantics. In particular, if we have a user-defined function\n> that's defined to accept composite type my_table, I don't think we can\n> suppress columns in\n> \tSELECT myfunction(t.*) FROM my_table t;\n>\n> And don't forget that these can also be spelled like\n> \tSELECT row_to_json(t) FROM my_table t;\n> without any star visible anywhere.\n>\n> So the more I think about this, the squishier it gets. 
I'm now sharing\n> the fears expressed upthread about whether it's even possible to define\n> this in a way that won't have a lot of gotchas.\n>\n> \t\t\tregards, tom lane\n\n\nYou mean this ? :-)\n\n\ngilles=# CREATE TABLE htest0 (a int PRIMARY KEY, b text NOT NULL HIDDEN);\nCREATE TABLE\ngilles=# INSERT INTO htest0 (a, b) VALUES (1, 'htest0 one');\nINSERT 0 1\ngilles=# INSERT INTO htest0 (a, b) VALUES (2, 'htest0 two');\nINSERT 0 1\n\ngilles=# SELECT * FROM htest0 t;\n  a\n---\n  1\n  2\n(2 rows)\n\ngilles=# SELECT t.* FROM htest0 t;\n  a\n---\n  1\n  2\n(2 rows)\n\ngilles=# SELECT row_to_json(t.*) FROM htest0 t;\n        row_to_json\n--------------------------\n  {\"a\":1,\"b\":\"htest0 one\"}\n  {\"a\":2,\"b\":\"htest0 two\"}\n(2 rows)\n\ngilles=# SELECT row_to_json(t) FROM htest0 t;\n        row_to_json\n--------------------------\n  {\"a\":1,\"b\":\"htest0 one\"}\n  {\"a\":2,\"b\":\"htest0 two\"}\n(2 rows)\n\n\nYou should have a look at the patch, I don't think that the way it is \ndone there could have gotchas.\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 20:55:10 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 20:26, Tom Lane a écrit :\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> Taking this a bit further, I dislike tying the suppression of the column\n>> from the select-list star to the behavior of insert without a column list\n>> provided. I’m not fully on board with having an attribute that is not\n>> fundamental to the data model but rather an instruction about how that\n>> column interacts with SQL; separating the two aspects, though, would help.\n>> I accept the desire to avoid star expansion much more than default columns\n>> for insert.\n> Yeah, me too. I think it would add a lot of clarity if we defined this\n> as \"this affects the behavior of SELECT * and nothing else\" ... 
although\n> even then, there are squishy questions about how much it affects the\n> behavior of composite datums that are using the column's rowtype.\n> But as soon as you want it to bleed into INSERT, you start having a\n> lot of questions about what else it should bleed into, as Aleksander\n> already mentioned.\n\n\nI not agree, expansion in executed when there is no column list provided \nand this affect SELECT and INSERT. It cover the same needs: being able \nto remove a column for the target list when it is not explicitly set. \nThis feature is known like this and I'm not in favor to tear off a leg.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 21:00:17 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 20:55, Gilles Darold a écrit :\n>\n> gilles=# SELECT row_to_json(t.*) FROM htest0 t;\n>        row_to_json\n> --------------------------\n>  {\"a\":1,\"b\":\"htest0 one\"}\n>  {\"a\":2,\"b\":\"htest0 two\"}\n> (2 rows)\n>\n> gilles=# SELECT row_to_json(t) FROM htest0 t;\n>        row_to_json\n> --------------------------\n>  {\"a\":1,\"b\":\"htest0 one\"}\n>  {\"a\":2,\"b\":\"htest0 two\"}\n> (2 rows)\n\n\nTom, I have probably not well understood what you said about do the \ncases above. Do you mean that the column should not be visible too? I \nhave though not but maybe I'm wrong, I will fix that.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 21:35:00 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/21 07:01, Josef Šimánek wrote:\n> čt 14. 10. 2021 v 13:17 odesílatel Gilles Darold <gilles@migops.com> napsal:\n>> Hi,\n>>\n>>\n>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>>\n>> The user defined columns are always visible in the PostgreSQL. 
If user\n>> wants to hide some column(s) from a SELECT * returned values then the\n>> hidden columns feature is useful. Hidden column can always be used and\n>> returned by explicitly referring it in the query.\n>>\n>> I agree that views are done for that or that using a SELECT * is a bad\n>> practice\n>> but sometime we could need to \"technically\" prevent some columns to be part\n>> of a star expansion and nbot be forced to use view+rules.\n> Just to remind here, there was recently a proposal to handle this\n> problem another way - provide a list of columns to skip for \"star\n> selection\" aka \"SELECT * EXCEPT col1...\".\n>\n> https://postgrespro.com/list/id/d51371a2-f221-1cf3-4a7d-b2242d4dafdb@gmail.com\n\n[...]\n\nI feel using EXCEPT would be a lot clearer, no one is likely to be \nmislead into thinking that its is a security feature unlike 'HIDDEN'.  \nAlso you know that SELECT * will select all columns.\n\nIf this kind of feature were to be added, then I'd give a +1 to use the \nEXCEPT syntax.\n\n\nCheers,\nGavin\n\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 09:01:59 +1300", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 14/10/2021 à 22:01, Gavin Flower a écrit :\n> On 15/10/21 07:01, Josef Šimánek wrote:\n>> čt 14. 10. 2021 v 13:17 odesílatel Gilles Darold <gilles@migops.com>\n>> napsal:\n>>> Hi,\n>>>\n>>>\n>>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>>>\n>>> The user defined columns are always visible in the PostgreSQL. If user\n>>> wants to hide some column(s) from a SELECT * returned values then the\n>>> hidden columns feature is useful. 
Hidden column can always be used and\n>>> returned by explicitly referring it in the query.\n>>>\n>>> I agree that views are done for that or that using a SELECT * is a bad\n>>> practice\n>>> but sometime we could need to \"technically\" prevent some columns to\n>>> be part\n>>> of a star expansion and nbot be forced to use view+rules.\n>> Just to remind here, there was recently a proposal to handle this\n>> problem another way - provide a list of columns to skip for \"star\n>> selection\" aka \"SELECT * EXCEPT col1...\".\n>>\n>> https://postgrespro.com/list/id/d51371a2-f221-1cf3-4a7d-b2242d4dafdb@gmail.com\n>>\n>\n> [...]\n>\n> I feel using EXCEPT would be a lot clearer, no one is likely to be\n> mislead into thinking that its is a security feature unlike 'HIDDEN'. \n> Also you know that SELECT * will select all columns.\n>\n> If this kind of feature were to be added, then I'd give a +1 to use\n> the EXCEPT syntax.\n\n\nI don't think that the EXCEPT syntax will be adopted as it change the\nSQL syntax for SELECT in a non standard way. 
This is not the case of the\nhidden column feature, which doesn't touch the SELECT or INSERT syntax.\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 23:03:23 +0200", "msg_from": "Gilles Darold <gillesdarold@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi hackers,\n\n> > Just to remind here, there was recently a proposal to handle this\n> > problem another way - provide a list of columns to skip for \"star\n> > selection\" aka \"SELECT * EXCEPT col1...\".\n> >\n> > https://postgrespro.com/list/id/d51371a2-f221-1cf3-4a7d-b2242d4dafdb@gmail.com\n>\n> [...]\n>\n> I feel using EXCEPT would be a lot clearer, no one is likely to be\n> mislead into thinking that its is a security feature unlike 'HIDDEN'.\n> Also you know that SELECT * will select all columns.\n>\n> If this kind of feature were to be added, then I'd give a +1 to use the\n> EXCEPT syntax.\n\n+1 to that, personally I would love to have SELECT * EXCEPT ... syntax\nin PostgreSQL. Also, I discovered this feature was requested even\nearlier, in 2007 [1]\n\n> I don't think that the EXCEPT syntax will be adopted as it change the\n> SQL syntax for SELECT in a non standard way. This is not the case of the\n> hidden column feature which doesn't touch of the SELECT or INSERT syntax.\n\nHIDDEN columns affect SELECT and INSERT behaviour in the same\nnon-standard way, although maybe without changing the syntax.\nPersonally, I believe this is even worse. The difference is that with\n`SELECT * EXCEPT` you explicitly state what you want, while HIDDEN\ncolumns do this implicitly. Extending the syntax beyond standards in a\nreasonable way doesn't seem to be a problem. As a recent example in\nthis thread [2] the community proposed to change the syntax in\nmultiple places at the same time.\n\n`SELECT * EXCEPT` solves the same problem as HIDDEN columns, but is\nmuch easier to implement and maintain. 
Since it's a simple syntax\nsugar it doesn't affect the rest of the system.\n\n[1]: https://www.postgresql.org/message-id/flat/8A38B86D9187B34FA18766E261AB3AEA0D2072%40sageograma.GEO-I.local\n[2]: https://www.postgresql.org/message-id/flat/CAJ7c6TPx7N-bVw0dZ1ASCDQKZJHhBYkT6w4HV1LzfS%2BUUTUfmA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 15 Oct 2021 10:47:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 15/10/2021 à 09:47, Aleksander Alekseev a écrit :\n>\n>>> Just to remind here, there was recently a proposal to handle this\n>>> problem another way - provide a list of columns to skip for \"star\n>>> selection\" aka \"SELECT * EXCEPT col1...\".\n>>>\n>>> https://postgrespro.com/list/id/d51371a2-f221-1cf3-4a7d-b2242d4dafdb@gmail.com\n>> [...]\n>>\n>> I feel using EXCEPT would be a lot clearer, no one is likely to be\n>> mislead into thinking that its is a security feature unlike 'HIDDEN'.\n>> Also you know that SELECT * will select all columns.\n>>\n>> If this kind of feature were to be added, then I'd give a +1 to use the\n>> EXCEPT syntax.\n> +1 to that, personally I would love to have SELECT * EXCEPT ... syntax\n> in PostgreSQL. Also, I discovered this feature was requested even\n> earlier, in 2007 [1]\n>\n>> I don't think that the EXCEPT syntax will be adopted as it change the\n>> SQL syntax for SELECT in a non standard way. This is not the case of the\n>> hidden column feature which doesn't touch of the SELECT or INSERT syntax.\n> HIDDEN columns affect SELECT and INSERT behaviour in the same\n> non-standard way, although maybe without changing the syntax.\n> Personally, I believe this is even worse. The difference is that with\n> `SELECT * EXCEPT` you explicitly state what you want, while HIDDEN\n> columns do this implicitly. 
Extending the syntax beyond standards in a\n> reasonable way doesn't seem to be a problem. As a recent example in\n> this thread [2] the community proposed to change the syntax in\n> multiple places at the same time.\n>\n> `SELECT * EXCEPT` solves the same problem as HIDDEN columns, but is\n> much easier to implement and maintain. Since it's a simple syntax\n> sugar it doesn't affect the rest of the system.\n\n\nThat's not true, this is not the same feature. The EXCEPT clause will \nnot return the columns that you don't want in a specific query. I have \nnothing against that, but you have to name them explicitly. Think of the \nkind of bad design that we commonly find, like a table with columns \nattribute1 ... attribute20. If we could use a regexp with EXCEPT, like \n'attribute\\d+', that could be helpful too. But this is another thread.\n\n\nThe hidden column feature hides the column for all queries using the \nwildcard on the table concerned. For example, if I have to import a \ndatabase with OIDs enabled from an old dump and I want to prevent the OID \ncolumn from being returned through the star, I can turn the column hidden \nand will not have to modify my old, perfectly good application. I \ncaricature, but this is the kind of thing that could happen. I see \nseveral other possible uses of this feature in extensions that could use \na technical column that the user must not see through the wildcard. Also, \nas Vik and Dave mention, being able to hide all tsvector columns without \nhaving to list them as an exception in every query can save some time.\n\n\nIMHO this is definitely not the same feature.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 10:19:01 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi Gilles,\n\n> I can turn the column hidden and I will not have to modify my old very\ngood application.\n\nI see your point. 
At the same time, I believe the statement above shows the\nroot reason why we have a different view on this feature. The application\nshould never have used SELECT * in the first place. This is a terrible\ndesign - you add a column or change the column order and the application is\nbroken. And I don't believe the DBMS core is the right place for placing\nhacks for applications like this. This should be solved in the application\nitself or in some sort of proxy server between the application and the DBMS.\nSELECT * is intended to be used by people, e.g. DBAs.\n\n> Also as Vik or Dave mention being able to hide all tsvector columns from\nquery without\n> having to specify it as exception in each query used can save some time.\n\nAgree, this sometimes can be inconvenient. But I don't think there are many\ncases when you have a table with tens of columns you want to hide. SELECT *\nEXCEPT should work just fine for 1 or 2 columns. For other cases, you can\nsimply create a VIEW.\n\n-- \nBest regards,\nAleksander Alekseev\n\n", "msg_date": "Fri, 15 Oct 2021 11:37:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/2021 at 10:37, Aleksander Alekseev wrote:\n> Hi Gilles,\n>\n> > I can turn the column hidden and I will not have to modify my old \n> very good application.\n>\n> I see your point. At the same time, I believe the statement above \n> shows the root reason why we have a different view on this feature. \n> The application should never have used SELECT * in the first place. \n> This is a terrible design - you add a column or change the column \n> order and the application is broken. And I don't believe the DBMS core \n> is the right place for placing hacks for applications like this. This \n> should be solved in the application itself or in some sort of proxy \n> server between the application and the DBMS. SELECT * is intended to \n> be used by people, e.g. DBAs.\n\n\nYes, I understand this point. Personally I have always used PostgreSQL, \nand exclusively PostgreSQL, for 25 years, so I am aware of that and try \nto give my best to SQL code quality. But we have more and more \napplications coming from other RDBMS, sometimes with no real possibility \nto modify the code, or which would require a lot of work. To give \nanother use case, some time ago I wrote an extension \n(https://github.com/darold/pgtt-rsl) which uses a technical column, \nbased on a composite type built from the backend start time and pid, to \nemulate Global Temporary Tables. To be able to hide this column from the \nuser query point of view, I had to create a view and route any action on \nthis view to the real underlying table in the extension C code. If the \nhidden feature had been implemented it would have saved me some time. 
I see several other possible extensions that \ncould benefit from this feature.\n\n\nAs I said, when you develop an extension you cannot just tell the user \nnever to use SELECT * if they want to use your extension. At least this \nis something I will never say, even if it is a bad practice, so I have \nto find a solution to avoid showing technical columns. If we really want \nSELECT * to be reserved for DBAs, then why not remove the star from PG \nunless you have the admin privilege?\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 11:16:04 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, 2021-10-14 at 13:16 +0200, Gilles Darold wrote:\n> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n> \n> The user defined columns are always visible in the PostgreSQL. If user\n> wants to hide some column(s) from a SELECT * returned values then the\n> hidden columns feature is useful. Hidden column can always be used and\n> returned by explicitly referring it in the query.\n\nWhen I read your proposal, I had strangely mixed feelings:\n\"This is cute!\" versus \"Do we need that?\". After some thinking, I think\nthat it boils down to the following:\n\nThat feature is appealing to people who type SQL statements into psql,\nwhich is probably the majority of the readers on this list. It is\nimmediately clear that this can be used for all kinds of nice things.\n\nOn the other hand: a relational database is not a spreadsheet, where\nI want to hide or highlight columns. 
Sure, the interactive user may\nuse it in that way, but that is not the target of a relational database.\nDatabases usually are not user visible, but used by an application.\nSo the appeal for the interactive user is really pretty irrelevant.\n\nNow this patch makes certain things easier, but it adds no substantially\nnew functionality: I can exclude a column from display as it is, simply\nby listing all the other columns. Sure, that's a pain for the interactive\nuser, but it is irrelevant for a query in an application.\n\nThis together with the fact that it poses complicated questions when\nwe dig deeper, such as \"what about whole-row references?\", tilts my vote.\nIf it were for free, I would say +1. But given the ratio of potential\nheadache versus added real-life benefit, I find myself voting -1.\n\nStill, it is cute!\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 11:32:53 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi Gilles,\n\n> But we have more and more application coming from others RDBMS with sometime\n> no real possibility to modify the code or which requires lot of work.\n\nSomehow I feel everyone here very well understood the real motivation\nbehind this\nproposal from the beginning, considering the e-mail of the author. 
And came to\nhis or her own conclusions.\n\n> If we really want SELECT * to be reserved to DBA then why not removing the\n> star from PG unless you have the admin privilege?\n\nRespectfully, I perceive this as trolling (presumably a non-intentional one)\nand am not going to answer this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:24:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/2021 at 14:24, Aleksander Alekseev wrote:\n> Hi Gilles,\n>\n>> If we really want SELECT * to be reserved to DBA then why not removing the\n>> star from PG unless you have the admin privilege?\n> Respectfully, I perceive this as a trolling (presumably, non-intentional one)\n> and not going to answer this.\n\n\nYes, I didn't want to offend you or to troll. This was just to point out \nthat the position of \"SELECT * is bad practice\" is not a good argument \nin my point of view, just because it is allowed for everyone. I mean \nthat in an extension, or a client which allows user query input, we must \nhandle the case.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:29:13 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Fri, 15 Oct 2021 at 09:29, Gilles Darold <gilles@migops.com> wrote:\n\n> On 15/10/2021 at 14:24, Aleksander Alekseev wrote:\n> > Hi Gilles,\n> >\n> >> If we really want SELECT * to be reserved to DBA then why not removing\n> the\n> >> star from PG unless you have the admin privilege?\n> > Respectfully, I perceive this as a trolling (presumably, non-intentional\n> one)\n> > and not going to answer this.\n>\n>\n> Yes, I don't wanted to offend you or to troll. 
This was just to point\n> that the position of \"SELECT * is bad practice\" is not a good argument\n> in my point of view, just because it is allowed for every one. I mean\n> that in an extension or a client which allow user query input we must\n> handle the case.\n>\n>\nThis would break an awful lot of apps.\n\nDave\n\n", "msg_date": "Fri, 15 Oct 2021 09:39:36 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi Gilles,\n\n> Yes, I don't wanted to offend you or to troll. This was just to point\n> that the position of \"SELECT * is bad practice\" is not a good argument\n> in my point of view, just because it is allowed for every one. I mean\n> that in an extension or a client which allow user query input we must\n> handle the case.\n\nSure, no worries. And my apologies if my feedback seemed a little harsh.\n\nI'm sure our goal is mutual - to make PostgreSQL even better than it\nis now. 
Finding a consensus occasionally can take time though.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 15 Oct 2021 19:42:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Fri, Oct 15, 2021 at 9:40 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n>\n> On Fri, 15 Oct 2021 at 09:29, Gilles Darold <gilles@migops.com> wrote:\n>>\n>> Yes, I don't wanted to offend you or to troll. This was just to point\n>> that the position of \"SELECT * is bad practice\" is not a good argument\n>> in my point of view, just because it is allowed for every one. I mean\n>> that in an extension or a client which allow user query input we must\n>> handle the case.\n>\n> This would break an awful lot of apps.\n\nWhich is also why allowing to hide some custom columns from a \"SELECT\n*\" is powerful. It's no doubt a niche usage, but as Gilles mentioned\nextensions can make use of that to build interesting things. If DBAs\ncan also make use of it to ease manual queries when the client apps are\ncorrectly written, that's icing on the cake.\n\n\n", "msg_date": "Sat, 16 Oct 2021 01:34:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Fri, Oct 15, 2021 at 11:32:53AM +0200, Laurenz Albe wrote:\n> On Thu, 2021-10-14 at 13:16 +0200, Gilles Darold wrote:\n> > Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n> > \n> > The user defined columns are always visible in the PostgreSQL. If user\n> > wants to hide some column(s) from a SELECT * returned values then the\n> > hidden columns feature is useful. Hidden column can always be used and\n> > returned by explicitly referring it in the query.\n> \n> When I read your proposal, I had strangely mixed feelings:\n> \"This is cute!\" versus \"Do we need that?\". 
After some thinking, I think\n> that it boils down to the following:\n> \n> That feature is appealing to people who type SQL statements into psql,\n> which is probably the majority of the readers on this list. It is\n> immediately clear that this can be used for all kinds of nice things.\n> \n> On the other hand: a relational database is not a spreadsheet, where\n> I want to hide or highlight columns. Sure, the interactive user may\n> use it in that way, but that is not the target of a relational database.\n> Databases usually are not user visible, but used by an application.\n> So the appeal for the interactive user is really pretty irrelevant.\n> \n> Now this patch makes certain things easier, but it adds no substantially\n> new functionality: I can exclude a column from display as it is, simply\n> by listing all the other columns. Sure, that's a pain for the interactive\n> user, but it is irrelevant for a query in an application.\n> \n> This together with the fact that it poses complicated questions when\n> we dig deeper, such as \"what about whole-row references?\", tilts my vote.\n> If it were for free, I would say +1. But given the ratio of potential\n> headache versus added real-life benefit, I find myself voting -1.\n\nI can see the usefulness of this, though UNEXPANDED seems clearer. \nHowever, it also is likely to confuse someone who does SELECT * and then\ncan't figure out why another query is showing a column that doesn't\nappear in SELECT *. I do think SELECT * EXCEPT is the better and less\nconfusing solution. I can imagine people using different EXCEPT columns\nfor different queries, which HIDDEN/UNEXPANDED does not allow. 
I\nfrankly can't think of a single case where output is specified at the\nDDL level.\n\nWhy is this not better addressed by creating a view on the original\ntable, even perhaps renaming the original table and creating a view using\nthe old table name?\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 14:51:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "\nOn 10/15/21 2:51 PM, Bruce Momjian wrote:\n> On Fri, Oct 15, 2021 at 11:32:53AM +0200, Laurenz Albe wrote:\n>> On Thu, 2021-10-14 at 13:16 +0200, Gilles Darold wrote:\n>>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>>>\n>>> The user defined columns are always visible in the PostgreSQL. If user\n>>> wants to hide some column(s) from a SELECT * returned values then the\n>>> hidden columns feature is useful. Hidden column can always be used and\n>>> returned by explicitly referring it in the query.\n>> When I read your proposal, I had strangely mixed feelings:\n>> \"This is cute!\" versus \"Do we need that?\". After some thinking, I think\n>> that it boils down to the following:\n>>\n>> That feature is appealing to people who type SQL statements into psql,\n>> which is probably the majority of the readers on this list. It is\n>> immediately clear that this can be used for all kinds of nice things.\n>>\n>> On the other hand: a relational database is not a spreadsheet, where\n>> I want to hide or highlight columns. 
Sure, the interactive user may\n>> use it in that way, but that is not the target of a relational database.\n>> Databases usually are not user visible, but used by an application.\n>> So the appeal for the interactive user is really pretty irrelevant.\n>>\n>> Now this patch makes certain things easier, but it adds no substantially\n>> new functionality: I can exclude a column from display as it is, simply\n>> by listing all the other columns. Sure, that's a pain for the interactive\n>> user, but it is irrelevant for a query in an application.\n>>\n>> This together with the fact that it poses complicated questions when\n>> we dig deeper, such as \"what about whole-row references?\", tilts my vote.\n>> If it were for free, I would say +1. But given the ratio of potential\n>> headache versus added real-life benefit, I find myself voting -1.\n> I can see the usefulness of this, though UNEXPANDED seems clearer. \n> However, it also is likely to confuse someone who does SELECT * and then\n> can't figure out why another query is showing a column that doesn't\n> appear in SELECT *. I do think SELECT * EXCEPT is the better and less\n> confusing solution. I can imagine people using different EXCEPT columns\n> for different queries, which HIDDEN/UNEXPANDED does not allow. I\n> frankly can't think of a single case where output is specified at the\n> DDL level.\n>\n> Why is this not better addressed by creating a view on the original\n> table, even perhaps renaming the original table and create a view using\n> the old table name.\n\n\nThat's pretty much my feeling. 
This seems a bit too cute.\n\n\nI have a little function I use to create a skeleton query on tables with\nlots of columns just so I can delete a few and leave the rest, a problem\nthat would be solved neatly by the EXCEPT proposal but not by the\nHIDDEN proposal.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:52:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/2021 at 21:52, Andrew Dunstan wrote:\n> On 10/15/21 2:51 PM, Bruce Momjian wrote:\n>> On Fri, Oct 15, 2021 at 11:32:53AM +0200, Laurenz Albe wrote:\n>>> On Thu, 2021-10-14 at 13:16 +0200, Gilles Darold wrote:\n>>>> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.\n>>>>\n>>>> The user defined columns are always visible in the PostgreSQL. If user\n>>>> wants to hide some column(s) from a SELECT * returned values then the\n>>>> hidden columns feature is useful. Hidden column can always be used and\n>>>> returned by explicitly referring it in the query.\n>>> When I read your proposal, I had strangely mixed feelings:\n>>> \"This is cute!\" versus \"Do we need that?\". After some thinking, I think\n>>> that it boils down to the following:\n>>>\n>>> That feature is appealing to people who type SQL statements into psql,\n>>> which is probably the majority of the readers on this list. It is\n>>> immediately clear that this can be used for all kinds of nice things.\n>>>\n>>> On the other hand: a relational database is not a spreadsheet, where\n>>> I want to hide or highlight columns. 
Sure, the interactive user may\n>>> use it in that way, but that is not the target of a relational database.\n>>> Databases usually are not user visible, but used by an application.\n>>> So the appeal for the interactive user is really pretty irrelevant.\n>>>\n>>> Now this patch makes certain things easier, but it adds no substantially\n>>> new functionality: I can exclude a column from display as it is, simply\n>>> by listing all the other columns. Sure, that's a pain for the interactive\n>>> user, but it is irrelevant for a query in an application.\n>>>\n>>> This together with the fact that it poses complicated questions when\n>>> we dig deeper, such as \"what about whole-row references?\", tilts my vote.\n>>> If it were for free, I would say +1. But given the ratio of potential\n>>> headache versus added real-life benefit, I find myself voting -1.\n>> I can see the usefulness of this, though UNEXPANDED seems clearer.\n>> However, it also is likely to confuse someone who does SELECT * and then\n>> can't figure out why another query is showing a column that doesn't\n>> appear in SELECT *. I do think SELECT * EXCEPT is the better and less\n>> confusing solution. I can imagine people using different EXCEPT columns\n>> for different queries, which HIDDEN/UNEXPANDED does not allow. I\n>> frankly can't think of a single case where output is specified at the\n>> DDL level.\n>>\n>> Why is this not better addressed by creating a view on the original\n>> table, even perhaps renaming the original table and create a view using\n>> the old table name.\n>\n> That's pretty much my feeling. 
This seems a bit too cute.\n>\n>\n> I have a little function I use to create a skeleton query on tables with\n> lots of columns just so I can delete a few and leave the rest, a problem\n> that would be solved neatly by the EXCEPT proposal and not but the\n> HIDDEN proposal.\n>\n\nI have nothing against seeing EXCEPT included into core, except that \nthis is a big departure from the SQL standard and I doubt that I will \npersonally use it, for portability reasons. That said, this syntax will \nalso encourage the use of SELECT *, which is in contradiction with the \ncommon opinion.\n\n\nBut again, I don't think this is the same feature; the only thing \nSELECT * EXCEPT is useful for is a single non-portable statement. It \ndoes not help to extend PostgreSQL through extensions, nor does it solve \napplication migration issues. I'm a bit surprised by this confusion with \nthe EXCEPT syntax.\n\n\n-- \nGilles Darold\n", "msg_date": "Fri, 15 Oct 2021 23:42:40 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/2021 at 18:42, Aleksander Alekseev wrote:\n> Hi Gilles,\n>\n>> Yes, I don't wanted to offend you or to troll. This was just to point\n>> that the position of \"SELECT * is bad practice\" is not a good argument\n>> in my point of view, just because it is allowed for every one. I mean\n>> that in an extension or a client which allow user query input we must\n>> handle the case.\n> Sure, no worries. And my apologies if my feedback seemed a little harsh.\n>\n> I'm sure our goal is mutual - to make PostgreSQL even better than it\n> is now. Finding a consensus occasionally can take time though.\n>\nRight, no problem Aleksander, my English speaking and understanding are\nnot very good, which doesn't help either.  
Let's have a beer next time :-)\n\n\n\n", "msg_date": "Sat, 16 Oct 2021 08:57:48 +0200", "msg_from": "Gilles Darold <gillesdarold@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 15/10/2021 at 20:51, Bruce Momjian wrote:\n> Why is this not better addressed by creating a view on the original\n> table, even perhaps renaming the original table and create a view using\n> the old table name.\n\nBecause when you use the view for the SELECT you cannot use the\n\"hidden\" column in your query, for example in the WHERE or ORDER BY\nclause.  Also, if you have a hundred tables, let's say with a\ntsvector column that you want to unexpand, you will have to create a\nhundred views.  The other problem is writes through the view: if you\nhave a complex modification involving other tables in the query, you\nhave to define rules. Handling a technical column through a view over\nthe real table requires a lot of work; this feature will help a lot to\nsave that time.\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Sat, 16 Oct 2021 09:10:13 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Hi,\n\n\nHere is a new version of the patch for the hidden column feature with \nthe following changes:\n\n\n   - Rename HIDDEN to UNEXPANDED and replace all references to hidden \ncolumn with unexpanded column\n\n   - Remove changes in the information_schema\n\n   - Limit use of the UNEXPANDED attribute to ALTER COLUMN SET/DROP \ncommands.\n\n   - Add a check into SET UNEXPANDED code to verify that there is at \nleast one column expanded.\n\n   - Verify that INSERT INTO table SELECT * FROM table respects the \nunexpanded column feature.\n\n   - Verify that the RETURNING * clause also respects the unexpanded \ncolumn feature.\n\n\nI have kept the behavior of functions using the wildcard *, which does \nnot take 
care of the unexpanded column attribute.\n\n\nI have not thought of other gotchas for the moment; I will update the \npatch if other cases come up. In psql the Expanded information is \ndisplayed when using \\d+; perhaps it would be better to show this \ninformation directly with \\d so that it is immediately visible.\n\n\n-- \nGilles Darold", "msg_date": "Sun, 17 Oct 2021 23:01:08 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 10/17/21 11:01 PM, Gilles Darold wrote:\n> \n>   - Add a check into SET UNEXPANDED code to verify that there is at\n> least one column expanded.\n\nWhat is the point of this? Postgres allows column-less tables.\n\nBoth of these statements are valid:\n\n - CREATE TABLE nada ();\n - SELECT;\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:04:15 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 17/10/2021 at 23:04, Vik Fearing wrote:\n> On 10/17/21 11:01 PM, Gilles Darold wrote:\n>>   - Add a check into SET UNEXPANDED code to verify that there is at\n>> least one column expanded.\n> What is the point of this? Postgres allows column-less tables.\n>\n> Both of these statements are valid:\n>\n> - CREATE TABLE nada ();\n> - SELECT;\n\n\nYes, my first thought was to allow all columns to be unexpandable, like \na table without columns, but the problem is that when you execute\n\"SELECT * FROM nada\" it returns no rows, which is not the case for a \ntable with hidden columns. I could fix that to return no rows if all \ncolumns are unexpandable, but I think that hiding all columns is \nnonsense, so I have preferred not to allow it and an error is raised.\n\n\nAlso, I've just realized that applying the unexpandable column feature \nto plpgsql breaks the use of ROWTYPE. 
It contains all columns so when used\nas a variable to receive a SELECT * or RETURNING * INTO it will not\nwork, I will try to fix that.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Sun, 17 Oct 2021 23:42:04 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Sun, 17 Oct 2021 at 17:42, Gilles Darold <gilles@migops.com> wrote:\n\n\n> Yes, my first thought was to allow all columns to be unexpandable like a\n> table without columns, but the problem is that when you execute\n> \"SELECT * FROM nada\" it returns no rows which is not the case of a table\n> with hidden columns.
> I could fix that to return no rows if all columns\n> are unexpandable but I think that all columns hidden is nonsense so I\n> have preferred not to allow it and an error is raised.\n\nPerhaps I misunderstand what you are saying, but a no-columns table\ndefinitely can return rows:\n\npsql (12.2)\nType \"help\" for help.\n\npostgres=# create table nada ();\nCREATE TABLE\npostgres=# insert into nada default values;\nINSERT 0 1\npostgres=# insert into nada default values;\nINSERT 0 1\npostgres=# table nada;\n--\n(2 rows)\n\npostgres=#\n\nNote that psql doesn't display a separate line for each row in this case,\nbut the actual result coming back from the server does contain the\nappropriate number of rows.", "msg_date": "Sun, 17 Oct 2021 17:48:06 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 17/10/2021 à 23:48, Isaac Morland a écrit :\n> On Sun, 17 Oct 2021 at 17:42, Gilles Darold <gilles@migops.com\n> <mailto:gilles@migops.com>> wrote:\n>\n> Perhaps I misunderstand what you are saying, but a no-columns table\n> definitely can return rows:\n>\n> psql (12.2)\n> Type \"help\" for help.\n>\n> postgres=# create table nada ();\n> CREATE TABLE\n> postgres=# insert into nada default values;\n> INSERT 0 1\n> postgres=# insert into nada default values;\n> INSERT 0 1\n> postgres=# table nada;\n> --\n> (2 rows)\n>\n> postgres=#\n>\n> Note that psql doesn't display a separate line for each row in this\n> case, but the actual result coming back from the server does contain\n> the appropriate number of rows. \n\n\nI was not aware of that.
In this case perhaps that we can remove the\nrestriction on having at least one expandable column and we will have the\nsame behavior but I can't think of an interest to allow that.\n\n\n-- \nGilles Darold", "msg_date": "Mon, 18 Oct 2021 08:44:51 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 10/18/21 8:44 AM, Gilles Darold wrote:\n> Le 17/10/2021 à 23:48, Isaac Morland a écrit :\n>> On Sun, 17 Oct 2021 at 17:42, Gilles Darold <gilles@migops.com\n>> <mailto:gilles@migops.com>> wrote:\n>>\n>> Note that psql doesn't display a separate line for each row in this\n>> case, but the actual result coming back from the server does contain\n>> the appropriate number of rows.
In this case perhaps that we can remove the\n> restriction on having at least on expandable column and we will have the\n> same behavior but I can't think of an interest to allow that.\n\nAllowing no-column tables removed the need to handle a bunch of corner\ncases. Useful for users or not, the precedent is set.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 18 Oct 2021 17:24:38 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 18/10/2021 à 17:24, Vik Fearing a écrit :\n> On 10/18/21 8:44 AM, Gilles Darold wrote:\n>> Le 17/10/2021 à 23:48, Isaac Morland a écrit :\n>>> On Sun, 17 Oct 2021 at 17:42, Gilles Darold <gilles@migops.com\n>>> <mailto:gilles@migops.com>> wrote:\n>>>\n>>> Note that psql doesn't display a separate line for each row in this\n>>> case, but the actual result coming back from the server does contain\n>>> the appropriate number of rows. \n>> I was not aware of that. In this case perhaps that we can remove the\n>> restriction on having at least on expandable column and we will have the\n>> same behavior but I can't think of an interest to allow that.\n> Allowing no-column tables removed the need to handle a bunch of corner\n> cases. Useful for users or not, the precedent is set.\n\n\nI agree, now that I know that this is perfectly possible to return N\nrows without any data/column I also think that we should allow it in\nrespect to PostgreSQL behavior with a table with no column. I will\nremove the check at SET UNEXPANDED.\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Mon, 18 Oct 2021 18:30:07 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "I suggest to look for output test files that are being massively\nmodified by this patch. 
I think those are likely unintended:\n\n> diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out\n> diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out\n> diff --git a/src/test/regress/expected/compression.out b/src/test/regress/expected/compression.out\n> diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out\n> diff --git a/src/test/regress/expected/xmlmap.out b/src/test/regress/expected/xmlmap.out\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 18 Oct 2021 13:54:56 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 18/10/2021 à 18:54, Alvaro Herrera a écrit :\n> I suggest to look for output test files that are being massively\n> modified by this patch. I think those are likely unintended:\n>\n>> diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out\n>> diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out\n>> diff --git a/src/test/regress/expected/compression.out b/src/test/regress/expected/compression.out\n>> diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out\n>> diff --git a/src/test/regress/expected/xmlmap.out b/src/test/regress/expected/xmlmap.out\n\n\nMy bad, thanks for the report Alvaro. 
New patch version v3 should fix that.\n\n-- \nGilles Darold", "msg_date": "Mon, 18 Oct 2021 22:15:23 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On 2021-Oct-18, Gilles Darold wrote:\n\n> Le 18/10/2021 à 18:54, Alvaro Herrera a écrit :\n> > I suggest to look for output test files that are being massively\n> > modified by this patch. I think those are likely unintended:\n> >\n> >> diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out\n> >> diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out\n> >> diff --git a/src/test/regress/expected/compression.out b/src/test/regress/expected/compression.out\n> >> diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out\n> >> diff --git a/src/test/regress/expected/xmlmap.out b/src/test/regress/expected/xmlmap.out\n> \n> My bad, thanks for the report Alvaro. New patch version v3 should fix that.\n\nHmm, the attachment was 500kB before, about 30% of that was the\ncollate.*.out files, and it is 2.2 MB now. Something is still not\nright.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n", "msg_date": "Mon, 18 Oct 2021 17:36:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 18/10/2021 à 22:36, Alvaro Herrera a écrit :\n> On 2021-Oct-18, Gilles Darold wrote:\n>\n>> Le 18/10/2021 à 18:54, Alvaro Herrera a écrit :\n>>> I suggest to look for output test files that are being massively\n>>> modified by this patch. 
I think those are likely unintended:\n>>>\n>>>> diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out\n>>>> diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out\n>>>> diff --git a/src/test/regress/expected/compression.out b/src/test/regress/expected/compression.out\n>>>> diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out\n>>>> diff --git a/src/test/regress/expected/xmlmap.out b/src/test/regress/expected/xmlmap.out\n>> My bad, thanks for the report Alvaro. New patch version v3 should fix that.\n> Hmm, the attachment was 500kB before, about 30% of that was the\n> collate.*.out files, and it is 2.2 MB now. Something is still not\n> right.\n\n\nRight I don't know what I have done yesterday, look like I have included\ntests output autogenerated files. However I've attached a new version v4\nof the patch that include the right list of files changed and some fixes:\n\n\n- Allow a table to have all columns unexpanded, doc updated.\n\n- Add a note to documentation about use of ROWTYPE when there is an\nunexpanded column.\n\n- Fix documentation about some sgml tag broken.\n\n\nAbout ROWTYPE generating an error when SELECT * INTO or RETURNING * INTO\nis used with unexpanded column, I have kept things like that because it\nis the normal behavior. I have checked on others database engine and\nthis is the same.\n\n\n\n-- \nGilles Darold", "msg_date": "Tue, 19 Oct 2021 07:43:51 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 19/10/2021 à 07:43, Gilles Darold a écrit :\n> Le 18/10/2021 à 22:36, Alvaro Herrera a écrit :\n>> On 2021-Oct-18, Gilles Darold wrote:\n>>\n>>> Le 18/10/2021 à 18:54, Alvaro Herrera a écrit :\n>>>> I suggest to look for output test files that are being massively\n>>>> modified by this patch. 
I think those are likely unintended:\n>>>>\n>>>>> diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out\n>>>>> diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out\n>>>>> diff --git a/src/test/regress/expected/compression.out b/src/test/regress/expected/compression.out\n>>>>> diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out\n>>>>> diff --git a/src/test/regress/expected/xmlmap.out b/src/test/regress/expected/xmlmap.out\n>>> My bad, thanks for the report Alvaro. New patch version v3 should fix that.\n>> Hmm, the attachment was 500kB before, about 30% of that was the\n>> collate.*.out files, and it is 2.2 MB now. Something is still not\n>> right.\n>\n> Right I don't know what I have done yesterday, look like I have included\n> tests output autogenerated files. However I've attached a new version v4\n> of the patch that include the right list of files changed and some fixes:\n>\n>\n> - Allow a table to have all columns unexpanded, doc updated.\n>\n> - Add a note to documentation about use of ROWTYPE when there is an\n> unexpanded column.\n>\n> - Fix documentation about some sgml tag broken.\n>\n>\n> About ROWTYPE generating an error when SELECT * INTO or RETURNING * INTO\n> is used with unexpanded column, I have kept things like that because it\n> is the normal behavior. 
I have checked on others database engine and\n> this is the same.\n\n\nAnd finally I found the reason of the diff on compression.out and \ncollate.linux.utf8.out, new version v5 of the patch attached.\n\n\n-- \nGilles Darold", "msg_date": "Wed, 27 Oct 2021 16:33:29 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Op 27-10-2021 om 16:33 schreef Gilles Darold:\n>>\n>> - Fix documentation about some sgml tag broken.\n>>\n>>\n>> About ROWTYPE generating an error when SELECT * INTO or RETURNING * INTO\n>> is used with unexpanded column, I have kept things like that because it\n>> is the normal behavior. I have checked on others database engine and\n>> this is the same.\n> \n> \n> And finally I found the reason of the diff on compression.out and \n> collate.linux.utf8.out, new version v5 of the patch attached.\n> \n > [ 0001-hidden-column-v5.patch ]\n\n\nThis warning during compile from gcc 11.2:\n\npg_dump.c: In function ‘dumpTableSchema’:\npg_dump.c:16327:56: warning: comparison of constant ‘0’ with boolean \nexpression is always true [-Wbool-compare]\n16327 |                         if (tbinfo->attisunexpanded[j] >= 0)\n      |                                                        ^~\n\nOtherwise, build, make check, check-world are OK.  Also the pdf builds ok.\n\nThanks,\n\nErik Rijkers\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 27 Oct 2021 17:47:21 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 27/10/2021 à 17:47, Erik Rijkers a écrit :\n> Op 27-10-2021 om 16:33 schreef Gilles Darold:\n>>>\n>>> - Fix documentation about some sgml tag broken.\n>>>\n>>>\n>>> About ROWTYPE generating an error when SELECT * INTO or RETURNING *\n>>> INTO\n>>> is used with unexpanded column, I have kept things like that because it\n>>> is the normal behavior.
I have checked on others database engine and\n>>> this is the same.1\n>>\n>>\n>> And finally I found the reason of the diff on compression.out and\n>> collate.linux.utf8.out, new version v5 of the patch attached.\n>>\n> > [ 0001-hidden-column-v5.patch ]\n>\n>\n> This warning during compile from gcc 11.2:\n>\n> pg_dump.c: In function ‘dumpTableSchema’:\n> pg_dump.c:16327:56: warning: comparison of constant ‘0’ with boolean\n> expression is always true [-Wbool-compare]\n> 16327 |                         if (tbinfo->attisunexpanded[j] >= 0)\n>       |                                                        ^~\n>\n> Otherwise, build, make check, chekc-world are OK.  Also the pdf builds\n> ok.\n>\n> Thanks,\n>\n> Erik Rijkers\n\n\nThanks Erik, new version v6 attached.\n\n\n-- \nGilles Darold", "msg_date": "Wed, 27 Oct 2021 18:02:39 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Op 27-10-2021 om 18:02 schreef Gilles Darold:\n>>\n>> Otherwise, build, make check, chekc-world are OK.  Also the pdf builds\n>> ok.\n> \n> Thanks Erik, new version v6 attached.\n\nHi,\n\nAnther small thing: the test_decoding module was overlooked, I think. \nBelow is output from make check-world (this error does not occur in master)\n\n\nErik\n\n\n============== running regression test queries ==============\ntest ddl ... FAILED 1210 ms\ntest xact ... ok 22 ms\ntest rewrite ... ok 176 ms\ntest toast ... ok 292 ms\ntest permissions ... ok 24 ms\ntest decoding_in_xact ... ok 23 ms\ntest decoding_into_rel ... ok 33 ms\ntest binary ... ok 16 ms\ntest prepared ... ok 21 ms\ntest replorigin ... ok 23 ms\ntest time ... ok 22 ms\ntest messages ... ok 26 ms\ntest spill ... ok 2407 ms\ntest slot ... ok 424 ms\ntest truncate ... ok 21 ms\ntest stream ... ok 31 ms\ntest stats ... ok 1097 ms\ntest twophase ... ok 46 ms\ntest twophase_stream ... 
ok 28 ms\n============== shutting down postmaster ==============\n\n=======================\n 1 of 19 tests failed.\n=======================\n\nThe differences that caused some tests to fail can be viewed in the\nfile \n\"/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/regression.diffs\". \n A copy of the test summary that you see\nabove is saved in the file \n\"/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/regression.out\".\n\n../../src/makefiles/pgxs.mk:451: recipe for target 'check' failed\nmake[2]: *** [check] Error 1\nmake[2]: Leaving directory \n'/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding'\nMakefile:94: recipe for target 'check-test_decoding-recurse' failed\nmake[1]: *** [check-test_decoding-recurse] Error 2\nmake[1]: Leaving directory \n'/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib'\nGNUmakefile:71: recipe for target 'check-world-contrib-recurse' failed\nmake: *** [check-world-contrib-recurse] Error 2", "msg_date": "Thu, 28 Oct 2021 09:29:10 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 28/10/2021 à 09:29, Erik Rijkers a écrit :\n> Op 27-10-2021 om 18:02 schreef Gilles Darold:\n>>>\n>>> Otherwise, build, make check, chekc-world are OK.  Also the pdf builds\n>>> ok.\n>>\n>> Thanks Erik, new version v6 attached.\n>\n> Hi,\n>\n> Anther small thing: the test_decoding module was overlooked, I think.\n> Below is output from make check-world (this error does not occur in\n> master)\n>\n>\n> Erik\n>\n\nFixed with new patch version v7 attached. It also fixes unwanted change\nof some regression tests output reported by the cfbot because I forgot\nto change my locale.\n\n\nI will also add a pg_dump test to verify that ALTER ... 
SET UNEXPANDED\nstatements are well generated in the dump.\n\n\n-- \nGilles Darold", "msg_date": "Thu, 28 Oct 2021 11:30:27 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "On Thu, Oct 28, 2021 at 11:30:27AM +0200, Gilles Darold wrote:\n> Fixed with new patch version v7 attached. It also fixes unwanted change\n> of some regression tests output reported by the cfbot because I forgot\n> to change my locale.\n> \n> \n> I will also add a pg_dump test to verify that ALTER ... SET UNEXPANDED\n> statements are well generated in the dump.\n\nI want to state I still think this feature is not generally desired, and\nis better implemented at the query level.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 28 Oct 2021 10:31:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" }, { "msg_contents": "Le 28/10/2021 à 16:31, Bruce Momjian a écrit :\n> On Thu, Oct 28, 2021 at 11:30:27AM +0200, Gilles Darold wrote:\n>> Fixed with new patch version v7 attached. It also fixes unwanted change\n>> of some regression tests output reported by the cfbot because I forgot\n>> to change my locale.\n>>\n>>\n>> I will also add a pg_dump test to verify that ALTER ... 
SET UNEXPANDED\n>> statements are well generated in the dump.\n> I want to state I still think this feature is not generally desired, and\n> is better implemented at the query level.\n\nI think that with an implementation at query level we will cover the\nuser need but not the developer need to \"hide\" technical columns, and\nalso it does not cover the INSERT statement without a column list.\n\n\nPersonally I will not try to convince further as I'm lacking arguments; I\njust wanted to attach a full working patch to test the proposal. So\nunless there are more persons interested in this feature I suggest we\nnot waste more time on this proposal.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 28 Oct 2021 17:55:24 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Proposal for HIDDEN/INVISIBLE column" } ]
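A minimal psql-style sketch of the behavior discussed in the thread above. This uses the syntax from the proposed, never-committed patch: the `UNEXPANDED` attribute and the `ALTER TABLE ... ALTER COLUMN ... SET/DROP UNEXPANDED` form are assumptions taken from the patch under review, not part of released PostgreSQL:

```sql
-- Hypothetical syntax from the proposed patch (not released PostgreSQL):
CREATE TABLE t (id int, body text, tsv tsvector);
ALTER TABLE t ALTER COLUMN tsv SET UNEXPANDED;

SELECT * FROM t;        -- per the proposal, * would expand only to: id, body
SELECT id, tsv FROM t;  -- an explicit column reference still works

ALTER TABLE t ALTER COLUMN tsv DROP UNEXPANDED;  -- restore normal expansion
```

In released PostgreSQL the closest equivalent remains a view that omits the column, with the limitations for WHERE/ORDER BY and writes that Gilles describes above.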
[ { "msg_contents": "According to getpwnam(3):\n\n An application that wants to determine its user's home directory\n should inspect the value of HOME (rather than the value\n getpwuid(getuid())->pw_dir) since this allows the user to modify\n their notion of \"the home directory\" during a login session.\n\nThis is important for systems where many users share the same UID, and for test systems that change HOME to avoid interference with the user’s real home directory. It matches what most applications do, as well as what glibc does for glob(\"~\", GLOB_TILDE, …) and wordexp(\"~\", …).\n\nThere was some previous discussion of this in 2016, where although there were some questions about the use case, there seemed to be general support for the concept:\n\nhttps://www.postgresql.org/message-id/flat/CAEH6cQqbdbXoUHJBbX9ixwfjFFsUC-a8hFntKcci%3DdiWgBb3fQ%40mail.gmail.com\n\nRegardless of whether one thinks modifying HOME is a good idea, if we happen to find ourselves in that case, we should respect the modified HOME, so that when the user creates (say) a ~/.pgpass file, we’ll look for it at the same place the user’s editor created it. getenv() also skips the overhead of reading /etc/passwd as an added bonus.\n\nThe way I ran into this issue myself was in a test suite that runs on GitHub Actions, which automatically sets HOME=/github/home.\n\nAnders", "msg_date": "Thu, 14 Oct 2021 23:04:14 +0000", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "[PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On 2021-Oct-14, Anders Kaseorg wrote:\n\n> This is important for systems where many users share the same UID, and\n> for test systems that change HOME to avoid interference with the\n> user’s real home directory. 
It matches what most applications do, as\n> well as what glibc does for glob(\"~\", GLOB_TILDE, …) and wordexp(\"~\",\n> …).\n> \n> There was some previous discussion of this in 2016, where although\n> there were some questions about the use case, there seemed to be\n> general support for the concept:\n> \n> https://www.postgresql.org/message-id/flat/CAEH6cQqbdbXoUHJBbX9ixwfjFFsUC-a8hFntKcci%3DdiWgBb3fQ%40mail.gmail.com\n\nI think modifying $HOME is a strange way to customize things, but given\nhow widespread it is [claimed to be] today, it seems reasonable to do\nthings that way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 18 Oct 2021 19:23:50 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On Mon, Oct 18, 2021 at 07:23:50PM -0300, Alvaro Herrera wrote:\n> I think modifying $HOME is a strange way to customize things, but given\n> how widespread it is [claimed to be] today, it seems reasonable to do\n> things that way.\n\nI am not sure about this claim, but it seems to me that we could get\nrid of the duplications in src/port/path.c, libpq/fe-connect.c and\npsql/command.c (this one is different for WIN32 but consistency would\nbe a good thing) as the proposed patch outlines. 
So I would suggest\nto begin with that rather than changing three places to do the same\nthing.\n--\nMichael", "msg_date": "Tue, 19 Oct 2021 13:26:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "At Mon, 18 Oct 2021 19:23:50 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Oct-14, Anders Kaseorg wrote:\n> \n> > This is important for systems where many users share the same UID, and\n> > for test systems that change HOME to avoid interference with the\n> > user’s real home directory. It matches what most applications do, as\n> > well as what glibc does for glob(\"~\", GLOB_TILDE, …) and wordexp(\"~\",\n> > …).\n> > \n> > There was some previous discussion of this in 2016, where although\n> > there were some questions about the use case, there seemed to be\n> > general support for the concept:\n> > \n> > https://www.postgresql.org/message-id/flat/CAEH6cQqbdbXoUHJBbX9ixwfjFFsUC-a8hFntKcci%3DdiWgBb3fQ%40mail.gmail.com\n> \n> I think modifying $HOME is a strange way to customize things, but given\n> how widespread it is [claimed to be] today, it seems reasonable to do\n> things that way.\n\nI tend to agree to this, but seeing ssh ignoring $HOME, I'm not sure\nit's safe that we follow the variable at least when accessing\nconfidentiality(?) files. 
Since I don't understand the exact\nreasoning for the ssh's behavior so it's just my humbole opinion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Oct 2021 17:34:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On 10/19/21 01:34, Kyotaro Horiguchi wrote:\n> I tend to agree to this, but seeing ssh ignoring $HOME, I'm not sure\n> it's safe that we follow the variable at least when accessing\n> confidentiality(?) files. Since I don't understand the exact\n> reasoning for the ssh's behavior so it's just my humbole opinion.\n\nAccording to https://bugzilla.mindrot.org/show_bug.cgi?id=3048#c1, it \nused to be supported to install the ssh binary as setuid. A \nsetuid/setgid binary needs to treat all environment variables with \nsuspicion: if it can be convinced to write a file to $HOME with root \nprivileges, then a user who modifies $HOME before invoking the binary \ncould cause it to write to a file that the user normally couldn’t.\n\nThere’s no such concern for a binary that isn’t setuid/setgid. Anyone \nwith the ability to modify $HOME can be assumed to already have full \ncontrol of the user account.\n\nAnders\n\n\n", "msg_date": "Tue, 19 Oct 2021 02:44:03 -0700", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "At Tue, 19 Oct 2021 02:44:03 -0700, Anders Kaseorg <andersk@mit.edu> wrote in \r\n> On 10/19/21 01:34, Kyotaro Horiguchi wrote:\r\n> > I tend to agree to this, but seeing ssh ignoring $HOME, I'm not sure\r\n> > it's safe that we follow the variable at least when accessing\r\n> > confidentiality(?) files. 
Since I don't understand the exact\r\n> > reasoning for the ssh's behavior so it's just my humbole opinion.\r\n> \r\n> According to https://bugzilla.mindrot.org/show_bug.cgi?id=3048#c1, it\r\n> used to be supported to install the ssh binary as setuid. A\r\n> setuid/setgid binary needs to treat all environment variables with\r\n> suspicion: if it can be convinced to write a file to $HOME with root\r\n> privileges, then a user who modifies $HOME before invoking the binary\r\n> could cause it to write to a file that the user normally couldn’t.\r\n> \r\n> There’s no such concern for a binary that isn’t setuid/setgid. Anyone\r\n> with the ability to modify $HOME can be assumed to already have full\r\n> control of the user account.\r\n\r\nThansk for the link. Still I'm not sure it's the fact but it sounds\r\nreasonable enough. If that's the case, I vote +1 for psql or other\r\ncommands honoring $HOME.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 20 Oct 2021 14:40:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "> On 20 Oct 2021, at 07:40, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Tue, 19 Oct 2021 02:44:03 -0700, Anders Kaseorg <andersk@mit.edu> wrote in \n>> On 10/19/21 01:34, Kyotaro Horiguchi wrote:\n>>> I tend to agree to this, but seeing ssh ignoring $HOME, I'm not sure\n>>> it's safe that we follow the variable at least when accessing\n>>> confidentiality(?) files. Since I don't understand the exact\n>>> reasoning for the ssh's behavior so it's just my humbole opinion.\n>> \n>> According to https://bugzilla.mindrot.org/show_bug.cgi?id=3048#c1, it\n>> used to be supported to install the ssh binary as setuid. 
A\n>> setuid/setgid binary needs to treat all environment variables with\n>> suspicion: if it can be convinced to write a file to $HOME with root\n>> privileges, then a user who modifies $HOME before invoking the binary\n>> could cause it to write to a file that the user normally couldn’t.\n>> \n>> There’s no such concern for a binary that isn’t setuid/setgid. Anyone\n>> with the ability to modify $HOME can be assumed to already have full\n>> control of the user account.\n> \n> Thansk for the link. Still I'm not sure it's the fact but it sounds\n> reasonable enough. If that's the case, I vote +1 for psql or other\n> commands honoring $HOME.\n\nIs the proposed change portable across all linux/unix systems we support?\nReading aobut indicates that it's likely to be, but neither NetBSD nor FreeBSD\nhave the upthread referenced wording in their manpages.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:55:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On 10/20/21 04:55, Daniel Gustafsson wrote:\n> Is the proposed change portable across all linux/unix systems we support?\n> Reading aobut indicates that it's likely to be, but neither NetBSD nor FreeBSD\n> have the upthread referenced wording in their manpages.\n\nSince the proposed change falls back to the old behavior if HOME is \nunset or empty, I assume this is a question about convention and not \nliterally about whether it will work on these systems. I don’t find it \nsurprising that this convention isn’t explicitly called out in every \nsystem’s manpage for the wrong function, but it still applies to these \nsystems.\n\nPOSIX specifies that the shell uses the HOME environment variable for \n‘cd’ with no arguments and for the expansion of ~. 
This implies by \nreference that this behavior is required of wordexp() as well.\n\nhttps://pubs.opengroup.org/onlinepubs/9699919799/utilities/cd.html\nhttps://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_01\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/wordexp.html\n\nlibc’s glob() and wordexp() respect HOME in glibc, musl, NetBSD, and \nFreeBSD.\n\nhttps://sourceware.org/git/?p=glibc.git;a=blob;f=posix/glob.c;hb=glibc-2.34#l622\nhttps://sourceware.org/git/?p=glibc.git;a=blob;f=posix/wordexp.c;hb=glibc-2.34#l293\n\nhttps://git.musl-libc.org/cgit/musl/tree/src/regex/glob.c?h=v1.2.2#n203\nhttps://git.musl-libc.org/cgit/musl/tree/src/misc/wordexp.c?h=v1.2.2#n111\n\nhttps://github.com/NetBSD/src/blob/netbsd-9/lib/libc/gen/glob.c#L424\nhttps://github.com/NetBSD/src/blob/netbsd-9/lib/libc/gen/wordexp.c#L129-L150\nhttps://github.com/NetBSD/src/blob/netbsd-9/bin/sh/expand.c#L434-L441\n\nhttps://github.com/freebsd/freebsd-src/blob/release/13.0.0/lib/libc/gen/glob.c#L457\nhttps://github.com/freebsd/freebsd-src/blob/release/13.0.0/lib/libc/gen/wordexp.c#L171-L190\nhttps://github.com/freebsd/freebsd-src/blob/release/13.0.0/bin/sh/expand.c#L396\n\n(Today I learned that musl and BSD libc literally spawn a shell process \nto handle wordexp(). 
Wow.)\n\nAnders\n\n\n", "msg_date": "Wed, 20 Oct 2021 10:09:51 -0700", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "Hi,\n\nI sometimes do some testing as nobody, on a distro where\ngetpwent(nobody)->pw_dir is a directory that nobody can't write.\nSo I end up setting $HOME to a directory that, um, is writable.\n\nWhen I start psql, strace shows $HOME being honored when looking\nfor .terminfo and .inputrc, and getpwent()->pw_dir being used\nto look for .pgpass, .psqlrc, and .psql_history, which of course\naren't there.\n\nI'm sure the .terminfo and .inputrc lookups are being done by library code.\nIn my experience, it seems traditionally unixy to let $HOME take precedence.\n\nMaybe things that are pointedly cross-platform are more likely to rely\non the getpwent lookup. I run into the same issue with Java, which is\npointedly cross-platform.\n\nBut there, I can alias java to java -Duser.home=\"$HOME\" and all is well.\n\nWould a patch be acceptable for psql to allow such an option\non the command line? I assume that would be more acceptable than\njust changing the default behavior.\n\nAnd if so, would it be preferable to add a whole new option for it,\n(--home ?) or, analogously to the way java works, just to add a\nHOME variable so it can be set on the command line with -v ?\n\nOr would a name like HOME pose too much risk that somebody is using\nsuch a variable in psql scripts for unrelated purposes?\n\nIn a moment of hopefulness I tried \\set and looked to see if such\na thing already exists, but I didn't see it. I see that I can set\na HISTFILE variable (or set PSQL_HISTORY in the environment),\nand can set PSQLRC in the environment (but not as a variable),\nand nothing can set the .pgpass location. One HOME variable could\ntake care of all three in one foop.\n\n(Or could it? 
Perhaps .pgpass is handled in libpq at a layer unaware\nof psql variables? But maybe the variable could have a modify event\nthat alerts libpq.)\n\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 18 Dec 2021 15:57:55 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Is my home $HOME or is it getpwent()->pw_dir ?" }, { "msg_contents": "On 12/18/21 15:57, Chapman Flack wrote:\n> I see that I can set\n> a HISTFILE variable (or set PSQL_HISTORY in the environment),\n> and can set PSQLRC in the environment (but not as a variable),\n> and nothing can set the .pgpass location\n\nwell, not in the psql docs, but in the environment variable section\nfor libpq I do see a PGPASSFILE.\n\n-C\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:07:47 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Is my home $HOME or is it getpwent()->pw_dir ?" }, { "msg_contents": "On Sat, Dec 18, 2021 at 2:07 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 12/18/21 15:57, Chapman Flack wrote:\n> > I see that I can set\n> > a HISTFILE variable (or set PSQL_HISTORY in the environment),\n> > and can set PSQLRC in the environment (but not as a variable),\n> > and nothing can set the .pgpass location\n>\n> well, not in the psql docs, but in the environment variable section\n> for libpq I do see a PGPASSFILE.\n>\n>\npsql docs saith:\n\n\"This utility, like most other PostgreSQL utilities, also uses the\nenvironment variables supported by libpq (see Section 34.15).\"\n\nDavid J.\n\n", "msg_date": "Sat, 18 Dec 2021 14:16:21 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is my home $HOME or is it getpwent()->pw_dir ?" }, { "msg_contents": "On 12/18/21 16:16, David G. Johnston wrote:\n> psql docs saith:\n> \n> \"This utility, like most other PostgreSQL utilities, also uses the\n> environment variables supported by libpq (see Section 34.15).\"\n\nI'm sure that's adequate as far as that goes. I just happened to miss it\nwhen composing the longer email (and then I just thought \"I bet there are\nenvironment variables supported by libpq\" and looked there).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:21:16 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Is my home $HOME or is it getpwent()->pw_dir ?" }, { "msg_contents": "On 18.12.21 21:57, Chapman Flack wrote:\n> I sometimes do some testing as nobody, on a distro where\n> getpwent(nobody)->pw_dir is a directory that nobody can't write.\n> So I end up setting $HOME to a directory that, um, is writable.\n> \n> When I start psql, strace shows $HOME being honored when looking\n> for .terminfo and .inputrc, and getpwent()->pw_dir being used\n> to look for .pgpass, .psqlrc, and .psql_history, which of course\n> aren't there.\n> \n> I'm sure the .terminfo and .inputrc lookups are being done by library code.\n> In my experience, it seems traditionally unixy to let $HOME take precedence.\n\nSee this patch: https://commitfest.postgresql.org/36/3362/\n\n\n", "msg_date": "Mon, 20 Dec 2021 15:15:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is my home $HOME or is it getpwent()->pw_dir ?" 
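The precedence Chapman observes above — library code honoring $HOME while the .pgpass/.psqlrc lookups go straight to the password database — comes down to which of two lookups runs first. A minimal C sketch of the "$HOME first, password file second" order being advocated in this thread (illustrative only; the function name is made up and this is not the actual psql/libpq code):

```c
#include <assert.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Copy the directory to treat as the user's home into buf: prefer a
 * non-empty $HOME, and fall back to the password database only when
 * it is unset or empty, mirroring what glob()/wordexp()
 * implementations typically do.  Returns 1 on success, 0 on failure.
 */
int
get_home_directory(char *buf, size_t buflen)
{
	const char *home = getenv("HOME");

	if (home != NULL && home[0] != '\0')
	{
		snprintf(buf, buflen, "%s", home);
		return 1;
	}

	/* $HOME unset or empty: ask the password database instead */
	{
		struct passwd pwdstr;
		struct passwd *pw = NULL;
		char		tmpbuf[1024];

		if (getpwuid_r(getuid(), &pwdstr, tmpbuf, sizeof(tmpbuf), &pw) == 0 &&
			pw != NULL && pw->pw_dir != NULL)
		{
			snprintf(buf, buflen, "%s", pw->pw_dir);
			return 1;
		}
	}
	return 0;
}
```

With that order, a daemon started without HOME in its environment still finds ~/.pgpass and friends through the password file, while an interactive user who overrides HOME gets the override — the behavior Chapman is asking for.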
}, { "msg_contents": "On 12/20/21 09:15, Peter Eisentraut wrote:\n> On 18.12.21 21:57, Chapman Flack wrote:\n>> When I start psql, strace shows $HOME being honored when looking\n>> for .terminfo and .inputrc, and getpwent()->pw_dir being used\n>> to look for .pgpass, .psqlrc, and .psql_history, which of course\n>> aren't there.\n>>\n>> I'm sure the .terminfo and .inputrc lookups are being done by library code.\n>> In my experience, it seems traditionally unixy to let $HOME take precedence.\n> \n> See this patch: https://commitfest.postgresql.org/36/3362/\n\nWow, just a couple months ago. Yes, I should have tagged on to that\nrather than starting a new thread.\n\nI was proposing an option or variable on the assumption that just changing\nthe default behavior would be off the table. But I am +1 on just changing\nthe default behavior, if that's not off the table.\n\nRegards,\n-Chap\n\n*seeing that RFC 5322 3.6.4 permits more than one msg-id for in-reply-to,\ncrosses fingers to see what PGLister will make of it*\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:07:09 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Is my home $HOME or is it getpwent()->pw_dir ?" }, { "msg_contents": "Anders Kaseorg <andersk@mit.edu> writes:\n> On 10/20/21 04:55, Daniel Gustafsson wrote:\n>> Is the proposed change portable across all linux/unix systems we support?\n>> Reading aobut indicates that it's likely to be, but neither NetBSD nor FreeBSD\n>> have the upthread referenced wording in their manpages.\n\n> Since the proposed change falls back to the old behavior if HOME is \n> unset or empty, I assume this is a question about convention and not \n> literally about whether it will work on these systems. 
I don’t find it \n> surprising that this convention isn’t explicitly called out in every \n> system’s manpage for the wrong function, but it still applies to these \n> systems.\n\nGiven the POSIX requirements, it's basically impossible to believe\nthat there are interesting cases where $HOME isn't set. Thus, it\nseems to me that keeping the getpwuid calls will just mean carrying\nuntestable dead code, so we should simplify matters by ripping\nthose out and *only* consulting $HOME.\n\nThe v1 patch also neglects the matter of documentation. I think\nthe simplest and most transparent thing to do is just to explicitly\nmention $HOME everyplace we talk about files that are sought there,\nin place of our current convention to write \"~\". (I'm too lazy\nto go digging in the git history, but I have a feeling that this is\nundoing somebody's intentional change from a long time back.)\n\nBTW, not directly impacted by this patch but adjacent to it,\nI noted that on Windows psql's \\cd defaults to changing to \"/\".\nThat seems a bit surprising, and we definitely fail to document it.\nI settled for noting it in the documentation, but should we make\nit do something else?\n\nPFA v2 patch.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 09 Jan 2022 13:59:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On 1/9/22 10:59, Tom Lane wrote:\n> Given the POSIX requirements, it's basically impossible to believe\n> that there are interesting cases where $HOME isn't set. Thus, it\n> seems to me that keeping the getpwuid calls will just mean carrying\n> untestable dead code, so we should simplify matters by ripping\n> those out and *only* consulting $HOME.\n\nWhile POSIX requires that the login program put you in a conforming \nenvironment, nothing stops the user from building a non-conforming \nenvironment, such as with ‘env -i’. 
One could argue that such a user \ndeserves whatever broken behavior they might get. But to me it seems \nprudent to continue working there if it worked before.\n\n> The v1 patch also neglects the matter of documentation. I think\n> the simplest and most transparent thing to do is just to explicitly\n> mention $HOME everyplace we talk about files that are sought there,\n> in place of our current convention to write \"~\". (I'm too lazy\n> to go digging in the git history, but I have a feeling that this is\n> undoing somebody's intentional change from a long time back.)\n\nThe reason I didn’t change the documentation is that this is already \nwhat “~” is supposed to mean according to POSIX and common \nimplementations. See previous discussion:\n\nhttps://www.postgresql.org/message-id/1634252654444.90107%40mit.edu\nhttps://www.postgresql.org/message-id/d452fd57-8c34-0a94-79c1-4498eb4ffbdc%40mit.edu\n\nI consider my patch a bug fix that implements the behavior one would \nalready expect from the existing documentation.\n\nTherefore, I still prefer my v1 patch on both counts. I am willing to \nbe overruled if you still disagree, but I wanted to explain my reasoning.\n\nAnders\n\n\n", "msg_date": "Sun, 9 Jan 2022 12:50:54 -0800", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "Anders Kaseorg <andersk@mit.edu> writes:\n> On 1/9/22 10:59, Tom Lane wrote:\n>> Given the POSIX requirements, it's basically impossible to believe\n>> that there are interesting cases where $HOME isn't set. 
Thus, it\n>> seems to me that keeping the getpwuid calls will just mean carrying\n>> untestable dead code, so we should simplify matters by ripping\n>> those out and *only* consulting $HOME.\n\n> While POSIX requires that the login program put you in a conforming \n> environment, nothing stops the user from building a non-conforming \n> environment, such as with ‘env -i’. One could argue that such a user \n> deserves whatever broken behavior they might get. But to me it seems \n> prudent to continue working there if it worked before.\n\nThe only case that the v1 patch helps such a user for is if they\nunset HOME or set it precisely to ''. If they set it to anything\nelse, it's still broken from their perspective. So I do not find\nthat that argument holds water.\n\nMoreover, ISTM that the only plausible use-case for unsetting HOME\nis to prevent programs from finding stuff in your home directory.\nWhat would be the point otherwise? So it's pretty hard to envision\na case where somebody is actually using, and happy with, the\nbehavior you argue we ought to keep.\n\n>> The v1 patch also neglects the matter of documentation.\n\n> The reason I didn’t change the documentation is that this is already \n> what “~” is supposed to mean according to POSIX and common \n> implementations.\n\nThe point here is precisely that we're changing what *we* think ~\nmeans. I don't think we can just leave the docs unchanged.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jan 2022 16:04:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On 1/9/22 13:04, Tom Lane wrote:\n> The only case that the v1 patch helps such a user for is if they\n> unset HOME or set it precisely to ''. If they set it to anything\n> else, it's still broken from their perspective. 
So I do not find\n> that that argument holds water.\n> \n> Moreover, ISTM that the only plausible use-case for unsetting HOME\n> is to prevent programs from finding stuff in your home directory.\n> What would be the point otherwise? So it's pretty hard to envision\n> a case where somebody is actually using, and happy with, the\n> behavior you argue we ought to keep.\n\nObviously a user who intentionally breaks their environment should \nexpect problems. But what I’m saying is that a user could have written \na script that unsets HOME by *accident* while intending to clear *other* \nthings out of the environment. They might have developed it by starting \nwith an empty environment and adding back the minimal set of variables \nthey needed to get something to work. Since most programs (including \nmost libcs and shells) do in fact fall back to getpwuid when HOME is \nunset, they may not have noticed an unset HOME as a problem. Unsetting \nHOME does not, in practice, prevent most programs from finding stuff in \nyour home directory.\n\nAnders\n\n\n", "msg_date": "Sun, 9 Jan 2022 13:21:19 -0800", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "On Sun, Jan 09, 2022 at 01:59:02PM -0500, Tom Lane wrote:\n> Given the POSIX requirements, it's basically impossible to believe\n> that there are interesting cases where $HOME isn't set.\n\nI've run into this before - children of init may not have HOME set.\n\nIt's easy enough to add it if it's needed, but should probably be called out in\nthe release notes.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 9 Jan 2022 15:26:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "## Tom Lane (tgl@sss.pgh.pa.us):\n\n> Given the POSIX requirements, it's basically impossible to 
believe\n> that there are interesting cases where $HOME isn't set.\n\nWhen I look at a random Debian with the usual PGDG packages, the\npostmaster process (and every backend) has a rather minimal environment\nwithout HOME. When I remember the code correctly, walreceiver uses\nthe functions from fe-connect.c and may need to find the service file,\na password file or certificates. If I'm correct with that, requiring\nHOME to be set would be a significant change for existing \"normal\"\ninstallations.\nWhat about containers and similar \"reduced\" environments?\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Sun, 9 Jan 2022 22:51:37 +0100", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "Christoph Moench-Tegeder <cmt@burggraben.net> writes:\n> ## Tom Lane (tgl@sss.pgh.pa.us):\n>> Given the POSIX requirements, it's basically impossible to believe\n>> that there are interesting cases where $HOME isn't set.\n\n> When I look at a random Debian with the usual PGDG packages, the\n> postmaster process (and every backend) has a rather minimal environment\n> without HOME. When I remember the code correctly, walreceiver uses\n> the functions from fe-connect.c and may need to find the service file,\n> a password file or certificates. If I'm correct with that, requiring\n> HOME to be set would be a significant change for existing \"normal\"\n> installations.\n> What about containers and similar \"reduced\" environments?\n\nIsn't that a flat out violation of POSIX 8.3 Other Environment Variables?\n\n HOME\n The system shall initialize this variable at the time of login to\n be a pathname of the user's home directory. 
See <pwd.h>.\n\nTo claim it's not, you have to claim these programs aren't logged in,\nin which case where did they get any privileges from?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jan 2022 17:40:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "## Tom Lane (tgl@sss.pgh.pa.us):\n\n> Isn't that a flat out violation of POSIX 8.3 Other Environment Variables?\n> \n> HOME\n> The system shall initialize this variable at the time of login to\n> be a pathname of the user's home directory. See <pwd.h>.\n> \n> To claim it's not, you have to claim these programs aren't logged in,\n> in which case where did they get any privileges from?\n\nAfter poking around across some Linuxes, it looks like people silently\nagreed that \"services\" are not logged-in users: among the daemons,\nhaving HOME set (as observed in /proc/*/environ) is an exception,\nnot the norm. I'm not sure if that's a \"new\" thing with systemd,\nI don't have a linux with pure SysV-init available (but I guess those\nare rare animals anyways).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Mon, 10 Jan 2022 00:11:59 +0100", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "Christoph Moench-Tegeder <cmt@burggraben.net> writes:\n> ## Tom Lane (tgl@sss.pgh.pa.us):\n>> Isn't that a flat out violation of POSIX 8.3 Other Environment Variables?\n\n> After poking around across some Linuxes, it looks like people silently\n> agreed that \"services\" are not logged-in users: among the daemons,\n> having HOME set (as observed in /proc/*/environ) is an exception,\n> not the norm.\n\nMeh. I guess there's not much point in arguing with facts on the\nground. 
Anders' proposed behavior seems like the way to go then,\nat least so far as libpq is concerned. (I remain skeptical that\npsql would be run in such an environment, but there's no value\nin making it act different from libpq.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jan 2022 18:16:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" }, { "msg_contents": "I wrote:\n> Meh. I guess there's not much point in arguing with facts on the\n> ground. Anders' proposed behavior seems like the way to go then,\n> at least so far as libpq is concerned.\n\nSo I pushed that, but while working on it I grew quite annoyed\nat the messy API exhibited by src/port/thread.c, particularly\nat how declaring its functions in port.h requires #including\n<netdb.h> and <pwd.h> there. That means those headers are\nread by every compile in our tree, though only a tiny number\nof modules actually need either. So here are a couple of\nfollow-on patches to improve that situation.\n\nFor pqGethostbyname, there is no consumer other than\nsrc/port/getaddrinfo.c, which makes it even sillier because that\nfile isn't even compiled on a large majority of platforms, making\npqGethostbyname dead code that we nonetheless build everywhere.\nSo 0001 attached just moves that function into getaddrinfo.c.\n\nFor pqGetpwuid, the best solution seemed to be to add a\nless platform-dependent API layer, which I did in 0002\nattached. Perhaps someone would object to the API details\nI chose, but by and large this seems like an improvement\nthat reduces the amount of code duplication in the callers.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 09 Jan 2022 21:17:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Prefer getenv(\"HOME\") to find the UNIX home directory" } ]
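Tom's follow-up describes wrapping pqGetpwuid in "a less platform-dependent API layer". As a rough illustration of what such a layer can look like — the function name and the convention of returning the error message in the caller's buffer are assumptions for this sketch, not necessarily the committed API:

```c
#include <assert.h>
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Look up the home directory of the given user id and copy it into
 * buffer.  On failure, copy an error message into the same buffer
 * instead and return 0, so that callers need no platform-specific
 * getpwuid_r() boilerplate of their own.
 */
int
get_user_home_dir(uid_t user_id, char *buffer, size_t buflen)
{
	struct passwd pwdstr;
	struct passwd *pw = NULL;
	char		pwdbuf[2048];
	int			rc;

	rc = getpwuid_r(user_id, &pwdstr, pwdbuf, sizeof(pwdbuf), &pw);
	if (rc != 0 || pw == NULL || pw->pw_dir == NULL)
	{
		snprintf(buffer, buflen, "could not look up local user ID %d: %s",
				 (int) user_id,
				 rc != 0 ? strerror(rc) : "user does not exist");
		return 0;
	}
	snprintf(buffer, buflen, "%s", pw->pw_dir);
	return 1;
}
```

A caller can then test the return value and, on failure, report the message already placed in the buffer, without ever touching <pwd.h> itself — which is the point of reducing the code duplication Tom mentions.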
[ { "msg_contents": "Hello!\n \nWhen the extension pg_stat_statements is loaded into memory or compute_query_id=on in postgresql.conf,\nmany of the installcheck tests give errors.\nThe thing is that lines \"queryid = xxxxx\", where xxxxx is some numeric value, appear in the *.out files.\nSo 24 of 209 installcheck tests will fail.\nIt seems to be a normal behaviour as queryid calculation was moved into core in \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5fd9dfa5f5 \nbut the tests say that something is wrong.\nCreating test variants is not the way, as the queryid value varies randomly \nfrom test to test in the same environment.\nI think this is a problem because these fake errors can mask real errors in the relevant tests.\nWhat’s your opinion?\n \nBest regards,\nAnton Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 15 Oct 2021 10:36:36 +0300", "msg_from": "Мельников Антон Андреевич <aamelnikov@inbox.ru>", "msg_from_op": true, "msg_subject": "installcheck fails when compute_query_id=on or pg_stat_statsement is loaded" }, { "msg_contents": "On Fri, Oct 15, 2021 at 3:36 PM Мельников Антон Андреевич\n<aamelnikov@inbox.ru> wrote:\n>\n> When the extension pg_stat_statements is loaded into memory or compute_query_id=on in postgresql.conf,\n> many of the installcheck tests give errors.\n> [...]\n> I think this is a problem because these fake errors can mask real errors in the relevant tests.\n> What’s your opinion?\n\nThis has been discussed previously (although I can't find the thread\nright now). Note that you don't really need to enable\npg_stat_statements, enabling compute_query_id is enough. 
The query\n> identifier is only displayed for EXPLAIN (VERBOSE), so it's already a\n> bit filtered. I don't see any simple way to entirely avoid the\n> problem though.\n\n> There are already many options that can break the regression tests, so\n> maybe it's ok to accept that this is yet another one.\n\nYeah, that's my reaction. We could add \"compute_query_id = off\" to\nthe database-level settings set up by pg_regress, but that cure seems\nworse than the disease. It would make it impossible to run the\nregression tests with pg_stat_statements loaded, which you might wish\nto do (just ignoring the bogus test failures) as a way of testing\npg_stat_statements.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Oct 2021 09:46:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: installcheck fails when compute_query_id=on or pg_stat_statsement\n is loaded" } ]
[ { "msg_contents": "Hi,\n\nIn another logical replication related thread[1], my colleague Greg found that\nif publish_via_partition_root is true, then the child table's data will be\ncopied twice when adding both child and parent table to the publication.\n\nExample:\n\n-----\nPub:\ncreate table tbl1 (a int) partition by range (a);\ncreate table tbl1_part1 partition of tbl1 for values from (1) to (10);\ncreate table tbl1_part2 partition of tbl1 for values from (10) to (20);\ncreate publication pub for table tbl1, tbl1_part1 with (publish_via_partition_root=on);\n\ninsert into tbl1_part1 values(1);\n\nSub:\ncreate table tbl1 (a int) partition by range (a);\ncreate table tbl1_part1 partition of tbl1 for values from (1) to (10);\ncreate table tbl1_part2 partition of tbl1 for values from (10) to (20);\ncreate subscription sub CONNECTION 'dbname=postgres port=10000' publication pub;\n\n-- data is copied twice\nselect * from tbl1_part1;\n a\n---\n 1\n 1\n-----\n\nThe reason is that the subscriber will fetch the table list from publisher\nusing the following sql[2] and the subscriber will execute table\nsynchronization for each table in the query results in this case. But\ntbl1_part1 is a partition of tbl1, so the data of tbl1_part1 was copied twice.\n\n[2]\nselect * from pg_publication_tables;\n pubname | schemaname | tablename\n---------+------------+------------\n pub | public | tbl1\n pub | public | tbl1_part1\n\nIMO, it looks like a bug and it's more natural to only execute the table\nsynchronization for the parent table in the above case. 
Because as the document\nsaid: if publish_via_partition_root is true, \"changes in a partitioned table\n(or on its partitions) contained in the publication will be published using the\nidentity and schema of the partitioned table rather than that of the individual\npartitions that are actually changed;\"\n\nTo fix it, I think we should fix function GetPublicationRelations which\ngenerate data for the view pg_publication_tables and make it only show the\nparent table if publish_via_partition_root is true. And for other future\nfeature like schema level publication, we can also follow this to exclude\npartitions if their parent is specified by FOR TABLE in the same publication.\n\nAttach a patch to fix it.\nThoughts ?\n\n[1] https://www.postgresql.org/message-id/CAJcOf-eBhDUT2J5zs8Z0qEMiZUdhinX%2BbuGX3GN4V83fPnZV3Q%40mail.gmail.com\n\nBest regards,\nHou zhijie", "msg_date": "Fri, 15 Oct 2021 11:22:50 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Data is copied twice when specifying both child and parent table in\n publication" }, { "msg_contents": "On Friday, October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> Attach a patch to fix it.\nAttach a new version patch which refactor the fix code in a cleaner way.\n\nBest regards,\nHou zj", "msg_date": "Sat, 16 Oct 2021 06:30:47 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Sat, Oct 16, 2021 at 5:30 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> > Attach a patch to fix it.\n> Attach a new version patch which refactor the fix code in a cleaner way.\n>\n\nI have not debugged it yet to find out why, but with the patch\napplied, the original double-publish problem that I reported\n(converted to 
just use TABLE rather than ALL TABLES IN SCHEMA) still\noccurs.\n\nThe steps are below:\n\n\nCREATE SCHEMA sch;\nCREATE SCHEMA sch1;\nCREATE TABLE sch.sale (sale_date date not null, country_code text,\nproduct_sku text, units integer) PARTITION BY RANGE (sale_date);\nCREATE TABLE sch1.sale_201901 PARTITION OF sch.sale FOR VALUES FROM\n('2019-01-01') TO ('2019-02-01');\nCREATE TABLE sch1.sale_201902 PARTITION OF sch.sale FOR VALUES FROM\n('2019-02-01') TO ('2019-03-01');\n\n(1) PUB: CREATE PUBLICATION pub FOR TABLE sch1.sale_201901,\nsch1.sale_201902 WITH (publish_via_partition_root=true);\n(2) SUB: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres\nhost=localhost port=5432' PUBLICATION pub;\n(3) PUB: INSERT INTO sch.sale VALUES('2019-01-01', 'AU', 'cpu', 5),\n('2019-01-02', 'AU', 'disk', 8);\n(4) SUB: SELECT * FROM sch.sale;\n(5) PUB: ALTER PUBLICATION pub ADD TABLE sch.sale;\n(6) SUB: ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n(7) SUB: SELECT * FROM sch.sale;\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 18 Oct 2021 13:57:25 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 18, 2021 at 8:27 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Sat, Oct 16, 2021 at 5:30 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> > > Attach a patch to fix it.\n> > Attach a new version patch which refactor the fix code in a cleaner way.\n> >\n>\n> I have not debugged it yet to find out why, but with the patch\n> applied, the original double-publish problem that I reported\n> (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> occurs.\n>\n\nYeah, I think this is a variant of the problem being fixed by\nHou-San's patch. 
I think one possible idea to investigate is that on\nthe subscriber-side, after fetching tables, we check the already\nsubscribed tables and if the child tables already exist then we ignore\nthe parent table and vice versa. We might want to consider the case\nwhere a user has toggled the \"publish_via_partition_root\" parameter.\n\nIt seems both these behaviours/problems exist since commit 17b9e7f9\n(Support adding partitioned tables to publication). Adding Amit L and\nPeter E (people involved in this work) to know their opinion?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Oct 2021 11:30:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Saturday, October 16, 2021 2:31 PM houzj.fnst@fujitsu.com wrote:\n> On Friday, October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> > Attach a patch to fix it.\n> Attach a new version patch which refactor the fix code in a cleaner way.\n\nAlthough the discussion about the partition behavior[1] is going on,\nattach a refactored fix patch which make the pg_publication_tables view be\nconsistent for FOR TABLE and FOR ALL TABLES here in case someone want\nto have a look.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1JC5sy5M_UVoGdgubHN2--peYqApOJkT%3DFLCq%2BVUxqerQ%40mail.gmail.com\n\nBest regards,\nHou zj", "msg_date": "Mon, 18 Oct 2021 08:46:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 18, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Oct 18, 2021 at 8:27 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Sat, Oct 16, 2021 at 5:30 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Friday, 
October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> > > > Attach a patch to fix it.\n> > > Attach a new version patch which refactor the fix code in a cleaner way.\n> > >\n> >\n> > I have not debugged it yet to find out why, but with the patch\n> > applied, the original double-publish problem that I reported\n> > (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> > occurs.\n> >\n>\n> Yeah, I think this is a variant of the problem being fixed by\n> Hou-San's patch. I think one possible idea to investigate is that on\n> the subscriber-side, after fetching tables, we check the already\n> subscribed tables and if the child tables already exist then we ignore\n> the parent table and vice versa. We might want to consider the case\n> where a user has toggled the \"publish_via_partition_root\" parameter.\n>\n> It seems both these behaviours/problems exist since commit 17b9e7f9\n> (Support adding partitioned tables to publication). Adding Amit L and\n> Peter E (people involved in this work) to know their opinion?\n\nI can imagine that the behavior seen here may look surprising, but not\nsure if I would call it a bug as such. I do remember thinking about\nthis case and the current behavior is how I may have coded it to be.\n\nLooking at this command in Hou-san's email:\n\n create publication pub for table tbl1, tbl1_part1 with\n(publish_via_partition_root=on);\n\nIt's adding both the root partitioned table and the leaf partition\n*explicitly*, and it's not clear to me if the latter's inclusion in\nthe publication should be assumed because the former is found to have\nbeen added to the publication, that is, as far as the latter's\nvisibility to the subscriber is concerned. 
It's not a stretch to\nimagine that a user may write the command this way to account for a\nsubscriber node on which tbl1 and tbl1_part1 are unrelated tables.\n\nI don't think we assume anything on the publisher side regarding the\nstate/configuration of tables on the subscriber side, at least with\npublication commands where tables are added to a publication\nexplicitly, so it is up to the user to make sure that the tables are\nnot added duplicatively. One may however argue that the way we've\ndecided to handle FOR ALL TABLES does assume something about\npartitions where it skips advertising them to subscribers when\npublish_via_partition_root flag is set to true, but that is exactly to\navoid the duplication of data that goes to a subscriber.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Oct 2021 18:02:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 18, 2021 at 2:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Oct 18, 2021 at 8:27 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Sat, Oct 16, 2021 at 5:30 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Friday, October 15, 2021 7:23 PM houzj.fnst@fujitsu.com wrote:\n> > > > > Attach a patch to fix it.\n> > > > Attach a new version patch which refactor the fix code in a cleaner way.\n> > > >\n> > >\n> > > I have not debugged it yet to find out why, but with the patch\n> > > applied, the original double-publish problem that I reported\n> > > (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> > > occurs.\n> > >\n> >\n> > Yeah, I think this is a variant of the problem being fixed by\n> > Hou-San's patch. 
I think one possible idea to investigate is that on\n> > the subscriber-side, after fetching tables, we check the already\n> > subscribed tables and if the child tables already exist then we ignore\n> > the parent table and vice versa. We might want to consider the case\n> > where a user has toggled the \"publish_via_partition_root\" parameter.\n> >\n> > It seems both these behaviours/problems exist since commit 17b9e7f9\n> > (Support adding partitioned tables to publication). Adding Amit L and\n> > Peter E (people involved in this work) to know their opinion?\n>\n> I can imagine that the behavior seen here may look surprising, but not\n> sure if I would call it a bug as such. I do remember thinking about\n> this case and the current behavior is how I may have coded it to be.\n>\n> Looking at this command in Hou-san's email:\n>\n> create publication pub for table tbl1, tbl1_part1 with\n> (publish_via_partition_root=on);\n>\n> It's adding both the root partitioned table and the leaf partition\n> *explicitly*, and it's not clear to me if the latter's inclusion in\n> the publication should be assumed because the former is found to have\n> been added to the publication, that is, as far as the latter's\n> visibility to the subscriber is concerned. It's not a stretch to\n> imagine that a user may write the command this way to account for a\n> subscriber node on which tbl1 and tbl1_part1 are unrelated tables.\n>\n> I don't think we assume anything on the publisher side regarding the\n> state/configuration of tables on the subscriber side, at least with\n> publication commands where tables are added to a publication\n> explicitly, so it is up to the user to make sure that the tables are\n> not added duplicatively. 
One may however argue that the way we've\n> decided to handle FOR ALL TABLES does assume something about\n> partitions where it skips advertising them to subscribers when\n> publish_via_partition_root flag is set to true, but that is exactly to\n> avoid the duplication of data that goes to a subscriber.\n>\n\nI think the same confusion will then apply to the new feature (For All\nTables In Schema) being discussed [1] (that is a bit long thread so\nshared the email where the latest patch version is posted). There\nalso, the partitioned table and partition can be in a different\nschema. We either need to follow \"For All Tables\" or \"For Table\"\nbehavior. Then, there is also an argument that such behavior is not\ndocumented, and by reading \"publish_via_partition_root\", it is not\nclear why would the user expect the current behavior?\n\nAlso, what about Greg's case [2], where I think it is clear that the\nsubscriber also has partitions?\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716B523961FE338EB9B3F9A94BC9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAJcOf-eQR_%3Dq0f4ZVHd342QdLvBd_995peSr4xCU05hrS3TeTg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Oct 2021 14:58:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Monday, October 18, 2021 5:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> I can imagine that the behavior seen here may look surprising, but not\r\n> sure if I would call it a bug as such. 
I do remember thinking about\r\n> this case and the current behavior is how I may have coded it to be.\r\n> \r\n> Looking at this command in Hou-san's email:\r\n> \r\n> create publication pub for table tbl1, tbl1_part1 with\r\n> (publish_via_partition_root=on);\r\n> \r\n> It's adding both the root partitioned table and the leaf partition\r\n> *explicitly*, and it's not clear to me if the latter's inclusion in\r\n> the publication should be assumed because the former is found to have\r\n> been added to the publication, that is, as far as the latter's\r\n> visibility to the subscriber is concerned. It's not a stretch to\r\n> imagine that a user may write the command this way to account for a\r\n> subscriber node on which tbl1 and tbl1_part1 are unrelated tables.\r\n> \r\n> I don't think we assume anything on the publisher side regarding the\r\n> state/configuration of tables on the subscriber side, at least with\r\n> publication commands where tables are added to a publication\r\n> explicitly, so it is up to the user to make sure that the tables are\r\n> not added duplicatively. One may however argue that the way we've\r\n> decided to handle FOR ALL TABLES does assume something about\r\n> partitions where it skips advertising them to subscribers when\r\n> publish_via_partition_root flag is set to true, but that is exactly to\r\n> avoid the duplication of data that goes to a subscriber.\r\n\r\nHi,\r\n\r\nThanks for the explanation.\r\n\r\nI think one reason that I consider this behavior a bug is that: If we add\r\nboth the root partitioned table and the leaf partition explicitly to the\r\npublication (and set publish_via_partition_root = on), the behavior of the\r\napply worker is inconsistent with the behavior of the table sync worker.\r\n\r\nIn this case, all changes in the leaf partition will be applied using the\r\nidentity and schema of the partitioned (root) table. 
But for the table sync, it\r\nwill execute table sync for both the leaf and the root table, which causes\r\nduplication of data.\r\n\r\nWouldn't it be better to make the behavior consistent here?\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 02:47:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 18, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 2:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Looking at this command in Hou-san's email:\n> >\n> > create publication pub for table tbl1, tbl1_part1 with\n> > (publish_via_partition_root=on);\n> >\n> > It's adding both the root partitioned table and the leaf partition\n> > *explicitly*, and it's not clear to me if the latter's inclusion in\n> > the publication should be assumed because the former is found to have\n> > been added to the publication, that is, as far as the latter's\n> > visibility to the subscriber is concerned. It's not a stretch to\n> > imagine that a user may write the command this way to account for a\n> > subscriber node on which tbl1 and tbl1_part1 are unrelated tables.\n> >\n> > I don't think we assume anything on the publisher side regarding the\n> > state/configuration of tables on the subscriber side, at least with\n> > publication commands where tables are added to a publication\n> > explicitly, so it is up to the user to make sure that the tables are\n> > not added duplicatively. 
One may however argue that the way we've\n> > decided to handle FOR ALL TABLES does assume something about\n> > partitions where it skips advertising them to subscribers when\n> > publish_via_partition_root flag is set to true, but that is exactly to\n> > avoid the duplication of data that goes to a subscriber.\n> >\n>\n> I think the same confusion will then apply to the new feature (For All\n> Tables In Schema) being discussed [1] (that is a bit long thread so\n> shared the email where the latest patch version is posted). There\n> also, the partitioned table and partition can be in a different\n> schema.\n>\n\nSorry, I wanted to say that table partition and partitioned table can\nbe in the same schema. Now, if the user publishes all tables in a\nschema, if we want to follow the \"For All Tables\" behavior then we\nshould skip the leaf table and publish only the parent table, OTOH, if we\nwant to follow \"For Table\" behavior, we need to publish both\npartitioned table and partition table. 
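As a side-by-side sketch of the two candidate rules (Python; the table layout and helper names are mine, purely for illustration, not server code):

```python
# Partition tree: sch1.tbl1 is the partitioned root; its leaves may
# live in any schema.
parent = {
    "sch2.tbl1_part1": "sch1.tbl1",
    "sch1.tbl1_part2": "sch1.tbl1",
}
all_tables = {"sch1.tbl1"} | set(parent)

def all_tables_rule(via_root):
    # FOR ALL TABLES: with via_root, partitions are skipped and data
    # flows only through their root; without it, only the leaves
    # (which actually hold the rows) are advertised.
    if via_root:
        return {t for t in all_tables if t not in parent}
    return {t for t in all_tables if t in parent}

def for_table_rule(named):
    # FOR TABLE (current behavior): every explicitly named table is
    # advertised as-is, even when its root is also named.
    return set(named)

print(sorted(all_tables_rule(via_root=True)))   # ['sch1.tbl1']
print(sorted(for_table_rule({"sch1.tbl1", "sch2.tbl1_part1"})))
# ['sch1.tbl1', 'sch2.tbl1_part1'] -> the partition is synced via
# itself and again via the root: the duplication under discussion.
```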
I feel it is better to be\nconsistent here in all three cases (\"For Table\", \"For All Tables\", and\n\"For All Tables In Schema\") as it will be easier to explain and\ndocument it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Oct 2021 08:45:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Oct 19, 2021 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Oct 18, 2021 at 2:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> > > Looking at this command in Hou-san's email:\n> > >\n> > > create publication pub for table tbl1, tbl1_part1 with\n> > > (publish_via_partition_root=on);\n> > >\n> > > It's adding both the root partitioned table and the leaf partition\n> > > *explicitly*, and it's not clear to me if the latter's inclusion in\n> > > the publication should be assumed because the former is found to have\n> > > been added to the publication, that is, as far as the latter's\n> > > visibility to the subscriber is concerned. It's not a stretch to\n> > > imagine that a user may write the command this way to account for a\n> > > subscriber node on which tbl1 and tbl1_part1 are unrelated tables.\n> > >\n> > > I don't think we assume anything on the publisher side regarding the\n> > > state/configuration of tables on the subscriber side, at least with\n> > > publication commands where tables are added to a publication\n> > > explicitly, so it is up to the user to make sure that the tables are\n> > > not added duplicatively. 
One may however argue that the way we've\n> > > decided to handle FOR ALL TABLES does assume something about\n> > > partitions where it skips advertising them to subscribers when\n> > > publish_via_partition_root flag is set to true, but that is exactly to\n> > > avoid the duplication of data that goes to a subscriber.\n> > >\n> >\n> > I think the same confusion will then apply to the new feature (For All\n> > Tables In Schema) being discussed [1] (that is a bit long thread so\n> > shared the email where the latest patch version is posted). There\n> > also, the partitioned table and partition can be in a different\n> > schema.\n> >\n>\n> Sorry, I wanted to say that table partition and partitioned table can\n> be in the same schema. Now, if the user publishes all tables in a\n> schema, if we want to follow the \"For All Tables\" behavior then we\n> should skip the leaf table and publish only the parent table, OTOH, if\n> want to follow \"For Table\" behavior, we need to publish both\n> partitioned table and partition table. 
I feel it is better to be\n> consistent here in all three cases (\"For Table\", \"For All Tables\", and\n> \"For All Tables In Schema\") as it will be easier to explain and\n> document it.\n>\n\nThinking some more about it, I think we also have a problem when the\npartitioned and partition tables are in different schemas especially\nwhen the user created a publication having a combination of \"For\nTable\" and \"For All Tables In Schema\", see below:\n\ncreate schema sch1;\ncreate schema sch2;\n\ncreate table sch1.tbl1 (a int) partition by range ( a );\ncreate table sch2.tbl1_part1 partition of sch1.tbl1 for values from\n(1) to (101);\ncreate table sch1.tbl1_part2 partition of sch1.tbl1 for values from\n(101) to (200);\n\ncreate publication mypub for table sch2.tbl1_part1, all tables in\nschema sch1 WITH (publish_via_partition_root = true);\n\nNow, here if we follow the rules of \"For Table\", then we should get\nboth partitioned and partition tables which will be different from the\ncase when both are in the same schema considering we follow \"For All\nTables\" behavior in \"For All Tables In Schema\" case.\n\nThe point is that as we extend the current feature, I think the\ncomplications will increase if we don't have a consistent behavior for\nall cases and it will also be difficult to explain it to users.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Oct 2021 09:59:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Oct 19, 2021 at 8:17 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n\n> Thanks for the explanation.\n>\n> I think one reason that I consider this behavior a bug is that: If we add\n> both the root partitioned table and the leaf partition explicitly to the\n> publication (and set publish_via_partition_root = on), the behavior of the\n> apply worker is 
inconsistent with the behavior of table sync worker.\n>\n> In this case, all changes in the leaf the partition will be applied using the\n> identity and schema of the partitioned(root) table. But for the table sync, it\n> will execute table sync for both the leaf and the root table which cause\n> duplication of data.\n>\n> Wouldn't it be better to make the behavior consistent here ?\n\nI agree with the point: whether we are doing the initial sync or we\nare doing transaction streaming, the behavior should be the same. I\nthink the right behavior should be that even if the user has given both\nthe parent table and the child table in the published table list, it\nshould sync it only once, because consider the case where we add the\nsame table twice, e.g. (CREATE PUBLICATION mypub FOR TABLE t1,t1;), but\nin that case also we consider this table only once and there will be\nno duplicate data.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 11:07:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 18, 2021 at 5:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I have not debugged it yet to find out why, but with the patch\n> > applied, the original double-publish problem that I reported\n> > (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> > occurs.\n> >\n>\n> Yeah, I think this is a variant of the problem being fixed by\n> Hou-San's patch. I think one possible idea to investigate is that on\n> the subscriber-side, after fetching tables, we check the already\n> subscribed tables and if the child tables already exist then we ignore\n> the parent table and vice versa. 
We might want to consider the case\n> where a user has toggled the \"publish_via_partition_root\" parameter.\n>\n> It seems both these behaviours/problems exist since commit 17b9e7f9\n> (Support adding partitioned tables to publication). Adding Amit L and\n> Peter E (people involved in this work) to know their opinion?\n>\n\nActually, at least with the scenario I gave steps for, after looking\nat it again and debugging, I think that the behavior is understandable\nand not a bug.\nThe reason is that the INSERTed data is first published through the\npartitions, since initially there is no partitioned table in the\npublication (so publish_via_partition_root=true doesn't have any\neffect). But then adding the partitioned table to the publication and\nrefreshing the publication in the subscriber, the data is then\npublished \"using the identity and schema of the partitioned table\" due\nto publish_via_partition_root=true. Note that the corresponding table\nin the subscriber may well be a non-partitioned table (or the\npartitions arranged differently) so the data does need to be\nreplicated again.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 20 Oct 2021 18:14:07 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 12:44 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 5:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > I have not debugged it yet to find out why, but with the patch\n> > > applied, the original double-publish problem that I reported\n> > > (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> > > occurs.\n> > >\n> >\n> > Yeah, I think this is a variant of the problem being fixed by\n> > Hou-San's patch. 
I think one possible idea to investigate is that on\n> > the subscriber-side, after fetching tables, we check the already\n> > subscribed tables and if the child tables already exist then we ignore\n> > the parent table and vice versa. We might want to consider the case\n> > where a user has toggled the \"publish_via_partition_root\" parameter.\n> >\n> > It seems both these behaviours/problems exist since commit 17b9e7f9\n> > (Support adding partitioned tables to publication). Adding Amit L and\n> > Peter E (people involved in this work) to know their opinion?\n> >\n>\n> Actually, at least with the scenario I gave steps for, after looking\n> at it again and debugging, I think that the behavior is understandable\n> and not a bug.\n> The reason is that the INSERTed data is first published though the\n> partitions, since initially there is no partitioned table in the\n> publication (so publish_via_partition_root=true doesn't have any\n> effect). But then adding the partitioned table to the publication and\n> refreshing the publication in the subscriber, the data is then\n> published \"using the identity and schema of the partitioned table\" due\n> to publish_via_partition_root=true. Note that the corresponding table\n> in the subscriber may well be a non-partitioned table (or the\n> partitions arranged differently) so the data does need to be\n> replicated again.\n\nI don't think this behavior is consistent, I mean for the initial sync\nwe will replicate the duplicate data, whereas for later streaming we\nwill only replicate it once. 
From the user POV, this behavior doesn't\nlook correct.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:32:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 1:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 12:44 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Oct 18, 2021 at 5:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > I have not debugged it yet to find out why, but with the patch\n> > > > applied, the original double-publish problem that I reported\n> > > > (converted to just use TABLE rather than ALL TABLES IN SCHEMA) still\n> > > > occurs.\n> > > >\n> > >\n> > > Yeah, I think this is a variant of the problem being fixed by\n> > > Hou-San's patch. I think one possible idea to investigate is that on\n> > > the subscriber-side, after fetching tables, we check the already\n> > > subscribed tables and if the child tables already exist then we ignore\n> > > the parent table and vice versa. We might want to consider the case\n> > > where a user has toggled the \"publish_via_partition_root\" parameter.\n> > >\n> > > It seems both these behaviours/problems exist since commit 17b9e7f9\n> > > (Support adding partitioned tables to publication). Adding Amit L and\n> > > Peter E (people involved in this work) to know their opinion?\n> > >\n> >\n> > Actually, at least with the scenario I gave steps for, after looking\n> > at it again and debugging, I think that the behavior is understandable\n> > and not a bug.\n> > The reason is that the INSERTed data is first published though the\n> > partitions, since initially there is no partitioned table in the\n> > publication (so publish_via_partition_root=true doesn't have any\n> > effect). 
But then adding the partitioned table to the publication and\n> > refreshing the publication in the subscriber, the data is then\n> > published \"using the identity and schema of the partitioned table\" due\n> > to publish_via_partition_root=true. Note that the corresponding table\n> > in the subscriber may well be a non-partitioned table (or the\n> > partitions arranged differently) so the data does need to be\n> > replicated again.\n>\n\nEven if the partitions are arranged differently why would the user\nexpect the same data to be replicated twice?\n\n> I don't think this behavior is consistent, I mean for the initial sync\n> we will replicate the duplicate data, whereas for later streaming we\n> will only replicate it once. From the user POW, this behavior doesn't\n> look correct.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Oct 2021 14:29:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 7:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > Actually, at least with the scenario I gave steps for, after looking\n> > at it again and debugging, I think that the behavior is understandable\n> > and not a bug.\n> > The reason is that the INSERTed data is first published though the\n> > partitions, since initially there is no partitioned table in the\n> > publication (so publish_via_partition_root=true doesn't have any\n> > effect). But then adding the partitioned table to the publication and\n> > refreshing the publication in the subscriber, the data is then\n> > published \"using the identity and schema of the partitioned table\" due\n> > to publish_via_partition_root=true. 
Note that the corresponding table\n> > in the subscriber may well be a non-partitioned table (or the\n> > partitions arranged differently) so the data does need to be\n> > replicated again.\n>\n> I don't think this behavior is consistent, I mean for the initial sync\n> we will replicate the duplicate data, whereas for later streaming we\n> will only replicate it once. From the user POW, this behavior doesn't\n> look correct.\n>\n\nThe scenario I gave steps for didn't have any table data when the\nsubscription was made, so the initial sync did not replicate any data.\nI was referring to the double-publish that occurs when\npublish_via_partition_root=true and then the partitioned table is\nadded to the publication and the subscriber does ALTER SUBSCRIPTION\n... REFRESH PUBLICATION.\nIf I modify my example to include both the partitioned table and\n(explicitly) its child partitions in the publication, and insert some\ndata on the publisher side prior to the subscription, then I am seeing\nduplicate data on the initial sync on the subscriber side, and I would\nagree that this doesn't seem correct.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 20 Oct 2021 20:00:18 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 7:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > Actually, at least with the scenario I gave steps for, after looking\n> > > at it again and debugging, I think that the behavior is understandable\n> > > and not a bug.\n> > > The reason is that the INSERTed data is first published though the\n> > > partitions, since initially there is no partitioned table in the\n> > > publication (so publish_via_partition_root=true doesn't have any\n> > > effect). 
But then adding the partitioned table to the publication and\n> > > refreshing the publication in the subscriber, the data is then\n> > > published \"using the identity and schema of the partitioned table\" due\n> > > to publish_via_partition_root=true. Note that the corresponding table\n> > > in the subscriber may well be a non-partitioned table (or the\n> > > partitions arranged differently) so the data does need to be\n> > > replicated again.\n> >\n>\n> Even if the partitions are arranged differently why would the user\n> expect the same data to be replicated twice?\n>\n\nIt's the same data, but published in different ways because of changes\nthe user made to the publication.\nI am not talking in general, I am specifically referring to the\nscenario I gave steps for.\nIn the example scenario I gave, initially when the subscription was\nmade, the publication just explicitly included the partitions, but\npublish_via_partition_root was true. So in this case it publishes\nthrough the individual partitions (as no partitioned table is present\nin the publication). Then on the publisher side, the partitioned table\nwas then added to the publication and then ALTER SUBSCRIPTION ...\nREFRESH PUBLICATION done on the subscriber side. Now that the\npartitioned table is present in the publication and\npublish_via_partition_root is true, it is \"published using the\nidentity and schema of the partitioned table rather than that of the\nindividual partitions that are actually changed\". 
So the data is\nreplicated again.\nThis scenario didn't use initial table data, so initial table sync\ndidn't come into play (although as I previously posted, I can see a\ndouble-publish issue on initial sync if data is put in the table prior\nto subscription and partitions have been explicitly added to the\npublication).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 20 Oct 2021 20:33:17 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 3:03 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 7:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > > Actually, at least with the scenario I gave steps for, after looking\n> > > > at it again and debugging, I think that the behavior is understandable\n> > > > and not a bug.\n> > > > The reason is that the INSERTed data is first published though the\n> > > > partitions, since initially there is no partitioned table in the\n> > > > publication (so publish_via_partition_root=true doesn't have any\n> > > > effect). But then adding the partitioned table to the publication and\n> > > > refreshing the publication in the subscriber, the data is then\n> > > > published \"using the identity and schema of the partitioned table\" due\n> > > > to publish_via_partition_root=true. 
Note that the corresponding table\n> > > > in the subscriber may well be a non-partitioned table (or the\n> > > > partitions arranged differently) so the data does need to be\n> > > > replicated again.\n> > >\n> >\n> > Even if the partitions are arranged differently why would the user\n> > expect the same data to be replicated twice?\n> >\n>\n> It's the same data, but published in different ways because of changes\n> the user made to the publication.\n> I am not talking in general, I am specifically referring to the\n> scenario I gave steps for.\n> In the example scenario I gave, initially when the subscription was\n> made, the publication just explicitly included the partitions, but\n> publish_via_partition_root was true. So in this case it publishes\n> through the individual partitions (as no partitioned table is present\n> in the publication). Then on the publisher side, the partitioned table\n> was then added to the publication and then ALTER SUBSCRIPTION ...\n> REFRESH PUBLICATION done on the subscriber side. Now that the\n> partitioned table is present in the publication and\n> publish_via_partition_root is true, it is \"published using the\n> identity and schema of the partitioned table rather than that of the\n> individual partitions that are actually changed\". 
So the data is\n> replicated again.\n>\n\nI don't see why data need to be replicated again even in that case.\nCan you see any such duplicate data replicated for non-partitioned\ntables?\n\n> This scenario didn't use initial table data, so initial table sync\n> didn't come into play\n>\n\nIt will be equivalent to initial sync because the tablesync worker\nwould copy the entire data again in this case unless during refresh we\npass copy_data as false.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Oct 2021 15:49:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 9:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I don't see why data need to be replicated again even in that case.\n> Can you see any such duplicate data replicated for non-partitioned\n> tables?\n>\n\nIf my example is slightly modified to use the same-named tables on the\nsubscriber side, but without partitioning, i.e.:\n\nPUB:\n\nCREATE SCHEMA sch;\nCREATE SCHEMA sch1;\nCREATE TABLE sch.sale (sale_date date not null, country_code text,\nproduct_sku text, units integer) PARTITION BY RANGE (sale_date);\nCREATE TABLE sch1.sale_201901 PARTITION OF sch.sale FOR VALUES FROM\n('2019-01-01') TO ('2019-02-01');\nCREATE TABLE sch1.sale_201902 PARTITION OF sch.sale FOR VALUES FROM\n('2019-02-01') TO ('2019-03-01');\n\n\nSUB:\n\nCREATE SCHEMA sch;\nCREATE SCHEMA sch1;\nCREATE TABLE sch.sale (sale_date date not null, country_code text,\nproduct_sku text, units integer);\nCREATE TABLE sch1.sale_201901 (sale_date date not null, country_code\ntext, product_sku text, units integer);\nCREATE TABLE sch1.sale_201902 (sale_date date not null, country_code\ntext, product_sku text, units integer);\n\nthen the INSERTed data on the publisher side gets replicated to the\nsubscriber's \"sch1.sale_201901\" and 
\"sch1.sale_201902\" tables (only),\ndepending on the date values.\nNow if the partitioned table is then added to the publication and\nALTER SUBSCRIPTION ... REFRESH PUBLICATION done by the subscriber,\nthen the current functionality is that the existing sch.sale data is\nreplicated (only) to the subscriber's \"sch.sale\" table (even though\ndata had been replicated previously to the \"sch1.sale_201901\" and\n\"sch1.sale_201902\" tables, only).\nSo, just to be clear, you think that this current functionality isn't\ncorrect (i.e. no data should be replicated on the REFRESH in this\ncase)?\nI think it's debatable because here copy_data=true and sch.sale was\nnot a previously-subscribed table (so pre-existing data in that table\nshould be copied, in accordance with the current documentation).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 21 Oct 2021 00:40:53 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi,\n\nI just wanted to bring to your attention an earlier thread [1] in\nwhich I had already encountered/reported behaviour that is either\nexactly the same or is closely related to what is being discussed in\nthis current thread. 
If it is different please take that into account\nalso.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPvJMRB-ZyC80we2kiUFv4cVjmA6jxXpEMhm1rmz%3D1ryeA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 21 Oct 2021 14:56:15 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Oct 21, 2021 at 2:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I just wanted to bring to your attention an earlier thread [1] in\n> which I had already encountered/reported behaviour that is either\n> exactly the same or is closely related to what is being discussed in\n> this current thread. If it is different please take that into account\n> also.\n>\n> ------\n> [1] https://www.postgresql.org/message-id/CAHut%2BPvJMRB-ZyC80we2kiUFv4cVjmA6jxXpEMhm1rmz%3D1ryeA%40mail.gmail.com\n>\n\nThanks, I was able to reproduce that behavior, which is similar (but\nin that case the publish_via_partition_root flag is toggled with the\npartitioned table present in the publication, whereas in the case\nbeing discussed the presence of the partitioned table in the\npublication is toggled with publish_via_partition_root always true).\n\nWhat seems to happen internally when a partitioned table is published\nis that when publish_via_partition_root=true the subscriber to that\npublication is effectively subscribed to the parent partitioned table\nbut not the child partitions. If publish_via_partition_root is changed\nto false and the subscriber refreshes, the partitioned table is\nunsubscribed and it subscribes to the partitions. 
This explains why\ndata gets \"re-copied\" when this happens, because then it is\nsubscribing to a \"new\" table and copy_data=true by default.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:57:48 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 20, 2021 at 7:11 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 9:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I don't see why data need to be replicated again even in that case.\n> > Can you see any such duplicate data replicated for non-partitioned\n> > tables?\n> >\n>\n> If my example is slightly modified to use the same-named tables on the\n> subscriber side, but without partitioning, i.e.:\n>\n> PUB:\n>\n> CREATE SCHEMA sch;\n> CREATE SCHEMA sch1;\n> CREATE TABLE sch.sale (sale_date date not null, country_code text,\n> product_sku text, units integer) PARTITION BY RANGE (sale_date);\n> CREATE TABLE sch1.sale_201901 PARTITION OF sch.sale FOR VALUES FROM\n> ('2019-01-01') TO ('2019-02-01');\n> CREATE TABLE sch1.sale_201902 PARTITION OF sch.sale FOR VALUES FROM\n> ('2019-02-01') TO ('2019-03-01');\n>\n>\n> SUB:\n>\n> CREATE SCHEMA sch;\n> CREATE SCHEMA sch1;\n> CREATE TABLE sch.sale (sale_date date not null, country_code text,\n> product_sku text, units integer);\n> CREATE TABLE sch1.sale_201901 (sale_date date not null, country_code\n> text, product_sku text, units integer);\n> CREATE TABLE sch1.sale_201902 (sale_date date not null, country_code\n> text, product_sku text, units integer);\n>\n> then the INSERTed data on the publisher side gets replicated to the\n> subscriber's \"sch1.sale_201901\" and \"sch1.sale_201902\" tables (only),\n> depending on the date values.\n> Now if the partitioned table is then added to the publication and\n> ALTER SUBSCRIPTION ... 
REFRESH PUBLICATION done by the subscriber,\n> then the current functionality is that the existing sch.sale data is\n> replicated (only) to the subscriber's \"sch.sale\" table (even though\n> data had been replicated previously to the \"sch1.sale_201901\" and\n> \"sch1.sale_201902\" tables, only).\n> So, just to be clear, you think that this current functionality isn't\n> correct (i.e. no data should be replicated on the REFRESH in this\n> case)?\n>\n\nRight, I don't think it is correct because it will behave differently\nwhen the tables on the subscriber are partitioned. Also, the idea I\nspeculated in one of my above emails should be able to deal with this\ncase.\n\n> I think it's debatable because here copy_data=true and sch.sale was\n> not a previously-subscribed table (so pre-existing data in that table\n> should be copied, in accordance with the current documentation).\n>\n\nWhat about the partition (child) table? In this case, the same data\nwill be present in two tables sch.sale and sch1.sale_201901 after you\nhave refreshed the publication, and then any future insertions will\nonly be inserted into parent table sch.sale in this case which doesn't\nsound consistent. The bigger problem is that it will lead to duplicate\ndata when tables are partitioned. I think if the user really wants to\ndo in a way you are describing, there is no need to keep sub-tables\n(*_201901 and *__201902). 
I understand that it depends on the use case\nbut we should also behave sanely when tables/partitions are created in\nthe same way in both publisher and subscriber which I guess will most\nlikely be the case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Oct 2021 15:15:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tuesday, October 19, 2021 10:47 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Monday, October 18, 2021 5:03 PM Amit Langote\r\n> <amitlangote09@gmail.com> wrote:\r\n> > I can imagine that the behavior seen here may look surprising, but not\r\n> > sure if I would call it a bug as such. I do remember thinking about\r\n> > this case and the current behavior is how I may have coded it to be.\r\n> >\r\n> > Looking at this command in Hou-san's email:\r\n> >\r\n> > create publication pub for table tbl1, tbl1_part1 with\r\n> > (publish_via_partition_root=on);\r\n> >\r\n> > It's adding both the root partitioned table and the leaf partition\r\n> > *explicitly*, and it's not clear to me if the latter's inclusion in\r\n> > the publication should be assumed because the former is found to have\r\n> > been added to the publication, that is, as far as the latter's\r\n> > visibility to the subscriber is concerned. It's not a stretch to\r\n> > imagine that a user may write the command this way to account for a\r\n> > subscriber node on which tbl1 and tbl1_part1 are unrelated tables.\r\n> >\r\n> > I don't think we assume anything on the publisher side regarding the\r\n> > state/configuration of tables on the subscriber side, at least with\r\n> > publication commands where tables are added to a publication\r\n> > explicitly, so it is up to the user to make sure that the tables are\r\n> > not added duplicatively. 
One may however argue that the way we've\r\n> > decided to handle FOR ALL TABLES does assume something about\r\n> > partitions where it skips advertising them to subscribers when\r\n> > publish_via_partition_root flag is set to true, but that is exactly to\r\n> > avoid the duplication of data that goes to a subscriber.\r\n> \r\n> Hi,\r\n> \r\n> Thanks for the explanation.\r\n> \r\n> I think one reason that I consider this behavior a bug is that: If we add\r\n> both the root partitioned table and the leaf partition explicitly to the\r\n> publication (and set publish_via_partition_root = on), the behavior of the\r\n> apply worker is inconsistent with the behavior of table sync worker.\r\n> \r\n> In this case, all changes in the leaf the partition will be applied using the\r\n> identity and schema of the partitioned(root) table. But for the table sync, it\r\n> will execute table sync for both the leaf and the root table which cause\r\n> duplication of data.\r\n> \r\n> Wouldn't it be better to make the behavior consistent here ?\r\n> \r\n\r\nI agree with this point. \r\n\r\nAbout this case,\r\n\r\n> > create publication pub for table tbl1, tbl1_part1 with\r\n> > (publish_via_partition_root=on);\r\n\r\nAs a user, although partitioned table includes the partition, publishing partitioned\r\ntable and its partition is allowed. So, I think we should take this case into\r\nconsideration. 
Initial data is copied once via the parent table seems reasonable.\r\n\r\nRegards\r\nShi yu\r\n", "msg_date": "Fri, 22 Oct 2021 02:01:24 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi,\r\n\r\nAs there are basically two separate issues mentioned in the thread, I tried to\r\nsummarize the discussion so far which might be helpful to others.\r\n\r\n* The first issue[1]:\r\n\r\nIf we include both the partitioned table and (explicitly) its child partitions\r\nin the publication when set publish_via_partition_root=true, like:\r\n---\r\nCREATE PUBLICATION pub FOR TABLE parent_table, child_table with (publish_via_partition_root=on);\r\n---\r\nIt could execute initial sync for both the partitioned(parent_table) table and\r\n(explicitly) its child partitions(child_table) which cause duplication of\r\ndata in partition(child_table) in subscriber side.\r\n\r\nThe reasons I considered this behavior a bug are:\r\n\r\na) In this case, the behavior of initial sync is inconsistent with the behavior\r\nof transaction streaming. All changes in the leaf the partition will be applied\r\nusing the identity and schema of the partitioned(root) table. But for the\r\ninitial sync, it will execute sync for both the partitioned(root) table and\r\n(explicitly) its child partitions which cause duplication of data.\r\n\r\nb) The behavior of FOR TABLE is inconsistent with the behavior of FOR ALL TABLE.\r\nIf user create a FOR ALL TABLE publication and set publish_via_partition_root=true,\r\nthen only the top most partitioned(root) table will execute initial sync.\r\n\r\nIIRC, most people in this thread agreed that the current behavior is not\r\nexpected. So, maybe it's time to try to fix it.\r\n\r\nAttach my fix patch here. 
The patch tries to fix this issue by making the\r\npg_publication_tables view only show the partitioned table when\r\npublish_via_partition_root is true.\r\n\r\n\r\n* The second issue[2]:\r\n-----\r\nCREATE TABLE sale (sale_date date not null,country_code text, product_sku text,\r\nunits integer) PARTITION BY RANGE (sale_date);\r\nCREATE TABLE sale_201901 PARTITION OF sale FOR VALUES FROM ('2019-01-01') TO\r\n('2019-02-01');\r\nCREATE TABLE sale_201902 PARTITION OF sale FOR VALUES FROM ('2019-02-01') TO\r\n('2019-03-01');\r\n\r\n(1) PUB: CREATE PUBLICATION pub FOR TABLE sale_201901,\r\nsale_201902 WITH (publish_via_partition_root=true);\r\n(2) SUB: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres host=localhost port=5432' PUBLICATION pub;\r\n(3) PUB: INSERT INTO sale VALUES('2019-01-01', 'AU', 'cpu', 5), ('2019-01-02', 'AU', 'disk', 8);\r\n(4) SUB: SELECT * FROM sale;\r\n(5) PUB: ALTER PUBLICATION pub ADD TABLE sale;\r\n(6) SUB: ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\r\n(7) SUB: SELECT * FROM sale;\r\n-----\r\n\r\nIn step (7), we can see duplication of data.\r\n\r\nThe reason is that the INSERTed data is first published through the partitions,\r\nsince initially there is no partitioned table in the publication (so\r\npublish_via_partition_root=true doesn't have any effect). 
So, I think it\r\nmight be better to fix the first issue.\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB57167F45D481F78CDC5986F794B99%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n[2] https://www.postgresql.org/message-id/flat/CAJcOf-d8SWk3z3fJaLW9yuVux%3D2ESTsXOSdKzCq1O3AWBpgnMQ%40mail.gmail.com#fc96a42158b5e98ace26d077a6f7eac5\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 28 Oct 2021 07:35:07 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Oct 28, 2021 at 4:35 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> As there are basically two separate issues mentioned in the thread, I tried to\n> summarize the discussion so far which might be helpful to others.\n>\n> * The first issue[1]:\n>\n> If we include both the partitioned table and (explicitly) its child partitions\n> in the publication when set publish_via_partition_root=true, like:\n> ---\n> CREATE PUBLICATION pub FOR TABLE parent_table, child_table with (publish_via_partition_root=on);\n> ---\n> It could execute initial sync for both the partitioned(parent_table) table and\n> (explicitly) its child partitions(child_table) which cause duplication of\n> data in partition(child_table) in subscriber side.\n>\n> The reasons I considered this behavior a bug are:\n>\n> a) In this case, the behavior of initial sync is inconsistent with the behavior\n> of transaction streaming. All changes in the leaf the partition will be applied\n> using the identity and schema of the partitioned(root) table. 
But for the\n> initial sync, it will execute sync for both the partitioned(root) table and\n> (explicitly) its child partitions which cause duplication of data.\n>\n> b) The behavior of FOR TABLE is inconsistent with the behavior of FOR ALL TABLE.\n> If user create a FOR ALL TABLE publication and set publish_via_partition_root=true,\n> then only the top most partitioned(root) table will execute initial sync.\n>\n> IIRC, most people in this thread agreed that the current behavior is not\n> expected. So, maybe it's time to try to fix it.\n>\n> Attach my fix patch here. The patch try to fix this issue by making the\n> pg_publication_tables view only show partitioned table when\n> publish_via_partition_root is true.\n>\n>\n> * The second issue[2]:\n> -----\n> CREATE TABLE sale (sale_date date not null,country_code text, product_sku text,\n> units integer) PARTITION BY RANGE (sale_date);\n> CREATE TABLE sale_201901 PARTITION OF sale FOR VALUES FROM ('2019-01-01') TO\n> ('2019-02-01');\n> CREATE TABLE sale_201902 PARTITION OF sale FOR VALUES FROM ('2019-02-01') TO\n> ('2019-03-01');\n>\n> (1) PUB: CREATE PUBLICATION pub FOR TABLE sale_201901,\n> sale_201902 WITH (publish_via_partition_root=true);\n> (2) SUB: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres host=localhost port=5432' PUBLICATION pub;\n> (3) PUB: INSERT INTO sale VALUES('2019-01-01', 'AU', 'cpu', 5), ('2019-01-02', 'AU', 'disk', 8);\n> (4) SUB: SELECT * FROM sale;\n> (5) PUB: ALTER PUBLICATION pub ADD TABLE sale;\n> (6) SUB: ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n> (7) SUB: SELECT * FROM sale;\n> -----\n>\n> In step (7), we can see duplication of data.\n>\n> The reason is that the INSERTed data is first published though the partitions,\n> since initially there is no partitioned table in the publication (so\n> publish_via_partition_root=true doesn't have any effect). 
But then adding the\n> partitioned table to the publication and refreshing the publication in the\n> subscriber, the data is then published \"using the identity and schema of the\n> partitioned table\" due to publish_via_partition_root=true.\n> (Copied from Greg's analysis).\n>\n> Whether this behavior is correct is still under debate.\n>\n>\n> Overall, I think the second issue still needs further discussion while the\n> first issue seems clear that most people think it's unexpected. So, I think it\n> might be better to fix the first issue.\n\nThanks for the summary, Hou-san, and sorry about my late reply.\n\nI had thought about this some last week and I am coming around to\nrecognizing the confusing user experience of the current behavior.\nThough, I am not sure if removing partitions from the result of\npg_publication_tables view for pubviaroot publications is acceptable\nas a fix, because that prevents replicating into a subscriber node\nwhere tables that are partition root and a partition respectively on\nthe publisher are independent tables on the subscriber. ISTM that\nwhat Amit K mentioned in his first reply may be closer to what we may\nultimately need to do, which is this:\n\n\"I think one possible idea to investigate is that on the\nsubscriber-side, after fetching tables, we check the already\nsubscribed tables and if the child tables already exist then we ignore\nthe parent table and vice versa.\"\n\nI had also thought about a way to implement that a bit and part of\nthat is to make pg_publication_tables always expose leaf partitions,\nthat is, even if pubviaroot is true for a given publication. So, when\npuviaroot is true we include the partition root actually mentioned in\nthe publication as we do currently (changes streamed after initial\nsync will always use its schema so the subscriber better know about\nit), and also leaf partitions which we currently don't. 
The latter\ntoo so that a subscriber can determine which leaf partitions to sync\nand which ones to not based on whether a given leaf partitions is\nalready known to the subscriber, which means their pg_subscription_rel\nentries would need to be made which we don't currently. Leaf\npartition state entries will not be referred to after all of the known\nones are synced, because the subsequent changes will be streamed with\nthe root's schema. The initial sync worker would need to be taught to\nskip processing any partitioned tables that it sees, because now we'd\nbe handling the initial sync part via individual leaf partitions.\n\nThough the thing that makes this a bit tricky to implement is that the\npg_publication_tables view exposes way less information to be able to\ndetermine how the tables listed are related to each other. So I think\nwe'd need to fix the view to return leaf partitions' owning root table\n(if they're published through a pubviaroot publication) or rewrite the\nquery that fetch_table_list() uses to access the underlying catalogs\ndirectly in the back-ported version.\n\nThat is roughly how I'd go about this. 
Thoughts?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Oct 2021 17:25:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Oct 28, 2021 at 1:55 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 4:35 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > As there are basically two separate issues mentioned in the thread, I tried to\n> > summarize the discussion so far which might be helpful to others.\n> >\n> > * The first issue[1]:\n> >\n> > If we include both the partitioned table and (explicitly) its child partitions\n> > in the publication when set publish_via_partition_root=true, like:\n> > ---\n> > CREATE PUBLICATION pub FOR TABLE parent_table, child_table with (publish_via_partition_root=on);\n> > ---\n> > It could execute initial sync for both the partitioned(parent_table) table and\n> > (explicitly) its child partitions(child_table) which cause duplication of\n> > data in partition(child_table) in subscriber side.\n> >\n> > The reasons I considered this behavior a bug are:\n> >\n> > a) In this case, the behavior of initial sync is inconsistent with the behavior\n> > of transaction streaming. All changes in the leaf the partition will be applied\n> > using the identity and schema of the partitioned(root) table. 
But for the\n> > initial sync, it will execute sync for both the partitioned(root) table and\n> > (explicitly) its child partitions which cause duplication of data.\n> >\n> > b) The behavior of FOR TABLE is inconsistent with the behavior of FOR ALL TABLE.\n> > If user create a FOR ALL TABLE publication and set publish_via_partition_root=true,\n> > then only the top most partitioned(root) table will execute initial sync.\n> >\n> > IIRC, most people in this thread agreed that the current behavior is not\n> > expected. So, maybe it's time to try to fix it.\n> >\n> > Attach my fix patch here. The patch try to fix this issue by making the\n> > pg_publication_tables view only show partitioned table when\n> > publish_via_partition_root is true.\n> >\n> >\n> > * The second issue[2]:\n> > -----\n> > CREATE TABLE sale (sale_date date not null,country_code text, product_sku text,\n> > units integer) PARTITION BY RANGE (sale_date);\n> > CREATE TABLE sale_201901 PARTITION OF sale FOR VALUES FROM ('2019-01-01') TO\n> > ('2019-02-01');\n> > CREATE TABLE sale_201902 PARTITION OF sale FOR VALUES FROM ('2019-02-01') TO\n> > ('2019-03-01');\n> >\n> > (1) PUB: CREATE PUBLICATION pub FOR TABLE sale_201901,\n> > sale_201902 WITH (publish_via_partition_root=true);\n> > (2) SUB: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres host=localhost port=5432' PUBLICATION pub;\n> > (3) PUB: INSERT INTO sale VALUES('2019-01-01', 'AU', 'cpu', 5), ('2019-01-02', 'AU', 'disk', 8);\n> > (4) SUB: SELECT * FROM sale;\n> > (5) PUB: ALTER PUBLICATION pub ADD TABLE sale;\n> > (6) SUB: ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n> > (7) SUB: SELECT * FROM sale;\n> > -----\n> >\n> > In step (7), we can see duplication of data.\n> >\n> > The reason is that the INSERTed data is first published though the partitions,\n> > since initially there is no partitioned table in the publication (so\n> > publish_via_partition_root=true doesn't have any effect). 
But then adding the\n> > partitioned table to the publication and refreshing the publication in the\n> > subscriber, the data is then published \"using the identity and schema of the\n> > partitioned table\" due to publish_via_partition_root=true.\n> > (Copied from Greg's analysis).\n> >\n> > Whether this behavior is correct is still under debate.\n> >\n> >\n> > Overall, I think the second issue still needs further discussion while the\n> > first issue seems clear that most people think it's unexpected. So, I think it\n> > might be better to fix the first issue.\n>\n> Thanks for the summary, Hou-san, and sorry about my late reply.\n>\n> I had thought about this some last week and I am coming around to\n> recognizing the confusing user experience of the current behavior.\n> Though, I am not sure if removing partitions from the result of\n> pg_publication_tables view for pubviaroot publications is acceptable\n> as a fix, because that prevents replicating into a subscriber node\n> where tables that are partition root and a partition respectively on\n> the publisher are independent tables on the subscriber.\n>\n\nBut we already do that way when the publication is \"For All Tables\".\nAnyway, for the purpose of initial sync, it will just replicate the\nsame data in two different tables if the corresponding tables on the\nsubscriber-side are non-partitioned which I am not sure is what the\nuser will be expecting.\n\n> ISTM that\n> what Amit K mentioned in his first reply may be closer to what we may\n> ultimately need to do, which is this:\n>\n> \"I think one possible idea to investigate is that on the\n> subscriber-side, after fetching tables, we check the already\n> subscribed tables and if the child tables already exist then we ignore\n> the parent table and vice versa.\"\n>\n> I had also thought about a way to implement that a bit and part of\n> that is to make pg_publication_tables always expose leaf partitions,\n> that is, even if pubviaroot is true for a given 
publication. So, when\n> puviaroot is true we include the partition root actually mentioned in\n> the publication as we do currently (changes streamed after initial\n> sync will always use its schema so the subscriber better know about\n> it), and also leaf partitions which we currently don't. The latter\n> too so that a subscriber can determine which leaf partitions to sync\n> and which ones to not based on whether a given leaf partitions is\n> already known to the subscriber, which means their pg_subscription_rel\n> entries would need to be made which we don't currently. Leaf\n> partition state entries will not be referred to after all of the known\n> ones are synced, because the subsequent changes will be streamed with\n> the root's schema. The initial sync worker would need to be taught to\n> skip processing any partitioned tables that it sees, because now we'd\n> be handling the initial sync part via individual leaf partitions.\n>\n\nYeah, we can do something like this as well but my guess is that this\nwill be a bit complicated. 
OTOH, if we adopt Hou-san's patch for the\nfirst problem as described in his previous email then I think the\nsolution for the second problem could be simpler than what you\ndescribed above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Nov 2021 15:43:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Nov 3, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 1:55 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> >\n> > Thanks for the summary, Hou-san, and sorry about my late reply.\n> >\n> > I had thought about this some last week and I am coming around to\n> > recognizing the confusing user experience of the current behavior.\n> > Though, I am not sure if removing partitions from the result of\n> > pg_publication_tables view for pubviaroot publications is acceptable\n> > as a fix, because that prevents replicating into a subscriber node\n> > where tables that are partition root and a partition respectively on\n> > the publisher are independent tables on the subscriber.\n> >\n>\n> But we already do that way when the publication is \"For All Tables\".\n> Anyway, for the purpose of initial sync, it will just replicate the\n> same data in two different tables if the corresponding tables on the\n> subscriber-side are non-partitioned which I am not sure is what the\n> user will be expecting.\n>\n\nOn further thinking about this, I think we should define the behavior\nof replication among partitioned (on the publisher) and\nnon-partitioned (on the subscriber) tables a bit more clearly.\n\n- If the \"publish_via_partition_root\" is set for a publication then we\ncan always replicate to the table with the same name as the root table\nin publisher.\n- If the \"publish_via_partition_root\" is *not* set for a publication\nthen we can always 
replicate to the tables with the same name as the\nnon-root tables in publisher.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Nov 2021 09:43:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 4, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On further thinking about this, I think we should define the behavior\n> of replication among partitioned (on the publisher) and\n> non-partitioned (on the subscriber) tables a bit more clearly.\n>\n> - If the \"publish_via_partition_root\" is set for a publication then we\n> can always replicate to the table with the same name as the root table\n> in publisher.\n> - If the \"publish_via_partition_root\" is *not* set for a publication\n> then we can always replicate to the tables with the same name as the\n> non-root tables in publisher.\n>\n> Thoughts?\n>\n\nI'd adjust that wording slightly, because \"we can always replicate to\n...\" sounds a bit vague, and saying that an option is set or not set\ncould be misinterpreted, as the option could be \"set\" to false.\n\nHow about:\n\n- If \"publish_via_partition_root\" is true for a publication, then data\nis replicated to the table with the same name as the root (i.e.\npartitioned) table in the publisher.\n- If \"publish_via_partition_root\" is false (the default) for a\npublication, then data is replicated to tables with the same name as\nthe non-root (i.e. 
partition) tables in the publisher.\n\n?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 4 Nov 2021 17:53:24 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 4, 2021 at 12:23 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Nov 4, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On further thinking about this, I think we should define the behavior\n> > of replication among partitioned (on the publisher) and\n> > non-partitioned (on the subscriber) tables a bit more clearly.\n> >\n> > - If the \"publish_via_partition_root\" is set for a publication then we\n> > can always replicate to the table with the same name as the root table\n> > in publisher.\n> > - If the \"publish_via_partition_root\" is *not* set for a publication\n> > then we can always replicate to the tables with the same name as the\n> > non-root tables in publisher.\n> >\n> > Thoughts?\n> >\n>\n> I'd adjust that wording slightly, because \"we can always replicate to\n> ...\" sounds a bit vague, and saying that an option is set or not set\n> could be misinterpreted, as the option could be \"set\" to false.\n>\n> How about:\n>\n> - If \"publish_via_partition_root\" is true for a publication, then data\n> is replicated to the table with the same name as the root (i.e.\n> partitioned) table in the publisher.\n> - If \"publish_via_partition_root\" is false (the default) for a\n> publication, then data is replicated to tables with the same name as\n> the non-root (i.e. partition) tables in the publisher.\n>\n\nSounds good to me. 
If we follow this then I think the patch by Hou-San\nis good to solve the first problem as described in his last email [1]?\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716C756312959F293A822C794869%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Nov 2021 13:40:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 4, 2021 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 4, 2021 at 12:23 PM Greg Nancarrow <gregn4422@gmail.com>\nwrote:\n> >\n> > On Thu, Nov 4, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > >\n> > > On further thinking about this, I think we should define the behavior\n> > > of replication among partitioned (on the publisher) and\n> > > non-partitioned (on the subscriber) tables a bit more clearly.\n> > >\n> > > - If the \"publish_via_partition_root\" is set for a publication then we\n> > > can always replicate to the table with the same name as the root table\n> > > in publisher.\n> > > - If the \"publish_via_partition_root\" is *not* set for a publication\n> > > then we can always replicate to the tables with the same name as the\n> > > non-root tables in publisher.\n> > >\n> > > Thoughts?\n> > >\n> >\n> > I'd adjust that wording slightly, because \"we can always replicate to\n> > ...\" sounds a bit vague, and saying that an option is set or not set\n> > could be misinterpreted, as the option could be \"set\" to false.\n> >\n> > How about:\n> >\n> > - If \"publish_via_partition_root\" is true for a publication, then data\n> > is replicated to the table with the same name as the root (i.e.\n> > partitioned) table in the publisher.\n> > - If \"publish_via_partition_root\" is false (the default) for a\n> > publication, then data is replicated to tables with the same name 
as\n> > the non-root (i.e. partition) tables in the publisher.\n> >\n>\n> Sounds good to me. If we follow this then I think the patch by Hou-San\n> is good to solve the first problem as described in his last email [1]?\n>\n> [1] -\nhttps://www.postgresql.org/message-id/OS0PR01MB5716C756312959F293A822C794869%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n>\n\nAlmost.\nThe patch does seem to solve that first problem (double publish on\ntablesync).\nI used the following test (taken from [2]), and variations of it:\n\n--- Setup\ncreate schema sch1;\ncreate schema sch2;\ncreate table sch1.tbl1 (a int) partition by range (a);\ncreate table sch2.tbl1_part1 partition of sch1.tbl1 for values from (1) to\n(10);\ncreate table sch2.tbl1_part2 partition of sch1.tbl1 for values from\n(10) to (20);\ncreate schema sch3;\ncreate table sch3.t1(c1 int);\n\n--- Publication\ncreate publication pub1 for all tables in schema sch3, table\nsch1.tbl1, table sch2.tbl1_part1 with ( publish_via_partition_root=on);\ninsert into sch1.tbl1 values(1);\ninsert into sch1.tbl1 values(11);\ninsert into sch3.t1 values(1);\n\n---- Subscription\nCREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres host=localhost\nport=5432' PUBLICATION pub1;\n\n\n[2] -\nhttps://postgr.es/m/CALDaNm3vxjPMMSrVDNK0f8UWP+EQ5ry14xfEukmXsVg_UcwZNA@mail.gmail.com\n\n\nHowever, there did still seem to be a problem, if\npublish_via_partition_root is then set to false; it seems that can result\nin duplicate partition entries in the pg_publication_tables view, see below\n(this follows on from the test scenario given above):\n\npostgres=# select * from pg_publication_tables;\n pubname | schemaname | tablename\n---------+------------+-----------\n pub1 | sch1 | tbl1\n pub1 | sch3 | t1\n(2 rows)\n\npostgres=# alter publication pub1 set (publish_via_partition_root=false);\nALTER PUBLICATION\npostgres=# select * from pg_publication_tables;\n pubname | schemaname | tablename\n---------+------------+------------\n pub1 | sch2 | tbl1_part1\n 
pub1 | sch2 | tbl1_part2\n pub1 | sch2 | tbl1_part1\n pub1 | sch3 | t1\n(4 rows)\n\nSo I think the patch would need to be updated to prevent that.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia
", "msg_date": "Fri, 5 Nov 2021 14:20:15 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Friday, November 5, 2021 11:20 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n>On Thu, Nov 4, 2021 at 7:10 PM Amit Kapila <mailto:amit.kapila16@gmail.com> wrote:\r\n>>\r\n>> On Thu, Nov 4, 2021 at 12:23 PM Greg Nancarrow <mailto:gregn4422@gmail.com> wrote:\r\n>> >\r\n>> > On Thu, Nov 4, 2021 at 3:13 PM Amit Kapila <mailto:amit.kapila16@gmail.com> wrote:\r\n>> > >\r\n>> > > On further thinking about this, I think we should define the behavior\r\n>> > > of replication among partitioned (on the publisher) and\r\n>> > > non-partitioned (on the subscriber) tables a bit more clearly.\r\n>> > >\r\n>> > > - If the \"publish_via_partition_root\" is set for a publication then we\r\n>> > > can always replicate to the table with the same name as the root table\r\n>> > > in publisher.\r\n>> > > - If the \"publish_via_partition_root\" is *not* set for a publication\r\n>> > > then we can always replicate to the tables with the same name as the\r\n>> > > non-root tables in publisher.\r\n>> > >\r\n>> > > Thoughts?\r\n>> > >\r\n>> >\r\n>> > I'd adjust that wording slightly, because \"we can always replicate to\r\n>> > ...\" sounds a bit vague, and saying that an option is set or not set\r\n>> > could be misinterpreted, as the option could be \"set\" to false.\r\n>> >\r\n>> > How about:\r\n>> >\r\n>> > - If \"publish_via_partition_root\" is true for a publication, then data\r\n>> > is replicated to the table with the same name as the root (i.e.\r\n>> > partitioned) table in the publisher.\r\n>> > - If \"publish_via_partition_root\" is false (the default) for a\r\n>> > publication, then data is replicated to tables with the same name as\r\n>> > the non-root (i.e. 
partition) tables in the publisher.\r\n>> >\r\n>>\r\n>> Sounds good to me. If we follow this then I think the patch by Hou-San\r\n>> is good to solve the first problem as described in his last email [1]?\r\n>>\r\n>> [1] - https://www.postgresql.org/message-id/OS0PR01MB5716C756312959F293A822C794869%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n>>\r\n>\r\n>Almost.\r\n>The patch does seem to solve that first problem (double publish on tablesync).\r\n>I used the following test (taken from [2]), and variations of it: \r\n>\r\n>However, there did still seem to be a problem, if publish_via_partition_root is then set to false; it seems that can result in \r\n>duplicate partition entries in the pg_publication_tables view, see below (this follows on from the test scenario given above):\r\n>\r\n>postgres=# select * from pg_publication_tables;\r\n> pubname | schemaname | tablename \r\n>---------+------------+-----------\r\n> pub1 | sch1 | tbl1\r\n> pub1 | sch3 | t1\r\n>(2 rows)\r\n>\r\n>postgres=# alter publication pub1 set (publish_via_partition_root=false);\r\n>ALTER PUBLICATION\r\n>postgres=# select * from pg_publication_tables;\r\n> pubname | schemaname | tablename \r\n>---------+------------+------------\r\n> pub1 | sch2 | tbl1_part1\r\n> pub1 | sch2 | tbl1_part2\r\n> pub1 | sch2 | tbl1_part1\r\n> pub1 | sch3 | t1\r\n>(4 rows)\r\n>\r\n>So I think the patch would need to be updated to prevent that.\r\n\r\nThanks for testing the patch.\r\n\r\nThe reason of the duplicate output is that:\r\nThe existing function GetPublicationRelations doesn't de-duplicate the output\r\noid list. 
So, when adding both child and parent table to the\r\npublication(pubviaroot = false), the pg_publication_tables view will output\r\nduplicate partition entries.\r\n\r\nAttach the fix patch.\r\n0001 fix data double publish(first issue in this thread)\r\n0002 fix duplicate partition in view pg_publication_tables(reported by greg when testing the 0001 patch)\r\n\r\nAbout the fix for the second issue in this thread.\r\n> \"I think one possible idea to investigate is that on the \r\n> subscriber-side, after fetching tables, we check the already \r\n> subscribed tables and if the child tables already exist then we ignore \r\n> the parent table and vice versa.\"\r\n\r\nWhen looking into how to fix the second issue, I have a question:\r\n\r\nAfter changing publish_via_partition_root from false to true, the\r\nsubscriber will fetch the partitioned table from the publisher when refreshing.\r\n\r\nOn the subscriber side, if all the child tables of the partitioned table are already\r\nsubscribed, then we can just skip the table sync for the partitioned table. But\r\nif only some of the child tables (not all child tables) were already subscribed,\r\nshould we skip the partitioned table's table sync? 
I am not sure about the\r\nappropriate behavior here.\r\n\r\nWhat do you think ?\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 11 Nov 2021 06:52:40 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 11, 2021 at 5:52 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> When looking into how to fix the second issue, I have a question:\n>\n> After changing publish_via_partition_root from false to true, the\n> subcriber will fetch the partitioned table from publisher when refreshing.\n>\n> In subsriber side, If all the child tables of the partitioned table already\n> subscribed, then we can just skip the table sync for the partitioned table. But\n> if only some of the child tables(not all child tables) were already subscribed,\n> should we skip the partitioned table's table sync ? I am not sure about the\n> appropriate behavior here.\n>\n> What do you think ?\n>\n\nI'm not sure you can skip the partitioned table's table sync as you\nare suggesting, because on the subscriber side, the tables are mapped\nby name, so what is a partitioned table on the publisher side might\nnot be a partitioned table on the subscriber side (e.g. might be an\nordinary table; and similarly for the partitions) or it might be\npartitioned differently to that on the publisher side. 
(I might be\nwrong here, and I don't have a good solution, but I can see the\npotential for inconsistent data resulting in this case, unless say,\nthe subscriber \"child tables\" are first truncated on the refresh, if\nthey are in fact partitions of the root, and then the table sync\npublishes the existing data via the root)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 11 Nov 2021 19:14:26 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 11, 2021 at 1:44 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 5:52 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > When looking into how to fix the second issue, I have a question:\n> >\n> > After changing publish_via_partition_root from false to true, the\n> > subcriber will fetch the partitioned table from publisher when refreshing.\n> >\n> > In subsriber side, If all the child tables of the partitioned table already\n> > subscribed, then we can just skip the table sync for the partitioned table. But\n> > if only some of the child tables(not all child tables) were already subscribed,\n> > should we skip the partitioned table's table sync ? I am not sure about the\n> > appropriate behavior here.\n> >\n> > What do you think ?\n> >\n>\n> I'm not sure you can skip the partitioned table's table sync as you\n> are suggesting, because on the subscriber side, the tables are mapped\n> by name, so what is a partitioned table on the publisher side might\n> not be a partitioned table on the subscriber side (e.g. might be an\n> ordinary table; and similarly for the partitions) or it might be\n> partitioned differently to that on the publisher side.\n>\n\nSure, we don't know about that, or at least there is no such mapping\nthat is recorded. 
So, I think we should skip it even if any one of the\nchild tables is present.\n\n> (I might be\n> wrong here, and I don't have a good solution, but I can see the\n> potential for inconsistent data resulting in this case, unless say,\n> the subscriber \"child tables\" are first truncated on the refresh, if\n> they are in fact partitions of the root, and then the table sync\n> publishes the existing data via the root)\n>\n\nDo you want to say the current behavior for this case, where data is copied\ntwice, is okay? I think we need to find a way to handle this and\ndocument it to set the expectations right.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Nov 2021 09:41:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Nov 11, 2021 at 12:22 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, November 5, 2021 11:20 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >On Thu, Nov 4, 2021 at 7:10 PM Amit Kapila <mailto:amit.kapila16@gmail.com> wrote:\n> >\n> >Almost.\n> >The patch does seem to solve that first problem (double publish on tablesync).\n> >I used the following test (taken from [2]), and variations of it:\n> >\n> >However, there did still seem to be a problem, if publish_via_partition_root is then set to false; it seems that can result in\n> >duplicate partition entries in the pg_publication_tables view, see below (this follows on from the test scenario given above):\n> >\n> >postgres=# select * from pg_publication_tables;\n> > pubname | schemaname | tablename\n> 
>---------+------------+------------\n> > pub1 | sch2 | tbl1_part1\n> > pub1 | sch2 | tbl1_part2\n> > pub1 | sch2 | tbl1_part1\n> > pub1 | sch3 | t1\n> >(4 rows)\n> >\n> >So I think the patch would need to be updated to prevent that.\n>\n> Thanks for testing the patch.\n>\n> The reason of the duplicate output is that:\n> The existing function GetPublicationRelations doesn't de-duplicate the output\n> oid list. So, when adding both child and parent table to the\n> publication(pubviaroot = false), the pg_publication_tables view will output\n> duplicate partition.\n>\n> Attach the fix patch.\n> 0001 fix data double publish(first issue in this thread)\n> 0002 fix duplicate partition in view pg_publication_tables(reported by greg when testing the 0001 patch)\n>\n\nCan we start a separate thread to discuss the 0002 patch as that\ndoesn't seem directly to duplicate data issues being discussed here?\nPlease specify the exact test in the email as that would make it\neasier to understand the problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Nov 2021 09:57:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Friday, November 12, 2021 12:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Nov 11, 2021 at 12:22 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, November 5, 2021 11:20 AM Greg Nancarrow\r\n> <gregn4422@gmail.com> wrote:\r\n> > >On Thu, Nov 4, 2021 at 7:10 PM Amit Kapila\r\n> <mailto:amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > >Almost.\r\n> > >The patch does seem to solve that first problem (double publish on\r\n> tablesync).\r\n> > >I used the following test (taken from [2]), and variations of it:\r\n> > >\r\n> > >However, there did still seem to be a problem, if\r\n> > >publish_via_partition_root is then set to false; it 
seems that can result in\r\n> duplicate partition entries in the pg_publication_tables view, see below (this\r\n> follows on from the test scenario given above):\r\n> > >\r\n> > >postgres=# select * from pg_publication_tables; pubname | schemaname\r\n> > >| tablename\r\n> > >---------+------------+-----------\r\n> > > pub1 | sch1 | tbl1\r\n> > > pub1 | sch3 | t1\r\n> > >(2 rows)\r\n> > >\r\n> > >postgres=# alter publication pub1 set\r\n> > >(publish_via_partition_root=false);\r\n> > >ALTER PUBLICATION\r\n> > >postgres=# select * from pg_publication_tables; pubname | schemaname\r\n> > >| tablename\r\n> > >---------+------------+------------\r\n> > > pub1 | sch2 | tbl1_part1\r\n> > > pub1 | sch2 | tbl1_part2\r\n> > > pub1 | sch2 | tbl1_part1\r\n> > > pub1 | sch3 | t1\r\n> > >(4 rows)\r\n> > >\r\n> > >So I think the patch would need to be updated to prevent that.\r\n> >\r\n> > Thanks for testing the patch.\r\n> >\r\n> > The reason of the duplicate output is that:\r\n> > The existing function GetPublicationRelations doesn't de-duplicate the\r\n> > output oid list. 
So, when adding both child and parent table to the\r\n> > publication(pubviaroot = false), the pg_publication_tables view will\r\n> > output duplicate partition.\r\n> >\r\n> > Attach the fix patch.\r\n> > 0001 fix data double publish(first issue in this thread)\r\n> > 0002 fix duplicate partition in view pg_publication_tables(reported by\r\n> > greg when testing the 0001 patch)\r\n> >\r\n> \r\n> Can we start a separate thread to discuss the 0002 patch as that doesn't seem\r\n> directly to duplicate data issues being discussed here?\r\n> Please specify the exact test in the email as that would make it easier to\r\n> understand the problem.\r\n\r\nThanks for the suggestion.\r\nI have started a new thread about this issue[1].\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716E97F00732B52DC2BBC2594989%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 16 Nov 2021 01:56:41 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thursday, November 11, 2021 2:53 PM houzj.fnst@fujitsu.com wrote:\r\n> Attach the fix patch.\r\n> 0001 fix data double publish(first issue in this thread)\r\n\r\nIn another thread[1], Amit L suggested that it'd be nice to add a testcase in\r\nsrc/test/subscription/. 
So, attach a new version patch which add a testcase in\r\nt/013_partition.pl.\r\n\r\n[1] https://www.postgresql.org/message-id/CA%2BHiwqEjV%3D7iEW8hxnr73pWsDQuonDPLgsxXTYDQzDA7W9vrmw%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 29 Nov 2021 08:51:35 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Nov 29, 2021 at 2:21 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, November 11, 2021 2:53 PM houzj.fnst@fujitsu.com wrote:\n> > Attach the fix patch.\n> > 0001 fix data double publish(first issue in this thread)\n>\n> In another thread[1], Amit L suggested that it'd be nice to add a testcase in\n> src/test/subscription/. So, attach a new version patch which add a testcase in\n> t/013_partition.pl.\n>\n\nThanks, your patch looks good to me. I have slightly changed the\ncomments and commit message in the attached.\n\nI think we should back-patch this but I am slightly worried that if\nsomeone is dependent on the view pg_publication_tables to return both\nparent and child tables for publications that have both of those\ntables and published with publish_via_partition_root as true then this\nmight break his usage. But OTOH, I don't see why someone would do like\nthat and she might face some problems like what we are trying to solve\nhere.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 1 Dec 2021 16:45:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Dec 1, 2021 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks, your patch looks good to me. 
I have slightly changed the\n> comments and commit message in the attached.\n>\n\nI'd suggest tidying the patch comment a bit:\n\n\"We publish the child table's data twice for a publication that has both\nchild and parent tables and is published with publish_via_partition_root\nas true. This happens because subscribers will initiate synchronization\nusing both parent and child tables, since it gets both as separate tables\nin the initial table list.\"\n\nAlso, perhaps the following additional comment (or similar) could be\nadded to the pg_publication_tables documentation in catalogs.sgml:\n\nFor publications of partitioned tables with publish_via_partition_root\nset to true, the partitioned table itself (rather than the individual\npartitions) is included in the view.\n\n> I think we should back-patch this but I am slightly worried ...\n\nI'd be in favor of back-patching this.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Dec 2021 10:21:39 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Dec 2, 2021 at 4:51 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Also, perhaps the following additional comment (or similar) could be\n> added to the pg_publication_tables documentation in catalogs.sgml:\n>\n> For publications of partitioned tables with publish_via_partition_root\n> set to true, the partitioned table itself (rather than the individual\n> partitions) is included in the view.\n>\n\nOkay, but I think it is better to add the behavior both when\npublish_via_partition_root is set to true and false. 
As in the case of\nfalse, it won't include the partitioned table itself.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 2 Dec 2021 08:03:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Dec 2, 2021 at 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > For publications of partitioned tables with publish_via_partition_root\n> > set to true, the partitioned table itself (rather than the individual\n> > partitions) is included in the view.\n> >\n>\n> Okay, but I think it is better to add the behavior both when\n> publish_via_partition_root is set to true and false. As in the case of\n> false, it won't include the partitioned table itself.\n>\n\nIf you updated my original description to say \"(instead of just the\nindividual partitions)\", it would imply the same I think.\nBut I don't mind if you want to explicitly state both cases to make it clear.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Dec 2021 13:48:34 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Dec 1, 2021 at 8:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Nov 29, 2021 at 2:21 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Thursday, November 11, 2021 2:53 PM houzj.fnst@fujitsu.com wrote:\n> > > Attach the fix patch.\n> > > 0001 fix data double publish(first issue in this thread)\n> >\n> > In another thread[1], Amit L suggested that it'd be nice to add a testcase in\n> > src/test/subscription/. So, attach a new version patch which add a testcase in\n> > t/013_partition.pl.\n> >\n>\n> Thanks, your patch looks good to me. 
I have slightly changed the\n> comments and commit message in the attached.\n\nPatch looks good to me too. I confirmed that the newly added\nsubscription test fails with HEAD.\n\n> I think we should back-patch this but I am slightly worried that if\n> someone is dependent on the view pg_publication_tables to return both\n> parent and child tables for publications that have both of those\n> tables and published with publish_via_partition_root as true then this\n> might break his usage. But OTOH, I don't see why someone would do like\n> that and she might face some problems like what we are trying to solve\n> here.\n\nYeah, back-patching may not be such a bad idea.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Dec 2021 12:30:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Dec 2, 2021 at 1:48 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> If you updated my original description to say \"(instead of just the\n> individual partitions)\", it would imply the same I think.\n> But I don't mind if you want to explicitly state both cases to make it clear.\n>\n\nFor example, something like:\n\nFor publications of partitioned tables with publish_via_partition_root\nset to true, only the partitioned table (and not its partitions) is\nincluded in the view, whereas if publish_via_partition_root is set to\nfalse, only the individual partitions are included in the view.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Dec 2021 15:10:54 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Dec 2, 2021 at 9:41 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Dec 
2, 2021 at 1:48 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > If you updated my original description to say \"(instead of just the\n> > individual partitions)\", it would imply the same I think.\n> > But I don't mind if you want to explicitly state both cases to make it clear.\n> >\n>\n> For example, something like:\n>\n> For publications of partitioned tables with publish_via_partition_root\n> set to true, only the partitioned table (and not its partitions) is\n> included in the view, whereas if publish_via_partition_root is set to\n> false, only the individual partitions are included in the view.\n>\n\nYeah, that sounds good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 2 Dec 2021 10:20:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thursday, December 2, 2021 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Dec 2, 2021 at 9:41 AM Greg Nancarrow <gregn4422@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Dec 2, 2021 at 1:48 PM Greg Nancarrow <gregn4422@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > If you updated my original description to say \"(instead of just the\r\n> > > individual partitions)\", it would imply the same I think.\r\n> > > But I don't mind if you want to explicitly state both cases to make it clear.\r\n> > >\r\n> >\r\n> > For example, something like:\r\n> >\r\n> > For publications of partitioned tables with publish_via_partition_root\r\n> > set to true, only the partitioned table (and not its partitions) is\r\n> > included in the view, whereas if publish_via_partition_root is set to\r\n> > false, only the individual partitions are included in the view.\r\n> >\r\n> \r\n> Yeah, that sounds good to me.\r\n\r\nIt looks good to me as well.\r\nAttach the patches for (HEAD~13) which merge the suggested doc change. 
I\r\nprepared the code patch and test patch separately to make it easier for committer \r\nto confirm.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 2 Dec 2021 08:54:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thursday, December 2, 2021 4:54 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> On Thursday, December 2, 2021 12:50 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > On Thu, Dec 2, 2021 at 9:41 AM Greg Nancarrow <gregn4422@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > On Thu, Dec 2, 2021 at 1:48 PM Greg Nancarrow <gregn4422@gmail.com>\r\n> > wrote:\r\n> > > >\r\n> > > > If you updated my original description to say \"(instead of just\r\n> > > > the individual partitions)\", it would imply the same I think.\r\n> > > > But I don't mind if you want to explicitly state both cases to make it clear.\r\n> > > >\r\n> > >\r\n> > > For example, something like:\r\n> > >\r\n> > > For publications of partitioned tables with\r\n> > > publish_via_partition_root set to true, only the partitioned table\r\n> > > (and not its partitions) is included in the view, whereas if\r\n> > > publish_via_partition_root is set to false, only the individual partitions are\r\n> included in the view.\r\n> > >\r\n> >\r\n> > Yeah, that sounds good to me.\r\n> \r\n> It looks good to me as well.\r\n> Attach the patches for (HEAD~13) which merge the suggested doc change. 
I\r\n> prepared the code patch and test patch separately to make it easier for\r\n> committer to confirm.\r\n\r\nIt seems we might not need to backpatch the doc change, so\r\nattach another version which remove the doc changes from backpatch patches.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 3 Dec 2021 05:54:21 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Dec 3, 2021 at 11:24 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, December 2, 2021 4:54 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > On Thursday, December 2, 2021 12:50 PM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > On Thu, Dec 2, 2021 at 9:41 AM Greg Nancarrow <gregn4422@gmail.com>\n> > > wrote:\n> > > >\n> > > > On Thu, Dec 2, 2021 at 1:48 PM Greg Nancarrow <gregn4422@gmail.com>\n> > > wrote:\n> > > > >\n> > > > > If you updated my original description to say \"(instead of just\n> > > > > the individual partitions)\", it would imply the same I think.\n> > > > > But I don't mind if you want to explicitly state both cases to make it clear.\n> > > > >\n> > > >\n> > > > For example, something like:\n> > > >\n> > > > For publications of partitioned tables with\n> > > > publish_via_partition_root set to true, only the partitioned table\n> > > > (and not its partitions) is included in the view, whereas if\n> > > > publish_via_partition_root is set to false, only the individual partitions are\n> > included in the view.\n> > > >\n> > >\n> > > Yeah, that sounds good to me.\n> >\n> > It looks good to me as well.\n> > Attach the patches for (HEAD~13) which merge the suggested doc change. 
I\n> > prepared the code patch and test patch separately to make it easier for\n> > committer to confirm.\n>\n> It seems we might not need to backpatch the doc change, so\n> attach another version which remove the doc changes from backpatch patches.\n\nThanks for the patches, the patch applies and the test passes in head\nand the back branches. one minor suggestion:\n1) Shall we change:\n+ <para>\n+ For publications of partitioned tables with\n+ <literal>publish_via_partition_root</literal> set to\n+ <literal>true</literal>, only the partitioned table (and not its partitions)\n+ is included in the view, whereas if\n+ <literal>publish_via_partition_root</literal> is set to\n+ <literal>false</literal>, only the individual partitions are included in the\n+ view.\n+ </para>\nTo:\n+ <para>\n+ For publications of partitioned tables with\n+ <literal>publish_via_partition_root</literal> set to\n+ <literal>true</literal>, only the partitioned table (and not its partitions)\n+ is included in the view, whereas if\n+ <literal>publish_via_partition_root</literal> is set to\n+ <literal>false</literal>, only the individual partitions (and not the\n+ partitioned table) are included in the\n+ view.\n+ </para>\n\n2) Any particular reason why the code and tests are backbranched but\nnot the document changes?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 7 Dec 2021 17:53:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Dec 7, 2021 at 5:53 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 11:24 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n>\n> 2) Any particular reason why the code and tests are backbranched but\n> not the document changes?\n>\n\nI am not sure whether we need the doc change or not as this is not a\nnew feature and even if we need it as an improvement to docs, shall 
we\nconsider backpatching it? I felt that code changes are required to fix\na known issue so the case of backpatching it is clear.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Dec 2021 11:11:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Dec 8, 2021 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 7, 2021 at 5:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Dec 3, 2021 at 11:24 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> >\n> > 2) Any particular reason why the code and tests are backbranched but\n> > not the document changes?\n> >\n>\n> I am not sure whether we need the doc change or not as this is not a\n> new feature and even if we need it as an improvement to docs, shall we\n> consider backpatching it? I felt that code changes are required to fix\n> a known issue so the case of backpatching it is clear.\n\nThanks for the clarification, I got your point. I'm fine either way\nregarding the documentation change. 
The rest of the patch looks good\nto me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 8 Dec 2021 11:30:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Dec 8, 2021 at 11:30 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 7, 2021 at 5:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Fri, Dec 3, 2021 at 11:24 AM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > >\n> > > 2) Any particular reason why the code and tests are backbranched but\n> > > not the document changes?\n> > >\n> >\n> > I am not sure whether we need the doc change or not as this is not a\n> > new feature and even if we need it as an improvement to docs, shall we\n> > consider backpatching it? I felt that code changes are required to fix\n> > a known issue so the case of backpatching it is clear.\n>\n> Thanks for the clarification, I got your point. I'm fine either way\n> regarding the documentation change. 
The rest of the patch looks good\n> to me.\n>\n\nOkay, I have also verified the code and test changes for all branches.\nI'll wait for a day to see if anybody else has any comments and then\ncommit this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Dec 2021 11:38:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Dec 8, 2021 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 11:30 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 7, 2021 at 5:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Fri, Dec 3, 2021 at 11:24 AM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > >\n> > > > 2) Any particular reason why the code and tests are backbranched but\n> > > > not the document changes?\n> > > >\n> > >\n> > > I am not sure whether we need the doc change or not as this is not a\n> > > new feature and even if we need it as an improvement to docs, shall we\n> > > consider backpatching it? I felt that code changes are required to fix\n> > > a known issue so the case of backpatching it is clear.\n> >\n> > Thanks for the clarification, I got your point. I'm fine either way\n> > regarding the documentation change. 
The rest of the patch looks good\n> > to me.\n> >\n>\n> Okay, I have also verified the code and test changes for all branches.\n> I'll wait for a day to see if anybody else has any comments and then\n> commit this.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Dec 2021 16:21:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi,\r\n\r\nWhen reviewing some logical replication related features. I noticed another\r\npossible problem if the subscriber subscribes multiple publications which\r\npublish parent and child table.\r\n\r\nFor example:\r\n\r\n----pub\r\ncreate table t (a int, b int, c int) partition by range (a);\r\ncreate table t_1 partition of t for values from (1) to (10);\r\n\r\ncreate publication pub1 for table t\r\n with (PUBLISH_VIA_PARTITION_ROOT);\r\ncreate publication pub2 for table t_1\r\n with (PUBLISH_VIA_PARTITION_ROOT);\r\n\r\n----sub\r\n---- prepare table t and t_1\r\nCREATE SUBSCRIPTION sub CONNECTION 'port=10000 dbname=postgres' PUBLICATION pub1, pub2;\r\n\r\nselect * from pg_subscription_rel ;\r\n srsubid | srrelid | srsubstate | srsublsn\r\n---------+---------+------------+-----------\r\n 16391 | 16385(t) | r | 0/150D100\r\n 16391 | 16388(t_1) | r | 0/150D138\r\n\r\nIf subscribe two publications one of them publish parent table with\r\n(pubviaroot=true) and another publish child table. Both the parent table and\r\nchild table will exist in pg_subscription_rel which also means we will do\r\ninitial copy for both tables.\r\n\r\nBut after initial copy, we only publish change with the schema of the parent\r\ntable(t). It looks a bit inconsistent.\r\n\r\nBased on the document of PUBLISH_VIA_PARTITION_ROOT option. 
I think the\r\nexpected behavior could be we only store the top most parent(table t) in\r\npg_subscription_rel and do initial copy for it if pubviaroot is on. I haven't\r\nthought about how to fix this and will investigate this later.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 10 Mar 2022 02:17:32 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thur, Mar 10, 2021 at 10:08 AM houzj.fnst@fujitsu.com wrote:\r\n> Hi,\r\n> \r\n> When reviewing some logical replication related features. I noticed another\r\n> possible problem if the subscriber subscribes multiple publications which\r\n> publish parent and child table.\r\n> \r\n> For example:\r\n> \r\n> ----pub\r\n> create table t (a int, b int, c int) partition by range (a);\r\n> create table t_1 partition of t for values from (1) to (10);\r\n> \r\n> create publication pub1 for table t\r\n> with (PUBLISH_VIA_PARTITION_ROOT);\r\n> create publication pub2 for table t_1\r\n> with (PUBLISH_VIA_PARTITION_ROOT);\r\n> \r\n> ----sub\r\n> ---- prepare table t and t_1\r\n> CREATE SUBSCRIPTION sub CONNECTION 'port=10000 dbname=postgres'\r\n> PUBLICATION pub1, pub2;\r\n> \r\n> select * from pg_subscription_rel ;\r\n> srsubid | srrelid | srsubstate | srsublsn\r\n> ---------+---------+------------+-----------\r\n> 16391 | 16385(t) | r | 0/150D100\r\n> 16391 | 16388(t_1) | r | 0/150D138\r\n> \r\n> If subscribe two publications one of them publish parent table with\r\n> (pubviaroot=true) and another publish child table. Both the parent table and\r\n> child table will exist in pg_subscription_rel which also means we will do\r\n> initial copy for both tables.\r\n> \r\n> But after initial copy, we only publish change with the schema of the parent\r\n> table(t). 
It looks a bit inconsistent.\r\n> \r\n> Based on the document of PUBLISH_VIA_PARTITION_ROOT option. I think the\r\n> expected behavior could be we only store the top most parent(table t) in\r\n> pg_subscription_rel and do initial copy for it if pubviaroot is on. I haven't\r\n> thought about how to fix this and will investigate this later.\r\nHi,\r\nI try to fix this bug. Attach the patch.\r\n\r\nThe current HEAD get table list for one publication by invoking function\r\npg_get_publication_tables. If multiple publications are subscribed, then this\r\nfunction is invoked multiple times. So option PUBLISH_VIA_PARTITION_ROOT works\r\nindependently on every publication, I think it does not work correctly on\r\ndifferent publications of the same subscription.\r\n\r\nSo I fix this bug by the following two steps:\r\nFirst step,\r\nI get oids of subscribed tables by publication list. Then for tables with the\r\nsame topmost root table, I filter them base on the option\r\nPUBLISH_VIA_PARTITION_ROOT(see new function filter_partitions_oids).\r\nAfter filtering, I get the final oid list.\r\nSecond step,\r\nI get the required informations(nspname and relname) base on the oid list of\r\nfirst step.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 7 Apr 2022 03:08:14 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\nOn Thursday, April 7, 2022 11:08 AM\r\n> \r\n> On Thur, Mar 10, 2021 at 10:08 AM houzj.fnst@fujitsu.com wrote:\r\n> > Hi,\r\n> >\r\n> > When reviewing some logical replication related features. 
I noticed another\r\n> > possible problem if the subscriber subscribes multiple publications which\r\n> > publish parent and child table.\r\n> >\r\n> > For example:\r\n> >\r\n> > ----pub\r\n> > create table t (a int, b int, c int) partition by range (a);\r\n> > create table t_1 partition of t for values from (1) to (10);\r\n> >\r\n> > create publication pub1 for table t\r\n> > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> > create publication pub2 for table t_1\r\n> > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> >\r\n> > ----sub\r\n> > ---- prepare table t and t_1\r\n> > CREATE SUBSCRIPTION sub CONNECTION 'port=10000 dbname=postgres'\r\n> > PUBLICATION pub1, pub2;\r\n> >\r\n> > select * from pg_subscription_rel ;\r\n> > srsubid | srrelid | srsubstate | srsublsn\r\n> > ---------+---------+------------+-----------\r\n> > 16391 | 16385(t) | r | 0/150D100\r\n> > 16391 | 16388(t_1) | r | 0/150D138\r\n> >\r\n> > If subscribe two publications one of them publish parent table with\r\n> > (pubviaroot=true) and another publish child table. Both the parent table and\r\n> > child table will exist in pg_subscription_rel which also means we will do\r\n> > initial copy for both tables.\r\n> >\r\n> > But after initial copy, we only publish change with the schema of the parent\r\n> > table(t). It looks a bit inconsistent.\r\n> >\r\n> > Based on the document of PUBLISH_VIA_PARTITION_ROOT option. I think\r\n> the\r\n> > expected behavior could be we only store the top most parent(table t) in\r\n> > pg_subscription_rel and do initial copy for it if pubviaroot is on. I haven't\r\n> > thought about how to fix this and will investigate this later.\r\n> Hi,\r\n> I try to fix this bug. Attach the patch.\r\n> \r\n> The current HEAD get table list for one publication by invoking function\r\n> pg_get_publication_tables. If multiple publications are subscribed, then this\r\n> function is invoked multiple times. 
So option PUBLISH_VIA_PARTITION_ROOT\r\n> works\r\n> independently on every publication, I think it does not work correctly on\r\n> different publications of the same subscription.\r\n> \r\n> So I fix this bug by the following two steps:\r\n> First step,\r\n> I get oids of subscribed tables by publication list. Then for tables with the\r\n> same topmost root table, I filter them base on the option\r\n> PUBLISH_VIA_PARTITION_ROOT(see new function filter_partitions_oids).\r\n> After filtering, I get the final oid list.\r\n> Second step,\r\n> I get the required informations(nspname and relname) base on the oid list of\r\n> first step.\r\n\r\nThanks for updating the patch.\r\nI confirmed that the bug is fixed by this patch.\r\n\r\nOne suggestion is that can we simplify the code by moving the logic of checking\r\nthe ancestor into the SQL ?. For example, we could filter the outpout of\r\npg_publication_tables by adding A WHERE clause which checks whether the table\r\nis a partition and if its ancestor is also in the output. I think we can also\r\nfilter the needless partition in this approach.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 19 Apr 2022 07:04:43 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Apr 19, 2022 3:05 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n>\r\n> > -----Original Message-----\r\n> > From: Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> On Thursday, April 7, 2022 11:08 AM\r\n> >\r\n> > On Thur, Mar 10, 2021 at 10:08 AM houzj.fnst@fujitsu.com wrote:\r\n> > > Hi,\r\n> > >\r\n> > > When reviewing some logical replication related features. 
I noticed another\r\n> > > possible problem if the subscriber subscribes multiple publications which\r\n> > > publish parent and child table.\r\n> > >\r\n> > > For example:\r\n> > >\r\n> > > ----pub\r\n> > > create table t (a int, b int, c int) partition by range (a);\r\n> > > create table t_1 partition of t for values from (1) to (10);\r\n> > >\r\n> > > create publication pub1 for table t\r\n> > > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> > > create publication pub2 for table t_1\r\n> > > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> > >\r\n> > > ----sub\r\n> > > ---- prepare table t and t_1\r\n> > > CREATE SUBSCRIPTION sub CONNECTION 'port=10000 dbname=postgres'\r\n> > > PUBLICATION pub1, pub2;\r\n> > >\r\n> > > select * from pg_subscription_rel ;\r\n> > > srsubid | srrelid | srsubstate | srsublsn\r\n> > > ---------+---------+------------+-----------\r\n> > > 16391 | 16385(t) | r | 0/150D100\r\n> > > 16391 | 16388(t_1) | r | 0/150D138\r\n> > >\r\n> > > If subscribe two publications one of them publish parent table with\r\n> > > (pubviaroot=true) and another publish child table. Both the parent table and\r\n> > > child table will exist in pg_subscription_rel which also means we will do\r\n> > > initial copy for both tables.\r\n> > >\r\n> > > But after initial copy, we only publish change with the schema of the parent\r\n> > > table(t). It looks a bit inconsistent.\r\n> > >\r\n> > > Based on the document of PUBLISH_VIA_PARTITION_ROOT option. I think\r\n> > the\r\n> > > expected behavior could be we only store the top most parent(table t) in\r\n> > > pg_subscription_rel and do initial copy for it if pubviaroot is on. I haven't\r\n> > > thought about how to fix this and will investigate this later.\r\n> > Hi,\r\n> > I try to fix this bug. Attach the patch.\r\n> >\r\n> > The current HEAD get table list for one publication by invoking function\r\n> > pg_get_publication_tables. If multiple publications are subscribed, then this\r\n> > function is invoked multiple times. 
So option PUBLISH_VIA_PARTITION_ROOT\r\n> > works\r\n> > independently on every publication, I think it does not work correctly on\r\n> > different publications of the same subscription.\r\n> >\r\n> > So I fix this bug by the following two steps:\r\n> > First step,\r\n> > I get oids of subscribed tables by publication list. Then for tables with the\r\n> > same topmost root table, I filter them base on the option\r\n> > PUBLISH_VIA_PARTITION_ROOT(see new function filter_partitions_oids).\r\n> > After filtering, I get the final oid list.\r\n> > Second step,\r\n> > I get the required informations(nspname and relname) base on the oid list of\r\n> > first step.\r\n> \r\n> Thanks for updating the patch.\r\n> I confirmed that the bug is fixed by this patch.\r\n> \r\n> One suggestion is that can we simplify the code by moving the logic of checking\r\n> the ancestor into the SQL ?. For example, we could filter the outpout of\r\n> pg_publication_tables by adding A WHERE clause which checks whether the table\r\n> is a partition and if its ancestor is also in the output. I think we can also\r\n> filter the needless partition in this approach.\r\n> \r\n\r\nI agreed with you and I tried to fix this problem in a simpler way. 
What we want\r\nis to exclude the partitioned table whose ancestor is also need to be\r\nreplicated, so how about implementing that by using the following SQL when\r\ngetting the table list from publisher?\r\n\r\nSELECT DISTINCT ns.nspname, c.relname\r\nFROM pg_catalog.pg_publication_tables t\r\nJOIN pg_catalog.pg_namespace ns ON ns.nspname = t.schemaname\r\nJOIN pg_catalog.pg_class c ON c.relname = t.tablename AND c.relnamespace = ns.oid\r\nWHERE t.pubname IN ('p0','p2')\r\nAND (c.relispartition IS FALSE OR NOT EXISTS (SELECT 1 FROM pg_partition_ancestors(c.oid)\r\nWHERE relid IN ( SELECT DISTINCT (schemaname||'.'||tablename)::regclass::oid\r\nFROM pg_catalog.pg_publication_tables t\r\nWHERE t.pubname IN ('p0','p2') ) AND relid != c.oid));\r\n\r\nPlease find the attached patch which used this approach, I also merged the test\r\nin Wang's patch into it.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Tue, 19 Apr 2022 08:53:09 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Apr 19, 2022 4:53 PM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\r\n> On Tue, Apr 19, 2022 3:05 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > > -----Original Message-----\r\n> > > From: Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> > On Thursday, April 7, 2022 11:08 AM\r\n> > >\r\n> > > On Thur, Mar 10, 2021 at 10:08 AM houzj.fnst@fujitsu.com wrote:\r\n> > > > Hi,\r\n> > > >\r\n> > > > When reviewing some logical replication related features. 
I noticed\r\n> another\r\n> > > > possible problem if the subscriber subscribes multiple publications which\r\n> > > > publish parent and child table.\r\n> > > >\r\n> > > > For example:\r\n> > > >\r\n> > > > ----pub\r\n> > > > create table t (a int, b int, c int) partition by range (a);\r\n> > > > create table t_1 partition of t for values from (1) to (10);\r\n> > > >\r\n> > > > create publication pub1 for table t\r\n> > > > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> > > > create publication pub2 for table t_1\r\n> > > > with (PUBLISH_VIA_PARTITION_ROOT);\r\n> > > >\r\n> > > > ----sub\r\n> > > > ---- prepare table t and t_1\r\n> > > > CREATE SUBSCRIPTION sub CONNECTION 'port=10000 dbname=postgres'\r\n> > > > PUBLICATION pub1, pub2;\r\n> > > >\r\n> > > > select * from pg_subscription_rel ;\r\n> > > > srsubid | srrelid | srsubstate | srsublsn\r\n> > > > ---------+---------+------------+-----------\r\n> > > > 16391 | 16385(t) | r | 0/150D100\r\n> > > > 16391 | 16388(t_1) | r | 0/150D138\r\n> > > >\r\n> > > > If subscribe two publications one of them publish parent table with\r\n> > > > (pubviaroot=true) and another publish child table. Both the parent table\r\n> and\r\n> > > > child table will exist in pg_subscription_rel which also means we will do\r\n> > > > initial copy for both tables.\r\n> > > >\r\n> > > > But after initial copy, we only publish change with the schema of the\r\n> parent\r\n> > > > table(t). It looks a bit inconsistent.\r\n> > > >\r\n> > > > Based on the document of PUBLISH_VIA_PARTITION_ROOT option. I think\r\n> > > the\r\n> > > > expected behavior could be we only store the top most parent(table t) in\r\n> > > > pg_subscription_rel and do initial copy for it if pubviaroot is on. I haven't\r\n> > > > thought about how to fix this and will investigate this later.\r\n> > > Hi,\r\n> > > I try to fix this bug. Attach the patch.\r\n> > >\r\n> > > The current HEAD get table list for one publication by invoking function\r\n> > > pg_get_publication_tables. 
If multiple publications are subscribed, then this\r\n> > > function is invoked multiple times. So option\r\n> PUBLISH_VIA_PARTITION_ROOT\r\n> > > works\r\n> > > independently on every publication, I think it does not work correctly on\r\n> > > different publications of the same subscription.\r\n> > >\r\n> > > So I fix this bug by the following two steps:\r\n> > > First step,\r\n> > > I get oids of subscribed tables by publication list. Then for tables with the\r\n> > > same topmost root table, I filter them base on the option\r\n> > > PUBLISH_VIA_PARTITION_ROOT(see new function filter_partitions_oids).\r\n> > > After filtering, I get the final oid list.\r\n> > > Second step,\r\n> > > I get the required informations(nspname and relname) base on the oid list\r\n> of\r\n> > > first step.\r\n> >\r\n> > Thanks for updating the patch.\r\n> > I confirmed that the bug is fixed by this patch.\r\n> >\r\n> > One suggestion is that can we simplify the code by moving the logic of\r\n> checking\r\n> > the ancestor into the SQL ?. For example, we could filter the outpout of\r\n> > pg_publication_tables by adding A WHERE clause which checks whether the\r\n> table\r\n> > is a partition and if its ancestor is also in the output. I think we can also\r\n> > filter the needless partition in this approach.\r\n> >\r\n> \r\n> I agreed with you and I tried to fix this problem in a simpler way. 
What we want\r\n> is to exclude the partitioned table whose ancestor is also need to be\r\n> replicated, so how about implementing that by using the following SQL when\r\n> getting the table list from publisher?\r\n> \r\n> SELECT DISTINCT ns.nspname, c.relname\r\n> FROM pg_catalog.pg_publication_tables t\r\n> JOIN pg_catalog.pg_namespace ns ON ns.nspname = t.schemaname\r\n> JOIN pg_catalog.pg_class c ON c.relname = t.tablename AND c.relnamespace =\r\n> ns.oid\r\n> WHERE t.pubname IN ('p0','p2')\r\n> AND (c.relispartition IS FALSE OR NOT EXISTS (SELECT 1 FROM\r\n> pg_partition_ancestors(c.oid)\r\n> WHERE relid IN ( SELECT DISTINCT (schemaname||'.'||tablename)::regclass::oid\r\n> FROM pg_catalog.pg_publication_tables t\r\n> WHERE t.pubname IN ('p0','p2') ) AND relid != c.oid));\r\n> \r\n> Please find the attached patch which used this approach, I also merged the test\r\n> in Wang's patch into it.\r\nThanks for your review and patch.\r\n\r\nI think the approach of v2 is better than v1. It does not increase the query.\r\nOnly move the test cases from 100_bugs.pl to 013_partition.pl and simplify it.\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 21 Apr 2022 03:05:13 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Apr 19, 2022 at 2:23 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Apr 19, 2022 3:05 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> >\n> > One suggestion is that can we simplify the code by moving the logic of checking\n> > the ancestor into the SQL ?. For example, we could filter the outpout of\n> > pg_publication_tables by adding A WHERE clause which checks whether the table\n> > is a partition and if its ancestor is also in the output. 
I think we can also\n> > filter the needless partition in this approach.\n> >\n>\n> I agreed with you and I tried to fix this problem in a simpler way. What we want\n> is to exclude the partitioned table whose ancestor is also need to be\n> replicated, so how about implementing that by using the following SQL when\n> getting the table list from publisher?\n>\n> SELECT DISTINCT ns.nspname, c.relname\n> FROM pg_catalog.pg_publication_tables t\n> JOIN pg_catalog.pg_namespace ns ON ns.nspname = t.schemaname\n> JOIN pg_catalog.pg_class c ON c.relname = t.tablename AND c.relnamespace = ns.oid\n> WHERE t.pubname IN ('p0','p2')\n> AND (c.relispartition IS FALSE OR NOT EXISTS (SELECT 1 FROM pg_partition_ancestors(c.oid)\n> WHERE relid IN ( SELECT DISTINCT (schemaname||'.'||tablename)::regclass::oid\n> FROM pg_catalog.pg_publication_tables t\n> WHERE t.pubname IN ('p0','p2') ) AND relid != c.oid));\n>\n> Please find the attached patch which used this approach, I also merged the test\n> in Wang's patch into it.\n>\n\nI think this will work but do we need \"... relid != c.oid\" at the end\nof the query? If so, why? Please use an alias for\npg_partition_ancestors to make the statement understandable.\n\nNow, this solution will work but I find this query a bit complex and\nwill add some overhead as we are calling pg_publication_tables\nmultiple times. So, I was wondering if we can have a new function\npg_get_publication_tables which takes multiple publications as input\nand return the list of qualified tables? 
I think for back branches we\nneed something on the lines of what you have proposed but for HEAD we\ncan have a better solution.\n\nIIRC, the column list and row filter also have some issues exactly due\nto this reason, so, I would like those cases to be also mentioned here\nand probably include the tests for them in the patch for HEAD.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Apr 2022 15:11:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thur, Apr 21, 2022 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> On Tue, Apr 19, 2022 at 2:23 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Apr 19, 2022 3:05 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > One suggestion is that can we simplify the code by moving the logic\r\n> > > of checking the ancestor into the SQL ?. For example, we could\r\n> > > filter the outpout of pg_publication_tables by adding A WHERE clause\r\n> > > which checks whether the table is a partition and if its ancestor is\r\n> > > also in the output. 
I think we can also filter the needless partition in this\r\n> approach.\r\n> > >\r\n> >\r\n> > I agreed with you and I tried to fix this problem in a simpler way.\r\n> > What we want is to exclude the partitioned table whose ancestor is\r\n> > also need to be replicated, so how about implementing that by using\r\n> > the following SQL when getting the table list from publisher?\r\n> >\r\n> > SELECT DISTINCT ns.nspname, c.relname\r\n> > FROM pg_catalog.pg_publication_tables t JOIN pg_catalog.pg_namespace\r\n> > ns ON ns.nspname = t.schemaname JOIN pg_catalog.pg_class c ON\r\n> > c.relname = t.tablename AND c.relnamespace = ns.oid WHERE t.pubname IN\r\n> > ('p0','p2') AND (c.relispartition IS FALSE OR NOT EXISTS (SELECT 1\r\n> > FROM pg_partition_ancestors(c.oid) WHERE relid IN ( SELECT DISTINCT\r\n> > (schemaname||'.'||tablename)::regclass::oid\r\n> > FROM pg_catalog.pg_publication_tables t WHERE t.pubname IN ('p0','p2')\r\n> > ) AND relid != c.oid));\r\n> >\r\n> > Please find the attached patch which used this approach, I also merged\r\n> > the test in Wang's patch into it.\r\n> >\r\n> \r\n> I think this will work but do we need \"... relid != c.oid\" at the end of the query? If\r\n> so, why? Please use an alias for pg_partition_ancestors to make the statement\r\n> understandable.\r\nI think we need this (relid != c.oid). Because when we use function\r\npg_partition_ancestors(c.oid), its return value not only has ancestors, but\r\nalso the input table. That is to say, when we use the table (c.oid) of the\r\nouter query to filter in the sub-query, the table of the outer query will also\r\nappear in the result of the sub-query.\r\nSo, I think we need this condition to prevent filtering out itself.\r\n\r\n> Now, this solution will work but I find this query a bit complex and will add some\r\n> overhead as we are calling pg_publication_tables multiple times. 
So, I was\r\n> wondering if we can have a new function pg_get_publication_tables which\r\n> takes multiple publications as input and return the list of qualified tables? I think\r\n> for back branches we need something on the lines of what you have proposed\r\n> but for HEAD we can have a better solution.\r\nYes, it sounds reasonable to me. Now, to fix this bug:\r\nIn the patch for HEAD, add a new function pg_get_publications_tables to get\r\ntables info from a publications array.\r\nIn the patch for back-branch (now just share the patch for REL14), modify the\r\nSQL to get tables info.\r\n\r\n> IIRC, the column list and row filter also have some issues exactly due to this\r\n> reason, so, I would like those cases to be also mentioned here and probably\r\n> include the tests for them in the patch for HEAD.\r\nImprove the test case about the column list and row filter to cover this bug.\r\n\r\nAttach the new patches.[suggestions by Amit-San]\r\nThe patch for HEAD:\r\n1. Add a new function to get tables info by a publications array.\r\nThe patch for REL14:\r\n1. Use an alias to make the statement understandable. BTW, I adjusted the alignment.\r\n2. 
Improve the test case about the column list and row filter to cover this bug.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Sun, 24 Apr 2022 06:16:04 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Sun, Apr 24, 2022 at 2:16 PM I wrote:\r\n> On Thur, Apr 21, 2022 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > IIRC, the column list and row filter also have some issues exactly due to this\r\n> > reason, so, I would like those cases to be also mentioned here and probably\r\n> > include the tests for them in the patch for HEAD.\r\n> Improve the test case about the column list and row filter to cover this bug.\r\nSorry, I forgot to explain why I modify the tests for row filter and column\r\nfilter. If we specify different filters on the parent and child table\r\nrespectively, this bug will make us use the wrong filter.\r\n\r\nLike the following cases:\r\n[row filter]\r\n- environment in publisher-side.\r\ncreate table t (a int) partition by range (a);\r\ncreate table t_1 partition of t default;\r\ncreate publication pub1 for table t where (a<=10) with (PUBLISH_VIA_PARTITION_ROOT=true);\r\ncreate publication pub2 for table t_1 where (a>10) with (PUBLISH_VIA_PARTITION_ROOT=true);\r\ninsert into t values (9),(11);\r\n\r\n- environment in subscriber-side.\r\ncreate table t (a int) partition by range (a);\r\ncreate table t_1 partition of t default;\r\ncreate subscription sub connection 'dbname=postgres user=postgres' publication pub1,pub2;\r\n\r\nWhen we execute the following SQL in subscriber-side, what we expect should be:\r\nselect * from t;\r\n a\r\n---\r\n 9\r\n(1 row)\r\n\r\nbut the HEAD is:\r\n a\r\n----\r\n 9\r\n 11\r\n(2 rows)\r\n\r\n[column filter]\r\n- environment in publisher-side.\r\ncreate table t (a int primary key, b int, c int) partition by range (a);\r\ncreate table t_1 
partition of t default;\r\ncreate publication pub1 for table t(a, b) with (PUBLISH_VIA_PARTITION_ROOT=true);\r\ncreate publication pub2 for table t_1(a, c) with (PUBLISH_VIA_PARTITION_ROOT=true);\r\ninsert into t values (1,1,1);\r\n\r\n- environment in subscriber-side.\r\ncreate table t (a int, b int, c int) partition by range (a);\r\ncreate table t_1 partition of t default;\r\ncreate subscription sub connection 'dbname=postgres user=postgres' publication pub1,pub2;\r\n\r\nWhen we execute the following SQL in subscriber-side, what we expect should be:\r\nselect * from t;\r\n a | b | c\r\n---+---+---\r\n 1 | 1 |\r\n(1 row)\r\n\r\nbut the HEAD is:\r\n a | b | c\r\n---+---+---\r\n 1 | 1 |\r\n 1 | | 1\r\n(2 rows)\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Mon, 25 Apr 2022 01:23:42 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Sun, Apr 24, 2022 2:16 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> Attach the new patches.[suggestions by Amit-San]\r\n> The patch for HEAD:\r\n> 1. Add a new function to get tables info by a publications array.\r\n> The patch for REL14:\r\n> 1. Use an alias to make the statement understandable. BTW, I adjusted the\r\n> alignment.\r\n> 2. 
Improve the test cast about the column list and row filter to cover this bug.\r\n> \r\n\r\nThanks for your patches.\r\n\r\nHere's a comment on the patch for REL14.\r\n\r\n+\tappendStringInfo(&cmd, \"SELECT DISTINCT ns.nspname, c.relname\\n\"\r\n+\t\t\t\t\t \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n+\t\t\t\t\t \" JOIN pg_catalog.pg_namespace ns\\n\"\r\n+\t\t\t\t\t \" ON ns.nspname = t.schemaname\\n\"\r\n+\t\t\t\t\t \" JOIN pg_catalog.pg_class c\\n\"\r\n+\t\t\t\t\t \" ON c.relname = t.tablename AND c.relnamespace = ns.oid\\n\"\r\n+\t\t\t\t\t \" WHERE t.pubname IN (%s)\\n\"\r\n+\t\t\t\t\t \" AND (c.relispartition IS FALSE\\n\"\r\n+\t\t\t\t\t \" OR NOT EXISTS\\n\"\r\n+\t\t\t\t\t \" ( SELECT 1 FROM pg_partition_ancestors(c.oid) as relid\\n\"\r\n+\t\t\t\t\t \" WHERE relid IN\\n\"\r\n+\t\t\t\t\t \" (SELECT DISTINCT (schemaname || '.' || tablename)::regclass::oid\\n\"\r\n+\t\t\t\t\t \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n+\t\t\t\t\t \" WHERE t.pubname IN (%s))\\n\"\r\n+\t\t\t\t\t \" AND relid != c.oid))\\n\",\r\n+\t\t\t\t\t pub_names.data, pub_names.data);\r\n\r\nI think we can use an alias like 'pa' for pg_partition_ancestors, and modify the SQL as follows. \r\n\r\n+\tappendStringInfo(&cmd, \"SELECT DISTINCT ns.nspname, c.relname\\n\"\r\n+\t\t\t\t\t \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n+\t\t\t\t\t \" JOIN pg_catalog.pg_namespace ns\\n\"\r\n+\t\t\t\t\t \" ON ns.nspname = t.schemaname\\n\"\r\n+\t\t\t\t\t \" JOIN pg_catalog.pg_class c\\n\"\r\n+\t\t\t\t\t \" ON c.relname = t.tablename AND c.relnamespace = ns.oid\\n\"\r\n+\t\t\t\t\t \" WHERE t.pubname IN (%s)\\n\"\r\n+\t\t\t\t\t \" AND (c.relispartition IS FALSE\\n\"\r\n+\t\t\t\t\t \" OR NOT EXISTS\\n\"\r\n+\t\t\t\t\t \" ( SELECT 1 FROM pg_partition_ancestors(c.oid) pa\\n\"\r\n+\t\t\t\t\t \" WHERE pa.relid IN\\n\"\r\n+\t\t\t\t\t \" (SELECT DISTINCT (t.schemaname || '.' 
|| t.tablename)::regclass::oid\\n\"\r\n+\t\t\t\t\t \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n+\t\t\t\t\t \" WHERE t.pubname IN (%s))\\n\"\r\n+\t\t\t\t\t \" AND pa.relid != c.oid))\\n\",\r\n+\t\t\t\t\t pub_names.data, pub_names.data);\r\n\r\nRegards,\r\nShi yu\r\n\r\n", "msg_date": "Thu, 28 Apr 2022 01:22:13 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Apr 28, 2022 9:22 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\r\n> Thanks for your patches.\r\n> \r\n> Here's a comment on the patch for REL14.\r\nThanks for your comments.\r\n\r\n> +\tappendStringInfo(&cmd, \"SELECT DISTINCT ns.nspname, c.relname\\n\"\r\n> +\t\t\t\t\t \" FROM\r\n> pg_catalog.pg_publication_tables t\\n\"\r\n> +\t\t\t\t\t \" JOIN pg_catalog.pg_namespace\r\n> ns\\n\"\r\n> +\t\t\t\t\t \" ON ns.nspname =\r\n> t.schemaname\\n\"\r\n> +\t\t\t\t\t \" JOIN pg_catalog.pg_class c\\n\"\r\n> +\t\t\t\t\t \" ON c.relname = t.tablename AND\r\n> c.relnamespace = ns.oid\\n\"\r\n> +\t\t\t\t\t \" WHERE t.pubname IN (%s)\\n\"\r\n> +\t\t\t\t\t \" AND (c.relispartition IS FALSE\\n\"\r\n> +\t\t\t\t\t \" OR NOT EXISTS\\n\"\r\n> +\t\t\t\t\t \" ( SELECT 1 FROM\r\n> pg_partition_ancestors(c.oid) as relid\\n\"\r\n> +\t\t\t\t\t \" WHERE relid IN\\n\"\r\n> +\t\t\t\t\t \" (SELECT DISTINCT (schemaname\r\n> || '.' 
|| tablename)::regclass::oid\\n\"\r\n> +\t\t\t\t\t \" FROM\r\n> pg_catalog.pg_publication_tables t\\n\"\r\n> +\t\t\t\t\t \" WHERE t.pubname IN (%s))\\n\"\r\n> +\t\t\t\t\t \" AND relid != c.oid))\\n\",\r\n> +\t\t\t\t\t pub_names.data, pub_names.data);\r\n> \r\n> I think we can use an alias like 'pa' for pg_partition_ancestors, and modify the\r\n> SQL as follows.\r\n> \r\n> +\tappendStringInfo(&cmd, \"SELECT DISTINCT ns.nspname, c.relname\\n\"\r\n> +\t\t\t\t\t \" FROM\r\n> pg_catalog.pg_publication_tables t\\n\"\r\n> +\t\t\t\t\t \" JOIN pg_catalog.pg_namespace\r\n> ns\\n\"\r\n> +\t\t\t\t\t \" ON ns.nspname =\r\n> t.schemaname\\n\"\r\n> +\t\t\t\t\t \" JOIN pg_catalog.pg_class c\\n\"\r\n> +\t\t\t\t\t \" ON c.relname = t.tablename AND\r\n> c.relnamespace = ns.oid\\n\"\r\n> +\t\t\t\t\t \" WHERE t.pubname IN (%s)\\n\"\r\n> +\t\t\t\t\t \" AND (c.relispartition IS FALSE\\n\"\r\n> +\t\t\t\t\t \" OR NOT EXISTS\\n\"\r\n> +\t\t\t\t\t \" ( SELECT 1 FROM\r\n> pg_partition_ancestors(c.oid) pa\\n\"\r\n> +\t\t\t\t\t \" WHERE pa.relid IN\\n\"\r\n> +\t\t\t\t\t \" (SELECT DISTINCT\r\n> (t.schemaname || '.' || t.tablename)::regclass::oid\\n\"\r\n> +\t\t\t\t\t \" FROM\r\n> pg_catalog.pg_publication_tables t\\n\"\r\n> +\t\t\t\t\t \" WHERE t.pubname IN (%s))\\n\"\r\n> +\t\t\t\t\t \" AND pa.relid != c.oid))\\n\",\r\n> +\t\t\t\t\t pub_names.data, pub_names.data);\r\nFix it.\r\n\r\nIn addition, I try to modify the approach for the HEAD.\r\nI enhance the API of function pg_get_publication_tables. Change the parameter\r\ntype from 'text' to 'any'. Then we can use this function to get tables from one\r\npublication or an array of publications. Any thoughts on this approach?\r\n\r\nAttach new patches.\r\nThe patch for HEAD:\r\n1. Modify the approach. Enhance the API of function pg_get_publication_tables to\r\nhandle one publication or an array of publications.\r\nThe patch for REL14:\r\n1. Improve the table sync SQL. 
[suggestions by Shi yu]\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 9 May 2022 01:51:09 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Monday, May 9, 2022 10:51 AM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> Attach new patches.\r\n> The patch for HEAD:\r\n> 1. Modify the approach. Enhance the API of function\r\n> pg_get_publication_tables to handle one publication or an array of\r\n> publications.\r\n> The patch for REL14:\r\n> 1. Improve the table sync SQL. [suggestions by Shi yu]\r\nHi, thank you for updating the patch !\r\n\r\nMinor comments on your patch for HEAD v2.\r\n\r\n(1) commit message sentence\r\n\r\nI suggest below sentence.\r\n\r\nKindly change from\r\n\"... when subscribing to both publications using one subscription, the data is replicated\r\ntwice in inital copy\"\r\nto \"subscribing to both publications from one subscription causes initial copy twice\".\r\n\r\n(2) unused variable\r\n\r\npg_publication.c: In function ‘pg_get_publication_tables’:\r\npg_publication.c:1091:11: warning: unused variable ‘pubname’ [-Wunused-variable]\r\n char *pubname;\r\n\r\nWe can remove this.\r\n\r\n(3) free of allocated memory\r\n\r\nIn the pg_get_publication_tables(),\r\nwe don't free 'elems'. 
Don't we need it ?\r\n\r\n(4) some coding alignments\r\n\r\n4-1.\r\n\r\n+ List *tables_viaroot = NIL,\r\n...\r\n+ *current_table = NIL;\r\n\r\nI suggest we can put some variables\r\ninto the condition for the first time call of this function,\r\nlike tables_viaroot and current_table.\r\nWhen you agree, kindly change it.\r\n\r\n4-2.\r\n\r\n+ /*\r\n+ * Publications support partitioned tables, although all changes\r\n+ * are replicated using leaf partition identity and schema, so we\r\n+ * only need those.\r\n+ */\r\n+ if (publication->alltables)\r\n+ {\r\n+ current_table = GetAllTablesPublicationRelations(publication->pubviaroot);\r\n+ }\r\n\r\nThis is not related to the change itself and now\r\nwe are inheriting the previous curly brackets, but\r\nI think there's no harm in removing it, since it's only for one statement.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 11 May 2022 02:32:36 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wednesday, May 11, 2022 11:33 AM I wrote:\r\n> On Monday, May 9, 2022 10:51 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > Attach new patches.\r\n> > The patch for HEAD:\r\n> > 1. Modify the approach. Enhance the API of function\r\n> > pg_get_publication_tables to handle one publication or an array of\r\n> > publications.\r\n> > The patch for REL14:\r\n> > 1. Improve the table sync SQL. [suggestions by Shi yu]\r\n> Hi, thank you for updating the patch !\r\n> \r\n> Minor comments on your patch for HEAD v2.\r\n> \r\n> (1) commit message sentence\r\n> \r\n> I suggest below sentence.\r\n> \r\n> Kindly change from\r\n> \"... 
when subscribing to both publications using one subscription, the data is\r\n> replicated twice in inital copy\"\r\n> to \"subscribing to both publications from one subscription causes initial copy\r\n> twice\".\r\n> \r\n> (2) unused variable\r\n> \r\n> pg_publication.c: In function ‘pg_get_publication_tables’:\r\n> pg_publication.c:1091:11: warning: unused variable ‘pubname’\r\n> [-Wunused-variable]\r\n> char *pubname;\r\n> \r\n> We can remove this.\r\n> \r\n> (3) free of allocated memory\r\n> \r\n> In the pg_get_publication_tables(),\r\n> we don't free 'elems'. Don't we need it ?\r\n> \r\n> (4) some coding alignments\r\n> \r\n> 4-1.\r\n> \r\n> + List *tables_viaroot = NIL,\r\n> ...\r\n> + *current_table = NIL;\r\n> \r\n> I suggest we can put some variables\r\n> into the condition for the first time call of this function, like tables_viaroot and\r\n> current_table.\r\n> When you agree, kindly change it.\r\n> \r\n> 4-2.\r\n> \r\n> + /*\r\n> + * Publications support partitioned tables, although\r\n> all changes\r\n> + * are replicated using leaf partition identity and\r\n> schema, so we\r\n> + * only need those.\r\n> + */\r\n> + if (publication->alltables)\r\n> + {\r\n> + current_table =\r\n> GetAllTablesPublicationRelations(publication->pubviaroot);\r\n> + }\r\n> \r\n> This is not related to the change itself and now we are inheriting the previous\r\n> curly brackets, but I think there's no harm in removing it, since it's only for one\r\n> statement.\r\nHi, \r\n\r\nOne more thing I'd like to add is that\r\nwe don't hit the below code by tests.\r\nIn the HEAD v2, we add a new filtering logic in pg_get_publication_tables.\r\nAlthough I'm not sure if this is related to the bug fix itself,\r\nwhen we want to include it in this patch, then\r\nI feel it's better to add some simple test for this part,\r\nto cover all the new main paths and check if\r\nnew logic works correctly.\r\n\r\n\r\n+ /*\r\n+ * If a partition table is published in a publication with viaroot,\r\n+ * 
and its parent or child table is published in another publication\r\n+ * without viaroot. Then we need to move the parent or child table\r\n+ * from tables to tables_viaroot.\r\n+ *\r\n+ * If all publication(s)'s viaroot are the same, then skip this part.\r\n+ */\r\n\r\n....\r\n if (ancestor_viaroot == ancestor)\r\n+ {\r\n+ tables = foreach_delete_current(tables, lc2);\r\n+ change_tables = list_append_unique_oid(change_tables,\r\n+ relid);\r\n+ }\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 12 May 2022 01:47:32 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thur, May 12, 2022 9:48 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> On Wednesday, May 11, 2022 11:33 AM I wrote:\r\n> > On Monday, May 9, 2022 10:51 AM wangw.fnst@fujitsu.com\r\n> > <wangw.fnst@fujitsu.com> wrote:\r\n> > > Attach new patches.\r\n> > > The patch for HEAD:\r\n> > > 1. Modify the approach. Enhance the API of function\r\n> > > pg_get_publication_tables to handle one publication or an array of\r\n> > > publications.\r\n> > > The patch for REL14:\r\n> > > 1. Improve the table sync SQL. [suggestions by Shi yu]\r\n> > Hi, thank you for updating the patch !\r\n> >\r\n> > Minor comments on your patch for HEAD v2.\r\nThanks for your comments.\r\n\r\n> > (1) commit message sentence\r\n> >\r\n> > I suggest below sentence.\r\n> >\r\n> > Kindly change from\r\n> > \"... 
when subscribing to both publications using one subscription, the data is\r\n> > replicated twice in inital copy\"\r\n> > to \"subscribing to both publications from one subscription causes initial copy\r\n> > twice\".\r\nImprove it according to your suggestion.\r\n\r\n> > (2) unused variable\r\n> >\r\n> > pg_publication.c: In function ‘pg_get_publication_tables’:\r\n> > pg_publication.c:1091:11: warning: unused variable ‘pubname’\r\n> > [-Wunused-variable]\r\n> > char *pubname;\r\n> >\r\n> > We can remove this.\r\nFix it.\r\n\r\n> > (3) free of allocated memory\r\n> >\r\n> > In the pg_get_publication_tables(),\r\n> > we don't free 'elems'. Don't we need it ?\r\nImprove it according to your suggestion. Free 'elems'.\r\n\r\n> > (4) some coding alignments\r\n> >\r\n> > 4-1.\r\n> >\r\n> > + List *tables_viaroot = NIL,\r\n> > ...\r\n> > + *current_table = NIL;\r\n> >\r\n> > I suggest we can put some variables\r\n> > into the condition for the first time call of this function, like tables_viaroot and\r\n> > current_table.\r\n> > When you agree, kindly change it.\r\nImprove these according to your suggestions.\r\nAlso, I put the code for getting publication(s) into the condition for the\r\nfirst time call of this function.\r\n\r\n> > 4-2.\r\n> >\r\n> > + /*\r\n> > + * Publications support partitioned tables, although\r\n> > all changes\r\n> > + * are replicated using leaf partition identity and\r\n> > schema, so we\r\n> > + * only need those.\r\n> > + */\r\n> > + if (publication->alltables)\r\n> > + {\r\n> > + current_table =\r\n> > GetAllTablesPublicationRelations(publication->pubviaroot);\r\n> > + }\r\n> >\r\n> > This is not related to the change itself and now we are inheriting the previous\r\n> > curly brackets, but I think there's no harm in removing it, since it's only for one\r\n> > statement.\r\nImprove these according to your suggestions.\r\n\r\n> Hi,\r\n> \r\n> One more thing I'd like to add is that\r\n> we don't hit the below code by tests.\r\n> In the HEAD v2, 
we add a new filtering logic in pg_get_publication_tables.\r\n> Although I'm not sure if this is related to the bug fix itself,\r\n> when we want to include it in this patch, then\r\n> I feel it's better to add some simple test for this part,\r\n> to cover all the new main paths and check if\r\n> new logic works correctly.\r\n> \r\n> \r\n> + /*\r\n> + * If a partition table is published in a publication with viaroot,\r\n> + * and its parent or child table is published in another publication\r\n> + * without viaroot. Then we need to move the parent or child table\r\n> + * from tables to tables_viaroot.\r\n> + *\r\n> + * If all publication(s)'s viaroot are the same, then skip this part.\r\n> + */\r\n> \r\n> ....\r\n> if (ancestor_viaroot == ancestor)\r\n> + {\r\n> + tables = foreach_delete_current(tables, lc2);\r\n> + change_tables =\r\n> list_append_unique_oid(change_tables,\r\n> + relid);\r\n> + }\r\nYes, I agree.\r\nBut when I was adding the test, I found we could improve this part.\r\nSo, I removed this part of the code.\r\n\r\nAlso rebase it because the change in HEAD(23e7b38).\r\n\r\nAttach the patches.(Only changed the patch for HEAD.).\r\n1. Improve the commit message. [suggestions by Osumi-san]\r\n2. Improve coding alignments and the usage for SRFs. [suggestions by Osumi-san and I]\r\n3. 
Simplify the modifications in function pg_get_publication_tables.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 13 May 2022 02:02:25 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, May 13, 2022 at 7:32 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the patches.(Only changed the patch for HEAD.).\n>\n\nFew comments:\n=============\n1.\n@@ -1135,6 +1172,15 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)\n if (publication->pubviaroot)\n tables = filter_partitions(tables);\n }\n+ pfree(elems);\n+\n+ /*\n+ * We need an additional filter for this case : A partition table is\n+ * published in a publication with viaroot, and its parent or child\n+ * table is published in another publication without viaroot. In this\n+ * case, we should publish only parent table.\n+ */\n+ tables = filter_partitions(tables);\n\nDo we need to filter partitions twice? Can't we check if any of the\npublications has 'pubviaroot' option set, if so, call\nfilter_partitions at the end?\n\n2. \" FROM pg_class c JOIN pg_namespace n\"\n+ \" ON n.oid = c.relnamespace,\"\n+ \" LATERAL pg_get_publication_tables(array[ %s ]) gst\"\n\nHere, it is better to have an alias name as gpt.\n\n3.\n }\n+ pfree(elems);\n+\n\nAn extra line between these two lines makes it looks slightly better.\n\n4. 
Not able to apply patch cleanly.\npatching file src/test/subscription/t/013_partition.pl\nHunk #1 FAILED at 477.\nHunk #2 FAILED at 556.\nHunk #3 FAILED at 584.\n3 out of 3 hunks FAILED -- saving rejects to file\nsrc/test/subscription/t/013_partition.pl.rej\npatching file src/test/subscription/t/028_row_filter.pl\nHunk #1 succeeded at 394 (offset 1 line).\nHunk #2 FAILED at 722.\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/test/subscription/t/028_row_filter.pl.rej\npatching file src/test/subscription/t/031_column_list.pl\nHunk #1 succeeded at 948 (offset -92 lines).\nHunk #2 FAILED at 1050.\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/test/subscription/t/031_column_list.pl.rej\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 May 2022 11:28:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, May 13, 2022 1:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, May 13, 2022 at 7:32 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the patches.(Only changed the patch for HEAD.).\r\n> >\r\n> \r\n> Few comments:\r\n> =============\r\nThanks for your comments.\r\n\r\n> 1.\r\n> @@ -1135,6 +1172,15 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)\r\n> if (publication->pubviaroot)\r\n> tables = filter_partitions(tables);\r\n> }\r\n> + pfree(elems);\r\n> +\r\n> + /*\r\n> + * We need an additional filter for this case : A partition table is\r\n> + * published in a publication with viaroot, and its parent or child\r\n> + * table is published in another publication without viaroot. In this\r\n> + * case, we should publish only parent table.\r\n> + */\r\n> + tables = filter_partitions(tables);\r\n> \r\n> Do we need to filter partitions twice? 
Can't we check if any of the publications\r\n> has 'pubviaroot' option set, if so, call filter_partitions at the end?\r\nImprove it according to your suggestion.\r\n\r\n> 2. \" FROM pg_class c JOIN pg_namespace n\"\r\n> + \" ON n.oid = c.relnamespace,\"\r\n> + \" LATERAL pg_get_publication_tables(array[ %s ]) gst\"\r\n> \r\n> Here, it is better to have an alias name as gpt.\r\nImprove it according to your suggestion.\r\n\r\n> 3.\r\n> }\r\n> + pfree(elems);\r\n> +\r\n> \r\n> An extra line between these two lines makes it looks slightly better.\r\nImprove it according to your suggestion.\r\n\r\n> 4. Not able to apply patch cleanly.\r\n> patching file src/test/subscription/t/013_partition.pl\r\n> Hunk #1 FAILED at 477.\r\n> Hunk #2 FAILED at 556.\r\n> Hunk #3 FAILED at 584.\r\n> 3 out of 3 hunks FAILED -- saving rejects to file\r\n> src/test/subscription/t/013_partition.pl.rej\r\n> patching file src/test/subscription/t/028_row_filter.pl\r\n> Hunk #1 succeeded at 394 (offset 1 line).\r\n> Hunk #2 FAILED at 722.\r\n> 1 out of 2 hunks FAILED -- saving rejects to file\r\n> src/test/subscription/t/028_row_filter.pl.rej\r\n> patching file src/test/subscription/t/031_column_list.pl\r\n> Hunk #1 succeeded at 948 (offset -92 lines).\r\n> Hunk #2 FAILED at 1050.\r\n> 1 out of 2 hunks FAILED -- saving rejects to file\r\n> src/test/subscription/t/031_column_list.pl.rej\r\nNew patch could apply patch cleanly now.\r\n\r\nAttach the patches.(Only changed the patch for HEAD.).\r\n1. Optimize the code. Reduce calls to function filter_partitions. [suggestions by Amit-san]\r\n2. Improve the alias name in SQL. [suggestions by Amit-san]\r\n3. Improve coding alignments. [suggestions by Amit-san] \r\n4. 
Do some optimizations for list Concatenate.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 13 May 2022 09:41:42 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Friday, May 13, 2022 6:42 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the patches.(Only changed the patch for HEAD.).\r\n> 1. Optimize the code. Reduce calls to function filter_partitions. [suggestions by\r\n> Amit-san] 2. Improve the alias name in SQL. [suggestions by Amit-san] 3.\r\n> Improve coding alignments. [suggestions by Amit-san] 4. Do some\r\n> optimizations for list Concatenate.\r\nHi, thank you for updating the patch.\r\n\r\n\r\nI have one minor comment on fetch_table_list() in HEAD v4.\r\n\r\n\r\n@@ -1759,17 +1759,22 @@ static List *\r\n fetch_table_list(WalReceiverConn *wrconn, List *publications)\r\n {\r\n WalRcvExecResult *res;\r\n- StringInfoData cmd;\r\n+ StringInfoData cmd,\r\n+ pub_names;\r\n TupleTableSlot *slot;\r\n Oid tableRow[2] = {TEXTOID, TEXTOID};\r\n List *tablelist = NIL;\r\n\r\n+ initStringInfo(&pub_names);\r\n+ get_publications_str(publications, &pub_names, true);\r\n+\r\n\r\nKindly free the pub_names's data along with the cmd.data.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 13 May 2022 14:57:08 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, May 13, 2022 at 3:11 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the patches.(Only changed the patch for HEAD.).\n>\n\n # publications\n-{ oid => '6119', descr => 'get OIDs of tables in a publication',\n+{ oid => '6119', descr => 'get OIDs of tables in one or more publications',\n proname => 
'pg_get_publication_tables', prorows => '1000', proretset => 't',\n- provolatile => 's', prorettype => 'oid', proargtypes => 'text',\n- proallargtypes => '{text,oid}', proargmodes => '{i,o}',\n+ provolatile => 's', prorettype => 'oid', proargtypes => 'any',\n+ proallargtypes => '{any,oid}', proargmodes => '{i,o}',\n\nWon't our use case (input one or more publication names) requires the\nparameter type to be 'VARIADIC text[]' instead of 'any'? I might be\nmissing something here so please let me know your reason to change the\ntype to 'any' from 'text'?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 17 May 2022 18:32:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, May 17, 2022 9:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, May 13, 2022 at 3:11 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the patches.(Only changed the patch for HEAD.).\r\n> >\r\nThanks for your comments.\r\n\r\n> # publications\r\n> -{ oid => '6119', descr => 'get OIDs of tables in a publication',\r\n> +{ oid => '6119', descr => 'get OIDs of tables in one or more publications',\r\n> proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',\r\n> - provolatile => 's', prorettype => 'oid', proargtypes => 'text',\r\n> - proallargtypes => '{text,oid}', proargmodes => '{i,o}',\r\n> + provolatile => 's', prorettype => 'oid', proargtypes => 'any',\r\n> + proallargtypes => '{any,oid}', proargmodes => '{i,o}',\r\n> \r\n> Won't our use case (input one or more publication names) requires the\r\n> parameter type to be 'VARIADIC text[]' instead of 'any'? I might be\r\n> missing something here so please let me know your reason to change the\r\n> type to 'any' from 'text'?\r\nYes, you are right. 
I improved the approach according to your suggestion.\r\nI didn't notice the field \"provariadic\" in pg_proc before. And now I found we\r\ncould change the type of input from text to variadic text by specifying fields\r\n\"provariadic\" and specifying 'v' in \"proargmodes\".\r\nI also made corresponding changes to the processing of the input in function\r\npg_get_publication_tables.\r\n\r\nBTW, in previous patch HEAD_v4, when invoking function GetPublicationByName in\r\nfunction pg_get_publication_tables, I changed the second input from \"false\" to\r\n\"true\". I changed this because when we invoke function\r\npg_get_publication_tables in the query in function fetch_table_list, if the\r\npublication does not exist, it will error.\r\nBut this change will affect the compatibility of function\r\npg_get_publication_tables. So I reverted this change in HEAD_v5, and filter the\r\npublications that do not exist by the query in function fetch_table_list.\r\n\r\nAttach the patches. (Only changed the patch for HEAD.)\r\n1. Improve the approach to modify the input type of the function\r\n pg_get_publication_tables. [suggestions by Amit-san]\r\n2. Free allocated memory in function fetch_table_list. [suggestions by Osumi-san]\r\n3. 
Improve the approach of the handling of non-existing publications.\r\n\r\nBTW, I rename the patch for REL14\r\nfrom\r\n\"REL14_v4-0001-Fix-data-replicated-twice-when-specifying-PUBLISH.patch\"\r\nto\r\n\"REL14_v5-0001-Fix-data-replicated-twice-when-specifying-PUBLISH_patch\".\r\nJust for the version doesn't mess up between two branches and for cfbot.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 18 May 2022 08:37:56 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, May 13, 2022 10:57 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> On Friday, May 13, 2022 6:42 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> wrote:\r\n> > Attach the patches.(Only changed the patch for HEAD.).\r\n> > 1. Optimize the code. Reduce calls to function filter_partitions.\r\n> > [suggestions by Amit-san] 2. Improve the alias name in SQL. [suggestions by\r\n> Amit-san] 3.\r\n> > Improve coding alignments. [suggestions by Amit-san] 4. Do some\r\n> > optimizations for list Concatenate.\r\n> Hi, thank you for updating the patch.\r\n> \r\n> \r\n> I have one minor comment on fetch_table_list() in HEAD v4.\r\nThanks for your comments.\r\n\r\n> @@ -1759,17 +1759,22 @@ static List *\r\n> fetch_table_list(WalReceiverConn *wrconn, List *publications) {\r\n> WalRcvExecResult *res;\r\n> - StringInfoData cmd;\r\n> + StringInfoData cmd,\r\n> + pub_names;\r\n> TupleTableSlot *slot;\r\n> Oid tableRow[2] = {TEXTOID, TEXTOID};\r\n> List *tablelist = NIL;\r\n> \r\n> + initStringInfo(&pub_names);\r\n> + get_publications_str(publications, &pub_names, true);\r\n> +\r\n> \r\n> Kindly free the pub_names's data along with the cmd.data.\r\nImprove it according to your suggestion. 
Free 'pub_names.data'.\r\n\r\nI also made some other changes.\r\nKindly have a look at new patch shared in [1].\r\n\r\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275B26B6BDF23651B8CE86D9ED19%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Wed, 18 May 2022 08:39:15 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, May 18, 2022 4:38 PM I wrote:\r\n> Attach the patches.(Only changed the patch for HEAD.)\r\nSorry, I forgot to update commit message.\r\n\r\nAttach the new patch.\r\n1. Only update the commit message for HEAD_v5.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 18 May 2022 08:51:26 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, May 18, 2022 4:51 PM I wrote:\r\n> Attach the new patch.\r\n\r\nSince there are some new commits in HEAD (0ff20288, fd0b9dc and 52b5c53) that\r\nimprove the functions pg_get_publication_tables and fetch_table_list, we cannot\r\napply the patch cleanly. Therefore, I rebased the patch based on the changes in\r\nHEAD.\r\n\r\nI also rebased the patch for REL14 because the commit 52d5ea9 in branch\r\nREL_14_STABLE. 
BTW, I made a slight adjustment in the function\r\nfetch_table_list to the SQL used to get the publisher-side table information\r\nfor version 14.\r\n\r\nSince we have REL_15_STABLE now, I also attach the patch for version 15.\r\n\r\nAttach the patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 1 Jul 2022 09:46:43 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for the v6 patch (HEAD only):\n\n============\nHEAD_v6-0001\n============\n\n1. Commit message\n\nIf there are two publications that publish the parent table and the child table\nseparately, and both specify the option PUBLISH_VIA_PARTITION_ROOT, subscribing\nto both publications from one subscription causes initial copy twice. What we\nexpect is to be copied only once.\n\n~\n\nI don’t think the parameter even works in uppercase, so maybe better to say:\nPUBLISH_VIA_PARTITION_ROOT -> 'publish_via_partition_root'\n\n~~~\n\n2.\n\nWhat we expect is to be copied only once.\n\nSUGGESTION\nIt should only be copied once.\n\n~~~\n\n3.\n\nTo fix this, we extend the API of the function pg_get_publication_tables.\nNow, the function pg_get_publication_tables could receive the publication list.\nAnd then, if we specify option viaroot, we could exclude the partitioned table\nwhose ancestor belongs to the publication list when getting the table\ninformations.\n\n~\n\nDon't you mean \"partition table\" instead of \"partitioned table\"?\n\nSUGGESTION (also reworded)\nTo fix this, the API function pg_get_publication_tables has been\nextended to take a publication list. Now, when getting the table\ninformation, if the publish_via_partition_root is true, the function\ncan exclude a partition table whose ancestor is also published by the\nsame publication list.\n\n======\n\n4. 
src/backend/catalog/pg_publication.c - pg_get_publication_tables\n\n- publication = GetPublicationByName(pubname, false);\n+ arr = PG_GETARG_ARRAYTYPE_P(0);\n+ deconstruct_array(arr, TEXTOID, -1, false, TYPALIGN_INT,\n+ &elems, NULL, &nelems);\n\nMaybe should have some comment to describe that this function\nparameter is now an array of publications names.\n\n~~~\n\n5.\n\n+ /* get Oids of tables from each publication */\n\nUppercase comment\n\n~~~\n\n6.\n\n+ ArrayType *arr;\n+ Datum *elems;\n+ int nelems,\n+ i;\n+ Publication *publication;\n+ bool viaroot = false;\n+ List *pub_infos = NIL;\n+ ListCell *lc1,\n+ *lc2;\n\nThe 'publication' should be declared only in the loop that uses it.\nIt's also not good that this is shadowing the same variable name in a\nlater declaration.\n\n~~~\n\n7.\n\n+ * Publications support partitioned tables, although all changes\n+ * are replicated using leaf partition identity and schema, so we\n+ * only need those.\n */\n+ if (publication->alltables)\n+ current_tables = GetAllTablesPublicationRelations(publication->pubviaroot);\n+ else\n+ {\n+ List *relids,\n+ *schemarelids;\n+\n+ relids = GetPublicationRelations(publication->oid,\n+ publication->pubviaroot ?\n+ PUBLICATION_PART_ROOT :\n+ PUBLICATION_PART_LEAF);\n+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,\n+ publication->pubviaroot ?\n+ PUBLICATION_PART_ROOT :\n+ PUBLICATION_PART_LEAF);\n+ current_tables = list_concat(relids, schemarelids);\n+ }\n\nSomehow I was confused by this comment because it says you only need\nthe LEAF tables but then the subsequent code is getting ROOT relations\nanyway... 
Can you clarify the comment some more?\n\n~~~\n\n8.\n\n+ bool viaroot = false;\n\nI think that should have a comment something like:\n/* At least one publication is using publish_via_partition_root */\n\n~~~\n\n9.\n\n+ /*\n+ * Record the published table and the corresponding publication so\n+ * that we can get row filters and column list later.\n+ */\n+ foreach(lc1, tables)\n+ {\n+ Oid relid = lfirst_oid(lc1);\n+\n+ foreach(lc2, pub_infos)\n+ {\n+ pub_info *pubinfo = (pub_info *) lfirst(lc2);\n+\n+ if (list_member_oid(pubinfo->table_list, relid))\n+ {\n+ Oid *result = (Oid *) malloc(sizeof(Oid) * 2);\n+\n+ result[0] = relid;\n+ result[1] = pubinfo->pubid;\n+\n+ results = lappend(results, result);\n+ }\n+ }\n }\n\nI felt a bit uneasy about the double-looping here. I wonder if these\n'results' could have been accumulated within the existing loop over\nall publications. Then the results would need to be filtered to remove\nthe ones associated with removed partitions. Otherwise with 10000\ntables and also many publications this (current) double-looping seems\nlike it might be quite expensive.\n\n======\n\n10. src/backend/commands/subscriptioncmds.c - fetch_table_list\n\n+ if (check_columnlist && server_version >= 160000)\n\nThis condition does not make much sense to me. Isn’t it effectively\nsame as saying\nif (server_version >= 150000 && server_version >= 160000)\n\n???\n\n~~~\n\n11.\n\n+ /*\n+ * Get the list of tables from publisher, the partitioned table whose\n+ * ancestor is also in this list should be ignored, otherwise the\n+ * initial date in the partitioned table would be replicated twice.\n+ */\n\n11.a\nIsn't this comment all backwards? I think you mean to say \"partition\"\nor \"partition table\" (not partitioned table) because partitions have\nancestors but partition-ED tables don't.\n\n\n11.b\n\"initial date\" -> \"initial data\"\n\n======\n\n12. 
src/test/subscription/t/013_partition.pl\n\n-# Note: We create two separate tables, not a partitioned one, so that we can\n-# easily identity through which relation were the changes replicated.\n+# Note: We only create one table for the partition table (tab4) here.\n+# Because we specify option PUBLISH_VIA_PARTITION_ROOT (see pub_all and\n+# pub_lower_level above), all data should be replicated to the partition table.\n+# So we do not need to create table for the partitioned table.\n\n12.a\nAFAIK \"tab4\" is the *partitioned* table, not a partition. I think this\ncomment has all the \"partitioned\" and \"partition\" back-to-front.\n\n12.b\nAlso please say “publish_via_partition_root\" instead of\nPUBLISH_VIA_PARTITION_ROOT\n\n======\n\n13. src/test/subscription/t/028_row_filter.pl\n\n@@ -723,8 +727,11 @@ is($result, qq(t|1), 'check replicated rows to\ntab_rowfilter_toast');\n # - INSERT (16) YES, 16 > 15\n $result =\n $node_subscriber->safe_psql('postgres',\n- \"SELECT a FROM tab_rowfilter_viaroot_part\");\n-is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\n+ \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\n+is($result, qq(16\n+17),\n+ 'check replicated rows to tab_rowfilter_viaroot_part'\n+);\n\nThere is a comment above that code like:\n# tab_rowfilter_viaroot_part filter is: (a > 15)\n# - INSERT (14) NO, 14 < 15\n# - INSERT (15) NO, 15 = 15\n# - INSERT (16) YES, 16 > 15\n\nI think should modify that comment to explain the new data this patch\ninserts - e.g. 
NO for 13 and YES for 17...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 14 Jul 2022 14:45:35 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thur, Jul 14, 2022 at 12:46 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for the v6 patch (HEAD only):\r\n\r\nThanks for your comments.\r\n\r\n> 1. Commit message\r\n> \r\n> If there are two publications that publish the parent table and the child table\r\n> separately, and both specify the option PUBLISH_VIA_PARTITION_ROOT,\r\n> subscribing\r\n> to both publications from one subscription causes initial copy twice. What we\r\n> expect is to be copied only once.\r\n> \r\n> ~\r\n> \r\n> I don’t think the parameter even works in uppercase, so maybe better to say:\r\n> PUBLISH_VIA_PARTITION_ROOT -> 'publish_via_partition_root'\r\n\r\nIt seems that there are more places to use lowercase than uppercase, so\r\nimproved it as suggested.\r\n\r\n> 2.\r\n> \r\n> What we expect is to be copied only once.\r\n> \r\n> SUGGESTION\r\n> It should only be copied once.\r\n> \r\n> ~~~\r\n> \r\n> 3.\r\n> \r\n> To fix this, we extend the API of the function pg_get_publication_tables.\r\n> Now, the function pg_get_publication_tables could receive the publication list.\r\n> And then, if we specify option viaroot, we could exclude the partitioned table\r\n> whose ancestor belongs to the publication list when getting the table\r\n> informations.\r\n> \r\n> ~\r\n> \r\n> Don't you mean \"partition table\" instead of \"partitioned table\"?\r\n> \r\n> SUGGESTION (also reworded)\r\n> To fix this, the API function pg_get_publication_tables has been\r\n> extended to take a publication list. 
Now, when getting the table\r\n> information, if the publish_via_partition_root is true, the function\r\n> can exclude a partition table whose ancestor is also published by the\r\n> same publication list.\r\n\r\nImproved and fixed as suggested.\r\n\r\n> 4. src/backend/catalog/pg_publication.c - pg_get_publication_tables\r\n> \r\n> - publication = GetPublicationByName(pubname, false);\r\n> + arr = PG_GETARG_ARRAYTYPE_P(0);\r\n> + deconstruct_array(arr, TEXTOID, -1, false, TYPALIGN_INT,\r\n> + &elems, NULL, &nelems);\r\n> \r\n> Maybe should have some comment to describe that this function\r\n> parameter is now an array of publications names.\r\n\r\nAdd the following comment: `/* Deconstruct the parameter into elements. */`.\r\nAlso improved the comment above the function pg_get_publication_tables:\r\n`Returns information of tables in one or more publications.`\r\n-->\r\n`Returns information of the tables in the given publication array.`\r\n\r\n> 5.\r\n> \r\n> + /* get Oids of tables from each publication */\r\n> \r\n> Uppercase comment\r\n\r\nImproved as suggested.\r\n\r\n> 6.\r\n> \r\n> + ArrayType *arr;\r\n> + Datum *elems;\r\n> + int nelems,\r\n> + i;\r\n> + Publication *publication;\r\n> + bool viaroot = false;\r\n> + List *pub_infos = NIL;\r\n> + ListCell *lc1,\r\n> + *lc2;\r\n> \r\n> The 'publication' should be declared only in the loop that uses it.\r\n> It's also not good that this is shadowing the same variable name in a\r\n> later declaration.\r\n\r\nReverted changes to variable \"publication\" declarations.\r\n\r\n> 7.\r\n> \r\n> + * Publications support partitioned tables, although all changes\r\n> + * are replicated using leaf partition identity and schema, so we\r\n> + * only need those.\r\n> */\r\n> + if (publication->alltables)\r\n> + current_tables = GetAllTablesPublicationRelations(publication->pubviaroot);\r\n> + else\r\n> + {\r\n> + List *relids,\r\n> + *schemarelids;\r\n> +\r\n> + relids = GetPublicationRelations(publication->oid,\r\n> + 
publication->pubviaroot ?\r\n> + PUBLICATION_PART_ROOT :\r\n> + PUBLICATION_PART_LEAF);\r\n> + schemarelids = GetAllSchemaPublicationRelations(publication->oid,\r\n> + publication->pubviaroot ?\r\n> + PUBLICATION_PART_ROOT :\r\n> + PUBLICATION_PART_LEAF);\r\n> + current_tables = list_concat(relids, schemarelids);\r\n> + }\r\n> \r\n> Somehow I was confused by this comment because it says you only need\r\n> the LEAF tables but then the subsequent code is getting ROOT relations\r\n> anyway... Can you clarify the comment some more?\r\n\r\nI think this is a slight mistake when publication parameter\r\n\"publish_via_partition_root\" was introduced before.\r\nI improved the comment to the following:\r\n```\r\nPublications support partitioned tables. If\r\npublish_via_partition_root is false, all changes are replicated\r\nusing leaf partition identity and schema, so we only need those.\r\nOtherwise, If publish_via_partition_root is true, get the\r\npartitioned table itself.\r\n```\r\n\r\n> 8.\r\n> \r\n> + bool viaroot = false;\r\n> \r\n> I think that should have a comment something like:\r\n> /* At least one publication is using publish_via_partition_root */\r\n\r\nImproved as suggested.\r\n\r\n> 9.\r\n> \r\n> + /*\r\n> + * Record the published table and the corresponding publication so\r\n> + * that we can get row filters and column list later.\r\n> + */\r\n> + foreach(lc1, tables)\r\n> + {\r\n> + Oid relid = lfirst_oid(lc1);\r\n> +\r\n> + foreach(lc2, pub_infos)\r\n> + {\r\n> + pub_info *pubinfo = (pub_info *) lfirst(lc2);\r\n> +\r\n> + if (list_member_oid(pubinfo->table_list, relid))\r\n> + {\r\n> + Oid *result = (Oid *) malloc(sizeof(Oid) * 2);\r\n> +\r\n> + result[0] = relid;\r\n> + result[1] = pubinfo->pubid;\r\n> +\r\n> + results = lappend(results, result);\r\n> + }\r\n> + }\r\n> }\r\n> \r\n> I felt a bit uneasy about the double-looping here. I wonder if these\r\n> 'results' could have been accumulated within the existing loop over\r\n> all publications. 
Then the results would need to be filtered to remove\r\n> the ones associated with removed partitions. Otherwise with 10000\r\n> tables and also many publications this (current) double-looping seems\r\n> like it might be quite expensive.\r\n\r\nImproved as suggested.\r\n\r\n> 10. src/backend/commands/subscriptioncmds.c - fetch_table_list\r\n> \r\n> + if (check_columnlist && server_version >= 160000)\r\n> \r\n> This condition does not make much sense to me. Isn’t it effectively\r\n> same as saying\r\n> if (server_version >= 150000 && server_version >= 160000)\r\n> \r\n> ???\r\n\r\nFixed as suggested.\r\n\r\n> 11.\r\n> \r\n> + /*\r\n> + * Get the list of tables from publisher, the partitioned table whose\r\n> + * ancestor is also in this list should be ignored, otherwise the\r\n> + * initial date in the partitioned table would be replicated twice.\r\n> + */\r\n> \r\n> 11.a\r\n> Isn't this comment all backwards? I think you mean to say \"partition\"\r\n> or \"partition table\" (not partitioned table) because partitions have\r\n> ancestors but partition-ED tables don't.\r\n> \r\n> \r\n> 11.b\r\n> \"initial date\" -> \"initial data\"\r\n\r\nFixed as suggested.\r\n\r\n> 12. src/test/subscription/t/013_partition.pl\r\n> \r\n> -# Note: We create two separate tables, not a partitioned one, so that we can\r\n> -# easily identity through which relation were the changes replicated.\r\n> +# Note: We only create one table for the partition table (tab4) here.\r\n> +# Because we specify option PUBLISH_VIA_PARTITION_ROOT (see pub_all and\r\n> +# pub_lower_level above), all data should be replicated to the partition table.\r\n> +# So we do not need to create table for the partitioned table.\r\n> \r\n> 12.a\r\n> AFAIK \"tab4\" is the *partitioned* table, not a partition. 
I think this\r\n> comment has all the \"partitioned\" and \"partition\" back-to-front.\r\n> \r\n> 12.b\r\n> Also please say “publish_via_partition_root\" instead of\r\n> PUBLISH_VIA_PARTITION_ROOT\r\n\r\nFixed as suggested.\r\n\r\n> 13. src/test/subscription/t/028_row_filter.pl\r\n> \r\n> @@ -723,8 +727,11 @@ is($result, qq(t|1), 'check replicated rows to\r\n> tab_rowfilter_toast');\r\n> # - INSERT (16) YES, 16 > 15\r\n> $result =\r\n> $node_subscriber->safe_psql('postgres',\r\n> - \"SELECT a FROM tab_rowfilter_viaroot_part\");\r\n> -is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\r\n> + \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\r\n> +is($result, qq(16\r\n> +17),\r\n> + 'check replicated rows to tab_rowfilter_viaroot_part'\r\n> +);\r\n> \r\n> There is a comment above that code like:\r\n> # tab_rowfilter_viaroot_part filter is: (a > 15)\r\n> # - INSERT (14) NO, 14 < 15\r\n> # - INSERT (15) NO, 15 = 15\r\n> # - INSERT (16) YES, 16 > 15\r\n> \r\n> I think should modify that comment to explain the new data this patch\r\n> inserts - e.g. NO for 13 and YES for 17...\r\n\r\nImproved as suggested.\r\n\r\nI also improved the patches for back-branch according to some of Peter's\r\ncomments and added the back-branch patch for REL_13.\r\nIn addition, in the patch (REL15_v6) I attached for REL15 in [1], I forgot to\r\nremove the modification to the function pg_get_publication_tables. 
I removed\r\nrelated modifications now (REL15_v7).\r\n\r\nAttach the new patches.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275AFA91925615A4AA782D09EBD9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 21 Jul 2022 10:07:36 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for the HEAD_v7-0001 patch:\n\n======\n\n1. <General>\n\nI have a fundamental question about this patch.\n\nIIUC the purpose of this patch is to ensure that (when\npublish_via_root = true) the copy of the partition data will happen\nonly once (e.g. from one parent table on one of the publishers). But I\nthink there is no guarantee that those 2 publishers even had the same\ndata, right? Therefore it seems to me you could get different results\nif the data were copied from pub1 or from pub2. (I have not tried it -\nthis is just my suspicion).\n\nAm I correct or mistaken? If correct, then it means there is a big\n(but subtle) difference related to the ordering of processing ... A)\nis this explicitly documented so the user knows what data to expect?\nB) is the effect of different ordering tested anywhere? Actually, I\nhave no idea what exactly determines the order – is it the original\nCREATE SUBSCRIPTION publication list order? Is it the logic of the\npg_get_publication_tables function? Is it the SQL in function\nfetch_table_list? Or is it not deterministic at all? Please confirm\nit.\n\n======\n\n2. 
Commit message.\n\n2a.\n\nIf there are two publications that publish the parent table and the child table\nseparately, and both specify the option publish_via_partition_root, subscribing\nto both publications from one subscription causes initial copy twice.\n\nSUGGESTION\nIf there are two publications that publish the parent table and the child table\nrespectively, but both specify publish_via_partition_root = true, subscribing\nto both publications from one subscription causes initial copy twice.\n\n2b. <General>\n\nActually, isn't it more subtle than what that comment is describing?\nMaybe nobody is explicitly publishing a parent table at all. Maybe\npub1 publishes partition1 and pub2 publishes partition2, but both\npublications are using publish_via_partition_root = true. Is this\nscenario even tested? Does the logic of pg_get_publication_tables\ncover this scenario?\n\n======\n\n3. src/backend/catalog/pg_publication.c - pg_get_publication_tables\n\n pg_get_publication_tables(PG_FUNCTION_ARGS)\n {\n #define NUM_PUBLICATION_TABLES_ELEM 3\n- FuncCallContext *funcctx;\n- char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n- Publication *publication;\n- List *tables;\n+ FuncCallContext *funcctx;\n+ Publication *publication;\n\nSomething seems strange about having a common Publication declaration.\nFirstly it is used to represent every publication element in the array\nloop. Later, it is overwritten to represent a single publication.\n\nI think it might be easier if you declare these variables separately:\n\nE.g.1\nPublication *pub_elem; -- for the array element processing declared\nwithin the for loop\n\nE.g.2\nPublication *pub; -- declared within if (funcctx->call_cntr <\nlist_length(results))\n\n~~~\n\n4.\n\n+ /* Filter by final published table. 
*/\n+ foreach(lc, results)\n+ {\n+ Oid *table_info = (Oid *) lfirst(lc);\n+\n+ if (!list_member_oid(tables, table_info[0]))\n+ results = foreach_delete_current(results, lc);\n }\n\nThe comment did not convey enough meaning. Can you make it more\ndescriptive to explain why/what the logic is doing here?\n\n======\n\n5. src/backend/commands/subscriptioncmds.c - fetch_table_list\n\n /* Get column lists for each relation if the publisher supports it */\n- if (check_columnlist)\n- appendStringInfoString(&cmd, \", t.attnames\\n\");\n+ if (server_version >= 160000)\n+ appendStringInfo(&cmd, \"SELECT DISTINCT n.nspname, c.relname,\\n\"\n\nThat comment is exactly the same as it was before the patch. But it\ndoesn't seem quite appropriate anymore for this new condition and this\nnew query.\n\n~~~\n\n6.\n\n+ /*\n+ * Get the list of tables from publisher, the partition table whose\n+ * ancestor is also in this list should be ignored, otherwise the\n+ * initial data in the partition table would be replicated twice.\n+ */\n\nWhy say \"should be ignored\" -- don’t you mean \"will be\" or \"must be\" or \"is\".\n\n~~~\n\n7.\n\n initStringInfo(&cmd);\n- appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename \\n\");\n\n /* Get column lists for each relation if the publisher supports it */\n- if (check_columnlist)\n- appendStringInfoString(&cmd, \", t.attnames\\n\");\n+ if (server_version >= 160000)\n+ appendStringInfo(&cmd, \"SELECT DISTINCT n.nspname, c.relname,\\n\"\n+ \" ( CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts)\\n\"\n+ \" THEN NULL ELSE gpt.attrs END\\n\"\n+ \" ) AS attnames\\n\"\n+ \" FROM pg_class c\\n\"\n+ \" JOIN pg_namespace n ON n.oid = c.relnamespace\\n\"\n+ \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\narray_agg(pubname::text))).*\\n\"\n+ \" FROM pg_publication\\n\"\n+ \" WHERE pubname IN ( %s )) as gpt\\n\"\n+ \" ON gpt.relid = c.oid\\n\",\n+ pub_names.data);\n+ else\n+ {\n+ /*\n+ * Get the list of tables from publisher, the 
partition table whose\n+ * ancestor is also in this list should be ignored, otherwise the\n+ * initial data in the partition table would be replicated twice.\n+ */\n\n- appendStringInfoString(&cmd, \"FROM pg_catalog.pg_publication_tables t\\n\"\n- \" WHERE t.pubname IN (\");\n- get_publications_str(publications, &cmd, true);\n- appendStringInfoChar(&cmd, ')');\n+ appendStringInfoString(&cmd, \"WITH pub_tabs AS(\\n\"\n+ \" SELECT DISTINCT N.nspname, C.oid, C.relname, C.relispartition\\n\");\n+\n+ /* Get column lists for each relation if the publisher supports it */\n+ if (check_columnlist)\n+ appendStringInfoString(&cmd, \",( CASE WHEN (array_length(gpt.attrs,\n1) = c.relnatts)\\n\"\n+ \" THEN NULL ELSE gpt.attrs END\\n\"\n+ \" ) AS attnames\\n\");\n+\n+ appendStringInfo(&cmd, \" FROM pg_publication P,\\n\"\n+ \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\n+ \" pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\\n\"\n+ \" WHERE C.oid = GPT.relid AND P.pubname IN ( %s )\\n\"\n+ \")\\n\"\n+ \"SELECT DISTINCT pub_tabs.nspname, pub_tabs.relname\\n\",\n+ pub_names.data);\n+\n+ /* Get column lists for each relation if the publisher supports it */\n+ if (check_columnlist)\n+ appendStringInfoString(&cmd, \", pub_tabs.attnames\\n\");\n+\n+ appendStringInfoString(&cmd, \"FROM pub_tabs\\n\"\n+ \" WHERE (pub_tabs.relispartition IS FALSE\\n\"\n+ \" OR NOT EXISTS (SELECT 1 FROM\npg_partition_ancestors(pub_tabs.oid) as pa\\n\"\n+ \" WHERE pa.relid IN (SELECT pub_tabs.oid FROM pub_tabs)\\n\"\n+ \" AND pa.relid != pub_tabs.oid))\\n\");\n+ }\n\nPlease use a consistent case for all the SQL aliases. E.g \"gpt\" versus\n\"GPT\", \"c\" versus \"C\", etc.\n\n======\n\n8. src/test/subscription/t/013_partition.pl\n\n+# Note: We only create one table for the partitioned table (tab4) here. Because\n+# we specify option \"publish_via_partition_root\" (see pub_all and\n+# pub_lower_level above), all data should be replicated to the partitioned\n+# table. 
So we do not need to create table for the partition table.\n\n\"replicated to the partitioned table\" ??\n\nThe entire comment seems a bit misleading because how can we call the\nsubscriber table a \"partitioned\" table when it has no partitions?!\n\nSUGGESTION (maybe?)\nNote: We only create one table (tab4) here. We specified\npublish_via_partition_root = true (see pub_all and pub_lower_level\nabove), so all data will be replicated to that table.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 28 Jul 2022 19:17:03 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thursday, July 28, 2022 5:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for the HEAD_v7-0001 patch:\r\n> \r\n> ======\r\n> \r\n> 1. <General>\r\n> \r\n> I have a fundamental question about this patch.\r\n> \r\n> IIUC the purpose of this patch is to ensure that (when\r\n> publish_via_root = true) the copy of the partition data will happen\r\n> only once (e.g. from one parent table on one of the publishers). But I\r\n> think there is no guarantee that those 2 publishers even had the same\r\n> data, right? Therefore it seems to me you could get different results\r\n> if the data were copied from pub1 or from pub2. 
(I have not tried it -\r\n> this is just my suspicion).\r\n> \r\n> Am I correct or mistaken?\r\n\r\nSince the subscribed publications are combined with OR and are from the same database,\r\nand we are trying to copy the data from the topmost parent table, I think the results\r\nshould be as expected.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 4 Aug 2022 11:26:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thur, Jul 28, 2022 at 17:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for the HEAD_v7-0001 patch:\r\n\r\nThanks for your comments.\r\n\r\n> 2. Commit message.\r\n> \r\n> 2a.\r\n> \r\n> If there are two publications that publish the parent table and the child table\r\n> separately, and both specify the option publish_via_partition_root, subscribing\r\n> to both publications from one subscription causes initial copy twice.\r\n> \r\n> SUGGESTION\r\n> If there are two publications that publish the parent table and the child table\r\n> respectively, but both specify publish_via_partition_root = true, subscribing\r\n> to both publications from one subscription causes initial copy twice.\r\n> \r\n> 2b. <General>\r\n> \r\n> Actually, isn't it more subtle than what that comment is describing?\r\n> Maybe nobody is explicitly publishing a parent table at all. Maybe\r\n> pub1 publishes partition1 and pub2 publishes partition2, but both\r\n> publications are using publish_via_partition_root = true. Is this\r\n> scenario even tested? Does the logic of pg_get_publication_tables\r\n> cover this scenario?\r\n\r\n=>2a.\r\nOkay, changed it as suggested.\r\n\r\n=>2b.\r\nThis is not the case we are trying to fix. 
The problematic scenario is when the\r\na parent table is published via root partitioned table and in this case we need\r\nto ignore other partitions. And I try to improve the commit message to make it\r\nclear.\r\n\r\n> 4.\r\n> \r\n> + /* Filter by final published table. */\r\n> + foreach(lc, results)\r\n> + {\r\n> + Oid *table_info = (Oid *) lfirst(lc);\r\n> +\r\n> + if (!list_member_oid(tables, table_info[0]))\r\n> + results = foreach_delete_current(results, lc);\r\n> }\r\n> \r\n> The comment did not convey enough meaning. Can you make it more\r\n> descriptive to explain why/what the logic is doing here?\r\n\r\nI think the comments above `tables = filter_partitions(tables);` explain this.\r\n\r\n> 5. src/backend/commands/subscriptioncmds.c - fetch_table_list\r\n> \r\n> /* Get column lists for each relation if the publisher supports it */\r\n> - if (check_columnlist)\r\n> - appendStringInfoString(&cmd, \", t.attnames\\n\");\r\n> + if (server_version >= 160000)\r\n> + appendStringInfo(&cmd, \"SELECT DISTINCT n.nspname, c.relname,\\n\"\r\n> \r\n> That comment is exactly the same as it was before the patch. But it\r\n> doesn't seem quite appropriate anymore for this new condition and this\r\n> new query.\r\n\r\nImproved the comments as following:\r\n```\r\nGet information of the tables belonging to the specified publications\r\n```\r\n\r\nThe rest of the comments are improved as suggested.\r\nI also rebased the patch based on the commit (0c20dd3) on HEAD, and made some\r\nchanges to the back-branch patches based on some of Peter's comments.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 5 Aug 2022 09:06:49 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comment for the HEAD_v8 patch:\n\n======\n\n1. 
Commit message\n\nIf there are two publications, one of them publish a parent table with\n(publish_via_partition_root = true) and another publish child table,\nsubscribing to both publications from one subscription results in two initial\nreplications. It should only be copied once.\n\nSUGGESTION (Note**)\nIf there are two publications - one of them publishing a parent table\n(using publish_via_partition_root = true) and the other is publishing\none of the parent's child tables - then subscribing to both\npublications from one subscription results in the same initial child\ndata being copied twice. It should only be copied once.\n\n\nNote** - I've only reworded the original commit message slightly but\notherwise left it saying the same thing. But I still have doubts if\nthis message actually covers all the cases the patch is trying to\naddress. e.g. The comment (see below) in the 'fetch_table_list'\nfunction seemed to cover more cases than what this commit message is\ndescribing.\n/*\n* Get the list of tables from publisher, the partition table whose\n* ancestor is also in this list will be ignored, otherwise the initial\n* data in the partition table would be replicated twice.\n*/\n\n\n======\n\n2. src/backend/catalog/pg_publication.c - pg_get_publication_tables\n\n2a.\n /*\n- * Returns information of tables in a publication.\n+ * Returns information of the tables in the given publication array.\n */\n\nWhat does \"information of the tables\" actually mean? Is it just the\nnames of the tables; is it more than that? IMO a longer, more\nexplanatory comment will be better here instead of a brief ambiguous\none.\n\n\n2b.\n+ *results = NIL;\n\nThis variable name 'results' is too generic, so it is not helpful when\ntrying to understand the subsequent code logic. Please give this a\nmeaningful name/description.\n\n2c.\n/* Deconstruct the parameter into elements. 
*/\n\nSUGGESTION\nDeconstruct the parameter into elements where each element is a\npublication name.\n\n2d.\n+ List *current_tables = NIL;\n\nI think this is the tables only on the current pub_elem, so I thought\n'pub_elem_tables' might make it easier to understand this list's\nmeaning.\n\n2e.\n+ /* Now sort and de-duplicate the result list */\n+ list_sort(tables, list_oid_cmp);\n+ list_deduplicate_oid(tables);\n\nIMO this comment is confusing because there is another list that is\ncalled the 'results' list, but that is not the same list you are\nprocessing here. Also, it does not really add anything helpful to just\nhave comments saying almost the same as the function names\n(sort/de-duplicate), so please give longer comments to say the reason\n*why* the logic does this rather than just describing the steps.\n\n2f.\n+ /* Filter by final published table */\n+ foreach(lc, results)\n\nPlease make this comment more descriptive to explain why/what the\nlogic is doing.\n\n======\n\n3. src/backend/commands/subscriptioncmds.c - fetch_table_list\n\n3a.\n+ bool check_columnlist = (server_version >= 150000);\n\nGiven the assignment, maybe 'columnlist_supported' is a better name?\n\n3b.\n+ /* Get information of the tables belonging to the specified publications */\n\nFor \"Get information of...\" can you elaborate *what* table\ninformation this is getting and why?\n\n3c.\n+ if (server_version >= 160000)\n+ appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n+ \" ( CASE WHEN (array_length(GPT.attrs, 1) = C.relnatts)\\n\"\n+ \" THEN NULL ELSE GPT.attrs END\\n\"\n+ \" ) AS attnames\\n\"\n+ \" FROM pg_class C\\n\"\n+ \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\n+ \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\narray_agg(pubname::text))).*\\n\"\n+ \" FROM pg_publication\\n\"\n+ \" WHERE pubname IN ( %s )) as GPT\\n\"\n+ \" ON GPT.relid = C.oid\\n\",\n+ pub_names.data);\n\nAFAICT the main reason for this check was to decide if you can use the\nnew 
version of 'pg_get_publication_tables' that supports the VARIADIC\narray of pub names or not. If that is correct, then maybe the comment\nshould describe that reason, or maybe add another bool var similar to\nthe 'check_columnlist' one for this.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 9 Aug 2022 17:14:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some more review comments for the HEAD_v8 patch:\n\n======\n\n1. Commit message\n\nIf there are two publications, one of them publish a parent table with\n(publish_via_partition_root = true) and another publish child table,\nsubscribing to both publications from one subscription results in two initial\nreplications. It should only be copied once.\n\n~\n\nI took a 2nd look at that commit message and it seemed slightly\nbackwards to me - e.g. don't you really mean for the\n'publish_via_partition_root' parameter to be used when publishing the\n*child* table?\n\nSUGGESTION\nIf there are two publications, one of them publishing a parent table\ndirectly, and the other publishing a child table with\npublish_via_partition_root = true, then subscribing to both those\npublications from one subscription results in two initial\nreplications. It should only be copied once.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 10 Aug 2022 09:45:25 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wednesday, August 10, 2022 7:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are some more review comments for the HEAD_v8 patch:\r\n> \r\n> ======\r\n> \r\n> 1. 
Commit message\r\n> \r\n> If there are two publications, one of them publish a parent table with\r\n> (publish_via_partition_root = true) and another publish child table, subscribing\r\n> to both publications from one subscription results in two initial replications. It\r\n> should only be copied once.\r\n> \r\n> ~\r\n> \r\n> I took a 2nd look at that commit message and it seemed slightly backwards to\r\n> me - e.g. don't you really mean for the 'publish_via_partition_root' parameter\r\n> to be used when publishing the\r\n> *child* table?\r\n\r\nI'm not sure about this; I think we are trying to fix the bug when\r\n'publish_via_partition_root' is used when publishing the parent table.\r\n\r\nFor this case (via_root used when publishing parent):\r\n\r\nCREATE PUBLICATION pub1 for TABLE parent with(publish_via_partition_root);\r\nCREATE PUBLICATION pub2 for TABLE child;\r\nCREATE SUBSCRIPTION sub connect xxx PUBLICATION pub1,pub2;\r\n\r\nThe expected behavior is that only the parent table is published, and all the changes\r\nshould be replicated using the parent table's identity. So, we should only do\r\nthe initial sync for the parent table once, but we currently will do table sync for\r\nboth parent and child, which we think is a bug.\r\n\r\nFor another case you mentioned (via_root used when publishing child):\r\n\r\nCREATE PUBLICATION pub1 for TABLE parent;\r\nCREATE PUBLICATION pub2 for TABLE child with (publish_via_partition_root);\r\nCREATE SUBSCRIPTION sub connect xxx PUBLICATION pub1,pub2;\r\n\r\nThe expected behavior is that only the child table is published, and all the changes\r\nshould be replicated using the child table's identity. We should do table sync\r\nonly for child tables, which is the same as the current behavior on HEAD.
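As a quick way to sanity-check the expected outcome of either case on a test setup, the publisher-side view pg_publication_tables shows which table identity each publication will actually use for replication. This is only a sketch; 'pub1', 'pub2', 'parent', and 'child' are the illustrative names from the examples above:

```sql
-- Sketch: inspect which table identity each publication exposes on the
-- publisher ('pub1'/'pub2' are the illustrative names from above).
SELECT pubname, schemaname, tablename
FROM pg_publication_tables
WHERE pubname IN ('pub1', 'pub2')
ORDER BY pubname, schemaname, tablename;
-- Whatever identity is listed here is what the subscriber should
-- table-sync; if the same identity were listed under both publications,
-- the subscriber would have to de-duplicate it, otherwise the initial
-- data would be copied twice.
```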
So, I think\r\nthere is no bug in this case.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Tue, 30 Aug 2022 02:24:10 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, Aug 9, 2022 at 15:15 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comment for the HEAD_v8 patch:\r\n\r\nThanks for your comments.\r\n\r\n> 1. Commit message\r\n> \r\n> If there are two publications, one of them publish a parent table with\r\n> (publish_via_partition_root = true) and another publish child table,\r\n> subscribing to both publications from one subscription results in two initial\r\n> replications. It should only be copied once.\r\n> \r\n> SUGGESTION (Note**)\r\n> If there are two publications - one of them publishing a parent table\r\n> (using publish_via_partition_root = true) and the other is publishing\r\n> one of the parent's child tables - then subscribing to both\r\n> publications from one subscription results in the same initial child\r\n> data being copied twice. It should only be copied once.\r\n> \r\n> \r\n> Note** - I've only reworded the original commit message slightly but\r\n> otherwise left it saying the same thing. But I still have doubts if\r\n> this message actually covers all the cases the patch is trying to\r\n> address. e.g. The comment (see below) in the 'fetch_table_list'\r\n> function seemed to cover more cases than what this commit message is\r\n> describing.\r\n> /*\r\n> * Get the list of tables from publisher, the partition table whose\r\n> * ancestor is also in this list will be ignored, otherwise the initial\r\n> * data in the partition table would be replicated twice.\r\n> */\r\n\r\n=> commit message\r\nChanged.\r\n\r\n=> Note**\r\nI think the commit message and the comment you mentioned refer to the same kind\r\nof scenario.\r\n\r\n> 2. 
src/backend/catalog/pg_publication.c - pg_get_publication_tables\r\n> \r\n> 2a.\r\n> /*\r\n> - * Returns information of tables in a publication.\r\n> + * Returns information of the tables in the given publication array.\r\n> */\r\n> \r\n> What does \"information of the tables\" actually mean? Is it just the\r\n> names of the tables; is it more than that? IMO a longer, more\r\n> explanatory comment will be better here instead of a brief ambiguous\r\n> one.\r\n\r\nChanged as below:\r\n```\r\nGet information of the tables in the given publication array.\r\n\r\nReturns the oid, column list, row filter for each table.\r\n```\r\n\r\n> 2b.\r\n> + *results = NIL;\r\n> \r\n> This variable name 'results' is too generic, so it is not helpful when\r\n> trying to understand the subsequent code logic. Please give this a\r\n> meaningful name/description.\r\n\r\nChanged the variable name as below:\r\nresults -> table_infos\r\n\r\n> 2c.\r\n> /* Deconstruct the parameter into elements. */\r\n> \r\n> SUGGESTION\r\n> Deconstruct the parameter into elements where each element is a\r\n> publication name.\r\n\r\nChanged.\r\n\r\n> 2d.\r\n> + List *current_tables = NIL;\r\n> \r\n> I think this is the tables only on the current pub_elem, so I thought\r\n> 'pub_elem_tables' might make it easier to understand this list's\r\n> meaning.\r\n\r\nChanged.\r\n\r\n> 2e.\r\n> + /* Now sort and de-duplicate the result list */\r\n> + list_sort(tables, list_oid_cmp);\r\n> + list_deduplicate_oid(tables);\r\n> \r\n> IMO this comment is confusing because there is another list that is\r\n> called the 'results' list, but that is not the same list you are\r\n> processing here. Also, it does not really add anything helpful to just\r\n> have comments saying almost the same as the function names\r\n> (sort/de-duplicate), so please give longer comments to say the reason\r\n> *why* the logic does this rather than just describing the steps.\r\n\r\nFixed the comment. 
(\"result\" -> \"tables\")\r\nI think the purpose of these two functions is clear. The reason I added the\r\ncomment here is for consistency with other calling locations.\r\n\r\n> 2f.\r\n> + /* Filter by final published table */\r\n> + foreach(lc, results)\r\n> \r\n> Please make this comment more descriptive to explain why/what the\r\n> logic is doing.\r\n\r\nChanged as below:\r\n```\r\nFor tables that have been filtered out, delete the corresponding\r\ntable information in the table_infos list.\r\n```\r\n\r\n> 3. src/backend/commands/subscriptioncmds.c - fetch_table_list\r\n> \r\n> 3a.\r\n> + bool check_columnlist = (server_version >= 150000);\r\n> \r\n> Given the assignment, maybe 'columnlist_supported' is a better name?\r\n\r\nI am not sure if this name could be changed in this thread.\r\n\r\n> 3b.\r\n> + /* Get information of the tables belonging to the specified publications */\r\n> \r\n> For \"Get information of...\" can you elaborate *what* table\r\n> information this is getting and why?\r\n\r\nI am not sure if we need to add a reason.\r\nSo, I only added what information we are going to get:\r\n```\r\nGet namespace, relname and column list (if supported) of the tables\r\nbelonging to the specified publications.\r\n```\r\n\r\n> 3c.\r\n> + if (server_version >= 160000)\r\n> + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> + \" ( CASE WHEN (array_length(GPT.attrs, 1) = C.relnatts)\\n\"\r\n> + \" THEN NULL ELSE GPT.attrs END\\n\"\r\n> + \" ) AS attnames\\n\"\r\n> + \" FROM pg_class C\\n\"\r\n> + \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\r\n> + \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\r\n> array_agg(pubname::text))).*\\n\"\r\n> + \" FROM pg_publication\\n\"\r\n> + \" WHERE pubname IN ( %s )) as GPT\\n\"\r\n> + \" ON GPT.relid = C.oid\\n\",\r\n> + pub_names.data);\r\n> \r\n> AFAICT the main reason for this check was to decide if you can use the\r\n> new version of 'pg_get_publication_tables' that supports the 
VARIADIC\r\n> array of pub names or not. If that is correct, then maybe the comment\r\n> should describe that reason, or maybe add another bool var similar to\r\n> the 'check_columnlist' one for this.\r\n\r\nI added the comment as following:\r\n```\r\nFrom version 16, the parameter of the function pg_get_publication_tables\r\ncan be an array of publications. The partition table whose ancestor is\r\nalso published in this publication array will be filtered out in this\r\nfunction.\r\n```\r\n\r\nI also rebased the REL_15 patch based on the commit (15014b8), and made some\r\nchanges to the back-branch patches based on Peter's suggestions.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 30 Aug 2022 07:43:06 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "FYI, I'm not sure why the cfbot hasn't reported this, but the apply v9\npatch failed for me on HEAD as below:\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/HEAD_v9-0001-Fix-data-replicated-twice-when-specifying-publish.patch\n--verbose\nChecking patch src/backend/catalog/pg_publication.c...\nChecking patch src/backend/commands/subscriptioncmds.c...\nHunk #1 succeeded at 1917 (offset 123 lines).\nChecking patch src/include/catalog/pg_proc.dat...\nHunk #1 succeeded at 11607 (offset -74 lines).\nChecking patch src/test/regress/expected/rules.out...\nerror: while searching for:\n JOIN pg_attribute a ON (((a.attrelid = gpt.relid) AND\n(a.attnum = k.k))))) AS attnames,\n pg_get_expr(gpt.qual, gpt.relid) AS rowfilter\n FROM pg_publication p,\n LATERAL pg_get_publication_tables((p.pubname)::text) gpt(relid,\nattrs, qual),\n (pg_class c\n JOIN pg_namespace n ON ((n.oid = c.relnamespace)))\n WHERE (c.oid = gpt.relid);\n\nerror: patch failed: src/test/regress/expected/rules.out:1449\nerror: 
src/test/regress/expected/rules.out: patch does not apply\nChecking patch src/test/subscription/t/013_partition.pl...\nChecking patch src/test/subscription/t/028_row_filter.pl...\nChecking patch src/test/subscription/t/031_column_list.pl...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 19 Sep 2022 16:51:31 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Sept 19, 2022 at 14:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> FYI, I'm not sure why the cfbot hasn't reported this, but the apply v9\r\n> patch failed for me on HEAD as below:\r\n> \r\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\r\n> ../patches_misc/HEAD_v9-0001-Fix-data-replicated-twice-when-specifying-\r\n> publish.patch\r\n> --verbose\r\n> Checking patch src/backend/catalog/pg_publication.c...\r\n> Checking patch src/backend/commands/subscriptioncmds.c...\r\n> Hunk #1 succeeded at 1917 (offset 123 lines).\r\n> Checking patch src/include/catalog/pg_proc.dat...\r\n> Hunk #1 succeeded at 11607 (offset -74 lines).\r\n> Checking patch src/test/regress/expected/rules.out...\r\n> error: while searching for:\r\n> JOIN pg_attribute a ON (((a.attrelid = gpt.relid) AND\r\n> (a.attnum = k.k))))) AS attnames,\r\n> pg_get_expr(gpt.qual, gpt.relid) AS rowfilter\r\n> FROM pg_publication p,\r\n> LATERAL pg_get_publication_tables((p.pubname)::text) gpt(relid,\r\n> attrs, qual),\r\n> (pg_class c\r\n> JOIN pg_namespace n ON ((n.oid = c.relnamespace)))\r\n> WHERE (c.oid = gpt.relid);\r\n> \r\n> error: patch failed: src/test/regress/expected/rules.out:1449\r\n> error: src/test/regress/expected/rules.out: patch does not apply\r\n> Checking patch src/test/subscription/t/013_partition.pl...\r\n> Checking patch src/test/subscription/t/028_row_filter.pl...\r\n> Checking patch 
src/test/subscription/t/031_column_list.pl...\r\n\r\nThanks for your kindly reminder.\r\n\r\nRebased the patch based on the changes in HEAD (20b6847).\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 20 Sep 2022 06:17:48 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tuesday, September 20, 2022 3:18 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Rebased the patch based on the changes in HEAD (20b6847).\r\n> Attach the new patches.\r\nHi, thank you for updating the patchset.\r\n\r\n\r\nFYI, I noticed that the patch for head is no longer applicable.\r\n\r\n$ git apply --check HEAD_v10-0001-Fix-data-replicated-twice-when-specifying-publis.patch\r\nerror: patch failed: src/backend/catalog/pg_publication.c:1097\r\nerror: src/backend/catalog/pg_publication.c: patch does not apply\r\n\r\n\r\nAlso, one minor comment on the change in src/include/catalog/pg_proc.dat.\r\n\r\n # publications\r\n-{ oid => '6119', descr => 'get information of tables in a publication',\r\n- proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',\r\n- provolatile => 's', prorettype => 'record', proargtypes => 'text',\r\n- proallargtypes => '{text,oid,int2vector,pg_node_tree}',\r\n- proargmodes => '{i,o,o,o}', proargnames => '{pubname,relid,attrs,qual}',\r\n+{ oid => '6119',\r\n+ descr => 'get information of the tables belonging to the specified publications.',\r\n\r\nPlease remove the period at the end of 'descr' string.\r\nIt seems we don't write it in this file and removing it makes the code more aligned.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 26 Sep 2022 02:31:11 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both 
child and parent table\n in publication" }, { "msg_contents": "On Mon, Sep 26, 2022 at 10:31 AM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi, thank you for updating the patchset.\r\n> \r\n> \r\n> FYI, I noticed that the patch for head is no longer applicable.\r\n\r\nThanks for your kindly reminder and comment.\r\n\r\n> $ git apply --check HEAD_v10-0001-Fix-data-replicated-twice-when-specifying-\r\n> publis.patch\r\n> error: patch failed: src/backend/catalog/pg_publication.c:1097\r\n> error: src/backend/catalog/pg_publication.c: patch does not apply\r\n\r\nRebased the patch based on the changes in HEAD (13a185f).\r\n\r\n> Also, one minor comment on the change in src/include/catalog/pg_proc.dat.\r\n> \r\n> # publications\r\n> -{ oid => '6119', descr => 'get information of tables in a publication',\r\n> - proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',\r\n> - provolatile => 's', prorettype => 'record', proargtypes => 'text',\r\n> - proallargtypes => '{text,oid,int2vector,pg_node_tree}',\r\n> - proargmodes => '{i,o,o,o}', proargnames => '{pubname,relid,attrs,qual}',\r\n> +{ oid => '6119',\r\n> + descr => 'get information of the tables belonging to the specified\r\n> publications.',\r\n> \r\n> Please remove the period at the end of 'descr' string.\r\n> It seems we don't write it in this file and removing it makes the code more\r\n> aligned.\r\n\r\nImproved as suggested.\r\nAlso modified the description to be consistent with the comments atop the\r\nfunction pg_get_publication_tables.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 26 Sep 2022 04:44:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are my review comments for the HEAD_v11-0001 patch:\n\n======\n\n1. 
General - Another related bug?\n\nIn [1] Hou-san wrote:\n\nFor another case you mentioned (via_root used when publishing child)\nCREATE PUBLICATION pub1 for TABLE parent;\nCREATE PUBLICATION pub2 for TABLE child with (publish_via_partition_root);\nCREATE SUBSCRIPTION sub connect xxx PUBLICATION pub1,pub2;\n\nThe expected behavior is only the child table is published, all the changes\nshould be replicated using the child table's identity. We should do table sync\nonly for child tables and is same as the current behavior on HEAD. So, I think\nthere is no bug in this case.\n\n~\n\nThat behaviour seems different to my understanding because the pgdocs\nsays when the via_root param is true the 'child' table would be using\nthe 'parent' identity:\n\n[2] publish_via_partition_root determines whether changes in a\npartitioned table (or on its partitions) contained in the publication\nwill be published using the identity and schema of the partitioned\ntable rather than that of the individual partitions that are actually\nchanged.\n\n~\n\nSo is this another bug (slightly different from the current one being\npatched), or is it just some different special behaviour? If it's\nanother bug then you need to know that ASAP because I think you may\nwant to fix both of them at the same time which might impact how this\n2x data copy patch should be implemented.\n\nOr perhaps just the pgdocs need more notes about special\ncases/combinations like this?\n\n======\n\n2. General - documentation?\n\nFor this current patch, IIUC it was decided that it is a bug because\nthe data gets duplicated, and then some sensible rule was decided that\nthis patch should use to address it (e.g. 
publishing a child combined\nwith publishing a parent via_root will just ignore the child's\npublication...).\n\nSo my question is - is this (new/fixed) behaviour something that a\nuser will be able to figure out themselves from the existing\ndocumentation, or does this patch now need its own special notes in\nthe documentation?\n\n======\n\n3. src/backend/catalog/pg_publication.c - pg_get_publication_tables\n\n+ foreach(lc, pub_elem_tables)\n+ {\n+ Oid *result = (Oid *) malloc(sizeof(Oid) * 2);\n+\n+ result[0] = lfirst_oid(lc);\n+ result[1] = pub_elem->oid;\n+ table_infos = lappend(table_infos, result);\n+ }\n\n3a.\nIt looks like each element in the table_infos list is a malloced obj\nof 2x Oids (Oid of table, Oid of pub). IMO better to call this element\n'table_info' instead of the meaningless 'result'\n\n~\n\n3b.\nActually, I think it would be better if this function defines a little\n2-element structure {Oid relid, Oid pubid} to use instead of this\narray (which requires knowledge that [0] means relid and [1] means\npubid).\n\n~~~\n\n4.\n\n+ foreach(lc, table_infos)\n+ {\n+ Oid *table_info_tmp = (Oid *) lfirst(lc);\n+\n+ if (!list_member_oid(tables, table_info_tmp[0]))\n+ table_infos = foreach_delete_current(table_infos, lc);\n }\nI think the '_tmp' suffix is not helpful here - IMO having another\nrelid variable would make this more self-explanatory.\n\nOr better yet adopt the suggestion of #3b and have a little struct\nwith self-explanatory member names.\n\n======\n\n5. src/backend/commands/subscriptioncmds.c - fetch_table_list\n\n+ if (server_version >= 160000)\n+ appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n\nSince there is an else statement block, I think this would be more\nreadable if there was a statement block here too.
YMMV\n\nSUGGESTION\nif (server_version >= 160000)\n{\nappendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n...\n}\n\n~~~\n\n6.\n\n+ /*\n+ * Get the list of tables from publisher, the partition table whose\n+ * ancestor is also in this list will be ignored, otherwise the initial\n+ * data in the partition table would be replicated twice.\n+ */\n\n6a.\n\"from publisher, the partition\" -> \"from the publisher. The partition\"\n\n~\n\n6b.\nThis looks like a common comment that also applied to the \"if\" part,\nso it seems more appropriate to move it to where I indicated below.\nPerhaps the whole comment needs a bit of massaging after you move\nit...\n\n+ /*\n+ * Get namespace, relname and column list (if supported) of the tables\n+ * belonging to the specified publications.\n+ *\n+ * HERE <<<<<<<<<\n+ *\n+ * From version 16, the parameter of the function pg_get_publication_tables\n+ * can be an array of publications. The partition table whose ancestor is\n+ * also published in this publication array will be filtered out in this\n+ * function.\n+ */\n\n\n------\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716A30DDEECC59132E1084F94799%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/docs/devel/sql-createpublication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 27 Sep 2022 18:44:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, Sep 27, 2022 at 16:45 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for the HEAD_v11-0001 patch:\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 1. 
General - Another related bug?\r\n> \r\n> In [1] Hou-san wrote:\r\n> \r\n> For another case you mentioned (via_root used when publishing child)\r\n> CREATE PUBLICATION pub1 for TABLE parent;\r\n> CREATE PUBLICATION pub2 for TABLE child with (publish_via_partition_root);\r\n> CREATE SUBSCRIPTION sub connect xxx PUBLICATION pub1,pub2;\r\n> \r\n> The expected behavior is only the child table is published, all the changes\r\n> should be replicated using the child table's identity. We should do table sync\r\n> only for child tables and is same as the current behavior on HEAD. So, I think\r\n> there is no bug in this case.\r\n> \r\n> ~\r\n> \r\n> That behaviour seems different to my understanding because the pgdocs\r\n> says when the via_root param is true the 'child' table would be using\r\n> the 'parent' identity:\r\n> \r\n> [2] publish_via_partition_root determines whether changes in a\r\n> partitioned table (or on its partitions) contained in the publication\r\n> will be published using the identity and schema of the partitioned\r\n> table rather than that of the individual partitions that are actually\r\n> changed.\r\n> \r\n> ~\r\n> \r\n> So is this another bug (slightly different from the current one being\r\n> patched), or is it just some different special behaviour? If it's\r\n> another bug then you need to know that ASAP because I think you may\r\n> want to fix both of them at the same time which might impact how this\r\n> 2x data copy patch should be implemented.\r\n> \r\n> Or perhaps just the pgdocs need more notes about special\r\n> cases/combinations like this?\r\n> \r\n> ======\r\n> \r\n> 2. General - documentation?\r\n> \r\n> For this current patch, IIUC it was decided that it is a bug because\r\n> the data gets duplicated, and then some sensible rule was decided that\r\n> this patch should use to address it (e.g. 
publishing a child combined\r\n> with publishing a parent via_root will just ignore the child's\r\n> publication...).\r\n> \r\n> So my question is - is this (new/fixed) behaviour something that a\r\n> user will be able to figure out themselves from the existing\r\n> documentation, or does this patch now need its own special notes in\r\n> the documentation?\r\n\r\nIMO this behaviour doesn't look like a bug.\r\nI think the behaviour of multiple publications with parameter\r\npublish_via_partition_root could be added to the pg-doc later in a separate\r\npatch.\r\n\r\n> ======\r\n> \r\n> 3. src/backend/catalog/pg_publication.c - pg_get_publication_tables\r\n> \r\n> + foreach(lc, pub_elem_tables)\r\n> + {\r\n> + Oid *result = (Oid *) malloc(sizeof(Oid) * 2);\r\n> +\r\n> + result[0] = lfirst_oid(lc);\r\n> + result[1] = pub_elem->oid;\r\n> + table_infos = lappend(table_infos, result);\r\n> + }\r\n> \r\n> 3a.\r\n> It looks like each element in the table_infos list is a malloced obj\r\n> of 2x Oids (Oid of table, Oid of pub). IMO better to call this element\r\n> 'table_info' instead of the meaningless 'result'\r\n> \r\n> ~\r\n> \r\n> 3b.\r\n> Actually, I think it would be better if this function defines a little\r\n> 2-element structure {Oid relid, Oid pubid} to use instead of this\r\n> array (which requires knowledge that [0] means relid and [1] means\r\n> pubid).\r\n> \r\n> ~~~\r\n> \r\n> 4.\r\n> \r\n> + foreach(lc, table_infos)\r\n> + {\r\n> + Oid *table_info_tmp = (Oid *) lfirst(lc);\r\n> +\r\n> + if (!list_member_oid(tables, table_info_tmp[0]))\r\n> + table_infos = foreach_delete_current(table_infos, lc);\r\n> }\r\n> I think the '_tmp' suffix is not helpful here - IMO having another\r\n> relid variable would make this more self-explanatory.\r\n> \r\n> Or better yet adopt the suggestion o f #3b and have a little struct\r\n> with self-explanatory member names.\r\n\r\nImproved as suggested.\r\n\r\n> =====\r\n> \r\n> 5. 
src/backend/commands/subscriptioncmds.c - fetch_table_list\r\n> \r\n> + if (server_version >= 160000)\r\n> + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> \r\n> Since there is an else statement block, I think this would be more\r\n> readable if there was a statement block here too. YMMV\r\n> \r\n> SUGGESTION\r\n> if (server_version >= 160000)\r\n> {\r\n> appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> ...\r\n> }\r\n\r\nImproved as suggested.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> \r\n> + /*\r\n> + * Get the list of tables from publisher, the partition table whose\r\n> + * ancestor is also in this list will be ignored, otherwise the initial\r\n> + * data in the partition table would be replicated twice.\r\n> + */\r\n> \r\n> 6a.\r\n> \"from publisher, the partition\" -> \"from the publisher. The partition\"\r\n> \r\n> ~\r\n> \r\n> 6b.\r\n> This looks like a common comment that also applied to the \"if\" part,\r\n> so it seems more appropriate to move it to where I indicated below.\r\n> Perhaps the whole comment needs a bit of massaging after you move\r\n> it...\r\n> \r\n> + /*\r\n> + * Get namespace, relname and column list (if supported) of the tables\r\n> + * belonging to the specified publications.\r\n> + *\r\n> + * HERE <<<<<<<<<\r\n> + *\r\n> + * From version 16, the parameter of the function pg_get_publication_tables\r\n> + * can be an array of publications. 
The partition table whose ancestor is\r\n> + * also published in this publication array will be filtered out in this\r\n> + * function.\r\n> + */\r\n\r\nImproved as suggested.\r\n\r\nAlso rebased the patch because the change in the HEAD (20b6847).\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 28 Sep 2022 08:35:33 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\n\n======\n\n1. Missing documentation.\n\nIn [1] you wrote:\n> I think the behaviour of multiple publications with parameter publish_via_partition_root could be added to the pg-doc later in a separate patch.\n\n~\n\nThat doesn't seem right to me. IMO the related documentation updates\ncannot really be separated from this patch. Otherwise, what's the\nalternative? Push this change, and then (while waiting for the\ndocumentation patch) users will just have to use trial and error to\nguess how it works...?\n\n------\n\n2. src/backend/catalog/pg_publication.c\n\n+ typedef struct\n+ {\n+ Oid relid; /* OID of published table */\n+ Oid pubid; /* OID of publication that publishes this\n+ * table. */\n+ } published_rel;\n\n2a.\nI think that should be added to typedefs.list\n\n~\n\n2b.\nMaybe this also needs some comment to clarify that there will be\n*multiple* of these structures in scenarios where the same table is\npublished by different publications in the array passed.\n\n------\n\n3. QUESTION - pg_get_publication_tables / fetch_table_list\n\nWhen the same table is published by different publications (but there\nare other differences like row-filters/column-lists in each\npublication) the result tuple of this function does not include the\npubid. 
Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\nas-is but how does it manage to associate each table with the correct\ntuple?\n\nI know it apparently all seems to work but I’m not how does that\nhappen? Can you explain why a puboid is not needed for the result\ntuple of this function?\n\n~~\n\ntest_pub=# create table t1(a int, b int, c int);\nCREATE TABLE\ntest_pub=# create publication pub1 for table t1(a) where (a > 99);\nCREATE PUBLICATION\ntest_pub=# create publication pub2 for table t1(a,b) where (b < 33);\nCREATE PUBLICATION\n\nFollowing seems OK when I swap orders of publication names...\n\ntest_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\ngpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\nARRAY['pub2','pub1']) gpt(relid, attrs, qual);\n relid | attrs | rowfilter\n-------+-------+-----------\n 16385 | 1 2 | (b < 33)\n 16385 | 1 | (a > 99)\n(2 rows)\n\ntest_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\ngpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\nARRAY['pub1','pub2']) gpt(relid, attrs, qual);\n relid | attrs | rowfilter\n-------+-------+-----------\n 16385 | 1 | (a > 99)\n 16385 | 1 2 | (b < 33)\n(2 rows)\n\nBut what about this (this is similar to the SQL fragment from\nfetch_table_list); I swapped the pub names but the results are the\nsame...\n\ntest_pub=# SELECT pg_get_publication_tables(VARIADIC\narray_agg(p.pubname)) from pg_publication p where pubname\nIN('pub2','pub1');\n\n pg_get_publication_tables\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------\n (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\nfalse 
:opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n:vartype 23 :vartypmod -1 :var\ncollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n{CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n:constbyval true :constisnull false :\nlocation 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n:opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n:varattno 2 :vartype 23 :vartypmod -1 :v\narcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n{CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n:constbyval true :constisnull false\n :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n(2 rows)\n\ntest_pub=# SELECT pg_get_publication_tables(VARIADIC\narray_agg(p.pubname)) from pg_publication p where pubname\nIN('pub1','pub2');\n\n pg_get_publication_tables\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------\n (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\nfalse :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n:vartype 23 :vartypmod -1 :var\ncollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n{CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n:constbyval true :constisnull false :\nlocation 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n:opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n:varattno 2 :vartype 23 :vartypmod -1 :v\narcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n{CONST :consttype 23 
:consttypmod -1 :constcollid 0 :constlen 4\n:constbyval true :constisnull false\n :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n(2 rows)\n\n\n------\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275A9B8C65C381C6828DF9D9E549%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 5 Oct 2022 14:17:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wednesday, September 28, 2022 5:36 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Also rebased the patch because the change in the HEAD (20b6847).\r\n> \r\n> Attach the new patches.\r\nHi, thank you for the updated patches!\r\n\r\n\r\nHere are my minor review comments for HEAD v12.\r\n\r\n(1) typo & suggestion to reword one comment\r\n\r\n\r\n+ * Publications support partitioned tables. If\r\n+ * publish_via_partition_root is false, all changes are replicated\r\n+ * using leaf partition identity and schema, so we only need\r\n+ * those. Otherwise, If publish_via_partition_root is true, get\r\n+ * the partitioned table itself.\r\n\r\n\r\nThe last sentence has \"If\" in the middle of the sentence.\r\nWe can use the lower letter for it. Or, I think \"Otherwise\" by itself means\r\n\"If publish_via_partition_root is true\". 
So I'll suggest the change below.\r\n\r\n\r\nFROM:\r\nOtherwise, If publish_via_partition_root is true, get the partitioned table itself.\r\nTO:\r\nOtherwise, get the partitioned table itself.\r\n\r\n\r\n(2) Do we need to get \"attnames\" column from the publisher in the fetch_table_list() ?\r\n\r\nWhen I was looking at the v16 patch, I didn't see any code that utilizes\r\nthe \"attnames\" column information returned from the publisher.\r\nIf we don't need it, could we remove it?\r\nI may be missing something, but this might be affected by the HEAD code?\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 5 Oct 2022 15:04:50 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 5, 2022 at 11:08 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 1. Missing documentation.\r\n> \r\n> In [1] you wrote:\r\n> > I think the behaviour of multiple publications with parameter\r\n> publish_via_partition_root could be added to the pg-doc later in a separate\r\n> patch.\r\n> \r\n> ~\r\n> \r\n> That doesn't seem right to me. IMO the related documentation updates\r\n> cannot really be separated from this patch. Otherwise, what's the\r\n> alternative? Push this change, and then (while waiting for the\r\n> documentation patch) users will just have to use trial and error to\r\n> guess how it works...?\r\n\r\nI tried to add related documentation in a separate patch (HEAD_v13-0002*).\r\n\r\n> ------\r\n> \r\n> 2. src/backend/catalog/pg_publication.c\r\n> \r\n> + typedef struct\r\n> + {\r\n> + Oid relid; /* OID of published table */\r\n> + Oid pubid; /* OID of publication that publishes this\r\n> + * table. 
*/\r\n> + } published_rel;\r\n> \r\n> 2a.\r\n> I think that should be added to typedefs.list\r\n\r\nAdded.\r\n\r\n> ~\r\n> \r\n> 2b.\r\n> Maybe this also needs some comment to clarify that there will be\r\n> *multiple* of these structures in scenarios where the same table is\r\n> published by different publications in the array passed.\r\n\r\nAdded the comments.\r\n\r\n> ------\r\n> \r\n> 3. QUESTION - pg_get_publication_tables / fetch_table_list\r\n> \r\n> When the same table is published by different publications (but there\r\n> are other differences like row-filters/column-lists in each\r\n> publication) the result tuple of this function does not include the\r\n> pubid. Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\r\n> as-is but how does it manage to associate each table with the correct\r\n> tuple?\r\n> \r\n> I know it apparently all seems to work but I’m not how does that\r\n> happen? Can you explain why a puboid is not needed for the result\r\n> tuple of this function?\r\n\r\nSorry, I am not sure I understand your question.\r\nI'll try to answer your question by explaining the two functions you mentioned:\r\n\r\nFirst, the function pg_get_publication_tables gets the list (see table_infos)\r\nthat includes each published table and the corresponding publication. Then based\r\non this list, the function pg_get_publication_tables returns information\r\n(schema, relname, row filter and column list) about the published tables in the\r\npublications list. It just doesn't return pubid.\r\n\r\nThen, the SQL in the function fetch_table_list will get the columns in the\r\ncolumn list from pg_attribute. 
(This is to return all columns when the column\r\nlist is not specified)\r\n\r\n> ~~\r\n> \r\n> test_pub=# create table t1(a int, b int, c int);\r\n> CREATE TABLE\r\n> test_pub=# create publication pub1 for table t1(a) where (a > 99);\r\n> CREATE PUBLICATION\r\n> test_pub=# create publication pub2 for table t1(a,b) where (b < 33);\r\n> CREATE PUBLICATION\r\n> \r\n> Following seems OK when I swap orders of publication names...\r\n> \r\n> test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\r\n> gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\r\n> ARRAY['pub2','pub1']) gpt(relid, attrs, qual);\r\n> relid | attrs | rowfilter\r\n> -------+-------+-----------\r\n> 16385 | 1 2 | (b < 33)\r\n> 16385 | 1 | (a > 99)\r\n> (2 rows)\r\n> \r\n> test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\r\n> gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\r\n> ARRAY['pub1','pub2']) gpt(relid, attrs, qual);\r\n> relid | attrs | rowfilter\r\n> -------+-------+-----------\r\n> 16385 | 1 | (a > 99)\r\n> 16385 | 1 2 | (b < 33)\r\n> (2 rows)\r\n> \r\n> But what about this (this is similar to the SQL fragment from\r\n> fetch_table_list); I swapped the pub names but the results are the\r\n> same...\r\n> \r\n> test_pub=# SELECT pg_get_publication_tables(VARIADIC\r\n> array_agg(p.pubname)) from pg_publication p where pubname\r\n> IN('pub2','pub1');\r\n> \r\n> pg_get_publication_tables\r\n> \r\n> -------------------------------------------------------------------------------------------------\r\n> ---------------------------------------------------------------------\r\n> -------------------------------------------------------------------------------------------------\r\n> ---------------------------------------------------------------------\r\n> -------------------------------------------------------------------\r\n> (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\r\n> false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 
:varattno 1\r\n> :vartype 23 :vartypmod -1 :var\r\n> collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\r\n> {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> :constbyval true :constisnull false :\r\n> location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\r\n> (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\r\n> :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\r\n> :varattno 2 :vartype 23 :vartypmod -1 :v\r\n> arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\r\n> {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> :constbyval true :constisnull false\r\n> :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\r\n> (2 rows)\r\n> \r\n> test_pub=# SELECT pg_get_publication_tables(VARIADIC\r\n> array_agg(p.pubname)) from pg_publication p where pubname\r\n> IN('pub1','pub2');\r\n> \r\n> pg_get_publication_tables\r\n> \r\n> -------------------------------------------------------------------------------------------------\r\n> ---------------------------------------------------------------------\r\n> -------------------------------------------------------------------------------------------------\r\n> ---------------------------------------------------------------------\r\n> -------------------------------------------------------------------\r\n> (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\r\n> false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\r\n> :vartype 23 :vartypmod -1 :var\r\n> collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\r\n> {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> :constbyval true :constisnull false :\r\n> location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\r\n> (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\r\n> :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\r\n> :varattno 2 :vartype 23 :vartypmod -1 :v\r\n> 
arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\r\n> {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> :constbyval true :constisnull false\r\n> :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\r\n> (2 rows)\r\n\r\nI think this is because the usage of SELECT statement. The order seems depend\r\non pg_publication. Such as:\r\n\r\npostgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub1','pub2');\r\n array_agg\r\n-------------\r\n {pub1,pub2}\r\n(1 row)\r\n\r\npostgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub2','pub1');\r\n array_agg\r\n-------------\r\n {pub1,pub2}\r\n(1 row)\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 17 Oct 2022 05:49:23 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Oct 5, 2022 at 23:05 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi, thank you for the updated patches!\r\n> \r\n> \r\n> Here are my minor review comments for HEAD v12.\r\n\r\nThanks for your comments.\r\n\r\n> (1) typo & suggestion to reword one comment\r\n> \r\n> \r\n> + * Publications support partitioned tables. If\r\n> + * publish_via_partition_root is false, all changes are replicated\r\n> + * using leaf partition identity and schema, so we only need\r\n> + * those. Otherwise, If publish_via_partition_root is true, get\r\n> + * the partitioned table itself.\r\n> \r\n> \r\n> The last sentence has \"If\" in the middle of the sentence.\r\n> We can use the lower letter for it. Or, I think \"Otherwise\" by itself means\r\n> \"If publish_via_partition_root is true\". 
So, I'll suggest a below change.\r\n> \r\n> \r\n> FROM:\r\n> Otherwise, If publish_via_partition_root is true, get the partitioned table itself.\r\n> TO:\r\n> Otherwise, get the partitioned table itself.\r\n\r\nImproved.\r\n\r\n> (2) Do we need to get \"attnames\" column from the publisher in the\r\n> fetch_table_list() ?\r\n> \r\n> When I was looking at v16 path, I didn't see any codes that utilize\r\n> the \"attnames\" column information returned from the publisher.\r\n> If we don't need it, could we remove it ?\r\n> I can miss something greatly, but this might be affected by HEAD codes ?\r\n\r\nYes, it is affected by HEAD. I think we need this column to check whether the\r\nsame table has multiple column lists. (see commit fd0b9dc)\r\n\r\nThe new patch set was attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275843B2BBE92870F7881C19E299%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Mon, 17 Oct 2022 05:52:32 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Oct 17, 2022 at 4:49 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Oct 5, 2022 at 11:08 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\n>\n...\n> >\n> > 3. QUESTION - pg_get_publication_tables / fetch_table_list\n> >\n> > When the same table is published by different publications (but there\n> > are other differences like row-filters/column-lists in each\n> > publication) the result tuple of this function does not include the\n> > pubid. 
Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\n> > as-is but how does it manage to associate each table with the correct\n> > tuple?\n> >\n> > I know it apparently all seems to work but I’m not how does that\n> > happen? Can you explain why a puboid is not needed for the result\n> > tuple of this function?\n>\n> Sorry, I am not sure I understand your question.\n> I try to answer your question by explaining the two functions you mentioned:\n>\n> First, the function pg_get_publication_tables gets the list (see table_infos)\n> that included published table and the corresponding publication. Then based\n> on this list, the function pg_get_publication_tables returns information\n> (scheme, relname, row filter and column list) about the published tables in the\n> publications list. It just doesn't return pubid.\n>\n> Then, the SQL in the function fetch_table_list will get the columns in the\n> column list from pg_attribute. (This is to return all columns when the column\n> list is not specified)\n>\n\nI meant, for example, if the different publications specified\ndifferent col-lists for the same table then IIUC the\nfetch_table_lists() is going to return 2 list elements\n(schema,rel_name,row_filter,col_list). 
But when the schema/rel_name\nare the same for 2 elements then (without also a pubid) how are you\ngoing to know where the list element came from, and how come that is\nnot important?\n\n> > ~~\n> >\n> > test_pub=# create table t1(a int, b int, c int);\n> > CREATE TABLE\n> > test_pub=# create publication pub1 for table t1(a) where (a > 99);\n> > CREATE PUBLICATION\n> > test_pub=# create publication pub2 for table t1(a,b) where (b < 33);\n> > CREATE PUBLICATION\n> >\n> > Following seems OK when I swap orders of publication names...\n> >\n> > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > ARRAY['pub2','pub1']) gpt(relid, attrs, qual);\n> > relid | attrs | rowfilter\n> > -------+-------+-----------\n> > 16385 | 1 2 | (b < 33)\n> > 16385 | 1 | (a > 99)\n> > (2 rows)\n> >\n> > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > ARRAY['pub1','pub2']) gpt(relid, attrs, qual);\n> > relid | attrs | rowfilter\n> > -------+-------+-----------\n> > 16385 | 1 | (a > 99)\n> > 16385 | 1 2 | (b < 33)\n> > (2 rows)\n> >\n> > But what about this (this is similar to the SQL fragment from\n> > fetch_table_list); I swapped the pub names but the results are the\n> > same...\n> >\n> > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > array_agg(p.pubname)) from pg_publication p where pubname\n> > IN('pub2','pub1');\n> >\n> > pg_get_publication_tables\n> >\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------\n> > (16385,1,\"{OPEXPR :opno 521 
:opfuncid 147 :opresulttype 16 :opretset\n> > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n> > :vartype 23 :vartypmod -1 :var\n> > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false :\n> > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false\n> > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > (2 rows)\n> >\n> > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > array_agg(p.pubname)) from pg_publication p where pubname\n> > IN('pub1','pub2');\n> >\n> > pg_get_publication_tables\n> >\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------\n> > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\n> > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n> > :vartype 23 :vartypmod -1 :var\n> > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false :\n> > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > :opretset false 
:opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false\n> > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > (2 rows)\n>\n> I think this is because the usage of SELECT statement. The order seems depend\n> on pg_publication. Such as:\n>\n> postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub1','pub2');\n> array_agg\n> -------------\n> {pub1,pub2}\n> (1 row)\n>\n> postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub2','pub1');\n> array_agg\n> -------------\n> {pub1,pub2}\n> (1 row)\n>\n\nRight, so I felt it was a bit dubious that the result of the function\n“seems to depend on” something. That’s why I was asking why the list\nelements did not include a pubid. Then a caller could be certain what\nelement belonged with what publication. It's not quite clear to me why\nthat is not important for this patch - but anyway, even if it's not\nnecessary for this patch's usage, this is a function that is exposed\nto users who might have different needs/expectations than this patch\nhas, so shouldn't the result be less fuzzy for them?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Oct 2022 19:52:13 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are my review comments for HEAD patches v13*\n\n//////\n\nPatch HEAD_v13-0001\n\nI already posted some follow-up questions. See [1]\n\n/////\n\nPatch HEAD_v13-0002\n\n1. 
Commit message\n\nThe following usage scenarios are not described in detail in the manual:\nIf one subscription subscribes multiple publications, and these publications\npublish a partitioned table and its partitions respectively. When we specify\nthis parameter on one or more of these publications, which identity and schema\nshould be used to publish the changes?\n\nIn these cases, I think the parameter publish_via_partition_root behave as\nfollows:\n\n~\n\nIt seemed worded a bit strangely. Also, you said \"on one or more of\nthese publications\" but the examples show only one publication\nhaving 'publish_via_partition_root'.\n\nSUGGESTION (I've modified the wording slightly but the examples are unchanged).\n\nAssume a subscription is subscribing to multiple publications, and\nthese publications publish a partitioned table and its partitions\nrespectively:\n\n[publisher-side]\ncreate table parent (a int primary key) partition by range (a);\ncreate table child partition of parent default;\n\ncreate publication pub1 for table parent;\ncreate publication pub2 for table child;\n\n[subscriber-side]\ncreate subscription sub connection 'xxxx' publication pub1, pub2;\n\nThe manual does not clearly describe the behaviour when the user has\nspecified the parameter 'publish_via_partition_root' on just one of\nthe publications. This patch modifies documentation to clarify the\nfollowing rules:\n\n- If the parameter publish_via_partition_root is specified only in pub1,\nchanges will be published using the identity and schema of the table 'parent'.\n\n- If the parameter publish_via_partition_root is specified only in pub2,\nchanges will be published using the identity and schema of the table 'child'.\n\n~~~\n\n2.\n\n- If the parameter publish_via_partition_root is specified only in pub2,\nchanges will be published using the identity and schema of the table child.\n\n~\n\nIs that right though? 
This rule seems 100% contrary to the meaning of\n'publish_via_partition_root=true'.\n\n------\n\n3. doc/src/sgml/ref/create_publication.sgml\n\n+ <para>\n+ If a root partitioned table is published by any subscribed\npublications which\n+ set publish_via_partition_root = true, changes on this root\npartitioned table\n+ (or on its partitions) will be published using the identity\nand schema of this\n+ root partitioned table rather than that of the individual partitions.\n+ </para>\n\nThis seems to only describe the first example from the commit message.\nWhat about some description to explain the second example?\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPt%2B1PNx6VsZ-xKzAU-18HmNXhjCC1TGakKX46Wg7YNT1Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Oct 2022 20:01:54 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "2022年10月17日(月) 14:49 wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com>:\n>\n> On Wed, Oct 5, 2022 at 11:08 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\n>\n> Thanks for your comments.\n>\n> > ======\n> >\n> > 1. Missing documentation.\n> >\n> > In [1] you wrote:\n> > > I think the behaviour of multiple publications with parameter\n> > publish_via_partition_root could be added to the pg-doc later in a separate\n> > patch.\n> >\n> > ~\n> >\n> > That doesn't seem right to me. IMO the related documentation updates\n> > cannot really be separated from this patch. Otherwise, what's the\n> > alternative? Push this change, and then (while waiting for the\n> > documentation patch) users will just have to use trial and error to\n> > guess how it works...?\n>\n> I tried to add related documentation in a separate patch (HEAD_v13-0002*).\n>\n> > ------\n> >\n> > 2. 
src/backend/catalog/pg_publication.c\n> >\n> > + typedef struct\n> > + {\n> > + Oid relid; /* OID of published table */\n> > + Oid pubid; /* OID of publication that publishes this\n> > + * table. */\n> > + } published_rel;\n> >\n> > 2a.\n> > I think that should be added to typedefs.list\n>\n> Added.\n>\n> > ~\n> >\n> > 2b.\n> > Maybe this also needs some comment to clarify that there will be\n> > *multiple* of these structures in scenarios where the same table is\n> > published by different publications in the array passed.\n>\n> Added the comments.\n>\n> > ------\n> >\n> > 3. QUESTION - pg_get_publication_tables / fetch_table_list\n> >\n> > When the same table is published by different publications (but there\n> > are other differences like row-filters/column-lists in each\n> > publication) the result tuple of this function does not include the\n> > pubid. Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\n> > as-is but how does it manage to associate each table with the correct\n> > tuple?\n> >\n> > I know it apparently all seems to work but I’m not how does that\n> > happen? Can you explain why a puboid is not needed for the result\n> > tuple of this function?\n>\n> Sorry, I am not sure I understand your question.\n> I try to answer your question by explaining the two functions you mentioned:\n>\n> First, the function pg_get_publication_tables gets the list (see table_infos)\n> that included published table and the corresponding publication. Then based\n> on this list, the function pg_get_publication_tables returns information\n> (scheme, relname, row filter and column list) about the published tables in the\n> publications list. It just doesn't return pubid.\n>\n> Then, the SQL in the function fetch_table_list will get the columns in the\n> column list from pg_attribute. 
(This is to return all columns when the column\n> list is not specified)\n>\n> > ~~\n> >\n> > test_pub=# create table t1(a int, b int, c int);\n> > CREATE TABLE\n> > test_pub=# create publication pub1 for table t1(a) where (a > 99);\n> > CREATE PUBLICATION\n> > test_pub=# create publication pub2 for table t1(a,b) where (b < 33);\n> > CREATE PUBLICATION\n> >\n> > Following seems OK when I swap orders of publication names...\n> >\n> > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > ARRAY['pub2','pub1']) gpt(relid, attrs, qual);\n> > relid | attrs | rowfilter\n> > -------+-------+-----------\n> > 16385 | 1 2 | (b < 33)\n> > 16385 | 1 | (a > 99)\n> > (2 rows)\n> >\n> > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > ARRAY['pub1','pub2']) gpt(relid, attrs, qual);\n> > relid | attrs | rowfilter\n> > -------+-------+-----------\n> > 16385 | 1 | (a > 99)\n> > 16385 | 1 2 | (b < 33)\n> > (2 rows)\n> >\n> > But what about this (this is similar to the SQL fragment from\n> > fetch_table_list); I swapped the pub names but the results are the\n> > same...\n> >\n> > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > array_agg(p.pubname)) from pg_publication p where pubname\n> > IN('pub2','pub1');\n> >\n> > pg_get_publication_tables\n> >\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------\n> > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\n> > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 
1\n> > :vartype 23 :vartypmod -1 :var\n> > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false :\n> > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false\n> > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > (2 rows)\n> >\n> > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > array_agg(p.pubname)) from pg_publication p where pubname\n> > IN('pub1','pub2');\n> >\n> > pg_get_publication_tables\n> >\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------------------------------------\n> > ---------------------------------------------------------------------\n> > -------------------------------------------------------------------\n> > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\n> > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n> > :vartype 23 :vartypmod -1 :var\n> > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false :\n> > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > arcollid 0 
:varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > :constbyval true :constisnull false\n> > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > (2 rows)\n>\n> I think this is because the usage of SELECT statement. The order seems depend\n> on pg_publication. Such as:\n>\n> postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub1','pub2');\n> array_agg\n> -------------\n> {pub1,pub2}\n> (1 row)\n>\n> postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE pubname IN ('pub2','pub1');\n> array_agg\n> -------------\n> {pub1,pub2}\n> (1 row)\n>\n> Attach the new patch set.\n\n\nThis entry was marked as \"Needs review\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3623/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Fri, 4 Nov 2022 08:34:43 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Monday, October 17, 2022 2:49 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the new patch set.\r\nHi, thank you for posting the new patches.\r\n\r\n\r\nHere are minor comments on the HEAD_v13-0002.\r\n\r\n(1) Suggestion for the document description\r\n\r\n+ <para>\r\n+ If a root partitioned table is published by any subscribed publications which\r\n+ set publish_via_partition_root = true, changes on this root partitioned table\r\n+ (or on its partitions) will be published using the identity 
and schema of this\r\n+ root partitioned table rather than that of the individual partitions.\r\n+ </para>\r\n+\r\n\r\nI suppose this sentence looks quite similar to the one in the previous paragraph and can be adjusted.\r\n\r\nIIUC the main value of the patch is to clarify what happens when\r\nwe mix publications with different publish_via_partition_root settings for one partition hierarchy.\r\nIf this is true, how about the sentence below instead of the one above?\r\n\r\n\"\r\nThere can be a case where a subscription combines publications with\r\ndifferent publish_via_partition_root values for the same partition hierarchy\r\n(e.g. subscribing to two publications that publish the root partitioned table and its child table respectively).\r\nIn this case, the identity and schema of the root partitioned table take priority.\r\n\"\r\n\r\n(2) Better documentation alignment\r\n\r\nI think we need to wrap publish_via_partition_root in a \"literal\" tag\r\nin the documentation create_publication.sgml.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 8 Nov 2022 04:12:00 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Oct 21, 2022 at 17:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments. Sorry for the delayed reply.\r\n\r\n> On Mon, Oct 17, 2022 at 4:49 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wed, Oct 5, 2022 at 11:08 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\r\n> >\r\n> ...\r\n> > >\r\n> > > 3. 
QUESTION - pg_get_publication_tables / fetch_table_list\r\n> > >\r\n> > > When the same table is published by different publications (but there\r\n> > > are other differences like row-filters/column-lists in each\r\n> > > publication) the result tuple of this function does not include the\r\n> > > pubid. Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\r\n> > > as-is but how does it manage to associate each table with the correct\r\n> > > tuple?\r\n> > >\r\n> > > I know it apparently all seems to work but I’m not how does that\r\n> > > happen? Can you explain why a puboid is not needed for the result\r\n> > > tuple of this function?\r\n> >\r\n> > Sorry, I am not sure I understand your question.\r\n> > I try to answer your question by explaining the two functions you mentioned:\r\n> >\r\n> > First, the function pg_get_publication_tables gets the list (see table_infos)\r\n> > that included published table and the corresponding publication. Then based\r\n> > on this list, the function pg_get_publication_tables returns information\r\n> > (scheme, relname, row filter and column list) about the published tables in the\r\n> > publications list. It just doesn't return pubid.\r\n> >\r\n> > Then, the SQL in the function fetch_table_list will get the columns in the\r\n> > column list from pg_attribute. (This is to return all columns when the column\r\n> > list is not specified)\r\n> >\r\n> \r\n> I meant, for example, if the different publications specified\r\n> different col-lists for the same table then IIUC the\r\n> fetch_table_lists() is going to return 2 list elements\r\n> (schema,rel_name,row_filter,col_list). 
But when the schema/rel_name\r\n> are the same for 2 elements then (without also a pubid) how are you\r\n> going to know where the list element came from, and how come that is\r\n> not important?\r\n> \r\n> > > ~~\r\n> > >\r\n> > > test_pub=# create table t1(a int, b int, c int);\r\n> > > CREATE TABLE\r\n> > > test_pub=# create publication pub1 for table t1(a) where (a > 99);\r\n> > > CREATE PUBLICATION\r\n> > > test_pub=# create publication pub2 for table t1(a,b) where (b < 33);\r\n> > > CREATE PUBLICATION\r\n> > >\r\n> > > Following seems OK when I swap orders of publication names...\r\n> > >\r\n> > > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\r\n> > > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\r\n> > > ARRAY['pub2','pub1']) gpt(relid, attrs, qual);\r\n> > > relid | attrs | rowfilter\r\n> > > -------+-------+-----------\r\n> > > 16385 | 1 2 | (b < 33)\r\n> > > 16385 | 1 | (a > 99)\r\n> > > (2 rows)\r\n> > >\r\n> > > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\r\n> > > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\r\n> > > ARRAY['pub1','pub2']) gpt(relid, attrs, qual);\r\n> > > relid | attrs | rowfilter\r\n> > > -------+-------+-----------\r\n> > > 16385 | 1 | (a > 99)\r\n> > > 16385 | 1 2 | (b < 33)\r\n> > > (2 rows)\r\n> > >\r\n> > > But what about this (this is similar to the SQL fragment from\r\n> > > fetch_table_list); I swapped the pub names but the results are the\r\n> > > same...\r\n> > >\r\n> > > test_pub=# SELECT pg_get_publication_tables(VARIADIC\r\n> > > array_agg(p.pubname)) from pg_publication p where pubname\r\n> > > IN('pub2','pub1');\r\n> > >\r\n> > > pg_get_publication_tables\r\n> > >\r\n> > > ---------------------------------------------------------------------------------------------\r\n> ----\r\n> > > ---------------------------------------------------------------------\r\n> > > 
---------------------------------------------------------------------------------------------\r\n> ----\r\n> > > ---------------------------------------------------------------------\r\n> > > -------------------------------------------------------------------\r\n> > > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\r\n> > > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\r\n> > > :vartype 23 :vartypmod -1 :var\r\n> > > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\r\n> > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> > > :constbyval true :constisnull false :\r\n> > > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\r\n> > > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\r\n> > > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\r\n> > > :varattno 2 :vartype 23 :vartypmod -1 :v\r\n> > > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\r\n> > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> > > :constbyval true :constisnull false\r\n> > > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\r\n> > > (2 rows)\r\n> > >\r\n> > > test_pub=# SELECT pg_get_publication_tables(VARIADIC\r\n> > > array_agg(p.pubname)) from pg_publication p where pubname\r\n> > > IN('pub1','pub2');\r\n> > >\r\n> > > pg_get_publication_tables\r\n> > >\r\n> > > ---------------------------------------------------------------------------------------------\r\n> ----\r\n> > > ---------------------------------------------------------------------\r\n> > > ---------------------------------------------------------------------------------------------\r\n> ----\r\n> > > ---------------------------------------------------------------------\r\n> > > -------------------------------------------------------------------\r\n> > > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\r\n> > > false :opcollid 0 
:inputcollid 0 :args ({VAR :varno 1 :varattno 1\r\n> > > :vartype 23 :vartypmod -1 :var\r\n> > > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\r\n> > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> > > :constbyval true :constisnull false :\r\n> > > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\r\n> > > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\r\n> > > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\r\n> > > :varattno 2 :vartype 23 :vartypmod -1 :v\r\n> > > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\r\n> > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\r\n> > > :constbyval true :constisnull false\r\n> > > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\r\n> > > (2 rows)\r\n> >\r\n> > I think this is because the usage of SELECT statement. The order seems\r\n> depend\r\n> > on pg_publication. Such as:\r\n> >\r\n> > postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE\r\n> pubname IN ('pub1','pub2');\r\n> > array_agg\r\n> > -------------\r\n> > {pub1,pub2}\r\n> > (1 row)\r\n> >\r\n> > postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE\r\n> pubname IN ('pub2','pub1');\r\n> > array_agg\r\n> > -------------\r\n> > {pub1,pub2}\r\n> > (1 row)\r\n> >\r\n> \r\n> Right, so I felt it was a bit dubious that the result of the function\r\n> “seems to depend on” something. That’s why I was asking why the list\r\n> elements did not include a pubid. Then a caller could be certain what\r\n> element belonged with what publication. 
It's not quite clear to me why\r\n> that is not important for this patch - but anyway, even if it's not\r\n> necessary for this patch's usage, this is a function that is exposed\r\n> to users who might have different needs/expectations than this patch\r\n> has, so shouldn't the result be less fuzzy for them?\r\n\r\nYes, I agree that there may be such a need in the future. Added 'pubid' to the\r\noutput of this function.\r\nBTW, I think the usage of the function pg_get_publication_tables is not\r\ndocumented in the pg-doc now, it doesn't seem to be a function provided to\r\nusers. So I didn't modify the documentation.\r\n\r\nAttach new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 11 Nov 2022 05:43:10 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Oct 21, 2022 at 17:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for HEAD patches v13*\r\n\r\nThanks for your comments.\r\n \r\n> Patch HEAD_v13-0002\r\n> \r\n> 1. Commit message\r\n> \r\n> The following usage scenarios are not described in detail in the manual:\r\n> If one subscription subscribes multiple publications, and these publications\r\n> publish a partitioned table and its partitions respectively. When we specify\r\n> this parameter on one or more of these publications, which identity and schema\r\n> should be used to publish the changes?\r\n> \r\n> In these cases, I think the parameter publish_via_partition_root behave as\r\n> follows:\r\n> \r\n> ~\r\n> \r\n> It seemed worded a bit strangely. 
Also, you said \"on one or more of\r\n> these publications\" but the examples only show only one publication\r\n> having 'publish_via_partition_root'.\r\n> \r\n> SUGGESTION (I've modified the wording slightly but the examples are\r\n> unchanged).\r\n> \r\n> Assume a subscription is subscribing to multiple publications, and\r\n> these publications publish a partitioned table and its partitions\r\n> respectively:\r\n> \r\n> [publisher-side]\r\n> create table parent (a int primary key) partition by range (a);\r\n> create table child partition of parent default;\r\n> \r\n> create publication pub1 for table parent;\r\n> create publication pub2 for table child;\r\n> \r\n> [subscriber-side]\r\n> create subscription sub connection 'xxxx' publication pub1, pub2;\r\n> \r\n> The manual does not clearly describe the behaviour when the user had\r\n> specified the parameter 'publish_via_partition_root' on just one of\r\n> the publications. This patch modifies documentation to clarify the\r\n> following rules:\r\n> \r\n> - If the parameter publish_via_partition_root is specified only in pub1,\r\n> changes will be published using the identity and schema of the table 'parent'.\r\n> \r\n> - If the parameter publish_via_partition_root is specified only in pub2,\r\n> changes will be published using the identity and schema of the table 'child'.\r\n\r\nImproved as suggested.\r\n\r\n> ~~~\r\n> \r\n> 2.\r\n> \r\n> - If the parameter publish_via_partition_root is specified only in pub2,\r\n> changes will be published using the identity and schema of the table child.\r\n> \r\n> ~\r\n> \r\n> Is that right though? 
This rule seems 100% contrary to the meaning of\r\n> 'publish_via_partition_root=true'.\r\n\r\nYes, I think this behaviour fits the meaning of publish_via_partition_root.\r\nPlease refer to this description in the document:\r\n```\r\nThis parameter determines whether changes in a partitioned table (or on its\r\npartitions) contained in the publication will be published ...\r\n```\r\n\r\nSo I think for 'publish_via_partition_root' to work, the partitioned table must\r\nbe specified in this publication.\r\n\r\nSince only one member (partition table 'child') of this partition tree\r\n('parent', 'child') is specified in 'pub2', even if 'pub2' specifies the\r\nparameter 'publish_via_partition_root', 'pub2' will publish changes using the\r\nidentity and schema of the table 'child'.\r\n\r\n> ------\r\n> \r\n> 3. doc/src/sgml/ref/create_publication.sgml\r\n> \r\n> + <para>\r\n> + If a root partitioned table is published by any subscribed\r\n> publications which\r\n> + set publish_via_partition_root = true, changes on this root\r\n> partitioned table\r\n> + (or on its partitions) will be published using the identity\r\n> and schema of this\r\n> + root partitioned table rather than that of the individual partitions.\r\n> + </para>\r\n> \r\n> This seems to only describe the first example from the commit message.\r\n> What about some description to explain the second example?\r\n\r\nI think the second example is already described in the pg-doc (please see the\r\nreply to #2). I am not quite sure if additional modifications are required. 
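
To make the behaviour concrete, here is a hypothetical psql sketch reusing the
object names from the example quoted above (illustration only; the expected
rows are abbreviated and their order may differ):

```sql
-- parent/child and pub1/pub2 follow the example earlier in this thread.
CREATE TABLE parent (a int PRIMARY KEY) PARTITION BY RANGE (a);
CREATE TABLE child PARTITION OF parent DEFAULT;

-- publish_via_partition_root is specified only on the root's publication.
CREATE PUBLICATION pub1 FOR TABLE parent
    WITH (publish_via_partition_root = true);
CREATE PUBLICATION pub2 FOR TABLE child;

SELECT pubname, tablename FROM pg_publication_tables;
-- Expected: pub1 -> parent (changes use the root's identity and schema),
--           pub2 -> child  (changes use the leaf's identity and schema).
```
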
Do\r\nyou have any suggestions?\r\n\r\nThe new patch set was attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275FB5397C6A647F262A3A69E009%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Fri, 11 Nov 2022 05:44:53 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, Nov 8, 2022 at 12:12 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> On Monday, October 17, 2022 2:49 PM Wang, Wei/王 威\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > Attach the new patch set.\r\n> Hi, thank you for posting the new patches.\r\n> \r\n> \r\n> Here are minor comments on the HEAD_v13-0002.\r\n\r\nThanks for your comments.\r\n\r\n> (1) Suggestion for the document description\r\n> \r\n> + <para>\r\n> + If a root partitioned table is published by any subscribed publications\r\n> which\r\n> + set publish_via_partition_root = true, changes on this root partitioned\r\n> table\r\n> + (or on its partitions) will be published using the identity and schema of this\r\n> + root partitioned table rather than that of the individual partitions.\r\n> + </para>\r\n> +\r\n> \r\n> I suppose this sentence looks quite similar to the one in the previous paragraph\r\n> and can be adjusted.\r\n> \r\n> IIUC the main value of the patch is to clarify what happens when\r\n> we mix publications of different publish_via_partition_root settings for one\r\n> partition hierarchy.\r\n> If this is true, how about below sentence instead of the one above ?\r\n> \r\n> \"\r\n> There can be a case where a subscription combines publications with\r\n> different publish_via_partition_root values for one same partition hierarchy\r\n> (e.g. 
subscribe two publications indicating the root partitioned table and its child\r\n> table respectively).\r\n> In this case, the identity and schema of the root partitioned table take priority.\r\n> \"\r\n\r\nThanks for your suggestion.\r\nI agree that we should mention that this description is for a case where one\r\nsubscription subscribes to multiple publications. And I think it would be\r\nbetter if we mentioned that the option publish_via_partition_root is specified\r\non a publication that publishes a root partitioned table. So I added the\r\ndescription of this case as you suggested.\r\n\r\n> (2) Better documentation alignment\r\n> \r\n> I think we need to wrap publish_via_partition_root by \"literal\" tag\r\n> in the documentation create_publication.sgml.\r\n\r\nImproved.\r\n\r\nThe new patch set was attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275FB5397C6A647F262A3A69E009%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Fri, 11 Nov 2022 05:45:40 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": " On Fri, 11 Nov 2022 at 11:13, wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Oct 21, 2022 at 17:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> Thanks for your comments. Sorry for not replying in time.\n>\n> > On Mon, Oct 17, 2022 at 4:49 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > On Wed, Oct 5, 2022 at 11:08 AM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > > > Hi Wang-san. Here are my review comments for HEAD_v12-0001 patch.\n> > >\n> > ...\n> > > >\n> > > > 3. 
QUESTION - pg_get_publication_tables / fetch_table_list\n> > > >\n> > > > When the same table is published by different publications (but there\n> > > > are other differences like row-filters/column-lists in each\n> > > > publication) the result tuple of this function does not include the\n> > > > pubid. Maybe the SQL of pg_publication_tables/fetch_table_list() is OK\n> > > > as-is but how does it manage to associate each table with the correct\n> > > > tuple?\n> > > >\n> > > > I know it apparently all seems to work but I’m not how does that\n> > > > happen? Can you explain why a puboid is not needed for the result\n> > > > tuple of this function?\n> > >\n> > > Sorry, I am not sure I understand your question.\n> > > I try to answer your question by explaining the two functions you mentioned:\n> > >\n> > > First, the function pg_get_publication_tables gets the list (see table_infos)\n> > > that included published table and the corresponding publication. Then based\n> > > on this list, the function pg_get_publication_tables returns information\n> > > (scheme, relname, row filter and column list) about the published tables in the\n> > > publications list. It just doesn't return pubid.\n> > >\n> > > Then, the SQL in the function fetch_table_list will get the columns in the\n> > > column list from pg_attribute. (This is to return all columns when the column\n> > > list is not specified)\n> > >\n> >\n> > I meant, for example, if the different publications specified\n> > different col-lists for the same table then IIUC the\n> > fetch_table_lists() is going to return 2 list elements\n> > (schema,rel_name,row_filter,col_list). 
But when the schema/rel_name\n> > are the same for 2 elements then (without also a pubid) how are you\n> > going to know where the list element came from, and how come that is\n> > not important?\n> >\n> > > > ~~\n> > > >\n> > > > test_pub=# create table t1(a int, b int, c int);\n> > > > CREATE TABLE\n> > > > test_pub=# create publication pub1 for table t1(a) where (a > 99);\n> > > > CREATE PUBLICATION\n> > > > test_pub=# create publication pub2 for table t1(a,b) where (b < 33);\n> > > > CREATE PUBLICATION\n> > > >\n> > > > Following seems OK when I swap orders of publication names...\n> > > >\n> > > > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > > > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > > > ARRAY['pub2','pub1']) gpt(relid, attrs, qual);\n> > > > relid | attrs | rowfilter\n> > > > -------+-------+-----------\n> > > > 16385 | 1 2 | (b < 33)\n> > > > 16385 | 1 | (a > 99)\n> > > > (2 rows)\n> > > >\n> > > > test_pub=# SELECT gpt.relid, gpt.attrs, pg_get_expr(gpt.qual,\n> > > > gpt.relid) AS rowfilter from pg_get_publication_tables(VARIADIC\n> > > > ARRAY['pub1','pub2']) gpt(relid, attrs, qual);\n> > > > relid | attrs | rowfilter\n> > > > -------+-------+-----------\n> > > > 16385 | 1 | (a > 99)\n> > > > 16385 | 1 2 | (b < 33)\n> > > > (2 rows)\n> > > >\n> > > > But what about this (this is similar to the SQL fragment from\n> > > > fetch_table_list); I swapped the pub names but the results are the\n> > > > same...\n> > > >\n> > > > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > > > array_agg(p.pubname)) from pg_publication p where pubname\n> > > > IN('pub2','pub1');\n> > > >\n> > > > pg_get_publication_tables\n> > > >\n> > > > ---------------------------------------------------------------------------------------------\n> > ----\n> > > > ---------------------------------------------------------------------\n> > > > 
---------------------------------------------------------------------------------------------\n> > ----\n> > > > ---------------------------------------------------------------------\n> > > > -------------------------------------------------------------------\n> > > > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\n> > > > false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1\n> > > > :vartype 23 :vartypmod -1 :var\n> > > > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > > > :constbyval true :constisnull false :\n> > > > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > > > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > > > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > > > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > > > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > > > :constbyval true :constisnull false\n> > > > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > > > (2 rows)\n> > > >\n> > > > test_pub=# SELECT pg_get_publication_tables(VARIADIC\n> > > > array_agg(p.pubname)) from pg_publication p where pubname\n> > > > IN('pub1','pub2');\n> > > >\n> > > > pg_get_publication_tables\n> > > >\n> > > > ---------------------------------------------------------------------------------------------\n> > ----\n> > > > ---------------------------------------------------------------------\n> > > > ---------------------------------------------------------------------------------------------\n> > ----\n> > > > ---------------------------------------------------------------------\n> > > > -------------------------------------------------------------------\n> > > > (16385,1,\"{OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset\n> > > > false :opcollid 0 
:inputcollid 0 :args ({VAR :varno 1 :varattno 1\n> > > > :vartype 23 :vartypmod -1 :var\n> > > > collid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 47}\n> > > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > > > :constbyval true :constisnull false :\n> > > > location 51 :constvalue 4 [ 99 0 0 0 0 0 0 0 ]}) :location 49}\")\n> > > > (16385,\"1 2\",\"{OPEXPR :opno 97 :opfuncid 66 :opresulttype 16\n> > > > :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1\n> > > > :varattno 2 :vartype 23 :vartypmod -1 :v\n> > > > arcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 49}\n> > > > {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4\n> > > > :constbyval true :constisnull false\n> > > > :location 53 :constvalue 4 [ 33 0 0 0 0 0 0 0 ]}) :location 51}\")\n> > > > (2 rows)\n> > >\n> > > I think this is because the usage of SELECT statement. The order seems\n> > depend\n> > > on pg_publication. Such as:\n> > >\n> > > postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE\n> > pubname IN ('pub1','pub2');\n> > > array_agg\n> > > -------------\n> > > {pub1,pub2}\n> > > (1 row)\n> > >\n> > > postgres=# SELECT array_agg(p.pubname) FROM pg_publication p WHERE\n> > pubname IN ('pub2','pub1');\n> > > array_agg\n> > > -------------\n> > > {pub1,pub2}\n> > > (1 row)\n> > >\n> >\n> > Right, so I felt it was a bit dubious that the result of the function\n> > “seems to depend on” something. That’s why I was asking why the list\n> > elements did not include a pubid. Then a caller could be certain what\n> > element belonged with what publication. It's not quite clear to me why\n> > that is not important for this patch - but anyway, even if it's not\n> > necessary for this patch's usage, this is a function that is exposed\n> > to users who might have different needs/expectations than this patch\n> > has, so shouldn't the result be less fuzzy for them?\n>\n> Yes, I agree that there may be such a need in the future. 
Added 'pubid' to the
> output of this function.
> BTW, I think the usage of the function pg_get_publication_tables is not
> documented in the pg-doc now, it doesn't seem to be a function provided to
> users. So I didn't modify the documentation.
>
> Attach new patches.

Here we have a 'tables' list to store the relids and a 'table_infos'
list which stores the pubid along with the relid. The 'tables' list acts as
a temporary list used to call filter_partitions and then delete the
matching published_rel entries from table_infos. Would it be possible to
directly operate on the table_infos list and remove the temporary 'tables'
list? We might have to implement comparator and deduplication functions and
change the filter_partitions function to work directly on a published_rel
type list.
+ /*
+ * Record the published table and the
corresponding publication so
+ * that we can get row filters and column list later.
+ *
+ * When a table is published by multiple
publications, to obtain
+ * all row filters and column list, the
structure related to this
+ * table will be recorded multiple times.
+ */
+ foreach(lc, pub_elem_tables)
+ {
+ published_rel *table_info =
(published_rel *) malloc(sizeof(published_rel));
+
+ table_info->relid = lfirst_oid(lc);
+ table_info->pubid = pub_elem->oid;
+ table_infos = lappend(table_infos, table_info);
+ }
+
+ tables = list_concat(tables, pub_elem_tables);

Thoughts?

Regards,
Vignesh


", "msg_date": "Sun, 13 Nov 2022 22:25:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Nov 14, 2022 at 0:56 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > Attach new patches.\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> Here we are having tables list to store the relids and table_infos\r\n> list which stores pubid along with relid. 
Here tables list acts as a\r\n> temporary list to get filter_partitions and then delete the\r\n> published_rel from table_infos. Will it be possible to directly\r\n> operate on table_infos list and remove the temporary tables list used.\r\n> We might have to implement comparator, deduplication functions and\r\n> change filter_partitions function to work directly on published_rel\r\n> type list.\r\n> + /\r\n> + * Record the published table and the\r\n> corresponding publication so\r\n> + * that we can get row filters and column list later.\r\n> + *\r\n> + * When a table is published by multiple\r\n> publications, to obtain\r\n> + * all row filters and column list, the\r\n> structure related to this\r\n> + * table will be recorded multiple times.\r\n> + */\r\n> + foreach(lc, pub_elem_tables)\r\n> + {\r\n> + published_rel *table_info =\r\n> (published_rel *) malloc(sizeof(published_rel));\r\n> +\r\n> + table_info->relid = lfirst_oid(lc);\r\n> + table_info->pubid = pub_elem->oid;\r\n> + table_infos = lappend(table_infos, table_info);\r\n> + }\r\n> +\r\n> + tables = list_concat(tables, pub_elem_tables);\r\n> \r\n> Thoughts?\r\n\r\nI think we could only deduplicate published tables per publication to get all\r\nrow filters and column lists for each published table later.\r\nI removed the temporary list 'tables' and modified the API of the function\r\nfilter_partitions to handle published_rel type list.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 16 Nov 2022 08:58:31 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, 16 Nov 2022 at 14:28, wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Nov 14, 2022 at 0:56 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Attach new patches.\n> >\n>\n> Thanks for your comments.\n>\n> > Here we are 
having tables list to store the relids and table_infos\n> > list which stores pubid along with relid. Here tables list acts as a\n> > temporary list to get filter_partitions and then delete the\n> > published_rel from table_infos. Will it be possible to directly\n> > operate on table_infos list and remove the temporary tables list used.\n> > We might have to implement comparator, deduplication functions and\n> > change filter_partitions function to work directly on published_rel\n> > type list.\n> > + /\n> > + * Record the published table and the\n> > corresponding publication so\n> > + * that we can get row filters and column list later.\n> > + *\n> > + * When a table is published by multiple\n> > publications, to obtain\n> > + * all row filters and column list, the\n> > structure related to this\n> > + * table will be recorded multiple times.\n> > + */\n> > + foreach(lc, pub_elem_tables)\n> > + {\n> > + published_rel *table_info =\n> > (published_rel *) malloc(sizeof(published_rel));\n> > +\n> > + table_info->relid = lfirst_oid(lc);\n> > + table_info->pubid = pub_elem->oid;\n> > + table_infos = lappend(table_infos, table_info);\n> > + }\n> > +\n> > + tables = list_concat(tables, pub_elem_tables);\n> >\n> > Thoughts?\n>\n> I think we could only deduplicate published tables per publication to get all\n> row filters and column lists for each published table later.\n> I removed the temporary list 'tables' and modified the API of the function\n> filter_partitions to handle published_rel type list.\n>\n> Attach the new patch set.\n\nThanks for the update patch.\nOne suggestion:\n+/* Records association between publication and published table */\n+typedef struct\n+{\n+ Oid relid; /* OID of published table */\n+ Oid pubid; /* OID of publication\nthat publishes this\n+ * table. 
*/
+} published_rel;
+

+ /*
+ * Record the published table and the
corresponding publication so
+ * that we can get row filters and column list later.
+ *
+ * When a table is published by multiple
publications, to obtain
+ * all row filters and column list, the
structure related to this
+ * table will be recorded multiple times.
+ */
+ foreach(lc, pub_elem_tables)
+ {
+ published_rel *table_info =
(published_rel *) malloc(sizeof(published_rel))
;
+
+ table_info->relid = lfirst_oid(lc);
+ table_info->pubid = pub_elem->oid;
+ table_infos = lappend(table_infos, table_info);
+ }

In this format, if there are n relations in a publication we will store
the pubid n times, and in an all-tables publication there will be many
thousands of tables. We could avoid storing the pubid for every relid;
instead we could represent it like below to avoid storing the publication
id for each table:

+/* Records association between publication and published tables */
+typedef struct
+{
+ List *relids; /* OIDs of the publisher tables */
+ Oid pubid; /* OID of publication that publishes these
+ * tables. */
+} published_rel;

Thoughts?

Regards,
Vignesh


", "msg_date": "Thu, 17 Nov 2022 11:27:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thurs, Nov 17, 2022 at 13:58 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Wed, 16 Nov 2022 at 14:28, wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Mon, Nov 14, 2022 at 0:56 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> > > >\r\n> > > > Attach new patches.\r\n> > >\r\n> >\r\n> > Thanks for your comments.\r\n> >\r\n> > > Here we are having tables list to store the relids and table_infos\r\n> > > list which stores pubid along with relid. 
Here tables list acts as a\r\n> > > temporary list to get filter_partitions and then delete the\r\n> > > published_rel from table_infos. Will it be possible to directly\r\n> > > operate on table_infos list and remove the temporary tables list used.\r\n> > > We might have to implement comparator, deduplication functions and\r\n> > > change filter_partitions function to work directly on published_rel\r\n> > > type list.\r\n> > > + /\r\n> > > + * Record the published table and the\r\n> > > corresponding publication so\r\n> > > + * that we can get row filters and column list later.\r\n> > > + *\r\n> > > + * When a table is published by multiple\r\n> > > publications, to obtain\r\n> > > + * all row filters and column list, the\r\n> > > structure related to this\r\n> > > + * table will be recorded multiple times.\r\n> > > + */\r\n> > > + foreach(lc, pub_elem_tables)\r\n> > > + {\r\n> > > + published_rel *table_info =\r\n> > > (published_rel *) malloc(sizeof(published_rel));\r\n> > > +\r\n> > > + table_info->relid = lfirst_oid(lc);\r\n> > > + table_info->pubid = pub_elem->oid;\r\n> > > + table_infos = lappend(table_infos, table_info);\r\n> > > + }\r\n> > > +\r\n> > > + tables = list_concat(tables, pub_elem_tables);\r\n> > >\r\n> > > Thoughts?\r\n> >\r\n> > I think we could only deduplicate published tables per publication to get all\r\n> > row filters and column lists for each published table later.\r\n> > I removed the temporary list 'tables' and modified the API of the function\r\n> > filter_partitions to handle published_rel type list.\r\n> >\r\n> > Attach the new patch set.\r\n> \r\n> Thanks for the update patch.\r\n\r\nThanks for your comment.\r\n\r\n> One suggestion:\r\n> +/* Records association between publication and published table */\r\n> +typedef struct\r\n> +{\r\n> + Oid relid; /* OID of published table */\r\n> + Oid pubid; /* OID of publication\r\n> that publishes this\r\n> + * table. 
*/\r\n> +} published_rel;\r\n> +\r\n> \r\n> + /*\r\n> + * Record the published table and the\r\n> corresponding publication so\r\n> + * that we can get row filters and column list later.\r\n> + *\r\n> + * When a table is published by multiple\r\n> publications, to obtain\r\n> + * all row filters and column list, the\r\n> structure related to this\r\n> + * table will be recorded multiple times.\r\n> + */\r\n> + foreach(lc, pub_elem_tables)\r\n> + {\r\n> + published_rel *table_info =\r\n> (published_rel *) malloc(sizeof(published_rel));\r\n> +\r\n> + table_info->relid = lfirst_oid(lc);\r\n> + table_info->pubid = pub_elem->oid;\r\n> + table_infos = lappend(table_infos, table_info);\r\n> + }\r\n> \r\n> In this format if there are n relations in publication we will store\r\n> pubid n times, in all tables publication there will many thousands of\r\n> tables. We could avoid storing the pubid for every relid, instead we\r\n> could represent it like below to avoid storing publication id for\r\n> each tables:\r\n> \r\n> +/* Records association between publication and published tables */\r\n> +typedef struct\r\n> +{\r\n> + List *relids, /* OIDs of the publisher tables */\r\n> + Oid pubid; /* OID of publication that publishes this\r\n> + * tables. */\r\n> +}published_rel;\r\n> \r\n> Thoughts?\r\n\r\nI think this complicates the function filter_partitions.\r\nBecause if we use such a node type, I think we need to concatenate 'relids'\r\nlist of each node of the 'table_infos' list in the function filter_partitions\r\nto become a temporary list. 
Then filter this temporary list and process the\r\n'table_infos' list according to the filtering result.\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 17 Nov 2022 07:43:38 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 08:58:31 +0000, wangw.fnst@fujitsu.com wrote:\n> Attach the new patch set.\n\nThis patch causes several of the tests to fail. See e.g.:\n\nhttps://cirrus-ci.com/task/6587624765259776\n\nMost of the failures appear to be due to the main regression tests failing:\nhttps://api.cirrus-ci.com/v1/artifact/task/6587624765259776/testrun/build/testrun/regress/regress/regression.diffs\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/publication.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/publication.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/publication.out\t2023-02-07 20:19:34.318018729 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/publication.out\t2023-02-07 20:22:53.545223026 +0000\n@@ -1657,7 +1657,7 @@\n SELECT * FROM pg_publication_tables;\n pubname | schemaname | tablename | attnames | rowfilter \n ---------+------------+------------+----------+-----------\n- pub | sch2 | tbl1_part1 | {a} | \n+ pub | sch2 | tbl1_part1 | | \n (1 row)\n \n DROP PUBLICATION pub;\n\n\n\n", "msg_date": "Tue, 7 Feb 2023 12:28:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Feb 8, 2023 4:29 AM Andres Freund <andres@anarazel.de> wrote:\r\n> Hi,\r\n> \r\n> On 2022-11-16 08:58:31 +0000, wangw.fnst@fujitsu.com wrote:\r\n> > Attach the new patch set.\r\n> \r\n> This patch causes several of the tests to fail. 
See e.g.:\r\n> \r\n> https://cirrus-ci.com/task/6587624765259776\r\n> \r\n> Most of the failures appear to be due to the main regression tests failing:\r\n> https://api.cirrus-\r\n> ci.com/v1/artifact/task/6587624765259776/testrun/build/testrun/regress/regres\r\n> s/regression.diffs\r\n> \r\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/publication.out\r\n> /tmp/cirrus-ci-build/build/testrun/regress/regress/results/publication.out\r\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/publication.out\t2023-02-\r\n> 07 20:19:34.318018729 +0000\r\n> +++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/publication.out\r\n> \t2023-02-07 20:22:53.545223026 +0000\r\n> @@ -1657,7 +1657,7 @@\r\n> SELECT * FROM pg_publication_tables;\r\n> pubname | schemaname | tablename | attnames | rowfilter\r\n> ---------+------------+------------+----------+-----------\r\n> - pub | sch2 | tbl1_part1 | {a} |\r\n> + pub | sch2 | tbl1_part1 | |\r\n> (1 row)\r\n> \r\n> DROP PUBLICATION pub;\r\n\r\nThanks for your kind reminder and analysis.\r\n\r\nI think this failure is caused by the recently commit (b7ae039) in the current\r\nHEAD. Rebased the patch set and attach them.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Wed, 8 Feb 2023 03:51:08 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Feb 8, 2023 at 9:21 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> I think this failure is caused by the recently commit (b7ae039) in the current\n> HEAD. 
Rebased the patch set and attach them.\n>\n\n+ if (server_version >= 160000)\n+ {\n+ appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n+ \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\n+ \" FROM pg_attribute a\\n\"\n+ \" WHERE a.attrelid = GPT.relid AND a.attnum > 0 AND\\n\"\n+ \" NOT a.attisdropped AND\\n\"\n+ \" (a.attnum = ANY(GPT.attrs) OR GPT.attrs IS NULL)\\n\"\n+ \" ) AS attnames\\n\"\n+ \" FROM pg_class C\\n\"\n+ \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\n+ \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\narray_agg(pubname::text))).*\\n\"\n+ \" FROM pg_publication\\n\"\n+ \" WHERE pubname IN ( %s )) as GPT\\n\"\n+ \" ON GPT.relid = C.oid\\n\",\n+ pub_names.data);\n\nThe function pg_get_publication_tables() has already handled dropped\ncolumns, so we don't need it here in this query. Also, the part to\nbuild attnames should be the same as it is in view\npg_publication_tables. Can we directly try to pass the list of\npubnames to the function pg_get_publication_tables() instead of\njoining it with pg_publication?\n\nCan we keep the changes in the else part (fix when publisher < 16) the\nsame as HEAD and move the proposed change to a separate patch?\nBasically, for the HEAD patch, let's just try to fix this when\npublisher >=16. 
I am slightly worried that as this is a corner case\nbug and we didn't see any user complaints for this, so introducing a\ncomplex fix for back branches may not be required or at least we can\ndiscuss that separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Mar 2023 17:55:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Mar 16, 2023 at 20:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments.\r\n\r\n> + if (server_version >= 160000)\r\n> + {\r\n> + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\r\n> + \" FROM pg_attribute a\\n\"\r\n> + \" WHERE a.attrelid = GPT.relid AND a.attnum > 0 AND\\n\"\r\n> + \" NOT a.attisdropped AND\\n\"\r\n> + \" (a.attnum = ANY(GPT.attrs) OR GPT.attrs IS NULL)\\n\"\r\n> + \" ) AS attnames\\n\"\r\n> + \" FROM pg_class C\\n\"\r\n> + \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\r\n> + \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\r\n> array_agg(pubname::text))).*\\n\"\r\n> + \" FROM pg_publication\\n\"\r\n> + \" WHERE pubname IN ( %s )) as GPT\\n\"\r\n> + \" ON GPT.relid = C.oid\\n\",\r\n> + pub_names.data);\r\n> \r\n> The function pg_get_publication_tables() has already handled dropped\r\n> columns, so we don't need it here in this query. Also, the part to\r\n> build attnames should be the same as it is in view\r\n> pg_publication_tables.\r\n\r\nAgree. Changed.\r\n\r\n> Can we directly try to pass the list of\r\n> pubnames to the function pg_get_publication_tables() instead of\r\n> joining it with pg_publication?\r\n\r\nChanged.\r\nI think the aim of joining it with pg_publication before is to exclude\r\nnon-existing publications. 
Otherwise, we would get an error because of the call\r\nto function GetPublicationByName (with 'missing_ok = false') in function\r\npg_get_publication_tables. So, I changed \"missing_ok\" to true. If anyone doesn't\r\nlike this change, I'll reconsider this in the next version.\r\n\r\n> Can we keep the changes in the else part (fix when publisher < 16) the\r\n> same as HEAD and move the proposed change to a separate patch?\r\n> Basically, for the HEAD patch, let's just try to fix this when\r\n> publisher >=16. I am slightly worried that as this is a corner case\r\n> bug and we didn't see any user complaints for this, so introducing a\r\n> complex fix for back branches may not be required or at least we can\r\n> discuss that separately.\r\n\r\nSplit the patch as suggested.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Fri, 17 Mar 2023 06:28:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 17, 2023 at 11:58 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thu, Mar 16, 2023 at 20:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Thanks for your comments.\n>\n> > + if (server_version >= 160000)\n> > + {\n> > + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n> > + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\n> > + \" FROM pg_attribute a\\n\"\n> > + \" WHERE a.attrelid = GPT.relid AND a.attnum > 0 AND\\n\"\n> > + \" NOT a.attisdropped AND\\n\"\n> > + \" (a.attnum = ANY(GPT.attrs) OR GPT.attrs IS NULL)\\n\"\n> > + \" ) AS attnames\\n\"\n> > + \" FROM pg_class C\\n\"\n> > + \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\n> > + \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\n> > array_agg(pubname::text))).*\\n\"\n> > + \" FROM pg_publication\\n\"\n> > + \" WHERE pubname IN ( %s )) as 
GPT\\n\"\n> > + \" ON GPT.relid = C.oid\\n\",\n> > + pub_names.data);\n> >\n> > The function pg_get_publication_tables() has already handled dropped\n> > columns, so we don't need it here in this query. Also, the part to\n> > build attnames should be the same as it is in view\n> > pg_publication_tables.\n>\n> Agree. Changed.\n>\n> > Can we directly try to pass the list of\n> > pubnames to the function pg_get_publication_tables() instead of\n> > joining it with pg_publication?\n>\n> Changed.\n> I think the aim of joining it with pg_publication before is to exclude\n> non-existing publications.\n>\n\nOkay, A comment for that would have made it clear.\n\n> Otherwise, we would get an error because of the call\n> to function GetPublicationByName (with 'missing_ok = false') in function\n> pg_get_publication_tables. So, I changed \"missing_ok\" to true. If anyone doesn't\n> like this change, I'll reconsider this in the next version.\n>\n\nI am not sure about changing missing_ok behavior. Did you check it for\nany other similar usage in other functions?\n\n+ foreach(lc, pub_elem_tables)\n+ {\n+ published_rel *table_info = (published_rel *) malloc(sizeof(published_rel));\n\nIs there a reason to use malloc instead of palloc?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Mar 2023 17:37:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Mar 16, 2023 at 11:28 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n> Attach the new patch set.\n\nHi,\n\nI ran into this problem while hacking on [1], so thank you for tackling\nit! 
I have no strong opinions on the implementation itself; I just want\nto register a concern that the tests have not kept up with the\nimplementation complexity.\n\nFor example, the corner case mentioned in 0003, with multiple\npublications having conflicting pubviaroot settings, isn't tested as far\nas I can see. (I checked manually, and it appears to work as intended.)\nAnd the related pub_lower_level test currently only covers the case\nwhere multiple publications have pubviaroot=true, so the following test\ncomment is now misleading:\n\n> # for tab4, we publish changes through the \"middle\" partitioned table\n> $node_publisher->safe_psql('postgres',\n> \t\"CREATE PUBLICATION pub_lower_level FOR TABLE tab4_1 WITH (publish_via_partition_root = true)\"\n> );\n\n...since the changes are now in fact published via the tab4 root after\nthis patchset is applied.\n\n> I think the aim of joining it with pg_publication before is to exclude\n> non-existing publications. Otherwise, we would get an error because of the call\n> to function GetPublicationByName (with 'missing_ok = false') in function\n> pg_get_publication_tables.\n\nIn the same vein, I don't think that case is covered anywhere.\n\nThere are a bunch of moving parts and hidden subtleties here, and I fell\ninto a few traps when I was working on my patch, so it'd be nice to have\nadditional coverage. 
I'm happy to contribute effort in that area if it's\nhelpful.\n\nThanks!\n--Jacob\n\n[1] https://postgr.es/m/dc57f088-039b-7a71-8f4c-082ef106246e%40timescale.com\n\n\n", "msg_date": "Fri, 17 Mar 2023 16:36:53 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Sat, Mar 18, 2023 at 5:06 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Thu, Mar 16, 2023 at 11:28 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> > Attach the new patch set.\n>\n> Hi,\n>\n> I ran into this problem while hacking on [1], so thank you for tackling\n> it! I have no strong opinions on the implementation itself; I just want\n> to register a concern that the tests have not kept up with the\n> implementation complexity.\n>\n> For example, the corner case mentioned in 0003, with multiple\n> publications having conflicting pubviaroot settings, isn't tested as far\n> as I can see. (I checked manually, and it appears to work as intended.)\n> And the related pub_lower_level test currently only covers the case\n> where multiple publications have pubviaroot=true, so the following test\n> comment is now misleading:\n>\n> > # for tab4, we publish changes through the \"middle\" partitioned table\n> > $node_publisher->safe_psql('postgres',\n> > \"CREATE PUBLICATION pub_lower_level FOR TABLE tab4_1 WITH (publish_via_partition_root = true)\"\n> > );\n>\n> ...since the changes are now in fact published via the tab4 root after\n> this patchset is applied.\n>\n> > I think the aim of joining it with pg_publication before is to exclude\n> > non-existing publications. 
Otherwise, we would get an error because of the call\n> > to function GetPublicationByName (with 'missing_ok = false') in function\n> > pg_get_publication_tables.\n>\n> In the same vein, I don't think that case is covered anywhere.\n>\n\nWe can have a test case to cover this scenario.\n\n> There are a bunch of moving parts and hidden subtleties here, and I fell\n> into a few traps when I was working on my patch, so it'd be nice to have\n> additional coverage. I'm happy to contribute effort in that area if it's\n> helpful.\n>\n\nI think it depends on what tests you have in mind. I suggest you can\npropose a patch to cover tests for this are in a separate thread. We\ncan then evaluate those separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 18 Mar 2023 10:14:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 17, 2023 at 20:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Mar 17, 2023 at 11:58 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thu, Mar 16, 2023 at 20:25 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> >\r\n> > Thanks for your comments.\r\n> >\r\n> > > + if (server_version >= 160000)\r\n> > > + {\r\n> > > + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> > > + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\r\n> > > + \" FROM pg_attribute a\\n\"\r\n> > > + \" WHERE a.attrelid = GPT.relid AND a.attnum > 0 AND\\n\"\r\n> > > + \" NOT a.attisdropped AND\\n\"\r\n> > > + \" (a.attnum = ANY(GPT.attrs) OR GPT.attrs IS NULL)\\n\"\r\n> > > + \" ) AS attnames\\n\"\r\n> > > + \" FROM pg_class C\\n\"\r\n> > > + \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\r\n> > > + \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\r\n> > > array_agg(pubname::text))).*\\n\"\r\n> > > + 
\" FROM pg_publication\\n\"\r\n> > > + \" WHERE pubname IN ( %s )) as GPT\\n\"\r\n> > > + \" ON GPT.relid = C.oid\\n\",\r\n> > > + pub_names.data);\r\n> > >\r\n> > > The function pg_get_publication_tables() has already handled dropped\r\n> > > columns, so we don't need it here in this query. Also, the part to\r\n> > > build attnames should be the same as it is in view\r\n> > > pg_publication_tables.\r\n> >\r\n> > Agree. Changed.\r\n> >\r\n> > > Can we directly try to pass the list of\r\n> > > pubnames to the function pg_get_publication_tables() instead of\r\n> > > joining it with pg_publication?\r\n> >\r\n> > Changed.\r\n> > I think the aim of joining it with pg_publication before is to exclude\r\n> > non-existing publications.\r\n> >\r\n> \r\n> Okay, A comment for that would have made it clear.\r\n\r\nMake sense. Added the comment atop the query.\r\n\r\n> > Otherwise, we would get an error because of the call\r\n> > to function GetPublicationByName (with 'missing_ok = false') in function\r\n> > pg_get_publication_tables. So, I changed \"missing_ok\" to true. If anyone doesn't\r\n> > like this change, I'll reconsider this in the next version.\r\n> >\r\n> \r\n> I am not sure about changing missing_ok behavior. Did you check it for\r\n> any other similar usage in other functions?\r\n\r\nAfter reviewing the pg_get_* functions in the 'pg_proc.dat' file, I think most\r\nof them ignore incorrect input, such as the function pg_get_indexdef. However,\r\nsome functions, such as pg_get_serial_sequence and pg_get_object_address, will\r\nreport an error. So, I think it's better to discuss this in a separate thread.\r\nReverted this modification. And I will start a new separate thread for this\r\nlater.\r\n\r\n> + foreach(lc, pub_elem_tables)\r\n> + {\r\n> + published_rel *table_info = (published_rel *) malloc(sizeof(published_rel));\r\n> \r\n> Is there a reason to use malloc instead of palloc?\r\n\r\nNo. 
I think we need to use palloc here.\r\nChanged.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Mon, 20 Mar 2023 00:53:56 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for v17-0001.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n1. filter_partitions\n\n-static List *\n-filter_partitions(List *relids)\n+static void\n+filter_partitions(List *table_infos)\n {\n- List *result = NIL;\n ListCell *lc;\n- ListCell *lc2;\n\n- foreach(lc, relids)\n+ foreach(lc, table_infos)\n {\n- bool skip = false;\n- List *ancestors = NIL;\n- Oid relid = lfirst_oid(lc);\n+ bool skip = false;\n+ List *ancestors = NIL;\n+ ListCell *lc2;\n+ published_rel *table_info = (published_rel *) lfirst(lc);\n\n- if (get_rel_relispartition(relid))\n- ancestors = get_partition_ancestors(relid);\n+ if (get_rel_relispartition(table_info->relid))\n+ ancestors = get_partition_ancestors(table_info->relid);\n\n foreach(lc2, ancestors)\n {\n Oid ancestor = lfirst_oid(lc2);\n+ ListCell *lc3;\n\n /* Check if the parent table exists in the published table list. */\n- if (list_member_oid(relids, ancestor))\n+ foreach(lc3, table_infos)\n {\n- skip = true;\n- break;\n+ Oid relid = ((published_rel *) lfirst(lc3))->relid;\n+\n+ if (relid == ancestor)\n+ {\n+ skip = true;\n+ break;\n+ }\n }\n+\n+ if (skip)\n+ break;\n }\n\n- if (!skip)\n- result = lappend_oid(result, relid);\n+ if (skip)\n+ table_infos = foreach_delete_current(table_infos, lc);\n }\n-\n- return result;\n }\n\n\nIt seems the 'skip' and 'ancestors' and 'lc2' vars are not needed\nexcept when \"if (get_rel_relispartition(table_info->relid))\" is true,\nso won't it be better to restructure the code to put everything inside\nthat condition. 
Then you will save a few unnecessary tests of\nforeach(lc2, ancestors) and (skip).\n\nFor example,\n\nstatic void\nfilter_partitions(List *table_infos)\n{\nListCell *lc;\n\nforeach(lc, table_infos)\n{\npublished_rel *table_info = (published_rel *) lfirst(lc);\n\nif (get_rel_relispartition(table_info->relid))\n{\nbool skip = false;\nList *ancestors = get_partition_ancestors(table_info->relid);\nListCell *lc2;\n\nforeach(lc2, ancestors)\n{\nOid ancestor = lfirst_oid(lc2);\nListCell *lc3;\n/* Check if the parent table exists in the published table list. */\nforeach(lc3, table_infos)\n{\nOid relid = ((published_rel *) lfirst(lc3))->relid;\n\nif (relid == ancestor)\n{\nskip = true;\nbreak;\n}\n}\nif (skip)\nbreak;\n}\n\nif (skip)\ntable_infos = foreach_delete_current(table_infos, lc);\n}\n}\n}\n\n~~~\n\n2. pg_get_publication_tables\n\n+ else\n+ {\n+ List *relids,\n+ *schemarelids;\n+\n+ relids = GetPublicationRelations(pub_elem->oid,\n+ pub_elem->pubviaroot ?\n+ PUBLICATION_PART_ROOT :\n+ PUBLICATION_PART_LEAF);\n+ schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,\n+ pub_elem->pubviaroot ?\n+ PUBLICATION_PART_ROOT :\n+ PUBLICATION_PART_LEAF);\n+ pub_elem_tables = list_concat_unique_oid(relids, schemarelids);\n+ }\n\n2a.\nMaybe 'schema_relids' would be a better name than 'schemareliids'?\n\n~\n\n2b.\nBy introducing another variable maybe you could remove some of this\nduplicated code.\n\nPublicationPartOpt root_or_leaf = pub_elem->pubviaroot ?\nPUBLICATION_PART_ROOT : PUBLICATION_PART_LEAF;\n\n~~~\n\n3. pg_get_publication_tables\n\n /* Show all columns when the column list is not specified. 
*/\n- if (nulls[1] == true)\n+ if (nulls[2] == true)\n\nSince you are changing this line anyway, you might as well change it\nto remove the redundant \"== true\" part.\n\nSUGGESTION\nif (nulls[2])\n\n======\nsrc/include/catalog/pg_proc.dat\n\n4.\n+{ oid => '6119',\n+ descr => 'get information of the tables in the given publication array',\n\nShould that be worded in a way to make it more clear that the\n\"publication array\" is really an \"array of publication names\"?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 20 Mar 2023 18:32:15 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 1:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> 2. pg_get_publication_tables\n>\n> + else\n> + {\n> + List *relids,\n> + *schemarelids;\n> +\n> + relids = GetPublicationRelations(pub_elem->oid,\n> + pub_elem->pubviaroot ?\n> + PUBLICATION_PART_ROOT :\n> + PUBLICATION_PART_LEAF);\n> + schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,\n> + pub_elem->pubviaroot ?\n> + PUBLICATION_PART_ROOT :\n> + PUBLICATION_PART_LEAF);\n> + pub_elem_tables = list_concat_unique_oid(relids, schemarelids);\n> + }\n>\n> 2a.\n> Maybe 'schema_relids' would be a better name than 'schemareliids'?\n>\n> ~\n>\n> 2b.\n> By introducing another variable maybe you could remove some of this\n> duplicated code.\n>\n> PublicationPartOpt root_or_leaf = pub_elem->pubviaroot ?\n> PUBLICATION_PART_ROOT : PUBLICATION_PART_LEAF;\n>\n\nIIUC, 2b is an existing code, so I would prefer not to change that as\npart of this patch. 
Similarly, for other comments, unless something is\na very clear improvement and makes difference w.r.t this patch, it\nmakes sense to change that, otherwise, let's focus on the current\nissue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 20 Mar 2023 14:23:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 1:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> ======\n> src/include/catalog/pg_proc.dat\n>\n> 4.\n> +{ oid => '6119',\n> + descr => 'get information of the tables in the given publication array',\n>\n> Should that be worded in a way to make it more clear that the\n> \"publication array\" is really an \"array of publication names\"?\n>\n\nI don't know how important it is to tell that the array is an array of\npublication names but the current description can be improved. How\nabout something like: \"get information of the tables that are part of\nthe specified publications\"\n\nFew other comments:\n=================\n1.\n foreach(lc2, ancestors)\n {\n Oid ancestor = lfirst_oid(lc2);\n+ ListCell *lc3;\n\n /* Check if the parent table exists in the published table list. */\n- if (list_member_oid(relids, ancestor))\n+ foreach(lc3, table_infos)\n {\n- skip = true;\n- break;\n+ Oid relid = ((published_rel *) lfirst(lc3))->relid;\n+\n+ if (relid == ancestor)\n+ {\n+ skip = true;\n+ break;\n+ }\n }\n+\n+ if (skip)\n+ break;\n }\n\n- if (!skip)\n- result = lappend_oid(result, relid);\n+ if (skip)\n+ table_infos = foreach_delete_current(table_infos, lc);\n\nThe usage of skip looks a bit ugly to me. 
Can we move the code for the\ninner loop to a separate function (like\nis_ancestor_member_tableinfos()) and remove the current cell if it\nreturns true?\n\n2.\n * Filter out the partitions whose parent tables were also specified in\n * the publication.\n */\n-static List *\n-filter_partitions(List *relids)\n+static void\n+filter_partitions(List *table_infos)\n\nThe comment atop filter_partitions is no longer valid. Can we slightly\nchange it to: \"Filter out the partitions whose parent tables are also\npresent in the list.\"?\n\n3.\n-# Note: We create two separate tables, not a partitioned one, so that we can\n-# easily identity through which relation were the changes replicated.\n+# Note: We only create one table (tab4) here. We specified\n+# publish_via_partition_root = true (see pub_all and pub_lower_level above), so\n+# all data will be replicated to that table.\n $node_subscriber2->safe_psql('postgres',\n \"CREATE TABLE tab4 (a int PRIMARY KEY)\");\n-$node_subscriber2->safe_psql('postgres',\n- \"CREATE TABLE tab4_1 (a int PRIMARY KEY)\");\n\nI am not sure if it is a good idea to remove tab4_1 here. It is\ntesting something different as mentioned in the comments. 
Also, I\ndon't see any data in tab4 for the initial sync, so not sure if this\ntests the behavior changed by this patch.\n\n4.\n--- a/src/test/subscription/t/031_column_list.pl\n+++ b/src/test/subscription/t/031_column_list.pl\n@@ -959,7 +959,8 @@ $node_publisher->safe_psql(\n CREATE TABLE test_root_1 PARTITION OF test_root FOR VALUES FROM (1) TO (10);\n CREATE TABLE test_root_2 PARTITION OF test_root FOR VALUES FROM (10) TO (20);\n\n- CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\n(publish_via_partition_root = true);\n+ CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\n(publish_via_partition_root = true);\n+ CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\n(publish_via_partition_root = true);\n\n -- initial data\n INSERT INTO test_root VALUES (1, 2, 3);\n@@ -968,7 +969,7 @@ $node_publisher->safe_psql(\n\n $node_subscriber->safe_psql(\n 'postgres', qq(\n- CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\npub_root_true;\n+ CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\npub_root_true_1, pub_root_true_2;\n\nIt is not clear to me what exactly you want to test here. Please add\nsome comments.\n\n5. I think you can merge the 0001 and 0003 patches.\n\nApart from the above, attached is a patch to change some of the\ncomments in the patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 20 Mar 2023 15:44:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Dear Wang,\r\n\r\nI have tested about multilevel partitions, and it worked well.\r\nFollowings are my comments for v18-0001.\r\n\r\n01. pg_get_publication_tables\r\n\r\n```\r\n+ ListCell *lc;\r\n```\r\n\r\nThis definition can be inside of the \"for (i = 0; i < nelems; i++)\".\r\n\r\n02. 
pg_get_publication_tables\r\n\r\n```\r\n- * If the publication publishes partition changes via their\r\n- * respective root partitioned tables, we must exclude partitions\r\n- * in favor of including the root partitioned tables. Otherwise,\r\n- * the function could return both the child and parent tables\r\n- * which could cause data of the child table to be\r\n- * double-published on the subscriber side.\r\n+ * Publications support partitioned tables. If\r\n+ * publish_via_partition_root is false, all changes are replicated\r\n+ * using leaf partition identity and schema, so we only need those.\r\n+ * Otherwise, get the partitioned table itself.\r\n```\r\n\r\nThe comments can be inside of the \"else\".\r\n\r\n03. pg_get_publication_tables\r\n\r\n```\r\n+ pfree(elems);\r\n```\r\n\r\nOnly elems is pfree()'d here, but how about other variable like pub_elem and pub_elem_tables?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 20 Mar 2023 13:17:52 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 17, 2023 at 9:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > There are a bunch of moving parts and hidden subtleties here, and I fell\n> > into a few traps when I was working on my patch, so it'd be nice to have\n> > additional coverage. I'm happy to contribute effort in that area if it's\n> > helpful.\n>\n> I think it depends on what tests you have in mind.\n\nJust the ones I mentioned, to start with.\n\n> I suggest you can\n> propose a patch to cover tests for this are in a separate thread. We\n> can then evaluate those separately.\n\nTo confirm -- you want me to start a new thread for tests for this\npatchset? 
(Tests written against HEAD would likely be obsoleted by\nthis change.)\n\n--Jacob\n\n\n", "msg_date": "Mon, 20 Mar 2023 10:52:46 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 11:22 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Mar 17, 2023 at 9:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > There are a bunch of moving parts and hidden subtleties here, and I fell\n> > > into a few traps when I was working on my patch, so it'd be nice to have\n> > > additional coverage. I'm happy to contribute effort in that area if it's\n> > > helpful.\n> >\n> > I think it depends on what tests you have in mind.\n>\n> Just the ones I mentioned, to start with.\n>\n> > I suggest you can\n> > propose a patch to cover tests for this are in a separate thread. We\n> > can then evaluate those separately.\n>\n> To confirm -- you want me to start a new thread for tests for this\n> patchset? (Tests written against HEAD would likely be obsoleted by\n> this change.)\n>\n\nIf the tests you have in mind are only related to this patch set then\nfeel free to propose them here if you feel the current ones are not\nsufficient. 
I just want to be cautious that we shouldn't spend too\nmuch time adding additional tests which are related to the base\nfunctionality as we have left with less time for the last CF and I\nwould like to push the change for HEAD before that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Mar 2023 11:51:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 18:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments.\r\n\r\n> On Mon, Mar 20, 2023 at 1:02 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> >\r\n> > ======\r\n> > src/include/catalog/pg_proc.dat\r\n> >\r\n> > 4.\r\n> > +{ oid => '6119',\r\n> > + descr => 'get information of the tables in the given publication array',\r\n> >\r\n> > Should that be worded in a way to make it more clear that the\r\n> > \"publication array\" is really an \"array of publication names\"?\r\n> >\r\n> \r\n> I don't know how important it is to tell that the array is an array of\r\n> publication names but the current description can be improved. How\r\n> about something like: \"get information of the tables that are part of\r\n> the specified publications\"\r\n\r\nChanged.\r\n\r\n> Few other comments:\r\n> =================\r\n> 1.\r\n> foreach(lc2, ancestors)\r\n> {\r\n> Oid ancestor = lfirst_oid(lc2);\r\n> + ListCell *lc3;\r\n> \r\n> /* Check if the parent table exists in the published table list. 
*/\r\n> - if (list_member_oid(relids, ancestor))\r\n> + foreach(lc3, table_infos)\r\n> {\r\n> - skip = true;\r\n> - break;\r\n> + Oid relid = ((published_rel *) lfirst(lc3))->relid;\r\n> +\r\n> + if (relid == ancestor)\r\n> + {\r\n> + skip = true;\r\n> + break;\r\n> + }\r\n> }\r\n> +\r\n> + if (skip)\r\n> + break;\r\n> }\r\n> \r\n> - if (!skip)\r\n> - result = lappend_oid(result, relid);\r\n> + if (skip)\r\n> + table_infos = foreach_delete_current(table_infos, lc);\r\n> \r\n> The usage of skip looks a bit ugly to me. Can we move the code for the\r\n> inner loop to a separate function (like\r\n> is_ancestor_member_tableinfos()) and remove the current cell if it\r\n> returns true?\r\n\r\nChanged.\r\n\r\n> 2.\r\n> * Filter out the partitions whose parent tables were also specified in\r\n> * the publication.\r\n> */\r\n> -static List *\r\n> -filter_partitions(List *relids)\r\n> +static void\r\n> +filter_partitions(List *table_infos)\r\n> \r\n> The comment atop filter_partitions is no longer valid. Can we slightly\r\n> change it to: \"Filter out the partitions whose parent tables are also\r\n> present in the list.\"?\r\n\r\nChanged.\r\n\r\n> 3.\r\n> -# Note: We create two separate tables, not a partitioned one, so that we can\r\n> -# easily identity through which relation were the changes replicated.\r\n> +# Note: We only create one table (tab4) here. We specified\r\n> +# publish_via_partition_root = true (see pub_all and pub_lower_level above), so\r\n> +# all data will be replicated to that table.\r\n> $node_subscriber2->safe_psql('postgres',\r\n> \"CREATE TABLE tab4 (a int PRIMARY KEY)\");\r\n> -$node_subscriber2->safe_psql('postgres',\r\n> - \"CREATE TABLE tab4_1 (a int PRIMARY KEY)\");\r\n> \r\n> I am not sure if it is a good idea to remove tab4_1 here. It is\r\n> testing something different as mentioned in the comments. 
Also, I\r\n> don't see any data in tab4 for the initial sync, so not sure if this\r\n> tests the behavior changed by this patch.\r\n\r\nReverted this change. And inserted the initial sync data into table tab4 to test\r\nthis more clearly.\r\n\r\n> 4.\r\n> --- a/src/test/subscription/t/031_column_list.pl\r\n> +++ b/src/test/subscription/t/031_column_list.pl\r\n> @@ -959,7 +959,8 @@ $node_publisher->safe_psql(\r\n> CREATE TABLE test_root_1 PARTITION OF test_root FOR VALUES FROM (1) TO\r\n> (10);\r\n> CREATE TABLE test_root_2 PARTITION OF test_root FOR VALUES FROM (10) TO\r\n> (20);\r\n> \r\n> - CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\r\n> (publish_via_partition_root = true);\r\n> + CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\r\n> (publish_via_partition_root = true);\r\n> + CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\r\n> (publish_via_partition_root = true);\r\n> \r\n> -- initial data\r\n> INSERT INTO test_root VALUES (1, 2, 3);\r\n> @@ -968,7 +969,7 @@ $node_publisher->safe_psql(\r\n> \r\n> $node_subscriber->safe_psql(\r\n> 'postgres', qq(\r\n> - CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\r\n> pub_root_true;\r\n> + CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\r\n> pub_root_true_1, pub_root_true_2;\r\n> \r\n> It is not clear to me what exactly you want to test here. 
Please add\r\n> some comments.\r\n\r\nTried to add the following comment to make it clear:\r\n```\r\n+# Subscribe to pub_root_true_1 and pub_root_true_2 at the same time, which\r\n+# means that the initial data will be synced once, and only the column list of\r\n+# the parent table (test_root) in the publication pub_root_true_1 will be used\r\n+# for both table sync and data replication.\r\n $node_subscriber->safe_psql(\r\n \t'postgres', qq(\r\n-\tCREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub_root_true;\r\n+\tCREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub_root_true_1, pub_root_true_2;\r\n```\r\n\r\n> 5. I think you can merge the 0001 and 0003 patches.\r\n\r\nMerged.\r\n\r\n> Apart from the above, attached is a patch to change some of the\r\n> comments in the patch.\r\n\r\nThanks for this improvement. I've checked and merged it.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Tue, 21 Mar 2023 07:40:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Sat, Mar 18, 2023 at 7:37 AM Jacob Champion <jchampion@timescale.com> wrote:\r\n> On Thu, Mar 16, 2023 at 11:28 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > Attach the new patch set.\r\n\r\nThanks for your comments and testing.\r\n\r\n> For example, the corner case mentioned in 0003, with multiple\r\n> publications having conflicting pubviaroot settings, isn't tested as far\r\n> as I can see. 
(I checked manually, and it appears to work as intended.)\r\n> And the related pub_lower_level test currently only covers the case\r\n> where multiple publications have pubviaroot=true, so the following test\r\n> comment is now misleading:\r\n> \r\n> > # for tab4, we publish changes through the \"middle\" partitioned table\r\n> > $node_publisher->safe_psql('postgres',\r\n> > \t\"CREATE PUBLICATION pub_lower_level FOR TABLE tab4_1 WITH\r\n> (publish_via_partition_root = true)\"\r\n> > );\r\n> \r\n> ...since the changes are now in fact published via the tab4 root after\r\n> this patchset is applied.\r\n\r\nMake sense.\r\nTried to improve this comment like below:\r\n```\r\nIf we subscribe only to pub_lower_level, changes for tab4 will be published\r\nthrough the \"middle\" partition table. However, since we will be subscribing to\r\nboth pub_lower_level and pub_all (see subscription sub2 below), we will publish\r\nchanges via the root table (tab4).\r\n```\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 21 Mar 2023 07:40:16 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 15:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v17-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> src/backend/catalog/pg_publication.c\r\n> \r\n> 1. 
filter_partitions\r\n> \r\n> -static List *\r\n> -filter_partitions(List *relids)\r\n> +static void\r\n> +filter_partitions(List *table_infos)\r\n> {\r\n> - List *result = NIL;\r\n> ListCell *lc;\r\n> - ListCell *lc2;\r\n> \r\n> - foreach(lc, relids)\r\n> + foreach(lc, table_infos)\r\n> {\r\n> - bool skip = false;\r\n> - List *ancestors = NIL;\r\n> - Oid relid = lfirst_oid(lc);\r\n> + bool skip = false;\r\n> + List *ancestors = NIL;\r\n> + ListCell *lc2;\r\n> + published_rel *table_info = (published_rel *) lfirst(lc);\r\n> \r\n> - if (get_rel_relispartition(relid))\r\n> - ancestors = get_partition_ancestors(relid);\r\n> + if (get_rel_relispartition(table_info->relid))\r\n> + ancestors = get_partition_ancestors(table_info->relid);\r\n> \r\n> foreach(lc2, ancestors)\r\n> {\r\n> Oid ancestor = lfirst_oid(lc2);\r\n> + ListCell *lc3;\r\n> \r\n> /* Check if the parent table exists in the published table list. */\r\n> - if (list_member_oid(relids, ancestor))\r\n> + foreach(lc3, table_infos)\r\n> {\r\n> - skip = true;\r\n> - break;\r\n> + Oid relid = ((published_rel *) lfirst(lc3))->relid;\r\n> +\r\n> + if (relid == ancestor)\r\n> + {\r\n> + skip = true;\r\n> + break;\r\n> + }\r\n> }\r\n> +\r\n> + if (skip)\r\n> + break;\r\n> }\r\n> \r\n> - if (!skip)\r\n> - result = lappend_oid(result, relid);\r\n> + if (skip)\r\n> + table_infos = foreach_delete_current(table_infos, lc);\r\n> }\r\n> -\r\n> - return result;\r\n> }\r\n> \r\n> \r\n> It seems the 'skip' and 'ancestors' and 'lc2' vars are not needed\r\n> except when \"if (get_rel_relispartition(table_info->relid))\" is true,\r\n> so won't it be better to restructure the code to put everything inside\r\n> that condition. 
Then you will save a few unnecessary tests of\r\n> foreach(lc2, ancestors) and (skip).\r\n> \r\n> For example,\r\n> \r\n> static void\r\n> filter_partitions(List *table_infos)\r\n> {\r\n> ListCell *lc;\r\n> \r\n> foreach(lc, table_infos)\r\n> {\r\n> published_rel *table_info = (published_rel *) lfirst(lc);\r\n> \r\n> if (get_rel_relispartition(table_info->relid))\r\n> {\r\n> bool skip = false;\r\n> List *ancestors = get_partition_ancestors(table_info->relid);\r\n> ListCell *lc2;\r\n> \r\n> foreach(lc2, ancestors)\r\n> {\r\n> Oid ancestor = lfirst_oid(lc2);\r\n> ListCell *lc3;\r\n> /* Check if the parent table exists in the published table list. */\r\n> foreach(lc3, table_infos)\r\n> {\r\n> Oid relid = ((published_rel *) lfirst(lc3))->relid;\r\n> \r\n> if (relid == ancestor)\r\n> {\r\n> skip = true;\r\n> break;\r\n> }\r\n> }\r\n> if (skip)\r\n> break;\r\n> }\r\n> \r\n> if (skip)\r\n> table_infos = foreach_delete_current(table_infos, lc);\r\n> }\r\n> }\r\n> }\r\n\r\nRefactored this part of code based on other comments.\r\n\r\n> ~~~\r\n> \r\n> 3. pg_get_publication_tables\r\n> \r\n> /* Show all columns when the column list is not specified. */\r\n> - if (nulls[1] == true)\r\n> + if (nulls[2] == true)\r\n> \r\n> Since you are changing this line anyway, you might as well change it\r\n> to remove the redundant \"== true\" part.\r\n> \r\n> SUGGESTION\r\n> if (nulls[2])\r\n\r\nChanged.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 21 Mar 2023 07:41:04 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 21:18 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> I have tested about multilevel partitions, and it worked well.\r\n> Followings are my comments for v18-0001.\r\n\r\nThanks for your comments and testing.\r\n\r\n> 01. 
pg_get_publication_tables\r\n> \r\n> ```\r\n> + ListCell *lc;\r\n> ```\r\n> \r\n> This definition can be inside of the \"for (i = 0; i < nelems; i++)\".\r\n\r\nChanged.\r\n\r\n> 02. pg_get_publication_tables\r\n> \r\n> ```\r\n> - * If the publication publishes partition changes via their\r\n> - * respective root partitioned tables, we must exclude partitions\r\n> - * in favor of including the root partitioned tables. Otherwise,\r\n> - * the function could return both the child and parent tables\r\n> - * which could cause data of the child table to be\r\n> - * double-published on the subscriber side.\r\n> + * Publications support partitioned tables. If\r\n> + * publish_via_partition_root is false, all changes are replicated\r\n> + * using leaf partition identity and schema, so we only need those.\r\n> + * Otherwise, get the partitioned table itself.\r\n> ```\r\n> \r\n> The comments can be inside of the \"else\".\r\n\r\nSince I think there are related operations in the function\r\nGetAllTablesPublicationRelations, it might be better to write it above the\r\nif-statement.\r\n\r\n> 03. pg_get_publication_tables\r\n> \r\n> ```\r\n> + pfree(elems);\r\n> ```\r\n> \r\n> Only elems is pfree()'d here, but how about other variable like pub_elem and\r\n> pub_elem_tables?\r\n\r\nAdded releases to these two variables.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 21 Mar 2023 07:42:11 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for patch code of HEAD_v19-0001\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n1.\n+ <para>\n+ There can be a case where a subscription combines multiple\n+ publications. 
If a root partitioned table is published by any\n+ subscribed publications which set\n+ <literal>publish_via_partition_root</literal> = true, changes on this\n+ root partitioned table (or on its partitions) will be published using\n+ the identity and schema of this root partitioned table rather than\n+ that of the individual partitions.\n+ </para>\n\n1a.\nThe paragraph prior to this one just refers to \"partitioned tables\"\ninstead of \"root partitioned table\", so IMO we should continue with\nthe same terminology.\n\nI also modified the remaining text slightly. AFAIK my suggestion\nconveys exactly the same information but is shorter.\n\nSUGGESTION\nThere can be a case where one subscription combines multiple\npublications. If any of those publications has set\n<literal>publish_via_partition_root</literal> = true, then changes in\na partitioned table (or on its partitions) will be published using the\nidentity and schema of the partitioned table.\n\n~\n\n1b.\nShouldn't that paragraph (or possibly somewhere in the CREATE\nSUBSCRIPTION notes?) also explain that in this scenario the logical\nreplication will only publish one set of changes? After all, this is\nthe whole point of the patch, but I am not sure if the user will know\nof this behaviour from the current documentation.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n2. filter_partitions\n\nBEFORE:\nstatic void\nfilter_partitions(List *table_infos)\n{\nListCell *lc;\n\nforeach(lc, table_infos)\n{\nbool skip = false;\nList *ancestors = NIL;\nListCell *lc2;\npublished_rel *table_info = (published_rel *) lfirst(lc);\n\nif (get_rel_relispartition(table_info->relid))\nancestors = get_partition_ancestors(table_info->relid);\n\nforeach(lc2, ancestors)\n{\nOid ancestor = lfirst_oid(lc2);\n\n/* Is ancestor exists in the published table list? 
*/\nif (is_ancestor_member_tableinfos(ancestor, table_infos))\n{\nskip = true;\nbreak;\n}\n}\n\nif (skip)\ntable_infos = foreach_delete_current(table_infos, lc);\n}\n}\n\n~\n\n2a.\nMy previous review [1] (see #1) suggested putting most code within the\ncondition. AFAICT my comment still is applicable but was not yet\naddressed.\n\n2b.\nIMO the comment \"/* Is ancestor exists in the published table list?\n*/\" is unnecessary because it is already clear what is the purpose of\nthe function named \"is_ancestor_member_tableinfos\".\n\n\nSUGGESTION\nstatic void\nfilter_partitions(List *table_infos)\n{\n ListCell *lc;\n\n foreach(lc, table_infos)\n {\n if (get_rel_relispartition(table_info->relid))\n {\n bool skip = false;\n ListCell *lc2;\n published_rel *table_info = (published_rel *) lfirst(lc);\n List *ancestors = get_partition_ancestors(table_info->relid);\n\n foreach(lc2, ancestors)\n {\n Oid ancestor = lfirst_oid(lc2);\n\n if (is_ancestor_member_tableinfos(ancestor, table_infos))\n {\n skip = true;\n break;\n }\n }\n\n if (skip)\n table_infos = foreach_delete_current(table_infos, lc);\n }\n }\n}\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3.\n fetch_table_list(WalReceiverConn *wrconn, List *publications)\n {\n WalRcvExecResult *res;\n- StringInfoData cmd;\n+ StringInfoData cmd,\n+ pub_names;\n TupleTableSlot *slot;\n Oid tableRow[3] = {TEXTOID, TEXTOID, NAMEARRAYOID};\n List *tablelist = NIL;\n- bool check_columnlist = (walrcv_server_version(wrconn) >= 150000);\n+ int server_version = walrcv_server_version(wrconn);\n+ bool check_columnlist = (server_version >= 150000);\n+\n+ initStringInfo(&pub_names);\n+ get_publications_str(publications, &pub_names, true);\n\n initStringInfo(&cmd);\n- appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename \\n\");\n\n- /* Get column lists for each relation if the publisher supports it */\n- if (check_columnlist)\n- appendStringInfoString(&cmd, \", t.attnames\\n\");\n+ /* Get the list of tables from the 
publisher. */\n+ if (server_version >= 160000)\n+ {\n\n~\n\nI think the 'pub_names' is only needed within that \">= 160000\" condition.\n\nSo all the below code can be moved into that scope can't it?\n\n+ StringInfoData pub_names;\n+ initStringInfo(&pub_names);\n+ get_publications_str(publications, &pub_names, true);\n\n+ pfree(pub_names.data);\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuNsvO9o9XzeJuSLsAsndgCKVphDPBRqYuOTy2bR28E%2Bg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 22 Mar 2023 15:50:07 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Dear Wang,\r\n\r\nThank you for updating patch! Following are comments form v19-0001.\r\n\r\n01. logical-replication.sgml\r\n\r\nI found a following statement in logical-replication.sgml. I think this may cause\r\nmis-reading because it's OK when publishers list partitions and publish_via_root is true.\r\n\r\n```\r\n <para>\r\n A subscriber node may have multiple subscriptions if desired. It is\r\n possible to define multiple subscriptions between a single\r\n publisher-subscriber pair, in which case care must be taken to ensure\r\n that the subscribed publication objects don't overlap.\r\n </para>\r\n```\r\n\r\nHow about adding \"If publications are set publish_via_partition_root as true and\r\nthey publish partitions that have same partitioned table, only a change to partitioned\r\ntable is published from the publisher.\"or something like that?\r\n\r\n\r\n02. 
filter_partitions\r\n\r\nIIUC this function can refactor like following to avoid \"skip\" flag.\r\nHow do you think?\r\n\r\n```\r\n@@ -209,7 +209,6 @@ filter_partitions(List *table_infos)\r\n \r\n foreach(lc, table_infos)\r\n {\r\n- bool skip = false;\r\n List *ancestors = NIL;\r\n ListCell *lc2;\r\n published_rel *table_info = (published_rel *) lfirst(lc);\r\n@@ -224,13 +223,10 @@ filter_partitions(List *table_infos)\r\n /* Is ancestor exists in the published table list? */\r\n if (is_ancestor_member_tableinfos(ancestor, table_infos))\r\n {\r\n- skip = true;\r\n+ table_infos = foreach_delete_current(table_infos, lc);\r\n break;\r\n }\r\n }\r\n-\r\n- if (skip)\r\n- table_infos = foreach_delete_current(table_infos, lc);\r\n }\r\n }\r\n```\r\n\r\n03. fetch_table_list\r\n\r\n```\r\n+ /* Get the list of tables from the publisher. */\r\n+ if (server_version >= 160000)\r\n```\r\n\r\nI think boolean variable can be used to check it like check_columnlist.\r\nHow about \"use_extended_function\" or something?\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 22 Mar 2023 06:31:50 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Mar 22, 2023 at 12:50 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch code of HEAD_v19-0001\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> doc/src/sgml/ref/create_publication.sgml\r\n> \r\n> 1.\r\n> + <para>\r\n> + There can be a case where a subscription combines multiple\r\n> + publications. 
If a root partitioned table is published by any\r\n> + subscribed publications which set\r\n> + <literal>publish_via_partition_root</literal> = true, changes on this\r\n> + root partitioned table (or on its partitions) will be published using\r\n> + the identity and schema of this root partitioned table rather than\r\n> + that of the individual partitions.\r\n> + </para>\r\n> \r\n> 1a.\r\n> The paragraph prior to this one just refers to \"partitioned tables\"\r\n> instead of \"root partitioned table\", so IMO we should continue with\r\n> the same terminology.\r\n\r\nChanged.\r\n\r\n> I also modified the remaining text slightly. AFAIK my suggestion\r\n> conveys exactly the same information but is shorter.\r\n> \r\n> SUGGESTION\r\n> There can be a case where one subscription combines multiple\r\n> publications. If any of those publications has set\r\n> <literal>publish_via_partition_root</literal> = true, then changes in\r\n> a partitioned table (or on its partitions) will be published using the\r\n> identity and schema of the partitioned table.\r\n\r\nSorry, I'm not sure about this.\r\nI'm not a native speaker of English, but it seems like the following use case is\r\nnot explained:\r\n```\r\ncreate table t1 (a int primary key) partition by range (a);\r\ncreate table t2 (a int primary key) partition by range (a);\r\ncreate table t3 (a int primary key);\r\nalter table t1 attach partition t2 default;\r\nalter table t2 attach partition t3 default;\r\n\r\ncreate publication p1 for table t1;\r\ncreate publication p2_via for table t2 with(publish_via_partition_root);\r\ncreate publication p3 for table t3;\r\n```\r\nIf we subscribe to p1, p2_via and p3 at the same time, then t2's identity and\r\nschema will be used instead of t1's (and of course not t3's).\r\n\r\n> ~\r\n> \r\n> 1b.\r\n> Shouldn't that paragraph (or possibly somewhere in the CREATE\r\n> SUBSCRIPTION notes?) also explain that in this scenario the logical\r\n> replication will only publish one set of changes? 
After all, this is\r\n> the whole point of the patch, but I am not sure if the user will know\r\n> of this behaviour from the current documentation.\r\n\r\nIt seems to me that what you're explaining is what users expect. So, it seems we\r\ndon't need to explain it.\r\nBTW IIUC, when user wants to use the \"via_root\" option, they should first read\r\nthe pg-doc to confirm the meaning and related notes of this option. So, I'm not\r\nsure if adding this section in other documentation would be redundant.\r\n\r\n> ======\r\n> src/backend/catalog/pg_publication.c\r\n> \r\n> 2. filter_partitions\r\n> \r\n> BEFORE:\r\n> static void\r\n> filter_partitions(List *table_infos)\r\n> {\r\n> ListCell *lc;\r\n> \r\n> foreach(lc, table_infos)\r\n> {\r\n> bool skip = false;\r\n> List *ancestors = NIL;\r\n> ListCell *lc2;\r\n> published_rel *table_info = (published_rel *) lfirst(lc);\r\n> \r\n> if (get_rel_relispartition(table_info->relid))\r\n> ancestors = get_partition_ancestors(table_info->relid);\r\n> \r\n> foreach(lc2, ancestors)\r\n> {\r\n> Oid ancestor = lfirst_oid(lc2);\r\n> \r\n> /* Is ancestor exists in the published table list? */\r\n> if (is_ancestor_member_tableinfos(ancestor, table_infos))\r\n> {\r\n> skip = true;\r\n> break;\r\n> }\r\n> }\r\n> \r\n> if (skip)\r\n> table_infos = foreach_delete_current(table_infos, lc);\r\n> }\r\n> }\r\n> \r\n> ~\r\n> \r\n> 2a.\r\n> My previous review [1] (see #1) suggested putting most code within the\r\n> condition. 
AFAICT my comment still is applicable but was not yet\r\n> addressed.\r\n\r\nPersonally, I prefer the current style because the approach you mentioned adds\r\nsome indentations.\r\n\r\n> 2b.\r\n> IMO the comment \"/* Is ancestor exists in the published table list?\r\n> */\" is unnecessary because it is already clear what is the purpose of\r\n> the function named \"is_ancestor_member_tableinfos\".\r\n\r\nRemoved.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3.\r\n> fetch_table_list(WalReceiverConn *wrconn, List *publications)\r\n> {\r\n> WalRcvExecResult *res;\r\n> - StringInfoData cmd;\r\n> + StringInfoData cmd,\r\n> + pub_names;\r\n> TupleTableSlot *slot;\r\n> Oid tableRow[3] = {TEXTOID, TEXTOID, NAMEARRAYOID};\r\n> List *tablelist = NIL;\r\n> - bool check_columnlist = (walrcv_server_version(wrconn) >= 150000);\r\n> + int server_version = walrcv_server_version(wrconn);\r\n> + bool check_columnlist = (server_version >= 150000);\r\n> +\r\n> + initStringInfo(&pub_names);\r\n> + get_publications_str(publications, &pub_names, true);\r\n> \r\n> initStringInfo(&cmd);\r\n> - appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename\r\n> \\n\");\r\n> \r\n> - /* Get column lists for each relation if the publisher supports it */\r\n> - if (check_columnlist)\r\n> - appendStringInfoString(&cmd, \", t.attnames\\n\");\r\n> + /* Get the list of tables from the publisher. 
*/\r\n> + if (server_version >= 160000)\r\n> + {\r\n> \r\n> ~\r\n> \r\n> I think the 'pub_names' is only needed within that \">= 160000\" condition.\r\n> \r\n> So all the below code can be moved into that scope can't it?\r\n> \r\n> + StringInfoData pub_names;\r\n> + initStringInfo(&pub_names);\r\n> + get_publications_str(publications, &pub_names, true);\r\n> \r\n> + pfree(pub_names.data);\r\n\r\nChanged.\r\n\r\nAlso, I've run pgindent for the new patch set.\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Wed, 22 Mar 2023 10:07:49 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Mar 22, 2023 at 14:32 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Thank you for updating patch! Following are comments form v19-0001.\r\n\r\nThanks for your comments.\r\n\r\n> 01. logical-replication.sgml\r\n> \r\n> I found a following statement in logical-replication.sgml. I think this may cause\r\n> mis-reading because it's OK when publishers list partitions and publish_via_root\r\n> is true.\r\n> \r\n> ```\r\n> <para>\r\n> A subscriber node may have multiple subscriptions if desired. 
It is\r\n> possible to define multiple subscriptions between a single\r\n> publisher-subscriber pair, in which case care must be taken to ensure\r\n> that the subscribed publication objects don't overlap.\r\n> </para>\r\n> ```\r\n> \r\n> How about adding \"If publications are set publish_via_partition_root as true and\r\n> they publish partitions that have same partitioned table, only a change to\r\n> partitioned\r\n> table is published from the publisher.\"or something like that?\r\n\r\nI think these seem to be two different scenarios: The scenario mentioned here is\r\nmultiple subscriptions at the subscription node, while the scenario we fixed\r\nthis time is a single subscription at the subscription node. So, it seems that\r\nthese two notes are not strongly related.\r\n\r\n> 02. filter_partitions\r\n> \r\n> IIUC this function can refactor like following to avoid \"skip\" flag.\r\n> How do you think?\r\n> \r\n> ```\r\n> @@ -209,7 +209,6 @@ filter_partitions(List *table_infos)\r\n> \r\n> foreach(lc, table_infos)\r\n> {\r\n> - bool skip = false;\r\n> List *ancestors = NIL;\r\n> ListCell *lc2;\r\n> published_rel *table_info = (published_rel *) lfirst(lc);\r\n> @@ -224,13 +223,10 @@ filter_partitions(List *table_infos)\r\n> /* Is ancestor exists in the published table list? */\r\n> if (is_ancestor_member_tableinfos(ancestor, table_infos))\r\n> {\r\n> - skip = true;\r\n> + table_infos = foreach_delete_current(table_infos, lc);\r\n> break;\r\n> }\r\n> }\r\n> -\r\n> - if (skip)\r\n> - table_infos = foreach_delete_current(table_infos, lc);\r\n> }\r\n> }\r\n> ```\r\n\r\nI think this approach deletes the cell of the list of the outer loop in the\r\ninner loop. IIUC, we can only use function foreach_delete_current in the current\r\nloop to delete the cell of the current loop.\r\n\r\n> 03. fetch_table_list\r\n> \r\n> ```\r\n> + /* Get the list of tables from the publisher. 
*/\r\n> + if (server_version >= 160000)\r\n> ```\r\n> \r\n> I think boolean variable can be used to check it like check_columnlist.\r\n> How about \"use_extended_function\" or something?\r\n\r\nSince we only need it once, I think it's fine not to add a new variable.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Wed, 22 Mar 2023 10:09:05 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for patch v20-0001.\n\n======\nGeneral.\n\n1.\nThat function 'pg_get_publication_tables' does not seem to be\ndescribed in the PG documentation. Why isn't it in the \"System Catalog\nInformation Functions\" table [1] ?\n\nI asked this same question a long time ago but then the reply [2] was\nlike \"it doesn't seem to be a function provided to users\".\n\nWell, perhaps that just means that the documentation has been\naccidentally missing for a long time. Does anybody know for sure if\nthe omission of this function from the documentation is deliberate? If\nnobody here knows, then maybe this can be asked/addressed in a\nseparate thread.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n2. filter_partitions\n\n(review comment from my v19 review)\n\n> 2a.\n> My previous review [1] (see #1) suggested putting most code within the\n> condition. AFAICT my comment still is applicable but was not yet\n> addressed.\n\n22/3 Wang-san replied: \"Personally, I prefer the current style because\nthe approach you mentioned adds some indentations.\"\n\nSure, but there is more than just indentation/style differences here.\nCurrently, there is some unnecessary code executed if the table is not\na partition. And the reader cannot tell at-a-glance if (skip) will be\ntrue/false without looking more closely at the loop logic. 
So, I think\nchanging it would be better, but anyway I won’t debate about it any\nmore because it's not a functional problem.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3. fetch_table_list\n\n+ /* Get the list of tables from the publisher. */\n+ if (server_version >= 160000)\n+ {\n+ StringInfoData pub_names;\n\n- appendStringInfoString(&cmd, \"FROM pg_catalog.pg_publication_tables t\\n\"\n- \" WHERE t.pubname IN (\");\n- get_publications_str(publications, &cmd, true);\n- appendStringInfoChar(&cmd, ')');\n+ initStringInfo(&pub_names);\n+ get_publications_str(publications, &pub_names, true);\n+\n+ /*\n+ * From version 16, we allowed passing multiple publications to the\n+ * function pg_get_publication_tables. This helped to filter out the\n+ * partition table whose ancestor is also published in this\n+ * publication array.\n+ *\n+ * Join pg_get_publication_tables with pg_publication to exclude\n+ * non-existing publications.\n+ */\n+ appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\n+ \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\n+ \" FROM pg_attribute a\\n\"\n+ \" WHERE a.attrelid = GPT.relid AND\\n\"\n+ \" a.attnum = ANY(GPT.attrs)\\n\"\n+ \" ) AS attnames\\n\"\n+ \" FROM pg_class C\\n\"\n+ \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\n+ \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\narray_agg(pubname::text))).*\\n\"\n+ \" FROM pg_publication\\n\"\n+ \" WHERE pubname IN ( %s )) as GPT\\n\"\n+ \" ON GPT.relid = C.oid\\n\",\n+ pub_names.data);\n+\n+ pfree(pub_names.data);\n+ }\n+ else\n+ {\n+ appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename \\n\");\n+\n+ /* Get column lists for each relation if the publisher supports it */\n+ if (check_columnlist)\n+ appendStringInfoString(&cmd, \", t.attnames\\n\");\n+\n+ appendStringInfoString(&cmd, \"FROM pg_catalog.pg_publication_tables t\\n\"\n+ \" WHERE t.pubname IN (\");\n+ get_publications_str(publications, &cmd, true);\n+ 
appendStringInfoChar(&cmd, ')');\n+ }\n\nI noticed the SQL \"if\" part is using uppercase aliases, but the SQL in\nthe \"else\" part is using lowercase aliases. I think it would be better\nto be consistent (pick one).\n\n======\nsrc/test/subscription/t/013_partition.pl\n\n4.\n-# for tab4, we publish changes through the \"middle\" partitioned table\n+# If we subscribe only to pub_lower_level, changes for tab4 will be published\n+# through the \"middle\" partition table. However, since we will be subscribing\n+# to both pub_lower_level and pub_all (see subscription sub2 below), we will\n+# publish changes via the root table (tab4).\n $node_publisher->safe_psql('postgres',\n \"CREATE PUBLICATION pub_lower_level FOR TABLE tab4_1 WITH\n(publish_via_partition_root = true)\"\n );\n\n~\n\nThis comment seemed a bit overkill IMO. I don't think you need to say\nmuch here except maybe:\n\n# Note that subscription \"sub2\" will later subscribe simultaneously to\nboth pub_lower_level (i.e. just table tab4_1) and pub_all.\n\n~~~\n\n5.\nI think maybe you could have another test scenario where you INSERT\nsomething into tab4_1_1 while only the publication for tab4_1 has\npublish_via_partition_root=true\n\n~~~\n\n6.\nAFAICT the tab4 tests are only testing the initial sync, but are not\ntesting normal replication. 
Maybe some more normal (post sync) INSERTS\nare needed to tab4, tab4_1, tab4_1_1 ?\n\n\n======\nsrc/test/subscription/t/028_row_filter.pl\n\n7.\n+# insert data into partitioned table.\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO tab_rowfilter_viaroot_part(a) VALUES(13), (17)\");\n+\n $node_subscriber->safe_psql('postgres',\n \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\ntap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\ntap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n );\n@@ -707,13 +711,17 @@ is($result, qq(t|1), 'check replicated rows to\ntab_rowfilter_toast');\n # the row filter for the top-level ancestor:\n #\n # tab_rowfilter_viaroot_part filter is: (a > 15)\n+# - INSERT (13) NO, 13 < 15\n # - INSERT (14) NO, 14 < 15\n # - INSERT (15) NO, 15 = 15\n # - INSERT (16) YES, 16 > 15\n+# - INSERT (17) YES, 17 > 15\n $result =\n $node_subscriber->safe_psql('postgres',\n- \"SELECT a FROM tab_rowfilter_viaroot_part\");\n-is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\n+ \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\n+is( $result, qq(16\n+17),\n+ 'check replicated rows to tab_rowfilter_viaroot_part');\n\n~\n\nI'm not 100% sure this is testing quite what you want to test. AFAICT\nthe subscription is created like:\n\n\"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\ntap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\ntap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n\nNotice in this case BOTH the partitioned table and the partition had\nbeen published using \"WITH (publish_via_partition_root)\". 
But, IIUC\nwon't it be better to test when only the partition's publication was\nusing that option?\n\nFor example, I think then it would be a better test of this \"At least one\" code:\n\n/* At least one publication is using publish_via_partition_root. */\nif (pub_elem->pubviaroot)\n viaroot = true;\n======\nsrc/test/subscription/t/031_column_list.pl\n\n8.\n- CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\n(publish_via_partition_root = true);\n+ CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\n(publish_via_partition_root = true);\n+ CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\n(publish_via_partition_root = true);\n\n -- initial data\n INSERT INTO test_root VALUES (1, 2, 3);\n INSERT INTO test_root VALUES (10, 20, 30);\n ));\n\n+# Subscribe to pub_root_true_1 and pub_root_true_2 at the same time, which\n+# means that the initial data will be synced once, and only the column list of\n+# the parent table (test_root) in the publication pub_root_true_1 will be used\n+# for both table sync and data replication.\n $node_subscriber->safe_psql(\n 'postgres', qq(\n- CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\npub_root_true;\n+ CREATE\n\n~\n\n(This is similar to the previous review comment #7 above)\n\nWon't it be a better test of the \"At least one\" code when only the\npublication of partition (test_root_1) is using \"WITH\n(publish_via_partition_root = true)\".\n\ne.g.\nCREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a);\nCREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\n(publish_via_partition_root = true);\n\n------\n[1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE\n[2] https://www.postgresql.org/message-id/OS3PR01MB6275FB5397C6A647F262A3A69E009%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Mar 2023 15:26:44 +1100", "msg_from": "Peter Smith 
<smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Mar 23, 2023 at 9:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v20-0001.\n>\n> ======\n> General.\n>\n> 1.\n> That function 'pg_get_publication_tables' does not seem to be\n> described in the PG documentation. Why isn't it in the \"System Catalog\n> Information Functions\" table [1] ?\n>\n> I asked this same question a long time ago but then the reply [2] was\n> like \"it doesn't seem to be a function provided to users\".\n>\n> Well, perhaps that just means that the documentation has been\n> accidentally missing for a long time. Does anybody know for sure if\n> the omission of this function from the documentation is deliberate? If\n> nobody here knows, then maybe this can be asked/addressed in a\n> separate thread.\n>\n\nIt is better that you start a separate thread to discuss this question\nas it is not related to this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 23 Mar 2023 12:21:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Mar 23, 2023 at 12:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch v20-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3. fetch_table_list\r\n> \r\n> + /* Get the list of tables from the publisher. 
*/\r\n> + if (server_version >= 160000)\r\n> + {\r\n> + StringInfoData pub_names;\r\n> \r\n> - appendStringInfoString(&cmd, \"FROM pg_catalog.pg_publication_tables t\\n\"\r\n> - \" WHERE t.pubname IN (\");\r\n> - get_publications_str(publications, &cmd, true);\r\n> - appendStringInfoChar(&cmd, ')');\r\n> + initStringInfo(&pub_names);\r\n> + get_publications_str(publications, &pub_names, true);\r\n> +\r\n> + /*\r\n> + * From version 16, we allowed passing multiple publications to the\r\n> + * function pg_get_publication_tables. This helped to filter out the\r\n> + * partition table whose ancestor is also published in this\r\n> + * publication array.\r\n> + *\r\n> + * Join pg_get_publication_tables with pg_publication to exclude\r\n> + * non-existing publications.\r\n> + */\r\n> + appendStringInfo(&cmd, \"SELECT DISTINCT N.nspname, C.relname,\\n\"\r\n> + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\r\n> + \" FROM pg_attribute a\\n\"\r\n> + \" WHERE a.attrelid = GPT.relid AND\\n\"\r\n> + \" a.attnum = ANY(GPT.attrs)\\n\"\r\n> + \" ) AS attnames\\n\"\r\n> + \" FROM pg_class C\\n\"\r\n> + \" JOIN pg_namespace N ON N.oid = C.relnamespace\\n\"\r\n> + \" JOIN ( SELECT (pg_get_publication_tables(VARIADIC\r\n> array_agg(pubname::text))).*\\n\"\r\n> + \" FROM pg_publication\\n\"\r\n> + \" WHERE pubname IN ( %s )) as GPT\\n\"\r\n> + \" ON GPT.relid = C.oid\\n\",\r\n> + pub_names.data);\r\n> +\r\n> + pfree(pub_names.data);\r\n> + }\r\n> + else\r\n> + {\r\n> + appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename\r\n> \\n\");\r\n> +\r\n> + /* Get column lists for each relation if the publisher supports it */\r\n> + if (check_columnlist)\r\n> + appendStringInfoString(&cmd, \", t.attnames\\n\");\r\n> +\r\n> + appendStringInfoString(&cmd, \"FROM pg_catalog.pg_publication_tables t\\n\"\r\n> + \" WHERE t.pubname IN (\");\r\n> + get_publications_str(publications, &cmd, true);\r\n> + appendStringInfoChar(&cmd, ')');\r\n> + }\r\n> \r\n> I noticed the SQL 
\"if\" part is using uppercase aliases, but the SQL in\r\n> the \"else\" part is using lowercase aliases. I think it would be better\r\n> to be consistent (pick one).\r\n\r\nUnified them into lowercase aliases.\r\n\r\n> ======\r\n> src/test/subscription/t/013_partition.pl\r\n> \r\n> 4.\r\n> -# for tab4, we publish changes through the \"middle\" partitioned table\r\n> +# If we subscribe only to pub_lower_level, changes for tab4 will be published\r\n> +# through the \"middle\" partition table. However, since we will be subscribing\r\n> +# to both pub_lower_level and pub_all (see subscription sub2 below), we will\r\n> +# publish changes via the root table (tab4).\r\n> $node_publisher->safe_psql('postgres',\r\n> \"CREATE PUBLICATION pub_lower_level FOR TABLE tab4_1 WITH\r\n> (publish_via_partition_root = true)\"\r\n> );\r\n> \r\n> ~\r\n> \r\n> This comment seemed a bit overkill IMO. I don't think you need to say\r\n> much here except maybe:\r\n> \r\n> # Note that subscription \"sub2\" will later subscribe simultaneously to\r\n> both pub_lower_level (i.e. just table tab4_1) and pub_all.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 5.\r\n> I think maybe you could have another test scenario where you INSERT\r\n> something into tab4_1_1 while only the publication for tab4_1 has\r\n> publish_via_partition_root=true\r\n\r\nI'm not sure if this scenario is necessary.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> AFAICT the tab4 tests are only testing the initial sync, but are not\r\n> testing normal replication. 
Maybe some more normal (post sync) INSERTS\r\n> are needed to tab4, tab4_1, tab4_1_1 ?\r\n\r\nSince I think the scenario we fixed is sync and not replication, it doesn't seem\r\nlike we should extend the test you mentioned.\r\n\r\n> ======\r\n> src/test/subscription/t/028_row_filter.pl\r\n> \r\n> 7.\r\n> +# insert data into partitioned table.\r\n> +$node_publisher->safe_psql('postgres',\r\n> + \"INSERT INTO tab_rowfilter_viaroot_part(a) VALUES(13), (17)\");\r\n> +\r\n> $node_subscriber->safe_psql('postgres',\r\n> \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\r\n> application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\r\n> tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\r\n> tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\r\n> );\r\n> @@ -707,13 +711,17 @@ is($result, qq(t|1), 'check replicated rows to\r\n> tab_rowfilter_toast');\r\n> # the row filter for the top-level ancestor:\r\n> #\r\n> # tab_rowfilter_viaroot_part filter is: (a > 15)\r\n> +# - INSERT (13) NO, 13 < 15\r\n> # - INSERT (14) NO, 14 < 15\r\n> # - INSERT (15) NO, 15 = 15\r\n> # - INSERT (16) YES, 16 > 15\r\n> +# - INSERT (17) YES, 17 > 15\r\n> $result =\r\n> $node_subscriber->safe_psql('postgres',\r\n> - \"SELECT a FROM tab_rowfilter_viaroot_part\");\r\n> -is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\r\n> + \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\r\n> +is( $result, qq(16\r\n> +17),\r\n> + 'check replicated rows to tab_rowfilter_viaroot_part');\r\n> \r\n> ~\r\n> \r\n> I'm not 100% sure this is testing quite what you want to test. 
AFAICT\r\n> the subscription is created like:\r\n> \r\n> \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\r\n> application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\r\n> tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\r\n> tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\r\n\r\nI think this is the scenario we fixed : Simultaneously subscribing to two\r\npublications that publish the parent and child respectively, then want to use\r\nthe parent's identity and schema).\r\n\r\n> Notice in this case BOTH the partitioned table and the partition had\r\n> been published using \"WITH (publish_via_partition_root)\". But, IIUC\r\n> won't it be better to test when only the partition's publication was\r\n> using that option?\r\n> \r\n> For example, I think then it would be a better test of this \"At least one\" code:\r\n> \r\n> /* At least one publication is using publish_via_partition_root. */\r\n> if (pub_elem->pubviaroot)\r\n> viaroot = true;\r\n>\r\n> ======\r\n> src/test/subscription/t/031_column_list.pl\r\n> \r\n> 8.\r\n> - CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\r\n> (publish_via_partition_root = true);\r\n> + CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\r\n> (publish_via_partition_root = true);\r\n> + CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\r\n> (publish_via_partition_root = true);\r\n> \r\n> -- initial data\r\n> INSERT INTO test_root VALUES (1, 2, 3);\r\n> INSERT INTO test_root VALUES (10, 20, 30);\r\n> ));\r\n> \r\n> +# Subscribe to pub_root_true_1 and pub_root_true_2 at the same time, which\r\n> +# means that the initial data will be synced once, and only the column list of\r\n> +# the parent table (test_root) in the publication pub_root_true_1 will be used\r\n> +# for both table sync and data replication.\r\n> $node_subscriber->safe_psql(\r\n> 'postgres', qq(\r\n> - CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\r\n> 
pub_root_true;\r\n> + CREATE\r\n> \r\n> ~\r\n> \r\n> (This is simlar to the previous review comment #7 above)\r\n> \r\n> Won't it be a better test of the \"At least one\" code when only the\r\n> publication of partition (test_root_1) is using \"WITH\r\n> (publish_via_partition_root = true)\".\r\n> \r\n> e.g\r\n> CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a);\r\n> CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\r\n> (publish_via_partition_root = true);\r\n\r\nI think specifying one or both is the same scenario here.\r\nBut it seemed clearer if only the \"via_root\" option is specified in the\r\npublication that publishes the parent, so I changed this point in\r\n\"031_column_list.pl\". Since the publications in \"028_row_filter.pl\" were\r\nintroduced by other commits, I didn't change it.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Thu, 23 Mar 2023 09:11:29 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Thu, Mar 23, 2023 at 5:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 9:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are some review comments for patch v20-0001.\n> >\n> > ======\n> > General.\n> >\n> > 1.\n> > That function 'pg_get_publication_tables' does not seem to be\n> > described in the PG documentation. Why isn't it in the \"System Catalog\n> > Information Functions\" table [1] ?\n> >\n> > I asked this same question a long time ago but then the reply [2] was\n> > like \"it doesn't seem to be a function provided to users\".\n> >\n> > Well, perhaps that just means that the documentation has been\n> > accidentally missing for a long time. Does anybody know for sure if\n> > the omission of this function from the documentation is deliberate? 
If\n> > nobody here knows, then maybe this can be asked/addressed in a\n> > separate thread.\n> >\n>\n> It is better that you start a separate thread to discuss this question\n> as it is not related to this patch.\n>\n\nOK. I have asked this question in a new thread here [1].\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPvGQER0rbNWii1U4c-npDhP-HxfX5yj5fmfBo%3D45z9pPA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 Mar 2023 09:16:25 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Hi Wang-san. I looked at the v21-0001 patch.\n\nI don't have any new review comments -- only follow-ups for some of my\nprevious v20 comments that were rejected.\n\nOn Thu, Mar 23, 2023 at 8:11 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 12:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Here are some review comments for patch v20-0001.\n>\n...\n> > ======\n> > src/test/subscription/t/013_partition.pl\n> >\n> > 5.\n> > I think maybe you could have another test scenario where you INSERT\n> > something into tab4_1_1 while only the publication for tab4_1 has\n> > publish_via_partition_root=true\n>\n> I'm not sure if this scenario is necessary.\n>\n\nPlease see my reply for #7 below.\n\n> > ~~~\n> >\n> > 6.\n> > AFAICT the tab4 tests are only testing the initial sync, but are not\n> > testing normal replication. Maybe some more normal (post sync) INSERTS\n> > are needed to tab4, tab4_1, tab4_1_1 ?\n>\n> Since I think the scenario we fixed is sync and not replication, it doesn't seem\n> like we should extend the test you mentioned.\n>\n\nMaybe you are right. 
I only thought it would be better to have testing\nwhich verifies that the sync phase and the normal replication phase\nare using the same rules.\n\n> > ======\n> > src/test/subscription/t/028_row_filter.pl\n> >\n> > 7.\n> > +# insert data into partitioned table.\n> > +$node_publisher->safe_psql('postgres',\n> > + \"INSERT INTO tab_rowfilter_viaroot_part(a) VALUES(13), (17)\");\n> > +\n> > $node_subscriber->safe_psql('postgres',\n> > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\n> > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\n> > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n> > );\n> > @@ -707,13 +711,17 @@ is($result, qq(t|1), 'check replicated rows to\n> > tab_rowfilter_toast');\n> > # the row filter for the top-level ancestor:\n> > #\n> > # tab_rowfilter_viaroot_part filter is: (a > 15)\n> > +# - INSERT (13) NO, 13 < 15\n> > # - INSERT (14) NO, 14 < 15\n> > # - INSERT (15) NO, 15 = 15\n> > # - INSERT (16) YES, 16 > 15\n> > +# - INSERT (17) YES, 17 > 15\n> > $result =\n> > $node_subscriber->safe_psql('postgres',\n> > - \"SELECT a FROM tab_rowfilter_viaroot_part\");\n> > -is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\n> > + \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\n> > +is( $result, qq(16\n> > +17),\n> > + 'check replicated rows to tab_rowfilter_viaroot_part');\n> >\n> > ~\n> >\n> > I'm not 100% sure this is testing quite what you want to test. 
AFAICT\n> > the subscription is created like:\n> >\n> > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\n> > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\n> > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n>\n> I think this is the scenario we fixed : Simultaneously subscribing to two\n> publications that publish the parent and child respectively, then want to use\n> the parent's identity and schema).\n>\n\nYeah, but currently BOTH the tap_pub_viaroot_2, tap_pub_viaroot_1 are\nusing \"WITH (publish_via_partition_root)\", so IMO the user would\nsurely expect that only the root table would be published even when a\nsubscription combines those publications. OTOH, I thought a subtle\npoint of this patch is that now the same result will happen even if\nonly ONE of the publications was using \"WITH\n(publish_via_partition_root)\". So it’s that scenario of “only ONE\npublication is using the option” that I thought ought to be explicitly\ntested.\n\nThis was the same also reason for my comment #5 above.\n\n> > ======\n> > src/test/subscription/t/031_column_list.pl\n> >\n> > 8.\n> > - CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\n> > (publish_via_partition_root = true);\n> > + CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\n> > (publish_via_partition_root = true);\n> > + CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\n> > (publish_via_partition_root = true);\n> >\n> > -- initial data\n> > INSERT INTO test_root VALUES (1, 2, 3);\n> > INSERT INTO test_root VALUES (10, 20, 30);\n> > ));\n> >\n> > +# Subscribe to pub_root_true_1 and pub_root_true_2 at the same time, which\n> > +# means that the initial data will be synced once, and only the column list of\n> > +# the parent table (test_root) in the publication pub_root_true_1 will be used\n> > +# for both table sync and data replication.\n> > 
$node_subscriber->safe_psql(\n> > 'postgres', qq(\n> > - CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION\n> > pub_root_true;\n> > + CREATE\n> >\n> > ~\n> >\n> > (This is similar to the previous review comment #7 above)\n> >\n> > Won't it be a better test of the \"At least one\" code when only the\n> > publication of partition (test_root_1) is using \"WITH\n> > (publish_via_partition_root = true)\".\n> >\n> > e.g\n> > CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a);\n> > CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\n> > (publish_via_partition_root = true);\n>\n> I think specifying one or both is the same scenario here.\n> But it seemed clearer if only the \"via_root\" option is specified in the\n> publication that publishes the parent, so I changed this point in\n> \"031_column_list.pl\". Since the publications in \"028_row_filter.pl\" were\n> introduced by other commits, I didn't change it.\n>\n\nIn hindsight, I think those publications should be renamed to\nsomething more appropriate. The name \"pub_root_true_2\" seems\nmisleading now since the publish_via_partition_root = false\n\ne.g.1. pub_test_root, pub_test_root_1\nor\ne.g.2. 
pub_root_true, pub_root_1_false\netc.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 24 Mar 2023 12:48:47 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "BTW, since this patch changes the signature of the API\npg_get_publication_tables, I assume the example in the CREATE\nSUBSCRIPTION Notes [1] may not work anymore.\n\nMeanwhile, Tom Lane suggested [2] that the example could be re-written\nto avoid even mentioning pg_get_publication_tables at all.\n\n------\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html\n[2] https://www.postgresql.org/message-id/2106581.1679610361%40sss.pgh.pa.us\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 Mar 2023 13:13:56 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 24, 2023 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Wang-san. 
I looked at the v21-0001 patch.\n>\n> I don't have any new review comments -- only follow-ups for some of my\n> previous v20 comments that were rejected.\n>\n> On Thu, Mar 23, 2023 at 8:11 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Thu, Mar 23, 2023 at 12:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > Here are some review comments for patch v20-0001.\n> >\n> ...\n> > > ======\n> > > src/test/subscription/t/013_partition.pl\n> > >\n> > > 5.\n> > > I think maybe you could have another test scenario where you INSERT\n> > > something into tab4_1_1 while only the publication for tab4_1 has\n> > > publish_via_partition_root=true\n> >\n> > I'm not sure if this scenario is necessary.\n> >\n>\n> Please see my reply for #7 below.\n>\n> > > ~~~\n> > >\n> > > 6.\n> > > AFAICT the tab4 tests are only testing the initial sync, but are not\n> > > testing normal replication. Maybe some more normal (post sync) INSERTS\n> > > are needed to tab4, tab4_1, tab4_1_1 ?\n> >\n> > Since I think the scenario we fixed is sync and not replication, it doesn't seem\n> > like we should extend the test you mentioned.\n> >\n>\n> Maybe you are right. 
I only thought it would be better to have testing\n> which verifies that the sync phase and the normal replication phase\n> are using the same rules.\n>\n\nYeah, we could extend such tests if we want but I think it is not a\nmust as the patch didn't change this behavior.\n\n> > > ======\n> > > src/test/subscription/t/028_row_filter.pl\n> > >\n> > > 7.\n> > > +# insert data into partitioned table.\n> > > +$node_publisher->safe_psql('postgres',\n> > > + \"INSERT INTO tab_rowfilter_viaroot_part(a) VALUES(13), (17)\");\n> > > +\n> > > $node_subscriber->safe_psql('postgres',\n> > > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> > > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\n> > > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\n> > > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n> > > );\n> > > @@ -707,13 +711,17 @@ is($result, qq(t|1), 'check replicated rows to\n> > > tab_rowfilter_toast');\n> > > # the row filter for the top-level ancestor:\n> > > #\n> > > # tab_rowfilter_viaroot_part filter is: (a > 15)\n> > > +# - INSERT (13) NO, 13 < 15\n> > > # - INSERT (14) NO, 14 < 15\n> > > # - INSERT (15) NO, 15 = 15\n> > > # - INSERT (16) YES, 16 > 15\n> > > +# - INSERT (17) YES, 17 > 15\n> > > $result =\n> > > $node_subscriber->safe_psql('postgres',\n> > > - \"SELECT a FROM tab_rowfilter_viaroot_part\");\n> > > -is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\n> > > + \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\n> > > +is( $result, qq(16\n> > > +17),\n> > > + 'check replicated rows to tab_rowfilter_viaroot_part');\n> > >\n> > > ~\n> > >\n> > > I'm not 100% sure this is testing quite what you want to test. 
AFAICT\n> > > the subscription is created like:\n> > >\n> > > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> > > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\n> > > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\n> > > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\n> >\n> > I think this is the scenario we fixed : Simultaneously subscribing to two\n> > publications that publish the parent and child respectively, then want to use\n> > the parent's identity and schema).\n> >\n>\n> Yeah, but currently BOTH the tap_pub_viaroot_2, tap_pub_viaroot_1 are\n> using \"WITH (publish_via_partition_root)\", so IMO the user would\n> surely expect that only the root table would be published even when a\n> subscription combines those publications. OTOH, I thought a subtle\n> point of this patch is that now the same result will happen even if\n> only ONE of the publications was using \"WITH\n> (publish_via_partition_root)\". So it’s that scenario of “only ONE\n> publication is using the option” that I thought ought to be explicitly\n> tested.\n>\n\nThe current change to existing tests is difficult to understand. 
I\nsuggest writing a separate test for row filter and then cover the\nscenario you have suggested.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 24 Mar 2023 11:46:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 24, 2023 at 10:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n>\r\n\r\nThanks for the information.\r\n\r\n> BTW, since this patch changes the signature of the API\r\n> pg_get_publication_tables, I assume the example in the CREATE\r\n> SUBSCRIPTION Notes [1] may not work anymore.\r\n\r\nThe use case you mentioned is still work.\r\n\r\n> Meanwhile, Tom Lane suggested [2] that the example could be re-written\r\n> to avoid even mentioning pg_get_publication_tables at all.\r\n\r\nI'm not against this.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Fri, 24 Mar 2023 06:43:32 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 24, 2023 at 14:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > > > ======\r\n> > > > src/test/subscription/t/028_row_filter.pl\r\n> > > >\r\n> > > > 7.\r\n> > > > +# insert data into partitioned table.\r\n> > > > +$node_publisher->safe_psql('postgres',\r\n> > > > + \"INSERT INTO tab_rowfilter_viaroot_part(a) VALUES(13), (17)\");\r\n> > > > +\r\n> > > > $node_subscriber->safe_psql('postgres',\r\n> > > > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\r\n> > > > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\r\n> > > > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\r\n> > > > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\r\n> > > > );\r\n> > > > @@ -707,13 +711,17 @@ is($result, qq(t|1), 'check 
replicated rows to\r\n> > > > tab_rowfilter_toast');\r\n> > > > # the row filter for the top-level ancestor:\r\n> > > > #\r\n> > > > # tab_rowfilter_viaroot_part filter is: (a > 15)\r\n> > > > +# - INSERT (13) NO, 13 < 15\r\n> > > > # - INSERT (14) NO, 14 < 15\r\n> > > > # - INSERT (15) NO, 15 = 15\r\n> > > > # - INSERT (16) YES, 16 > 15\r\n> > > > +# - INSERT (17) YES, 17 > 15\r\n> > > > $result =\r\n> > > > $node_subscriber->safe_psql('postgres',\r\n> > > > - \"SELECT a FROM tab_rowfilter_viaroot_part\");\r\n> > > > -is($result, qq(16), 'check replicated rows to tab_rowfilter_viaroot_part');\r\n> > > > + \"SELECT a FROM tab_rowfilter_viaroot_part ORDER BY 1\");\r\n> > > > +is( $result, qq(16\r\n> > > > +17),\r\n> > > > + 'check replicated rows to tab_rowfilter_viaroot_part');\r\n> > > >\r\n> > > > ~\r\n> > > >\r\n> > > > I'm not 100% sure this is testing quite what you want to test. AFAICT\r\n> > > > the subscription is created like:\r\n> > > >\r\n> > > > \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\r\n> > > > application_name=$appname' PUBLICATION tap_pub_1, tap_pub_2,\r\n> > > > tap_pub_3, tap_pub_4a, tap_pub_4b, tap_pub_5a, tap_pub_5b,\r\n> > > > tap_pub_toast, tap_pub_inherits, tap_pub_viaroot_2, tap_pub_viaroot_1\"\r\n> > >\r\n> > > I think this is the scenario we fixed : Simultaneously subscribing to two\r\n> > > publications that publish the parent and child respectively, then want to use\r\n> > > the parent's identity and schema).\r\n> > >\r\n> >\r\n> > Yeah, but currently BOTH the tap_pub_viaroot_2, tap_pub_viaroot_1 are\r\n> > using \"WITH (publish_via_partition_root)\", so IMO the user would\r\n> > surely expect that only the root table would be published even when a\r\n> > subscription combines those publications. OTOH, I thought a subtle\r\n> > point of this patch is that now the same result will happen even if\r\n> > only ONE of the publications was using \"WITH\r\n> > (publish_via_partition_root)\". 
So it’s that scenario of “only ONE\r\n> > publication is using the option” that I thought ought to be explicitly\r\n> > tested.\r\n> >\r\n> \r\n> The current change to existing tests is difficult to understand. I\r\n> suggest writing a separate test for row filter and then cover the\r\n> scenario you have suggested.\r\n\r\nChanged as suggested.\r\n\r\nAnd I found there is a problem in the three back-branch patches (HEAD_v21_0002*,\r\nREL15_* and REL14_*):\r\nIn the function fetch_table_list, we use pg_partition_ancestors to get the list\r\nof tables from the publisher. But the pg_partition_ancestors was introduced in\r\nv12, which means that if the publisher is v11 and the subscriber is v14+, this\r\nwill cause an error.\r\nSince we are going to first submit the fix for the publisher >= 16 case on HEAD,\r\nI think we could discuss this issue later if needed. Also, I will update these\r\nthree patches later if needed.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Fri, 24 Mar 2023 09:06:45 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 24, 2023 at 9:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Thu, Mar 23, 2023 at 8:11 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thu, Mar 23, 2023 at 12:27 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > Here are some review comments for patch v20-0001.\r\n> >\r\n> ...\r\n> > > ======\r\n> > > src/test/subscription/t/031_column_list.pl\r\n> > >\r\n> > > 8.\r\n> > > - CREATE PUBLICATION pub_root_true FOR TABLE test_root (a) WITH\r\n> > > (publish_via_partition_root = true);\r\n> > > + CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a) WITH\r\n> > > (publish_via_partition_root = true);\r\n> > > + CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) 
WITH\r\n> > > (publish_via_partition_root = true);\r\n> > >\r\n> > > -- initial data\r\n> > > INSERT INTO test_root VALUES (1, 2, 3);\r\n> > > INSERT INTO test_root VALUES (10, 20, 30);\r\n> > > ));\r\n> > >\r\n> > > +# Subscribe to pub_root_true_1 and pub_root_true_2 at the same time,\r\n> which\r\n> > > +# means that the initial data will be synced once, and only the column list of\r\n> > > +# the parent table (test_root) in the publication pub_root_true_1 will be\r\n> used\r\n> > > +# for both table sync and data replication.\r\n> > > $node_subscriber->safe_psql(\r\n> > > 'postgres', qq(\r\n> > > - CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr'\r\n> PUBLICATION\r\n> > > pub_root_true;\r\n> > > + CREATE\r\n> > >\r\n> > > ~\r\n> > >\r\n> > > (This is similar to the previous review comment #7 above)\r\n> > >\r\n> > > Won't it be a better test of the \"At least one\" code when only the\r\n> > > publication of partition (test_root_1) is using \"WITH\r\n> > > (publish_via_partition_root = true)\".\r\n> > >\r\n> > > e.g\r\n> > > CREATE PUBLICATION pub_root_true_1 FOR TABLE test_root (a);\r\n> > > CREATE PUBLICATION pub_root_true_2 FOR TABLE test_root_1 (a, b) WITH\r\n> > > (publish_via_partition_root = true);\r\n> >\r\n> > I think specifying one or both is the same scenario here.\r\n> > But it seemed clearer if only the \"via_root\" option is specified in the\r\n> > publication that publishes the parent, so I changed this point in\r\n> > \"031_column_list.pl\". Since the publications in \"028_row_filter.pl\" were\r\n> > introduced by other commits, I didn't change it.\r\n> >\r\n> \r\n> In hindsight, I think those publications should be renamed to\r\n> something more appropriate. The name \"pub_root_true_2\" seems\r\n> misleading now since the publish_via_partition_root = false\r\n> \r\n> e.g.1. pub_test_root, pub_test_root_1\r\n> or\r\n> e.g.2. 
pub_root_true, pub_root_1_false\r\n> etc.\r\n\r\nI prefer your first suggestion.\r\nChanged.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Fri, 24 Mar 2023 09:06:48 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 24, 2023 at 2:36 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Mar 24, 2023 at 14:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> And I found there is a problem in the three back-branch patches (HEAD_v21_0002*,\n> REL15_* and REL14_*):\n> In the function fetch_table_list, we use pg_partition_ancestors to get the list\n> of tables from the publisher. But the pg_partition_ancestors was introduced in\n> v12, which means that if the publisher is v11 and the subscriber is v14+, this\n> will cause an error.\n>\n\nYeah, I am also not sure how to fix this for back-branches. I didn't\nsee any field report for this so I am hesitant to make any complicated\nchanges in back-branches that will deviate it from HEAD. Let's try to\nfix it for HEAD at this stage. I have slightly modified the attached\npatch, the changes are (a) I have removed the retail pfrees added in\npg_get_publication_tables() as that memory will anyway be freed when\nwe call SRF_RETURN_DONE(). It is also inconsistent to sometimes do\nretail pfree and not other times in the same function. I have also\nreferred few similar functions and didn't find them doing retail\npfree. (b) Changed the comments in a few places.\n\nThe patch looks good to me. 
So, I am planning to push this sometime\nearly next week unless there are more suggestions or comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 25 Mar 2023 16:21:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "Here are some review comments for v23-0001.\n\n======\nsrc/test/subscription/t/028_row_filter.pl\n\n1.\n+# two publications, one publishing through ancestor and another one directly\n+# publsihing the partition, with different row filters\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION tap_pub_viaroot_sync_1 FOR TABLE\ntab_rowfilter_viaroot_part_sync WHERE (a > 15) WITH\n(publish_via_partition_root)\"\n+);\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION tap_pub_viaroot_sync_2 FOR TABLE\ntab_rowfilter_viaroot_part_sync_1 WHERE (a < 15)\"\n+);\n+\n\n1a.\nTypo \"publsihing\"\n\n~\n\n1b.\nIMO these table and publication names could be better.\n\nI thought it was confusing to have the word \"sync\" in these table\nnames and publication names. To the casual reader, it looks like these\nare synchronous replication tests but they are not.\n\nSimilarly, I thought it was confusing that 2nd publication and table\nhave names with the word \"viaroot\" when the option\npublish_via_partition_root is not even true.\n\n~~~\n\n2.\n\n # The following commands are executed after CREATE SUBSCRIPTION, so these SQL\n # commands are for testing normal logical replication behavior.\n #\n\n~\n\nI think you should add a couple of INSERTS for the newly added table/s\nalso. 
IMO it is not only better for test completeness, but it causes\nreaders to question why there are INSERTS for every other table except\nthese ones.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 27 Mar 2023 12:33:25 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 27, 2023 at 7:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> 1.\n> +# two publications, one publishing through ancestor and another one directly\n> +# publsihing the partition, with different row filters\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE PUBLICATION tap_pub_viaroot_sync_1 FOR TABLE\n> tab_rowfilter_viaroot_part_sync WHERE (a > 15) WITH\n> (publish_via_partition_root)\"\n> +);\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE PUBLICATION tap_pub_viaroot_sync_2 FOR TABLE\n> tab_rowfilter_viaroot_part_sync_1 WHERE (a < 15)\"\n> +);\n> +\n>\n> 1a.\n> Typo \"publsihing\"\n>\n> ~\n>\n> 1b.\n> IMO these table and publication names could be better.\n>\n> I thought it was confusing to have the word \"sync\" in these table\n> names and publication names. 
To the casual reader, it looks like these\n> are synchronous replication tests but they are not.\n>\n\nHmm, sync here is for initial sync, so I don't think it is too much of\na problem to understand if one is aware that these are logical\nreplication related tests.\n\n> Similarly, I thought it was confusing that 2nd publication and table\n> have names with the word \"viaroot\" when the option\n> publish_via_partition_root is not even true.\n>\n\nI think the better names for tables could be\n\"tab_rowfilter_parent_sync, tab_rowfilter_child_sync\" and for\npublications \"tap_pub_parent_sync_1,\ntap_pub_child_sync_1\"\n\n> ~~~\n>\n> 2.\n>\n> # The following commands are executed after CREATE SUBSCRIPTION, so these SQL\n> # commands are for testing normal logical replication behavior.\n> #\n>\n> ~\n>\n> I think you should add a couple of INSERTS for the newly added table/s\n> also. IMO it is not only better for test completeness, but it causes\n> readers to question why there are INSERTS for every other table except\n> these ones.\n>\n\nThe purpose of the test is to test the initial sync's interaction with\n'publish_via_partition_root' option. So, adding Inserts after that for\nreplication doesn't serve any purpose and it also consumes test cycles\nwithout any additional benefit. 
So, -1 for extending it further.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 27 Mar 2023 09:01:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 27, 2023 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Mar 27, 2023 at 7:03 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> >\r\n> > 1.\r\n> > +# two publications, one publishing through ancestor and another one directly\r\n> > +# publsihing the partition, with different row filters\r\n> > +$node_publisher->safe_psql('postgres',\r\n> > + \"CREATE PUBLICATION tap_pub_viaroot_sync_1 FOR TABLE\r\n> > tab_rowfilter_viaroot_part_sync WHERE (a > 15) WITH\r\n> > (publish_via_partition_root)\"\r\n> > +);\r\n> > +$node_publisher->safe_psql('postgres',\r\n> > + \"CREATE PUBLICATION tap_pub_viaroot_sync_2 FOR TABLE\r\n> > tab_rowfilter_viaroot_part_sync_1 WHERE (a < 15)\"\r\n> > +);\r\n> > +\r\n> >\r\n> > 1a.\r\n> > Typo \"publsihing\"\r\n\r\nChanged.\r\n\r\n> > ~\r\n> >\r\n> > 1b.\r\n> > IMO these table and publication names could be better.\r\n> >\r\n> > I thought it was confusing to have the word \"sync\" in these table\r\n> > names and publication names. 
To the casual reader, it looks like these\r\n> > are synchronous replication tests but they are not.\r\n> >\r\n> \r\n> Hmm, sync here is for initial sync, so I don't think it is too much of\r\n> a problem to understand if one is aware that these are logical\r\n> replication related tests.\r\n> \r\n> > Similarly, I thought it was confusing that 2nd publication and table\r\n> > have names with the word \"viaroot\" when the option\r\n> > publish_via_partition_root is not even true.\r\n> >\r\n> \r\n> I think the better names for tables could be\r\n> \"tab_rowfilter_parent_sync, tab_rowfilter_child_sync\" and for\r\n> publications \"tap_pub_parent_sync_1,\r\n> tap_pub_child_sync_1\"\r\n\r\nChanged. And removed \"_1\" in the suggested publication names.\r\nPreviously, I added \"_1\" and \"_2\" to distinguish between two publications with\r\nthe same name. However, the publication names are now different, so I think we\r\ncould remove \"_1\".\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Mon, 27 Mar 2023 05:18:07 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "I looked at v24-0001.\n\n======\nsrc/test/subscription/t/028_row_filter.pl\n\n+# Check expected replicated rows for tap_pub_parent_sync and\n+# tap_pub_child_sync\n+# Since the option publish_via_partition_root of tap_pub_parent_sync is true,\n+# so the row filter of tap_pub_parent_sync will be used:\n+# tap_pub_parent_sync filter is: (a > 15)\n+# tap_pub_child_sync filter is: (a < 15)\n\nMaybe wrapping can be improved in the above comment and a full stop\nadded to the first sentence.\n\nOtherwise, I have no more comments for v24.\n\n------\nKind Regards,\nPeter Smith\n\n\n", "msg_date": "Mon, 27 Mar 2023 18:56:09 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data 
is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Mon, Mar 20, 2023 at 11:22 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> If the tests you have in mind are only related to this patch set then\n> feel free to propose them here if you feel the current ones are not\n> sufficient.\n\nI think the new tests added by Wang cover my concerns (thanks!). I share\nPeter's comment that we don't seem to have a regression test covering\nonly the bug description itself -- just ones that combine that case with\nrow and column restrictions -- but if you're all happy with the existing\napproach then I have nothing much to add there.\n\nI was staring at this subquery in fetch_table_list():\n\n> + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\n> + \" FROM pg_attribute a\\n\"\n> + \" WHERE a.attrelid = gpt.relid AND\\n\"\n> + \" a.attnum = ANY(gpt.attrs)\\n\"\n> + \" ) AS attnames\\n\"\n\nOn my machine this takes up roughly 90% of the runtime of the query,\nwhich makes for a noticeable delay with a bigger test case (a couple of\nFOR ALL TABLES subscriptions on the regression database). And it seems\nlike we immediately throw all that work away: if I understand correctly,\nwe only use the third column for its interaction with DISTINCT. 
Would it\nbe enough to just replace that whole thing with gpt.attrs?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 27 Mar 2023 16:01:59 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, Mar 28, 2023 at 7:02 AM Jacob Champion <jchampion@timescale.com> wrote:\r\n> On Mon, Mar 20, 2023 at 11:22 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > If the tests you have in mind are only related to this patch set then\r\n> > feel free to propose them here if you feel the current ones are not\r\n> > sufficient.\r\n> \r\n> I think the new tests added by Wang cover my concerns (thanks!). I share\r\n> Peter's comment that we don't seem to have a regression test covering\r\n> only the bug description itself -- just ones that combine that case with\r\n> row and column restrictions -- but if you're all happy with the existing\r\n> approach then I have nothing much to add there.\r\n\r\nThe scenario of this bug is to subscribe to two publications at the same time,\r\nand these two publications publish parent table and child table respectively.\r\nAnd option via_root is specified in both publications or only in the publication\r\nof the parent table. At this time, the data on the publisher-side will be copied\r\ntwice (the data will be copied to the two tables on the subscribe-side\r\nrespectively).\r\nSo, I think we have covered this bug itself in 013_partition.pl. 
We inserted the\r\ninitial data into the parent table tab4 on the publisher-side, and checked\r\nwhether the sync is completed as we expected (there is data in table tab4, but\r\nthere is no data in table tab4_1).\r\n\r\n> I was staring at this subquery in fetch_table_list():\r\n> \r\n> > + \" ( SELECT array_agg(a.attname ORDER BY a.attnum)\\n\"\r\n> > + \" FROM pg_attribute a\\n\"\r\n> > + \" WHERE a.attrelid = gpt.relid AND\\n\"\r\n> > + \" a.attnum = ANY(gpt.attrs)\\n\"\r\n> > + \" ) AS attnames\\n\"\r\n> \r\n> On my machine this takes up roughly 90% of the runtime of the query,\r\n> which makes for a noticeable delay with a bigger test case (a couple of\r\n> FOR ALL TABLES subscriptions on the regression database). And it seems\r\n> like we immediately throw all that work away: if I understand correctly,\r\n> we only use the third column for its interaction with DISTINCT. Would it\r\n> be enough to just replace that whole thing with gpt.attrs?\r\n\r\nMake sense.\r\nChanged as suggested.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Tue, 28 Mar 2023 09:59:49 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tues, Mar 28, 2023 at 18:00 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the new patch.\r\n\r\nSorry, I attached the wrong patch.\r\nHere is the correct new version patch which addressed all comments so far.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Tue, 28 Mar 2023 10:09:27 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Mar 28, 2023 at 2:59 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n> The scenario of this bug is to subscribe to two publications at 
the same time,\n> and these two publications publish parent table and child table respectively.\n> And option via_root is specified in both publications or only in the publication\n> of the parent table.\n\nAh, reading the initial mail again, that makes sense. I came to this\nthread with the alternative reproduction in mind (subscribing to one\npublication with viaroot=true, and another publication with\nviaroot=false) and misread the report accordingly... In the end, I'm\ncomfortable with the current level of coverage.\n\n> > Would it\n> > be enough to just replace that whole thing with gpt.attrs?\n>\n> Make sense.\n> Changed as suggested.\n\nLGTM, by inspection. Thanks!\n\n--Jacob\n\n\n", "msg_date": "Tue, 28 Mar 2023 10:44:28 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "A minor review comment for v25-0001.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n1.\n@@ -1936,21 +1936,56 @@ fetch_table_list(WalReceiverConn *wrconn, List\n*publications)\n WalRcvExecResult *res;\n StringInfoData cmd;\n TupleTableSlot *slot;\n- Oid tableRow[3] = {TEXTOID, TEXTOID, NAMEARRAYOID};\n+ Oid tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};\n\nThe patch could be slightly less invasive if you did not make this\nchange, but instead, only overwrite tableRow[2] for the >= PG16 case.\n\nOr vice versa, if you prefer.\n\nThe point is, there are only 2 cases, so you might as well initialize\na default tableRow[2] that is valid for one case and overwrite it only\nfor the other case, instead of overwriting it in 2 places.\n\nYMMV.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 29 Mar 2023 13:14:17 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { 
"msg_contents": "On Wed, Mar 29, 2023 at 7:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> A minor review comment for v25-0001.\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 1.\n> @@ -1936,21 +1936,56 @@ fetch_table_list(WalReceiverConn *wrconn, List\n> *publications)\n> WalRcvExecResult *res;\n> StringInfoData cmd;\n> TupleTableSlot *slot;\n> - Oid tableRow[3] = {TEXTOID, TEXTOID, NAMEARRAYOID};\n> + Oid tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};\n>\n> The patch could be slightly less invasive if you did not make this\n> change, but instead, only overwrite tableRow[2] for the >= PG16 case.\n>\n> Or vice versa, if you prefer.\n>\n\nThe current coding pattern looks neat to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 29 Mar 2023 09:51:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Tue, Mar 28, 2023 at 11:14 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Tue, Mar 28, 2023 at 2:59 AM wangw.fnst@fujitsu.com\n>\n> > > Would it\n> > > be enough to just replace that whole thing with gpt.attrs?\n> >\n> > Make sense.\n> > Changed as suggested.\n>\n> LGTM, by inspection. Thanks!\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 29 Mar 2023 14:29:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Wed, Mar 29, 2023 at 2:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Pushed.\n\nWhile rebasing my logical-roots patch over the top of this, I ran into\nanother situation where mixed viaroot settings can duplicate data. 
The\nkey idea is to subscribe to two publications with mixed settings, as\nbefore, and add a partition root that's already been replicated with\nviaroot=false to the other publication with viaroot=true.\n\n pub=# CREATE TABLE part (a int) PARTITION BY RANGE (a);\n pub=# CREATE PUBLICATION pub_all FOR ALL TABLES;\n pub=# CREATE PUBLICATION pub_other FOR TABLE other WITH\n(publish_via_partition_root);\n -- populate with data, then switch to subscription side\n sub=# CREATE SUBSCRIPTION sub CONNECTION ... PUBLICATION pub_all, pub_other;\n -- switch back to publication\n pub=# ALTER PUBLICATION pub_other ADD TABLE part;\n -- and back to subscription\n sub=# ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n -- data is now duplicated\n\n(Standalone reproduction attached.)\n\nThis is similar to what happens if you alter the\npublish_via_partition_root setting for an existing publication, but\nI'd argue it's easier to hit by accident. Is this part of the same\nclass of bugs, or is it different (or even expected) behavior?\n\nThanks,\n--Jacob", "msg_date": "Thu, 30 Mar 2023 11:15:46 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 31, 2023 at 5:15 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Wed, Mar 29, 2023 at 2:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Pushed.\n>\n> While rebasing my logical-roots patch over the top of this, I ran into\n> another situation where mixed viaroot settings can duplicate data. 
The\n> key idea is to subscribe to two publications with mixed settings, as\n> before, and add a partition root that's already been replicated with\n> viaroot=false to the other publication with viaroot=true.\n>\n> pub=# CREATE TABLE part (a int) PARTITION BY RANGE (a);\n> pub=# CREATE PUBLICATION pub_all FOR ALL TABLES;\n> pub=# CREATE PUBLICATION pub_other FOR TABLE other WITH\n> (publish_via_partition_root);\n> -- populate with data, then switch to subscription side\n> sub=# CREATE SUBSCRIPTION sub CONNECTION ... PUBLICATION pub_all, pub_other;\n> -- switch back to publication\n> pub=# ALTER PUBLICATION pub_other ADD TABLE part;\n> -- and back to subscription\n> sub=# ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n> -- data is now duplicated\n>\n> (Standalone reproduction attached.)\n>\n> This is similar to what happens if you alter the\n> publish_via_partition_root setting for an existing publication, but\n> I'd argue it's easier to hit by accident. Is this part of the same\n> class of bugs, or is it different (or even expected) behavior?\n>\n\nHi Jacob. I tried your example. 
And I can see after the REFRESH the\nadded table 'part' tablesync is launched and so does the copy causing\nduplicate data.\n\nsub=# ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\nALTER SUBSCRIPTION\nsub=# 2023-03-31 13:09:30.348 AEDT [334] LOG: logical replication\ntable synchronization worker for subscription \"sub\", table \"part\" has\nstarted\n...\n\nDuplicate data happens because REFRESH PUBLICATION has the default\n\"refresh_option of copy_data=true.\n\nAlthough the result is at first a bit unexpected, I am not sure if\nanything can be done to make it do what you probably hoped it would\ndo:\n\nFor example, Just imagine if logic could be made smarter to recognize\nthat since there was already the 'part_def' being subscribed so it\nshould NOT use the default 'copy_data=true' when the REFRESH launches\nthe ancestor table 'part'...\n\nEven if that logic was implemented, I have a feeling you could *still*\nrun into problems if the 'part' table was made of multiple partitions.\nI think you might get to a situation where you DO want some partition\ndata copied (because you did not have it yet but now you are\nsubscribing to the root you want it) while at the same time, you DON'T\nwant to get duplicated data from other partitions (because you already\nknew about those ones -- like your example does).\n\nSo, I am not sure what the answer is, or maybe there isn't one.\n\nAt least, we need to check there are sufficient \"BE CAREFUL\" warnings\nin the documentation for scenarios like this.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 31 Mar 2023 14:01:57 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On Fri, Mar 31, 2023 2:16 AM Jacob Champion <jchampion@timescale.com> wrote:\r\n> \r\n> On Wed, Mar 29, 2023 at 2:00 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > 
Pushed.\r\n> \r\n> While rebasing my logical-roots patch over the top of this, I ran into\r\n> another situation where mixed viaroot settings can duplicate data. The\r\n> key idea is to subscribe to two publications with mixed settings, as\r\n> before, and add a partition root that's already been replicated with\r\n> viaroot=false to the other publication with viaroot=true.\r\n> \r\n> pub=# CREATE TABLE part (a int) PARTITION BY RANGE (a);\r\n> pub=# CREATE PUBLICATION pub_all FOR ALL TABLES;\r\n> pub=# CREATE PUBLICATION pub_other FOR TABLE other WITH\r\n> (publish_via_partition_root);\r\n> -- populate with data, then switch to subscription side\r\n> sub=# CREATE SUBSCRIPTION sub CONNECTION ... PUBLICATION pub_all,\r\n> pub_other;\r\n> -- switch back to publication\r\n> pub=# ALTER PUBLICATION pub_other ADD TABLE part;\r\n> -- and back to subscription\r\n> sub=# ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\r\n> -- data is now duplicated\r\n> \r\n> (Standalone reproduction attached.)\r\n> \r\n> This is similar to what happens if you alter the\r\n> publish_via_partition_root setting for an existing publication, but\r\n> I'd argue it's easier to hit by accident. Is this part of the same\r\n> class of bugs, or is it different (or even expected) behavior?\r\n> \r\n\r\nI noticed that a similar problem has been discussed in this thread, see [1] [2]\r\n[3] [4]. It seems complicated to fix it if we want to automatically skip tables\r\nthat have been synchronized previously by code, and this may overkill in some\r\ncases (e.g. The target table in subscriber is not a partitioned table, and the\r\nuser want to synchronize all data in the partitioned table from the publisher).\r\nBesides, it seems not a common case. So I'm not sure we should fix it. 
Maybe we\r\ncan just add some documentation for it as Peter mentioned.\r\n\r\n[1] https://www.postgresql.org/message-id/CAJcOf-eQR_%3Dq0f4ZVHd342QdLvBd_995peSr4xCU05hrS3TeTg%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/OS0PR01MB5716C756312959F293A822C794869%40OS0PR01MB5716.jpnprd01.prod.outlook.com (the second issue in it)\r\n[3] https://www.postgresql.org/message-id/CA%2BHiwqHnDHcT4OOcga9rDFyc7TvDrpN5xFH9J2pyHQo9ptvjmQ%40mail.gmail.com\r\n[4] https://www.postgresql.org/message-id/CAA4eK1%2BNWreG%3D2sKiMz8vFzTsFhEHCjgQMyAu6zj3sdLmcheYg%40mail.gmail.com\r\n\r\nRegards,\r\nShi Yu\r\n", "msg_date": "Fri, 31 Mar 2023 10:04:10 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On 3/30/23 20:01, Peter Smith wrote:\n> For example, Just imagine if logic could be made smarter to recognize\n> that since there was already the 'part_def' being subscribed so it\n> should NOT use the default 'copy_data=true' when the REFRESH launches\n> the ancestor table 'part'...\n> \n> Even if that logic was implemented, I have a feeling you could *still*\n> run into problems if the 'part' table was made of multiple partitions.\n> I think you might get to a situation where you DO want some partition\n> data copied (because you did not have it yet but now you are\n> subscribing to the root you want it) while at the same time, you DON'T\n> want to get duplicated data from other partitions (because you already\n> knew about those ones -- like your example does).\n\nHm, okay. My interest here is mainly because my logical-roots proposal\ngeneralizes the problem (and therefore makes it worse).\n\nFor what it's worth, that patchset introduces the ability for the\nsubscriber to sync multiple tables into one. 
I wonder if that could be\nused somehow to help fix this problem too?\n\n> At least, we need to check there are sufficient \"BE CAREFUL\" warnings\n> in the documentation for scenarios like this.\n\nAgreed. These are sharp edges.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 31 Mar 2023 16:04:37 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" }, { "msg_contents": "On 3/31/23 03:04, shiy.fnst@fujitsu.com wrote:\n> I noticed that a similar problem has been discussed in this thread, see [1] [2]\n> [3] [4].\n\nAh, thank you. I didn't go far back enough in the thread...\n\n> It seems complicated to fix it if we want to automatically skip tables\n> that have been synchronized previously by code\n\nI agree, this is looking very complex. I need to read through the\nexamples you sent more closely.\n\n> and this may overkill in some\n> cases (e.g. The target table in subscriber is not a partitioned table, and the\n> user want to synchronize all data in the partitioned table from the publisher).\n\nHm. It seems like the setup process doesn't really capture the user's\nintent. There are just so many things that they could be theoretically\ntrying to do.\n\n> Besides, it seems not a common case. So I'm not sure we should fix it. Maybe we\n> can just add some documentation for it as Peter mentioned.\n\nI think we should absolutely document the pitfalls here. (I'm still\ntrying to figure out what they are, though, so I don't have any concrete\nsuggestions yet...)\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Fri, 31 Mar 2023 16:05:37 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Data is copied twice when specifying both child and parent table\n in publication" } ]
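The refresh pitfall discussed in the thread above has a manual workaround. As Peter Smith explains, the duplicated rows appear because `ALTER SUBSCRIPTION ... REFRESH PUBLICATION` defaults to the refresh option `copy_data = true`. A minimal sketch of the workaround, reusing the `pub_all`/`pub_other`/`sub`/`part` names from Jacob's reproduction, and assuming every partition of the newly added root was already replicated through `pub_all`:

```sql
-- Publisher: add the partition root to the publication that uses
-- publish_via_partition_root = true.
ALTER PUBLICATION pub_other ADD TABLE part;

-- Subscriber: pick up the membership change, but skip the initial copy,
-- since the partitions' rows were already replicated via pub_all.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false);
```

This avoids the duplicate initial sync, but, as noted above for multi-partition roots, it also skips the copy for any partition whose rows had not been replicated before, so it only fits when all partitions were previously synced.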
[ { "msg_contents": "Hi.\n\nOne of the issues when we try to use sharding in PostgreSQL is absence \nof partial aggregates pushdown.\n\nI see several opportunities to alleviate this issue.\nIf we look at Citus, it implements aggregate, calculating internal state \nof an arbitrary agregate function and exporting it as text. So we could \ncalculate internal states independently on all data sources and then \nfinalize it, which allows to compute arbitrary aggregate.\n\nBut, as mentioned in [1] thread, for some functions (like \ncount/max/min/sum) we can just push down them. It seems easy and covers \na lot of cases.\nFor now there are still issues - for example you can't handle functions \nas avg() as we should somehow get its internal state or sum() variants, \nwhich need aggserialfn/aggdeserialfn. Preliminary version is attached.\n\nIs someone else working on the issue? Does suggested approach make \nsense?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/9998c3af9fdb5f7d62a6c7ad0fcd9142%40postgrespro.ru\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Fri, 15 Oct 2021 16:15:33 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Partial aggregates pushdown" }, { "msg_contents": "Hi Alexander,\n\nOn 10/15/21 15:15, Alexander Pyhalov wrote:\n> Hi.\n> \n> One of the issues when we try to use sharding in PostgreSQL is absence \n> of partial aggregates pushdown.\n> \n> I see several opportunities to alleviate this issue.\n> If we look at Citus, it implements aggregate, calculating internal state \n> of an arbitrary agregate function and exporting it as text. So we could \n> calculate internal states independently on all data sources and then \n> finalize it, which allows to compute arbitrary aggregate.\n> \n> But, as mentioned in [1] thread, for some functions (like \n> count/max/min/sum) we can just push down them. 
It seems easy and covers \n> a lot of cases.\n> For now there are still issues - for example you can't handle functions \n> like avg(), as we should somehow get its internal state, or sum() variants, \n> which need aggserialfn/aggdeserialfn. A preliminary version is attached.\n> \n> Is someone else working on the issue? Does the suggested approach make sense?\n> \n\nI think a couple people worked on this (or something similar/related) in \nthe past, but I don't recall any recent patches.\n\nIMHO being able to push down parts of an aggregation to other nodes is a \nvery desirable feature that might result in huge improvements for some \nanalytical workloads.\n\nAs for the proposed approach, it's probably good enough for the first \nversion to restrict this to aggregates where the aggregate result is \nsufficient, i.e. we don't need any new export/import procedures.\n\nBut it's very unlikely we'd want to restrict it the way the patch does \nit, i.e. based on aggregate name. That's both fragile (people can create \nnew aggregates with such name) and against the PostgreSQL extensibility \n(people may implement custom aggregates, but won't be able to benefit \nfrom this just because of name).\n\nSo for v0 maybe, but I think there needs to be a way to relax this in \nsome way, for example we could add a new flag to pg_aggregate to mark \naggregates supporting this.\n\nAnd then we should extend this for aggregates with more complex internal \nstates (e.g. 
avg), by supporting a function that \"exports\" the aggregate \nstate - similar to serial/deserial functions, but needs to be portable.\n\nI think the trickiest thing here is rewriting the remote query to call \nthis export function, but maybe we could simply instruct the remote node \nto use a different final function for the top-level node?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Oct 2021 16:56:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Tomas Vondra писал 2021-10-15 17:56:\n> Hi Alexander,\n> \n\nHi.\n\n> And then we should extend this for aggregates with more complex\n> internal states (e.g. avg), by supporting a function that \"exports\"\n> the aggregate state - similar to serial/deserial functions, but needs\n> to be portable.\n> \n> I think the trickiest thing here is rewriting the remote query to call\n> this export function, but maybe we could simply instruct the remote\n> node to use a different final function for the top-level node?\n> \n> \n\nIf we have some special export function, how should we find out that \nremote server supports this? Should it be server property or should it \nsomehow find out it while connecting to the server?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Fri, 15 Oct 2021 18:05:27 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 10/15/21 17:05, Alexander Pyhalov wrote:\n> Tomas Vondra писал 2021-10-15 17:56:\n>> Hi Alexander,\n>>\n> \n> Hi.\n> \n>> And then we should extend this for aggregates with more complex\n>> internal states (e.g. 
avg), by supporting a function that \"exports\"\n>> the aggregate state - similar to serial/deserial functions, but needs\n>> to be portable.\n>>\n>> I think the trickiest thing here is rewriting the remote query to call\n>> this export function, but maybe we could simply instruct the remote\n>> node to use a different final function for the top-level node?\n>>\n>>\n> \n> If we have some special export function, how should we find out that \n> remote server supports this? Should it be server property or should it \n> somehow find out it while connecting to the server?\n> \n\nGood question. I guess there could be some initial negotiation based on \nremote node version etc. And we could also disable this pushdown for \nolder server versions, etc.\n\nBut after that, I think we can treat this just like other definitions \nbetween local/remote node - we'd assume they match (i.e. the remote \nserver has the export function), and then we'd get an error if it does \nnot. If you need to use remote nodes without an export function, you'd \nhave to disable the pushdown.\n\nAFAICS this works both for case with explicit query rewrite (i.e. we \nsend SQL with calls to the export function) and implicit query rewrite \n(where the remote node uses a different finalize function based on mode, \nspecified by GUC).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Oct 2021 17:26:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 10/15/21 17:05, Alexander Pyhalov wrote:\n> >Tomas Vondra писал 2021-10-15 17:56:\n> >>And then we should extend this for aggregates with more complex\n> >>internal states (e.g. 
avg), by supporting a function that \"exports\"\n> >>the aggregate state - similar to serial/deserial functions, but needs\n> >>to be portable.\n> >>\n> >>I think the trickiest thing here is rewriting the remote query to call\n> >>this export function, but maybe we could simply instruct the remote\n> >>node to use a different final function for the top-level node?\n> >\n> >If we have some special export function, how should we find out that\n> >remote server supports this? Should it be server property or should it\n> >somehow find out it while connecting to the server?\n> \n> Good question. I guess there could be some initial negotiation based on\n> remote node version etc. And we could also disable this pushdown for older\n> server versions, etc.\n\nYeah, I'd think we would just only support it on versions where we know\nit's available. That doesn't seem terribly difficult.\n\n> But after that, I think we can treat this just like other definitions\n> between local/remote node - we'd assume they match (i.e. the remote server\n> has the export function), and then we'd get an error if it does not. If you\n> need to use remote nodes without an export function, you'd have to disable\n> the pushdown.\n> \n> AFAICS this works both for case with explicit query rewrite (i.e. we send\n> SQL with calls to the export function) and implicit query rewrite (where the\n> remote node uses a different finalize function based on mode, specified by\n> GUC).\n\nNot quite sure where to drop this, but I've always figured we'd find a\nway to use the existing PartialAgg / FinalizeAggregate bits which are\nused for parallel query when it comes to pushing down to foreign servers\nto perform aggregates. That also gives us how to serialize the results,\nthough we'd have to make sure that works across different\narchitectures.. 
I've not looked to see if that's the case today.\n\nThen again, being able to transform an aggregate into a partial\naggregate that runs as an actual SQL query would mean we do partial\naggregate push-down against non-PG FDWs and that'd be pretty darn neat,\nso maybe that's a better way to go, if we can figure out how.\n\n(I mean, for avg it's pretty easy to just turn that into a SELECT that\ngrabs the sum and the count and use that.. other aggregates are more\ncomplicated though and that doesn't work, maybe we need both?)\n\nThanks,\n\nStephen", "msg_date": "Fri, 15 Oct 2021 15:31:33 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 10/15/21 21:31, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> On 10/15/21 17:05, Alexander Pyhalov wrote:\n>>> Tomas Vondra писал 2021-10-15 17:56:\n>>>> And then we should extend this for aggregates with more complex\n>>>> internal states (e.g. avg), by supporting a function that \"exports\"\n>>>> the aggregate state - similar to serial/deserial functions, but needs\n>>>> to be portable.\n>>>>\n>>>> I think the trickiest thing here is rewriting the remote query to call\n>>>> this export function, but maybe we could simply instruct the remote\n>>>> node to use a different final function for the top-level node?\n>>>\n>>> If we have some special export function, how should we find out that\n>>> remote server supports this? Should it be server property or should it\n>>> somehow find out it while connecting to the server?\n>>\n>> Good question. I guess there could be some initial negotiation based on\n>> remote node version etc. And we could also disable this pushdown for older\n>> server versions, etc.\n> \n> Yeah, I'd think we would just only support it on versions where we know\n> it's available. 
That doesn't seem terribly difficult.\n> \n\nYeah.\n\nBut maybe Alexander was concerned about cases where the nodes disagree \non the aggregate definition, so one node might have the export function \nand the other would not. E.g. the remote node may have an older version of \nan extension implementing the aggregate, without the export function \n(although the server version supports it). I don't think we can do much \nabout that, it's just one of many issues that may be caused by \nmismatching schemas.\n\nI wonder if this might get more complex, though. Imagine for example a \npartitioned table on node A with a FDW partition, pointing to a node B. \nBut on B, the object is partitioned again, with one partition placed on \nC. So it's like\n\n A -> partition on B -> partition on C\n\nWhen planning on A, we can consider server version on B. But what if C \nis an older version, not supporting the export function?\n\nNot sure if this makes any difference, though ... in the worst case it \nwill error out, and we should have a way to disable the feature on A.\n\n>> But after that, I think we can treat this just like other definitions\n>> between local/remote node - we'd assume they match (i.e. the remote server\n>> has the export function), and then we'd get an error if it does not. If you\n>> need to use remote nodes without an export function, you'd have to disable\n>> the pushdown.\n>>\n>> AFAICS this works both for case with explicit query rewrite (i.e. we send\n>> SQL with calls to the export function) and implicit query rewrite (where the\n>> remote node uses a different finalize function based on mode, specified by\n>> GUC).\n> \n> Not quite sure where to drop this, but I've always figured we'd find a\n> way to use the existing PartialAgg / FinalizeAggregate bits which are\n> used for parallel query when it comes to pushing down to foreign servers\n> to perform aggregates. 
That also gives us how to serialize the results,\n> though we'd have to make sure that works across different\n> architectures.. I've not looked to see if that's the case today.\n> \n\nIt sure is similar to what serial/deserial functions do for partial \naggs, but IIRC the functions were not designed to be portable. I think \nwe don't even require compatibility across minor releases, because we \nonly use this to copy data between workers running at the same time. Not \nsaying it can't be made to work, of course.\n\n> Then again, being able to transform an aggregate into a partial\n> aggregate that runs as an actual SQL query would mean we do partial\n> aggregate push-down against non-PG FDWs and that'd be pretty darn neat,\n> so maybe that's a better way to go, if we can figure out how.\n> \n> (I mean, for avg it's pretty easy to just turn that into a SELECT that\n> grabs the sum and the count and use that.. other aggregates are more\n> complicated though and that doesn't work, maybe we need both?)\n> \n\nMaybe, but that seems like a very different concept - transforming the \nSQL so that it calculates different set of aggregates that we know can \nbe pushed down easily. But I don't recall any other practical example \nbeyond the AVG() -> SUM()/COUNT(). Well, VAR() can be translated into \nSUM(X), SUM(X^2).\n\nAnother thing is how many users would actually benefit from this. I \nmean, for this to matter you need partitioned table with partitions \nplaced on a non-PG FDW, right? 
Seems like a pretty niche use case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Oct 2021 23:59:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi.\n\nTomas Vondra писал 2021-10-15 17:56:\n> As for the proposed approach, it's probably good enough for the first\n> version to restrict this to aggregates where the aggregate result is\n> sufficient, i.e. we don't need any new export/import procedures.\n> \n> But it's very unlikely we'd want to restrict it the way the patch does\n> it, i.e. based on aggregate name. That's both fragile (people can\n> create new aggregates with such name) and against the PostgreSQL\n> extensibility (people may implement custom aggregates, but won't be\n> able to benefit from this just because of name).\n> \n> So for v0 maybe, but I think there neeeds to be a way to relax this in\n> some way, for example we could add a new flag to pg_aggregate to mark\n> aggregates supporting this.\n> \n\nUpdated patch to mark aggregates as pushdown-safe in pg_aggregates.\n\nSo far have no solution for aggregates with internal aggtranstype.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 19 Oct 2021 09:56:45 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 10/19/21 08:56, Alexander Pyhalov wrote:\n> Hi.\n> \n> Tomas Vondra писал 2021-10-15 17:56:\n>> As for the proposed approach, it's probably good enough for the first\n>> version to restrict this to aggregates where the aggregate result is\n>> sufficient, i.e. we don't need any new export/import procedures.\n>>\n>> But it's very unlikely we'd want to restrict it the way the patch does\n>> it, i.e. based on aggregate name. 
That's both fragile (people can\n>> create new aggregates with such name) and against the PostgreSQL\n>> extensibility (people may implement custom aggregates, but won't be\n>> able to benefit from this just because of name).\n>>\n>> So for v0 maybe, but I think there neeeds to be a way to relax this in\n>> some way, for example we could add a new flag to pg_aggregate to mark\n>> aggregates supporting this.\n>>\n> \n> Updated patch to mark aggregates as pushdown-safe in pg_aggregates.\n> \n> So far have no solution for aggregates with internal aggtranstype.\n\nThanks. Please add it to the next CF, so that we don't lose track of it.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:25:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Tomas Vondra писал 2021-10-19 16:25:\n> On 10/19/21 08:56, Alexander Pyhalov wrote:\n>> Hi.\n>> \n>> Tomas Vondra писал 2021-10-15 17:56:\n>>> As for the proposed approach, it's probably good enough for the first\n>>> version to restrict this to aggregates where the aggregate result is\n>>> sufficient, i.e. we don't need any new export/import procedures.\n>>> \n>>> But it's very unlikely we'd want to restrict it the way the patch \n>>> does\n>>> it, i.e. based on aggregate name. 
That's both fragile (people can\n>>> create new aggregates with such name) and against the PostgreSQL\n>>> extensibility (people may implement custom aggregates, but won't be\n>>> able to benefit from this just because of name).\n>>> \n>>> So for v0 maybe, but I think there neeeds to be a way to relax this \n>>> in\n>>> some way, for example we could add a new flag to pg_aggregate to mark\n>>> aggregates supporting this.\n>>> \n>> \n>> Updated patch to mark aggregates as pushdown-safe in pg_aggregates.\n>> \n>> So far have no solution for aggregates with internal aggtranstype.\n\nHi. Updated patch.\nNow aggregates with internal states can be pushed down, if they are \nmarked as pushdown safe (this flag is set to true for min/max/sum),\nhave internal states and associated converters. Converters are called \nlocally, they transform aggregate result to serialized internal \nrepresentation.\nAs converters don't have access to internal aggregate state, partial \naggregates like avg() are still not pushable.\n\nFor now the overall logic is quite simple. We now also call \nadd_foreign_grouping_paths() for partial aggregation. In \nforeign_expr_walker() we check if an aggregate is pushable (which means \nthat it is simple, marked as pushable and, if it has 'internal' as \naggtranstype, has an associated converter).\nIf it is pushable, we proceed as with usual aggregates (but forbid \nHAVING pushdown). During postgresGetForeignPlan() we produce a list of \nconverters for the aggregates. As converters have a different input argument \ntype from their result (bytea), we have to generate alternative \nmetadata, which is used by make_tuple_from_result_row().\nIf make_tuple_from_result_row() encounters a field with a converter, it \ncalls the converter and returns its result. For now we expect a converter to have \nonly one input and one output argument. 
Existing converters just transform \ninput value to internal representation and return its serialized form.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Thu, 21 Oct 2021 13:55:04 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\nOn 21.10.21 12:55, Alexander Pyhalov wrote:\n> Now aggregates with internal states can be pushed down, if they are \n> marked as pushdown safe (this flag is set to true for min/max/sum),\n> have internal states and associated converters. Converters are called \n> locally, they transform aggregate result to serialized internal \n> representation.\n> As converters don't have access to internal aggregate state, partial \n> aggregates like avg() are still not pushable.\n\nIt seems to me that the system should be able to determine from the \nexisting aggregate catalog entry whether an aggregate can be pushed \ndown. For example, it could check aggtranstype != internal and similar. \n A separate boolean flag should not be necessary. Or if it is, the \npatch should provide some guidance about how an aggregate function \nauthor should set it.\n\n\n", "msg_date": "Mon, 1 Nov 2021 10:47:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Peter Eisentraut писал 2021-11-01 12:47:\n> On 21.10.21 12:55, Alexander Pyhalov wrote:\n>> Now aggregates with internal states can be pushed down, if they are \n>> marked as pushdown safe (this flag is set to true for min/max/sum),\n>> have internal states and associated converters. 
Converters are called \n>> locally, they transform aggregate result to serialized internal \n>> representation.\n>> As converters don't have access to internal aggregate state, partial \n>> aggregates like avg() are still not pushable.\n> \n> It seems to me that the system should be able to determine from the\n> existing aggregate catalog entry whether an aggregate can be pushed\n> down. For example, it could check aggtranstype != internal and\n> similar. A separate boolean flag should not be necessary.\n\nHi.\nI think we can't infer this property from existing flags. For example, \nif I have avg() with bigint[] aggtranstype, it doesn't mean we can push \nit down. We also couldn't decide if a partial aggregate is safe to push \ndown based on aggfinalfn presence (for example, it is defined for \nsum(numeric), but we can push it down).\n\n> Or if it\n> is, the patch should provide some guidance about how an aggregate\n> function author should set it.\n\nWhere should it be provided?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 01 Nov 2021 13:30:27 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi,\n\nOn 21.10.2021 13:55, Alexander Pyhalov wrote:\n> Hi. Updated patch.\n> Now aggregates with internal states can be pushed down, if they are \n> marked as pushdown safe (this flag is set to true for min/max/sum),\n> have internal states and associated converters.\n\nI don't quite understand why this is restricted only to aggregates that \nhave 'internal' state, I feel like that should be possible for any \naggregate that has a function to convert its final result back to \naggregate state to be pushed down. While I couldn't come up with a \nuseful example for this, except maybe for an aggregate whose aggfinalfn \nis used purely for cosmetic purposes (e.g. 
format the result into a \nstring), I still feel that it is an unnecessary \nrestriction.\n\nA few minor review notes to the patch:\n\n\n+static List *build_conv_list(RelOptInfo *foreignrel);\n\nthis should probably be up top among other declarations.\n\n\n@@ -1433,6 +1453,48 @@ postgresGetForeignPlan(PlannerInfo *root,\n                              outer_plan);\n  }\n\n+/*\n+ * Generate attinmeta if there are some converters:\n+ * they are expecxted to return BYTEA, but real input type is likely \ndifferent.\n+ */\n\n\ntypo in word \"expec*x*ted\".\n\n\n@@ -139,10 +147,13 @@ typedef struct PgFdwScanState\n                                   * for a foreign join scan. */\n      TupleDesc    tupdesc;        /* tuple descriptor of scan */\n      AttInMetadata *attinmeta;    /* attribute datatype conversion \nmetadata */\n+    AttInMetadata *rcvd_attinmeta;    /* metadata for received tuples, \nNULL if\n+                                     * there's no converters */\n\n\nLooks like rcvd_attinmeta is redundant and you could use attinmeta for \nconversion metadata.\n\nThe last thing - the patch needs to be rebased, it doesn't apply cleanly \non top of current master.\n\nThanks,\n\nIlya Gladyshev", "msg_date": "Tue, 2 Nov 2021 00:31:35 +0300", "msg_from": "Ilya Gladyshev <i.gladyshev@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\nOn 01.11.2021 13:30, Alexander Pyhalov wrote:\n> Peter Eisentraut писал 2021-11-01 12:47:\n>> On 21.10.21 12:55, Alexander Pyhalov wrote:\n>>> Now aggregates with internal states can be pushed down, if they are \n>>> marked as pushdown safe (this flag is set to true for min/max/sum),\n>>> have internal states and associated converters. 
Converters are \n>>> called locally, they transform aggregate result to serialized \n>>> internal representation.\n>>> As converters don't have access to internal aggregate state, partial \n>>> aggregates like avg() are still not pushable.\n>>\n>> It seems to me that the system should be able to determine from the\n>> existing aggregate catalog entry whether an aggregate can be pushed\n>> down.  For example, it could check aggtranstype != internal and\n>> similar.  A separate boolean flag should not be necessary.\n>\n> Hi.\n> I think we can't infer this property from existing flags. For example, \n> if I have avg() with bigint[] argtranstype, it doesn't mean we can \n> push down it. We couldn't also decide if partial aggregete is safe to \n> push down based on aggfinalfn presence (for example, it is defined for \n> sum(numeric), but we can push it down.\n\nI think one potential way to do it would be to allow pushing down \naggregates that EITHER have state of the same type as their return type, \nOR have a conversion function that converts their return value to the \ntype of their state.\n\n\n\n", "msg_date": "Tue, 2 Nov 2021 00:53:54 +0300", "msg_from": "Ilya Gladyshev <i.gladyshev@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\n\nOn 11/1/21 22:31, Ilya Gladyshev wrote:\n> Hi,\n> \n> On 21.10.2021 13:55, Alexander Pyhalov wrote:\n>> Hi. Updated patch.\n>> Now aggregates with internal states can be pushed down, if they are \n>> marked as pushdown safe (this flag is set to true for min/max/sum),\n>> have internal states and associated converters.\n> \n> I don't quite understand why this is restricted only to aggregates that \n> have 'internal' state, I feel like that should be possible for any \n> aggregate that has a function to convert its final result back to \n> aggregate state to be pushed down. 
While I couldn't come up with a \n> useful example for this, except maybe for an aggregate whose aggfinalfn \n> is used purely for cosmetic purposes (e.g. format the result into a \n> string), I still feel that it is an unnecessary restriction.\n> \n\nBut it's *not* restricted to aggregates with internal state. The patch \nmerely requires aggregates with \"internal\" state to have an extra \n\"converter\" function.\n\nThat being said, I don't think the approach used to deal with internal \nstate is the right one. AFAICS it simply runs the aggregate on the \nremote node, finalizes is there, and then uses the converter function to \n\"expand\" the partial result back into the internal state.\n\nUnfortunately that only works for aggregates like \"sum\" where the result \nis enough to rebuild the internal state, but it fails for anything more \ncomplex (like \"avg\" or \"var\").\n\nEarlier in this thread I mentioned this to serial/deserial functions, \nand I think we need to do something like that for internal state. I.e. \nwe need to call the \"serial\" function on the remote node, and which \ndumps the whole internal state, and then \"deserial\" on the local node.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 1 Nov 2021 22:57:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\n\nOn 11/1/21 22:53, Ilya Gladyshev wrote:\n> \n> On 01.11.2021 13:30, Alexander Pyhalov wrote:\n>> Peter Eisentraut писал 2021-11-01 12:47:\n>>> On 21.10.21 12:55, Alexander Pyhalov wrote:\n>>>> Now aggregates with internal states can be pushed down, if they are \n>>>> marked as pushdown safe (this flag is set to true for min/max/sum),\n>>>> have internal states and associated converters. 
Converters are \n>>>> called locally, they transform aggregate result to serialized \n>>>> internal representation.\n>>>> As converters don't have access to internal aggregate state, partial \n>>>> aggregates like avg() are still not pushable.\n>>>\n>>> It seems to me that the system should be able to determine from the\n>>> existing aggregate catalog entry whether an aggregate can be pushed\n>>> down.  For example, it could check aggtranstype != internal and\n>>> similar.  A separate boolean flag should not be necessary.\n>>\n>> Hi.\n>> I think we can't infer this property from existing flags. For example, \n>> if I have avg() with bigint[] argtranstype, it doesn't mean we can \n>> push down it. We couldn't also decide if partial aggregete is safe to \n>> push down based on aggfinalfn presence (for example, it is defined for \n>> sum(numeric), but we can push it down.\n> \n> I think one potential way to do it would be to allow pushing down \n> aggregates that EITHER have state of the same type as their return type, \n> OR have a conversion function that converts their return value to the \n> type of their state.\n> \n\nIMO just checking (aggtranstype == result type) entirely ignores the \nissue of portability - we've never required the aggregate state to be \nportable in any meaningful way (between architectures, minor/major \nversions, ...) and it seems foolish to just start relying on it here.\n\nImagine for example an aggregate using bytea state, storing some complex \nC struct in it. You can't just copy that between architectures.\n\nIt's a bit like why we don't simply copy data types to network, but pass \nthem through input/output or send/receive functions. 
The new flag is a \nway to mark aggregates where this is safe, and I don't think we can do \nwithout it.\n\nThe more I think about this, the more I'm convinced the proper way to do \nthis would be adding export/import functions, similar to serial/deserial \nfunctions, with the extra portability guarantees. And we'd need to do \nthat for all aggregates, not just those with (aggtranstype == internal).\n\nI get it - the idea of the patch is that keeping the data types the same \nmakes it much simpler to pass the aggregate state (compared to having to \nexport/import it). But I'm not sure it's the right approach.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 2 Nov 2021 00:49:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi.\n\nUpdated and rebased patch.\n\nIlya Gladyshev писал 2021-11-02 00:31:\n> Hi,\n> On 21.10.2021 13:55, Alexander Pyhalov wrote:\n> \n>> Hi. Updated patch.\n>> Now aggregates with internal states can be pushed down, if they are\n>> marked as pushdown safe (this flag is set to true for min/max/sum),\n>> have internal states and associated converters. Converters are called\n>> locally, they transform aggregate result to serialized internal\n>> representation.\n>> As converters don't have access to internal aggregate state, partial\n>> aggregates like avg() are still not pushable.\n> \n> I don't quite understand why this is restricted only to aggregates\n> that have 'internal' state, I feel like that should be possible for\n> any aggregate that has a function to convert its final result back to\n> aggregate state to be pushed down. While I couldn't come up with a\n> useful example for this, except maybe for an aggregate whose\n> aggfinalfn is used purely for cosmetic purposes (e.g. 
format the\n> result into a string), I still feel that it is an unnecessary\n> restriction.\n> \n\nI don't feel comfortable with it for the following reasons.\n- Now partial converters translate aggregate result to serialized \ninternal representation.\nIn case when aggregate type is different from internal state,\nwe'd have to translate it to non-serialized internal representation,\nso converters should skip serialization step. This seems like \nintroducing two\nkind of converters.\n- I don't see any system aggregates which would benefit from this.\n\nHowever, it doesn't seem to be complex, and if it seems to be desirable,\nit can be done.\nFor now introduced check that transtype matches aggregate type (or is \ninternal)\nin partial_agg_ok().\n\n\n> A few minor review notes to the patch:\n> \n> +static List *build_conv_list(RelOptInfo *foreignrel);\n> \n> this should probably be up top among other declarations.\n> \n\nMoved it upper.\n\n\n> @@ -1433,6 +1453,48 @@ postgresGetForeignPlan(PlannerInfo *root,\n> outer_plan);\n> }\n> \n> +/*\n> + * Generate attinmeta if there are some converters:\n> + * they are expecxted to return BYTEA, but real input type is likely\n> different.\n> + */\n> \n> typo in word \"expecxted\".\n\nFixed.\n\n> \n> @@ -139,10 +147,13 @@ typedef struct PgFdwScanState\n> * for a foreign join scan. 
*/\n> TupleDesc tupdesc; /* tuple descriptor of scan */\n> AttInMetadata *attinmeta; /* attribute datatype conversion\n> metadata */\n> + AttInMetadata *rcvd_attinmeta; /* metadata for received\n> tuples, NULL if\n> + * there's no converters */\n> \n> Looks like rcvd_attinmeta is redundant and you could use attinmeta for\n> conversion metadata.\n\nSeems so, removed it.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 02 Nov 2021 12:12:07 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "> On 2 Nov 2021, at 10:12, Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:\n\n> Updated and rebased patch.\n\n+\tstate = (Int128AggState *) palloc0(sizeof(Int128AggState));\n+\tstate->calcSumX2 = false;\n+\n+\tif (!PG_ARGISNULL(0))\n+\t{\n+#ifdef HAVE_INT128\n+\t\tdo_int128_accum(state, (int128) PG_GETARG_INT64(0));\n+#else\n+\t\tdo_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(0)));\n+#endif\n\nThis fails on non-INT128 platforms as state cannot be cast to Int128AggState\noutside of HAVE_INT128; it's not defined there. 
This needs to be a\nPolyNumAggState no?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 14:45:38 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Daniel Gustafsson писал 2021-11-03 16:45:\n>> On 2 Nov 2021, at 10:12, Alexander Pyhalov <a.pyhalov@postgrespro.ru> \n>> wrote:\n> \n>> Updated and rebased patch.\n> \n> +\tstate = (Int128AggState *) palloc0(sizeof(Int128AggState));\n> +\tstate->calcSumX2 = false;\n> +\n> +\tif (!PG_ARGISNULL(0))\n> +\t{\n> +#ifdef HAVE_INT128\n> +\t\tdo_int128_accum(state, (int128) PG_GETARG_INT64(0));\n> +#else\n> +\t\tdo_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(0)));\n> +#endif\n> \n> This fails on non-INT128 platforms as state cannot be cast to \n> Int128AggState\n> outside of HAVE_INT128; it's not defined there. This needs to be a\n> PolyNumAggState no?\n\nHi.\nThank you for noticing this. It's indeed fails with \npgac_cv__128bit_int=no.\nUpdated patch.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Wed, 03 Nov 2021 17:50:19 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "> On 3 Nov 2021, at 15:50, Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:\n> \n> Daniel Gustafsson писал 2021-11-03 16:45:\n>>> On 2 Nov 2021, at 10:12, Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:\n>>> Updated and rebased patch.\n>> +\tstate = (Int128AggState *) palloc0(sizeof(Int128AggState));\n>> +\tstate->calcSumX2 = false;\n>> +\n>> +\tif (!PG_ARGISNULL(0))\n>> +\t{\n>> +#ifdef HAVE_INT128\n>> +\t\tdo_int128_accum(state, (int128) PG_GETARG_INT64(0));\n>> +#else\n>> +\t\tdo_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(0)));\n>> +#endif\n>> This fails on non-INT128 platforms as state cannot be cast to Int128AggState\n>> outside of HAVE_INT128; it's 
not defined there. This needs to be a\n>> PolyNumAggState no?\n> \n> Hi.\n> Thank you for noticing this. It's indeed fails with pgac_cv__128bit_int=no.\n> Updated patch.\n\nThe updated patch also fails to apply now, but on the catversion.h bump. To\navoid having to rebase for that I recommend to skip that part in the patch and\njust mention the need in the thread, any committer picking this up for commit\nwill know to bump the catversion so there is no use in risking unneccesary\nconflicts.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 15 Nov 2021 11:16:24 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Daniel Gustafsson писал 2021-11-15 13:16:\n>> On 3 Nov 2021, at 15:50, Alexander Pyhalov <a.pyhalov@postgrespro.ru> \n>> wrote:\n>> \n>> Daniel Gustafsson писал 2021-11-03 16:45:\n>>>> On 2 Nov 2021, at 10:12, Alexander Pyhalov \n>>>> <a.pyhalov@postgrespro.ru> wrote:\n>>>> Updated and rebased patch.\n>>> +\tstate = (Int128AggState *) palloc0(sizeof(Int128AggState));\n>>> +\tstate->calcSumX2 = false;\n>>> +\n>>> +\tif (!PG_ARGISNULL(0))\n>>> +\t{\n>>> +#ifdef HAVE_INT128\n>>> +\t\tdo_int128_accum(state, (int128) PG_GETARG_INT64(0));\n>>> +#else\n>>> +\t\tdo_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(0)));\n>>> +#endif\n>>> This fails on non-INT128 platforms as state cannot be cast to \n>>> Int128AggState\n>>> outside of HAVE_INT128; it's not defined there. This needs to be a\n>>> PolyNumAggState no?\n>> \n>> Hi.\n>> Thank you for noticing this. It's indeed fails with \n>> pgac_cv__128bit_int=no.\n>> Updated patch.\n> \n> The updated patch also fails to apply now, but on the catversion.h \n> bump. 
To\n> avoid having to rebase for that I recommend to skip that part in the \n> patch and\n> just mention the need in the thread, any committer picking this up for \n> commit\n> will know to bump the catversion so there is no use in risking \n> unneccesary\n> conflicts.\n\nI've updated patch - removed catversion dump.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 15 Nov 2021 16:01:51 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi,\n\nOn Mon, Nov 15, 2021 at 04:01:51PM +0300, Alexander Pyhalov wrote:\n> \n> I've updated patch - removed catversion dump.\n\nThis version of the patchset doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3369.log\n=== Applying patches on top of PostgreSQL commit ID 025b920a3d45fed441a0a58fdcdf05b321b1eead ===\n=== applying patch ./0001-Partial-aggregates-push-down-v07.patch\npatching file src/bin/pg_dump/pg_dump.c\nHunk #1 succeeded at 13111 (offset -965 lines).\nHunk #2 FAILED at 14167.\nHunk #3 succeeded at 13228 (offset -961 lines).\nHunk #4 succeeded at 13319 (offset -966 lines).\n1 out of 4 hunks FAILED -- saving rejects to file src/bin/pg_dump/pg_dump.c.rej\n\nCould you send a rebased version? 
In the meantime I will switch the cf entry\nto Waiting on Author.\n\n\n", "msg_date": "Fri, 14 Jan 2022 20:16:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Julien Rouhaud писал 2022-01-14 15:16:\n> Hi,\n> \n> On Mon, Nov 15, 2021 at 04:01:51PM +0300, Alexander Pyhalov wrote:\n>> \n>> I've updated patch - removed catversion dump.\n> \n> This version of the patchset doesn't apply anymore:\n> \n> http://cfbot.cputube.org/patch_36_3369.log\n> === Applying patches on top of PostgreSQL commit ID\n> 025b920a3d45fed441a0a58fdcdf05b321b1eead ===\n> === applying patch ./0001-Partial-aggregates-push-down-v07.patch\n> patching file src/bin/pg_dump/pg_dump.c\n> Hunk #1 succeeded at 13111 (offset -965 lines).\n> Hunk #2 FAILED at 14167.\n> Hunk #3 succeeded at 13228 (offset -961 lines).\n> Hunk #4 succeeded at 13319 (offset -966 lines).\n> 1 out of 4 hunks FAILED -- saving rejects to file \n> src/bin/pg_dump/pg_dump.c.rej\n> \n> Could you send a rebased version? In the meantime I will switch the cf \n> entry\n> to Waiting on Author.\n\nHi. 
Attaching rebased patch.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 17 Jan 2022 10:46:55 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Sun, Jan 16, 2022 at 11:47 PM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n\n> Julien Rouhaud писал 2022-01-14 15:16:\n> > Hi,\n> >\n> > On Mon, Nov 15, 2021 at 04:01:51PM +0300, Alexander Pyhalov wrote:\n> >>\n> >> I've updated patch - removed catversion dump.\n> >\n> > This version of the patchset doesn't apply anymore:\n> >\n> > http://cfbot.cputube.org/patch_36_3369.log\n> > === Applying patches on top of PostgreSQL commit ID\n> > 025b920a3d45fed441a0a58fdcdf05b321b1eead ===\n> > === applying patch ./0001-Partial-aggregates-push-down-v07.patch\n> > patching file src/bin/pg_dump/pg_dump.c\n> > Hunk #1 succeeded at 13111 (offset -965 lines).\n> > Hunk #2 FAILED at 14167.\n> > Hunk #3 succeeded at 13228 (offset -961 lines).\n> > Hunk #4 succeeded at 13319 (offset -966 lines).\n> > 1 out of 4 hunks FAILED -- saving rejects to file\n> > src/bin/pg_dump/pg_dump.c.rej\n> >\n> > Could you send a rebased version? In the meantime I will switch the cf\n> > entry\n> > to Waiting on Author.\n>\n> Hi. 
Attaching rebased patch.\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n\nHi,\n+   FdwScanPrivateConvertors\n\n+ * Generate attinmeta if there are some converters:\n\nI think it would be better if converter is spelled the same way across the\npatch.\n\nFor build_conv_list():\n\n+   if (IS_UPPER_REL(foreignrel))\n\nYou can return NIL for !IS_UPPER_REL(foreignrel) - this would save\nindentation for the body of the func.\n\nCheers", "msg_date": "Mon, 17 Jan 2022 00:43:27 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Zhihong Yu писал 2022-01-17 11:43:\n> Hi,\n> + FdwScanPrivateConvertors\n> \n> + * Generate attinmeta if there are some converters:\n> \n> I think it would be better if converter is spelled the same way across\n> the patch.\n> \n> For build_conv_list():\n> \n> + if (IS_UPPER_REL(foreignrel))\n> \n> You can return NIL for !IS_UPPER_REL(foreignrel) - this would save\n> indentation for the body of the func.\n\nHi.\nUpdated patch.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 17 Jan 2022 15:26:55 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Alexander Pyhalov писал 2022-01-17 15:26:\n> Zhihong Yu писал 2022-01-17 11:43:\n>> Hi,\n>> + FdwScanPrivateConvertors\n>> \n>> + * Generate attinmeta if there are some converters:\n>> \n>> I think it would be better if converter is spelled the same way across\n>> the patch.\n>> \n>> For build_conv_list():\n>> \n>> + if (IS_UPPER_REL(foreignrel))\n>> \n>> You can return NIL for !IS_UPPER_REL(foreignrel) - this would save\n>> indentation for the body of the func.\n> \n> Hi.\n> Updated patch.\n\nSorry, missed attachment.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 17 Jan 2022 15:27:53 +0300", "msg_from": "Alexander Pyhalov 
<a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 2022-01-17 15:27:53 +0300, Alexander Pyhalov wrote:\n> Alexander Pyhalov писал 2022-01-17 15:26:\n> > Updated patch.\n> \n> Sorry, missed attachment.\n\nNeeds another update: http://cfbot.cputube.org/patch_37_3369.log\n\nMarked as waiting on author.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:49:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 3/22/22 01:49, Andres Freund wrote:\n> On 2022-01-17 15:27:53 +0300, Alexander Pyhalov wrote:\n>> Alexander Pyhalov писал 2022-01-17 15:26:\n>>> Updated patch.\n>>\n>> Sorry, missed attachment.\n> \n> Needs another update: http://cfbot.cputube.org/patch_37_3369.log\n> \n> Marked as waiting on author.\n> \n\nTBH I'm still not convinced this is the right approach. I've voiced this\nopinion before, but to reiterate the main arguments:\n\n1) It's not clear to me how could this get extended to aggregates with\nmore complex aggregate states, to support e.g. avg() and similar fairly\ncommon aggregates.\n\n2) I'm not sure relying on aggpartialpushdownsafe without any version\nchecks etc. is sufficient. I mean, how would we know the remote node has\nthe same idea of representing the aggregate state. I wonder how this\naligns with assumptions we do e.g. for functions etc.\n\nAside from that, there's a couple review comments:\n\n1) should not remove the comment in foreign_expr_walker\n\n2) comment in deparseAggref is obsolete/inaccurate\n\n3) comment for partial_agg_ok should probably explain when we consider\naggregate OK to be pushed down\n\n4) I'm not sure why get_rcvd_attinmeta comment talks about \"return type\nbytea\" and \"real input type\".\n\n5) Talking about \"partial\" aggregates is a bit confusing, because that\nsuggests this is related to actual \"partial aggregates\". 
But it's not.\n\n6) Can add_foreign_grouping_paths do without the new 'partial'\nparameter? Clearly, it can be deduced from extra->patype, no?\n\n7) There's no docs for PARTIALCONVERTERFUNC / PARTIAL_PUSHDOWN_SAFE in\nCREATE AGGREGATE sgml docs.\n\n8) I don't think \"serialize\" in the converter functions is the right\nterm, considering those functions are not \"serializing\" anything. If\nanything, it's the remote node that is serializing the agg state and the\nlocal not is deserializing it. Or maybe I just misunderstand where are\nthe converter functions executed?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Mar 2022 13:28:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Tomas Vondra писал 2022-03-22 15:28:\n> On 3/22/22 01:49, Andres Freund wrote:\n>> On 2022-01-17 15:27:53 +0300, Alexander Pyhalov wrote:\n>>> Alexander Pyhalov писал 2022-01-17 15:26:\n>>>> Updated patch.\n>>> \n>>> Sorry, missed attachment.\n>> \n>> Needs another update: http://cfbot.cputube.org/patch_37_3369.log\n>> \n>> Marked as waiting on author.\n>> \n> \n> TBH I'm still not convinced this is the right approach. I've voiced \n> this\n> opinion before, but to reiterate the main arguments:\n> \n> 1) It's not clear to me how could this get extended to aggregates with\n> more complex aggregate states, to support e.g. 
avg() and similar fairly\n> common aggregates.\n\nHi.\nYes, I'm also not sure how to proceed with aggregates with complex \nstate.\nLikely it needs separate function to export their state, but then we \nshould\nsomehow ensure that this function exists and our 'importer' can handle \nits result.\nNote that for now we have no mechanics in postgres_fdw to find out \nremote server version\non planning stage.\n\n> 2) I'm not sure relying on aggpartialpushdownsafe without any version\n> checks etc. is sufficient. I mean, how would we know the remote node \n> has\n> the same idea of representing the aggregate state. I wonder how this\n> aligns with assumptions we do e.g. for functions etc.\n\nIt seems to be not a problem for me, as for now we don't care about \nremote node internal aggregate state representation.\nWe currently get just aggregate result from remote node. For aggregates\nwith 'internal' stype we call converter locally, and it converts \nexternal result from\naggregate return type to local node internal representation.\n\n> \n> Aside from that, there's a couple review comments:\n> \n> 1) should not remove the comment in foreign_expr_walker\n\nFixed.\n\n> \n> 2) comment in deparseAggref is obsolete/inaccurate\n\nFixed.\n\n> \n> 3) comment for partial_agg_ok should probably explain when we consider\n> aggregate OK to be pushed down\n\nExpanded comment.\n> \n> 4) I'm not sure why get_rcvd_attinmeta comment talks about \"return type\n> bytea\" and \"real input type\".\n\nExpanded comment. 
Tupdesc can be retrieved from \nnode->ss.ss_ScanTupleSlot,\nand so we expect to see bytea (as should be produced by partial \naggregation).\nBut when we scan data, we get aggregate\noutput type (which matches converter input type), so attinmeta should\nbe fixed.\nIf we deal with aggregate which doesn't have converter, partial_agg_ok()\nensures that agg->aggfnoid return type matches agg->aggtranstype.\n\n\n> 5) Talking about \"partial\" aggregates is a bit confusing, because that\n> suggests this is related to actual \"partial aggregates\". But it's not.\n\nHow should we call them? It's about pushing \"Partial count()\" or \n\"Partial sum()\" to the remote server,\nwhy it's not related to partial aggregates? Do you mean that it's not \nabout parallel aggregate processing?\n\n> 6) Can add_foreign_grouping_paths do without the new 'partial'\n> parameter? Clearly, it can be deduced from extra->patype, no?\n\nFixed this.\n\n> \n> 7) There's no docs for PARTIALCONVERTERFUNC / PARTIAL_PUSHDOWN_SAFE in\n> CREATE AGGREGATE sgml docs.\n\nAdded documentation. I'd appreciate advice on how it should be extended.\n\n> \n> 8) I don't think \"serialize\" in the converter functions is the right\n> term, considering those functions are not \"serializing\" anything. If\n> anything, it's the remote node that is serializing the agg state and \n> the\n> local not is deserializing it. Or maybe I just misunderstand where are\n> the converter functions executed?\n\nConverter function transforms aggregate result to serialized internal \nrepresentation,\nwhich is expected from partial aggregate. I mean, it converts aggregate\nresult type to internal representation and then efficiently executes\nserialization code (i.e. 
converter(x) == serialize(to_internal(x))).\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 22 Mar 2022 19:15:10 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Vondra, Mr.Pyhalov.\r\n\r\nI'm interested in Mr.Pyhalov's patch due to the following background.\r\n--Background\r\nI develop postgresql extensions such as fdw in my work. \r\nI'm interested in using postgresql for OLAP. \r\nI think the function of a previous patch \"Push aggregation down to base relations and joins\"[1] is desirable. I rebased the previous patch and registered the rebased patch on the next commitfest[2].\r\nAnd I think it would be more useful if the previous patch worked on a foreign table of postgres_fdw.\r\nI realized the function of partial aggregation pushdown is necessary to make the previous patch work on a foreign table of postgres_fdw.\r\n--\r\n\r\nSo I reviewed Mr.Pyhalov's patch and discussions on this thread.\r\nI made a draft of an approach to respond to Mr.Vondra's comments.\r\nWould you check whether my draft is right or not?\r\n\r\n--My draft\r\n> 1) It's not clear to me how could this get extended to aggregates with \r\n> more complex aggregate states, to support e.g. 
avg() and similar \r\n> fairly common aggregates.\r\nWe add a special aggregate function for every aggregate function (hereafter we call this src) which supports partial aggregation.\r\nThe following are the differences between the src and the special aggregate function.\r\ndifference1) result type\r\nThe result type is the same as the src's transtype if the src's transtype is not internal.\r\nOtherwise the result type is bytea.\r\n\r\ndifference2) final func\r\nThe final func does not exist if the src's transtype is not internal.\r\nOtherwise the final func returns a serialized value.\r\n\r\nFor example, let me call the special aggregate function of avg(float8) avg_p(float8).\r\nThe result value of avg_p is a float8 array which consists of count and summation.\r\navg_p does not have finalfunc.\r\n\r\nWe push down the special aggregate function instead of the src.\r\nFor example, we issue \"select avg_p(c) from t\" instead of \"select avg(c) from t\"\r\nin the above example.\r\n\r\nWe add a new column partialaggfn to pg_aggregate to get the oid of the special aggregate function from the src's oid.\r\nThis column is the oid of the special aggregate function which corresponds to the src.\r\n\r\nIf an aggregate function does not have any special aggregate function, then we do not push down any partial aggregation of the aggregate function.\r\n\r\n> 2) I'm not sure relying on aggpartialpushdownsafe without any version \r\n> checks etc. is sufficient. I mean, how would we know the remote node \r\n> has the same idea of representing the aggregate state. I wonder how \r\n> this aligns with assumptions we do e.g. 
for functions etc.\r\nWe add compatible server version information to pg_aggregate and the set of options of postgres_fdw's foreign server.\r\nWe check compatibility of an aggregate function using this information.\r\n\r\nAn additional column of pg_aggregate is compatibleversionrange.\r\nThis column is a range of postgresql server versions which have a compatible aggregate function.\r\nAdditional options of postgres_fdw's foreign server are serverversion and bwcompatibleversion.\r\nserverversion is the remote postgresql server version.\r\nbwcompatibleversion is the maximum version in which any aggregate function is compatible with the local node's one.\r\nOur version check passes if and only if at least one of the following conditions is true.\r\ncondition1) the option value of serverversion is in compatibleversionrange.\r\ncondition2) the local postgresql server version is between bwcompatibleversion and the option value of serverversion.\r\n\r\nWe can get the local postgresql server version from the PG_VERSION_NUM macro.\r\nWe use condition1 if the local postgresql server version is not more than the remote one,\r\nand use condition2 if the local postgresql server version is greater than the remote one.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n[1] https://commitfest.postgresql.org/32/1247/\r\n[2] https://commitfest.postgresql.org/39/3764/\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Mon, 1 Aug 2022 05:55:31 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Vondra, Mr.Pyhalov, Everyone.\n\nI discussed with Mr.Pyhalov about the above draft by directly sending mail to \n him (outside of pgsql-hackers). Mr.Pyhalov allowed me to update his patch \nalong with the above draft. So I updated Mr.Pyhalov's patch v10.\n\nI wrote my patch for discussion. 
\nMy patch passes regression tests which contain additional basic postgres_fdw tests\nfor my patch's feature. But my patch doesn't contain sufficient documents and tests.\nIf reviewers accept my approach, I will add documents and tests to my patch.\n\nThe following is my patch's readme. \n# I simplified the above draft.\n\n--readme of my patch\n1. interface\n1) pg_aggregate\nThere are the following additional columns.\na) partialaggfn\n data type : regproc.\n default value: zero (means invalid).\n description : This field refers to the special aggregate function (then we call\n this partialaggfunc)\n corresponding to the aggregation function (then we call src) which has aggfnoid.\n partialaggfunc is used for partial aggregation pushdown by postgres_fdw.\n The following are the differences between the src and the special aggregate function.\n difference1) result type\n The result type is the same as the src's transtype if the src's transtype\n is not internal.\n Otherwise the result type is bytea.\n difference2) final func\n The final func does not exist if the src's transtype is not internal.\n Otherwise the final func returns a serialized value.\n For example, there is a partialaggfunc avg_p_int4 which corresponds to avg(int4)\n whose aggtranstype is _int4.\n The result value of avg_p_int4 is a float8 array which consists of count and \n summation. avg_p_int4 does not have finalfunc.\n For another example, there is a partialaggfunc avg_p_int8 which corresponds to \n avg(int8) whose aggtranstype is internal.\n The result value of avg_p_int8 is a bytea serialized array which consists of count \n and summation. avg_p_int8 has finalfunc int8_avg_serialize which is the serialize function\n of avg(int8). This field is zero if there is no partialaggfunc.\n\nb) partialagg_minversion\n data type : int4.\n default value: zero (means current version).\n description : This field is the minimum PostgreSQL server version which has \n partialaggfunc. 
This field is used for checking compatibility of partialaggfunc.\n\nThe above fields are valid in tuples for builtin avg, sum, min, max, count.\nThere are additional records which correspond to partialaggfunc for avg, sum, min, max, \ncount.\n\n2) pg_proc\nThere are additional records which correspond to partialaggfunc for avg, sum, min, max, \ncount.\n\n3) postgres_fdw\npostgres_fdw has an additional foreign server option server_version. server_version is \nan integer value which means the remote server version number. The default value of server_version \nis zero. server_version is used for checking compatibility of partialaggfunc.\n\n2. feature\npostgres_fdw can push down partial aggregation of avg, sum, min, max, count.\nPartial aggregation pushdown is performed when the following two conditions are both true.\n condition1) partialaggfn is valid.\n condition2) server_version is not less than partialagg_minversion\npostgres_fdw pushes down the partialaggfunc instead of the src.\nFor example, we issue \"select avg_p_int4(c) from t\" instead of \"select avg(c) from t\"\nin the above example.\n\npostgres_fdw can push down every aggregate function which supports partial aggregation\nif you add a partialaggfunc corresponding to the aggregate function by the create aggregate \ncommand.\n\n3. difference between my patch and Mr.Pyhalov's v10 patch.\n1) In my patch postgres_fdw can push down partial aggregation of avg\n2) In my patch postgres_fdw can push down every aggregate function which supports partial \n aggregation if you add a partialaggfunc corresponding to the aggregate function.\n\n4. 
sample commands in psql\n\\c postgres\ndrop database tmp;\ncreate database tmp;\n\\c tmp\ncreate extension postgres_fdw;\ncreate server server_01 foreign data wrapper postgres_fdw options(host 'localhost', dbname 'tmp', server_version '160000', async_capable 'true');\ncreate user mapping for postgres server server_01 options(user 'postgres', password 'postgres');\ncreate server server_02 foreign data wrapper postgres_fdw options(host 'localhost', dbname 'tmp', server_version '160000', async_capable 'true');\ncreate user mapping for postgres server server_02 options(user 'postgres', password 'postgres');\n\ncreate table t(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval) partition by list (type);\n\ncreate table t1(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval);\ncreate table t2(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval);\n\ntruncate table t1;\ntruncate table t2;\ninsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 1, 1.1, 1, cast('1 seconds' as interval) from generate_series(1, 100000, 1) t;\ninsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 2, 2.1, 1, cast('2 seconds' as interval) from generate_series(1, 100000, 1) t;\ninsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 1, 1.1, 2, cast('1 seconds' as interval) from generate_series(1, 100000, 1) t;\ninsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 2, 2.1, 2, cast('2 seconds' as interval) from generate_series(1, 100000, 1) t;\n\ncreate foreign table f_t1 partition of t for values in (1) server server_01 options(table_name 't1');\ncreate foreign table f_t2 partition of t for values in (2) server server_02 options(table_name 't2');\n\nset enable_partitionwise_aggregate = on;\nexplain (verbose, costs 
off) select avg(total::int4), avg(total::int8) from t;\nselect avg(total::int4), avg(total::int8) from t;\n\nSincerely yours,\nYuuki Fujii\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Tue, 22 Nov 2022 01:01:55 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Nov 21, 2022 at 5:02 PM Fujii.Yuki@df.MitsubishiElectric.co.jp <\nFujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n\n> Hi Mr.Vondra, Mr.Pyhalov, Everyone.\n>\n> I discussed with Mr.Pyhalov about the above draft by directly sending mail\n> to\n> him(outside of pgsql-hackers). Mr.Pyhalov allowed me to update his patch\n> along with the above draft. So I update Mr.Pyhalov's patch v10.\n>\n> I wrote my patch for discussion.\n> My patch passes regression tests which contains additional basic\n> postgres_fdw tests\n> for my patch's feature. But my patch doesn't contain sufficient documents\n> and tests.\n> If reviewers accept my approach, I will add documents and tests to my\n> patch.\n>\n> The following is a my patch's readme.\n> # I simplified the above draft.\n>\n> --readme of my patch\n> 1. 
interface\n> 1) pg_aggregate\n> There are the following additional columns.\n> a) partialaggfn\n> data type : regproc.\n> default value: zero(means invalid).\n> description : This field refers to the special aggregate function(then\n> we call\n> this partialaggfunc)\n> corresponding to aggregation function(then we call src) which has\n> aggfnoid.\n> partialaggfunc is used for partial aggregation pushdown by\n> postgres_fdw.\n> The followings are differences between the src and the special\n> aggregate function.\n> difference1) result type\n> The result type is same as the src's transtype if the src's\n> transtype\n> is not internal.\n> Otherwise the result type is bytea.\n> difference2) final func\n> The final func does not exist if the src's transtype is not\n> internal.\n> Otherwize the final func returns serialized value.\n> For example, there is a partialaggfunc avg_p_int4 which corresponds to\n> avg(int4)\n> whose aggtranstype is _int4.\n> The result value of avg_p_int4 is a float8 array which consists of\n> count and\n> summation. avg_p_int4 does not have finalfunc.\n> For another example, there is a partialaggfunc avg_p_int8 which\n> corresponds to\n> avg(int8) whose aggtranstype is internal.\n> The result value of avg_p_int8 is a bytea serialized array which\n> consists of count\n> and summation. avg_p_int8 has finalfunc int8_avg_serialize which is\n> serialize function\n> of avg(int8). This field is zero if there is no partialaggfunc.\n>\n> b) partialagg_minversion\n> data type : int4.\n> default value: zero(means current version).\n> description : This field is the minimum PostgreSQL server version which\n> has\n> partialaggfunc. 
This field is used for checking compatibility of\n> partialaggfunc.\n>\n> The above fields are valid in tuples for builtin avg, sum, min, max, count.\n> There are additional records which correspond to partialaggfunc for avg,\n> sum, min, max,\n> count.\n>\n> 2) pg_proc\n> There are additional records which correspond to partialaggfunc for avg,\n> sum, min, max,\n> count.\n>\n> 3) postgres_fdw\n> postgres_fdw has an additional foreign server option server_version.\n> server_version is\n> integer value which means remote server version number. Default value of\n> server_version\n> is zero. server_version is used for checking compatibility of\n> partialaggfunc.\n>\n> 2. feature\n> postgres_fdw can pushdown partial aggregation of avg, sum, min, max, count.\n> Partial aggregation pushdown is fine when the following two conditions are\n> both true.\n> condition1) partialaggfn is valid.\n> condition2) server_version is not less than partialagg_minversion\n> postgres_fdw executes pushdown the patialaggfunc instead of a src.\n> For example, we issue \"select avg_p_int4(c) from t\" instead of \"select\n> avg(c) from t\"\n> in the above example.\n>\n> postgres_fdw can pushdown every aggregate function which supports partial\n> aggregation\n> if you add a partialaggfunc corresponding to the aggregate function by\n> create aggregate\n> command.\n>\n> 3. difference between my patch and Mr.Pyhalov's v10 patch.\n> 1) In my patch postgres_fdw can pushdown partial aggregation of avg\n> 2) In my patch postgres_fdw can pushdown every aggregate function which\n> supports partial\n> aggregation if you add a partialaggfunc corresponding to the aggregate\n> function.\n>\n> 4. 
sample commands in psql\n> \\c postgres\n> drop database tmp;\n> create database tmp;\n> \\c tmp\n> create extension postgres_fdw;\n> create server server_01 foreign data wrapper postgres_fdw options(host\n> 'localhost', dbname 'tmp', server_version '160000', async_capable 'true');\n> create user mapping for postgres server server_01 options(user 'postgres',\n> password 'postgres');\n> create server server_02 foreign data wrapper postgres_fdw options(host\n> 'localhost', dbname 'tmp', server_version '160000', async_capable 'true');\n> create user mapping for postgres server server_02 options(user 'postgres',\n> password 'postgres');\n>\n> create table t(dt timestamp, id int4, name text, total int4, val float4,\n> type int4, span interval) partition by list (type);\n>\n> create table t1(dt timestamp, id int4, name text, total int4, val float4,\n> type int4, span interval);\n> create table t2(dt timestamp, id int4, name text, total int4, val float4,\n> type int4, span interval);\n>\n> truncate table t1;\n> truncate table t2;\n> insert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as\n> interval), t % 100, 'hoge' || t, 1, 1.1, 1, cast('1 seconds' as interval)\n> from generate_series(1, 100000, 1) t;\n> insert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as\n> interval), t % 100, 'hoge' || t, 2, 2.1, 1, cast('2 seconds' as interval)\n> from generate_series(1, 100000, 1) t;\n> insert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as\n> interval), t % 100, 'hoge' || t, 1, 1.1, 2, cast('1 seconds' as interval)\n> from generate_series(1, 100000, 1) t;\n> insert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as\n> interval), t % 100, 'hoge' || t, 2, 2.1, 2, cast('2 seconds' as interval)\n> from generate_series(1, 100000, 1) t;\n>\n> create foreign table f_t1 partition of t for values in (1) server\n> server_01 options(table_name 't1');\n> create foreign table f_t2 partition of t for values in (2) server\n> server_02 
options(table_name 't2');\n>\n> set enable_partitionwise_aggregate = on;\n> explain (verbose, costs off) select avg(total::int4), avg(total::int8)\n> from t;\n> select avg(total::int4), avg(total::int8) from t;\n>\n> Sincerely yours,\n> Yuuki Fujii\n> --\n> Yuuki Fujii\n> Information Technology R&D Center Mitsubishi Electric Corporation\n>\n\nHi,\nFor partial_agg_compatible :\n\n+ * Check that partial aggregate agg has compatibility\n\nIf the `agg` refers to func parameter, the parameter name is aggform\n\n+ int32 partialagg_minversion = PG_VERSION_NUM;\n+ if (aggform->partialagg_minversion ==\nPARTIALAGG_MINVERSION_DEFAULT) {\n+ partialagg_minversion = PG_VERSION_NUM;\n\nI am curious why the same variable is assigned the same value twice. It\nseems the if block is redundant.\n\n+ if ((fpinfo->server_version >= partialagg_minversion)) {\n+ compatible = true;\n\nThe above can be simplified as: return fpinfo->server_version >=\npartialagg_minversion;\n\nCheers", "msg_date": "Mon, 21 Nov 2022 21:00:01 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Yu.\r\n\r\nThank you for comments.\r\n\r\n> + * Check that partial aggregate agg has compatibility\r\n> \r\n> If the `agg` refers to func parameter, the parameter name is aggform\r\nI fixed the above typo and made the above comment easy to understand\r\nNew comment is \"Check that partial aggregate function of aggform exsits in remote\"\r\n\r\n> + int32 partialagg_minversion = PG_VERSION_NUM;\r\n> + if (aggform->partialagg_minversion ==\r\n> PARTIALAGG_MINVERSION_DEFAULT) {\r\n> + partialagg_minversion = PG_VERSION_NUM;\r\n> \r\n> \r\n> I am curious why the same variable is assigned the same value twice. 
It seems\r\n> the if block is redundant.\r\n> \r\n> + if ((fpinfo->server_version >= partialagg_minversion)) {\r\n> + compatible = true;\r\n> \r\n> \r\n> The above can be simplified as: return fpinfo->server_version >=\r\n> partialagg_minversion;\r\nI fixed according to your comment.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n\r\n> -----Original Message-----\r\n> From: Ted Yu <yuzhihong@gmail.com>\r\n> Sent: Tuesday, November 22, 2022 2:00 PM\r\n> To: Fujii Yuki/藤井 雄規(MELCO/情報総研 DM最適G)\r\n> <Fujii.Yuki@df.MitsubishiElectric.co.jp>\r\n> Cc: Alexander Pyhalov <a.pyhalov@postgrespro.ru>; Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com>; PostgreSQL-development\r\n> <pgsql-hackers@postgresql.org>; Andres Freund <andres@anarazel.de>;\r\n> Zhihong Yu <zyu@yugabyte.com>; Julien Rouhaud <rjuju123@gmail.com>;\r\n> Daniel Gustafsson <daniel@yesql.se>; Ilya Gladyshev\r\n> <i.gladyshev@postgrespro.ru>\r\n> Subject: [CAUTION!! freemail] Re: Partial aggregates pushdown\r\n> \r\n> \r\n> \r\n> On Mon, Nov 21, 2022 at 5:02 PM Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <mailto:Fujii.Yuki@df.MitsubishiElectric.co.jp>\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp\r\n> <mailto:Fujii.Yuki@df.mitsubishielectric.co.jp> > wrote:\r\n> \r\n> \r\n> \tHi Mr.Vondra, Mr.Pyhalov, Everyone.\r\n> \r\n> \tI discussed with Mr.Pyhalov about the above draft by directly sending\r\n> mail to\r\n> \t him(outside of pgsql-hackers). Mr.Pyhalov allowed me to update his\r\n> patch\r\n> \talong with the above draft. So I update Mr.Pyhalov's patch v10.\r\n> \r\n> \tI wrote my patch for discussion.\r\n> \tMy patch passes regression tests which contains additional basic\r\n> postgres_fdw tests\r\n> \tfor my patch's feature. 
But my patch doesn't contain sufficient\r\n> documents and tests.\r\n> \tIf reviewers accept my approach, I will add documents and tests to my\r\n> patch.\r\n> \r\n> \tThe following is a my patch's readme.\r\n> \t# I simplified the above draft.\r\n> \r\n> \t--readme of my patch\r\n> \t1. interface\r\n> \t1) pg_aggregate\r\n> \tThere are the following additional columns.\r\n> \ta) partialaggfn\r\n> \t data type : regproc.\r\n> \t default value: zero(means invalid).\r\n> \t description : This field refers to the special aggregate\r\n> function(then we call\r\n> \t this partialaggfunc)\r\n> \t corresponding to aggregation function(then we call src) which has\r\n> aggfnoid.\r\n> \t partialaggfunc is used for partial aggregation pushdown by\r\n> postgres_fdw.\r\n> \t The followings are differences between the src and the special\r\n> aggregate function.\r\n> \t difference1) result type\r\n> \t The result type is same as the src's transtype if the src's\r\n> transtype\r\n> \t is not internal.\r\n> \t Otherwise the result type is bytea.\r\n> \t difference2) final func\r\n> \t The final func does not exist if the src's transtype is not\r\n> internal.\r\n> \t Otherwize the final func returns serialized value.\r\n> \t For example, there is a partialaggfunc avg_p_int4 which\r\n> corresponds to avg(int4)\r\n> \t whose aggtranstype is _int4.\r\n> \t The result value of avg_p_int4 is a float8 array which consists of\r\n> count and\r\n> \t summation. avg_p_int4 does not have finalfunc.\r\n> \t For another example, there is a partialaggfunc avg_p_int8 which\r\n> corresponds to\r\n> \t avg(int8) whose aggtranstype is internal.\r\n> \t The result value of avg_p_int8 is a bytea serialized array which\r\n> consists of count\r\n> \t and summation. avg_p_int8 has finalfunc int8_avg_serialize\r\n> which is serialize function\r\n> \t of avg(int8). 
This field is zero if there is no partialaggfunc.\r\n> \r\n> \tb) partialagg_minversion\r\n> \t data type : int4.\r\n> \t default value: zero(means current version).\r\n> \t description : This field is the minimum PostgreSQL server version\r\n> which has\r\n> \t partialaggfunc. This field is used for checking compatibility of\r\n> partialaggfunc.\r\n> \r\n> \tThe above fields are valid in tuples for builtin avg, sum, min, max,\r\n> count.\r\n> \tThere are additional records which correspond to partialaggfunc for\r\n> avg, sum, min, max,\r\n> \tcount.\r\n> \r\n> \t2) pg_proc\r\n> \tThere are additional records which correspond to partialaggfunc for\r\n> avg, sum, min, max,\r\n> \tcount.\r\n> \r\n> \t3) postgres_fdw\r\n> \tpostgres_fdw has an additional foreign server option server_version.\r\n> server_version is\r\n> \tinteger value which means remote server version number. Default\r\n> value of server_version\r\n> \tis zero. server_version is used for checking compatibility of\r\n> partialaggfunc.\r\n> \r\n> \t2. feature\r\n> \tpostgres_fdw can pushdown partial aggregation of avg, sum, min, max,\r\n> count.\r\n> \tPartial aggregation pushdown is fine when the following two\r\n> conditions are both true.\r\n> \t condition1) partialaggfn is valid.\r\n> \t condition2) server_version is not less than partialagg_minversion\r\n> \tpostgres_fdw executes pushdown the patialaggfunc instead of a src.\r\n> \tFor example, we issue \"select avg_p_int4(c) from t\" instead of \"select\r\n> avg(c) from t\"\r\n> \tin the above example.\r\n> \r\n> \tpostgres_fdw can pushdown every aggregate function which supports\r\n> partial aggregation\r\n> \tif you add a partialaggfunc corresponding to the aggregate function by\r\n> create aggregate\r\n> \tcommand.\r\n> \r\n> \t3. 
difference between my patch and Mr.Pyhalov's v10 patch.\r\n> \t1) In my patch postgres_fdw can pushdown partial aggregation of avg\r\n> \t2) In my patch postgres_fdw can pushdown every aggregate function\r\n> which supports partial\r\n> \t aggregation if you add a partialaggfunc corresponding to the\r\n> aggregate function.\r\n> \r\n> \t4. sample commands in psql\r\n> \t\\c postgres\r\n> \tdrop database tmp;\r\n> \tcreate database tmp;\r\n> \t\\c tmp\r\n> \tcreate extension postgres_fdw;\r\n> \tcreate server server_01 foreign data wrapper postgres_fdw\r\n> options(host 'localhost', dbname 'tmp', server_version '160000', async_capable\r\n> 'true');\r\n> \tcreate user mapping for postgres server server_01 options(user\r\n> 'postgres', password 'postgres');\r\n> \tcreate server server_02 foreign data wrapper postgres_fdw\r\n> options(host 'localhost', dbname 'tmp', server_version '160000', async_capable\r\n> 'true');\r\n> \tcreate user mapping for postgres server server_02 options(user\r\n> 'postgres', password 'postgres');\r\n> \r\n> \tcreate table t(dt timestamp, id int4, name text, total int4, val float4, type\r\n> int4, span interval) partition by list (type);\r\n> \r\n> \tcreate table t1(dt timestamp, id int4, name text, total int4, val float4,\r\n> type int4, span interval);\r\n> \tcreate table t2(dt timestamp, id int4, name text, total int4, val float4,\r\n> type int4, span interval);\r\n> \r\n> \ttruncate table t1;\r\n> \ttruncate table t2;\r\n> \tinsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as\r\n> interval), t % 100, 'hoge' || t, 1, 1.1, 1, cast('1 seconds' as interval) from\r\n> generate_series(1, 100000, 1) t;\r\n> \tinsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as\r\n> interval), t % 100, 'hoge' || t, 2, 2.1, 1, cast('2 seconds' as interval) from\r\n> generate_series(1, 100000, 1) t;\r\n> \tinsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as\r\n> interval), t % 100, 'hoge' || t, 1, 1.1, 2, 
cast('1 seconds' as interval) from\r\n> generate_series(1, 100000, 1) t;\r\n> \tinsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as\r\n> interval), t % 100, 'hoge' || t, 2, 2.1, 2, cast('2 seconds' as interval) from\r\n> generate_series(1, 100000, 1) t;\r\n> \r\n> \tcreate foreign table f_t1 partition of t for values in (1) server server_01\r\n> options(table_name 't1');\r\n> \tcreate foreign table f_t2 partition of t for values in (2) server server_02\r\n> options(table_name 't2');\r\n> \r\n> \tset enable_partitionwise_aggregate = on;\r\n> \texplain (verbose, costs off) select avg(total::int4), avg(total::int8) from\r\n> t;\r\n> \tselect avg(total::int4), avg(total::int8) from t;\r\n> \r\n> \tSincerely yours,\r\n> \tYuuki Fujii\r\n> \t--\r\n> \tYuuki Fujii\r\n> \tInformation Technology R&D Center Mitsubishi Electric Corporation\r\n> \r\n> \r\n> \r\n> Hi,\r\n> For partial_agg_compatible :\r\n> \r\n> + * Check that partial aggregate agg has compatibility\r\n> \r\n> If the `agg` refers to func parameter, the parameter name is aggform\r\n> \r\n> + int32 partialagg_minversion = PG_VERSION_NUM;\r\n> + if (aggform->partialagg_minversion ==\r\n> PARTIALAGG_MINVERSION_DEFAULT) {\r\n> + partialagg_minversion = PG_VERSION_NUM;\r\n> \r\n> \r\n> I am curious why the same variable is assigned the same value twice. It seems\r\n> the if block is redundant.\r\n> \r\n> + if ((fpinfo->server_version >= partialagg_minversion)) {\r\n> + compatible = true;\r\n> \r\n> \r\n> The above can be simplified as: return fpinfo->server_version >=\r\n> partialagg_minversion;\r\n> \r\n> Cheers", "msg_date": "Tue, 22 Nov 2022 09:11:13 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: [CAUTION!! 
freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Nov 22, 2022 at 1:11 AM Fujii.Yuki@df.MitsubishiElectric.co.jp <\nFujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n\n> Hi Mr.Yu.\n>\n> Thank you for comments.\n>\n> > + * Check that partial aggregate agg has compatibility\n> >\n> > If the `agg` refers to func parameter, the parameter name is aggform\n> I fixed the above typo and made the above comment easy to understand\n> New comment is \"Check that partial aggregate function of aggform exsits in\n> remote\"\n>\n> > + int32 partialagg_minversion = PG_VERSION_NUM;\n> > + if (aggform->partialagg_minversion ==\n> > PARTIALAGG_MINVERSION_DEFAULT) {\n> > + partialagg_minversion = PG_VERSION_NUM;\n> >\n> >\n> > I am curious why the same variable is assigned the same value twice. It\n> seems\n> > the if block is redundant.\n> >\n> > + if ((fpinfo->server_version >= partialagg_minversion)) {\n> > + compatible = true;\n> >\n> >\n> > The above can be simplified as: return fpinfo->server_version >=\n> > partialagg_minversion;\n> I fixed according to your comment.\n>\n> Sincerely yours,\n> Yuuki Fujii\n>\n>\n> Hi,\nThanks for the quick response.", "msg_date": "Tue, 22 Nov 2022 02:51:44 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CAUTION!! freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2022-11-22 04:01:\n> Hi Mr.Vondra, Mr.Pyhalov, Everyone.\n> \n> I discussed with Mr.Pyhalov about the above draft by directly sending \n> mail to\n> him(outside of pgsql-hackers). Mr.Pyhalov allowed me to update his \n> patch\n> along with the above draft. So I update Mr.Pyhalov's patch v10.\n> \n\nHi, Yuki. Thank you for your work on this.\n\nI've looked through the patch. Overall I like this approach, but have \nthe following comments.\n\n1) Why should we require partialaggfn for min()/max()/count()? We could \njust use original functions for a lot of aggregates, and so it would be \npossible to push down some partial aggregates to older servers. I'm not \nsure that it's a strict requirement, but a nice thing to think about. \nCan we use the function itself as partialaggfn, for example, for \nsum(int4)? For functions with internal aggtranstype (like sum(int8) it \nwould be more difficult).\n\n2) fpinfo->server_version is not aggregated, for example, when we form \nfpinfo in foreign_join_ok(), it seems we should spread it in more places \nin postgres_fdw.c.\n\n3) In add_foreign_grouping_paths() it seems there's no need for \nadditional argument, we can look at extra->patype. Also Assert() in \nadd_foreign_grouping_paths() will fire in --enable-cassert build.\n\n4) Why do you modify lookup_agg_function() signature? I don't see tests, \nshowing that it's necessary. 
Perhaps, more precise function naming \nshould be used instead?\n\n5) In tests:\n - Why version_num does have \"name\" type in \nf_alter_server_version() function?\n - You modify server_version option of 'loopback' server, but \ndon't reset it after test. This could affect further tests.\n - \"It's unsafe to push down partial aggregates with distinct\" \nin postgres_fdw.sql:3002 seems to be misleading.\n3001\n3002 -- It's unsafe to push down partial aggregates with distinct\n3003 SELECT f_alter_server_version('loopback', 'set', -1);\n3004 EXPLAIN (VERBOSE, COSTS OFF)\n3005 SELECT avg(d) FROM pagg_tab;\n3006 SELECT avg(d) FROM pagg_tab;\n3007 select * from pg_foreign_server;\n\n6) While looking at it, could cause a crash with something like\n\nCREATE TYPE COMPLEX AS (re FLOAT, im FLOAT);\n\nCREATE OR REPLACE FUNCTION\nsum_complex (sum complex, el complex)\nRETURNS complex AS\n$$\nDECLARE\ns complex;\nBEGIN\nif el is not null and sum is not null then\nsum.re:=coalesce(sum.re,0)+el.re;\nsum.im:=coalesce(sum.im,0)+el.im;\nend if;\nRETURN sum;\nEND;\n$$ LANGUAGE plpgSQL;\n\nCREATE AGGREGATE SUM(COMPLEX) (\nSFUNC=sum_complex,\nSTYPE=complex,\npartialaggfunc=aaaa,\npartialagg_minversion=1400\n);\n\nwhere aaaa - something nonexisting\n\n\nenforce_generic_type_consistency (actual_arg_types=0x56269873d200, \ndeclared_arg_types=0x0, nargs=1, rettype=0, allow_poly=true) at \nparse_coerce.c:2132\n2132 Oid decl_type = \ndeclared_arg_types[j];\n(gdb) bt\n#0 enforce_generic_type_consistency (actual_arg_types=0x56269873d200, \ndeclared_arg_types=0x0, nargs=1, rettype=0, allow_poly=true) at \nparse_coerce.c:2132\n#1 0x00005626960072de in lookup_agg_function (fnName=0x5626986715a0, \nnargs=1, input_types=0x56269873d200, variadicArgType=0, \nrettype=0x7ffd1a4045d8, only_normal=false) at pg_aggregate.c:916\n#2 0x00005626960064ba in AggregateCreate (aggName=0x562698671000 \"sum\", \naggNamespace=2200, replace=false, aggKind=110 'n', numArgs=1, \nnumDirectArgs=0, 
parameterTypes=0x56269873d1e8, allParameterTypes=0, \nparameterModes=0,\n parameterNames=0, parameterDefaults=0x0, variadicArgType=0, \naggtransfnName=0x5626986712c0, aggfinalfnName=0x0, aggcombinefnName=0x0, \naggserialfnName=0x0, aggdeserialfnName=0x0, aggmtransfnName=0x0, \naggminvtransfnName=0x0,\n aggmfinalfnName=0x0, partialaggfnName=0x5626986715a0, \nfinalfnExtraArgs=false, mfinalfnExtraArgs=false, finalfnModify=114 'r', \nmfinalfnModify=114 'r', aggsortopName=0x0, aggTransType=16390, \naggTransSpace=0, aggmTransType=0,\n aggmTransSpace=0, partialaggMinversion=1400, agginitval=0x0, \naggminitval=0x0, proparallel=117 'u') at pg_aggregate.c:582\n#3 0x00005626960a1e1c in DefineAggregate (pstate=0x56269869ab48, \nname=0x562698671038, args=0x5626986711b0, oldstyle=false, \nparameters=0x5626986713b0, replace=false) at aggregatecmds.c:450\n#4 0x000056269643061f in ProcessUtilitySlow (pstate=0x56269869ab48, \npstmt=0x562698671a68,\n queryString=0x5626986705d8 \"CREATE AGGREGATE SUM(COMPLEX) \n(\\nSFUNC=sum_complex,\\nSTYPE=COMPLEX,\\npartialaggfunc=scomplex,\\npartialagg_minversion=1400\\n);\", \ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n dest=0x562698671b48, qc=0x7ffd1a4053c0) at utility.c:1407\n#5 0x000056269642fbb4 in standard_ProcessUtility (pstmt=0x562698671a68, \nqueryString=0x5626986705d8 \"CREATE AGGREGATE SUM(COMPLEX) \n(\\nSFUNC=sum_complex,\\nSTYPE=COMPLEX,\\npartialaggfunc=scomplex,\\npartialagg_minversion=1400\\n);\",\n readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \nqueryEnv=0x0, dest=0x562698671b48, qc=0x7ffd1a4053c0) at utility.c:1074\n\n\nLater will look at it again.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 22 Nov 2022 19:05:29 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for comments.\r\n\r\n> I've looked through the 
patch. Overall I like this approach, but have\r\n> the following comments.\r\n> \r\n> 1) Why should we require partialaggfn for min()/max()/count()? We could\r\n> just use original functions for a lot of aggregates, and so it would be\r\n> possible to push down some partial aggregates to older servers. I'm not\r\n> sure that it's a strict requirement, but a nice thing to think about.\r\n> Can we use the function itself as partialaggfn, for example, for\r\n> sum(int4)?\r\n> For functions with internal aggtranstype (like sum(int8) it\r\n> would be more difficult).\r\nThank you. I realized that partial aggregate pushdown is fine \r\nwithout partialaggfn if original function has no aggfinalfn and \r\naggtranstype of it is not internal. So I have improved v12 by\r\nthis realization.\r\nHowever, v13 requires partialaggfn for aggregate if it has aggfinalfn or \r\naggtranstype of it is internal such as sum(int8). \r\n\r\n> 2) fpinfo->server_version is not aggregated, for example, when we form\r\n> fpinfo in foreign_join_ok(), it seems we should spread it in more places\r\n> in postgres_fdw.c.\r\nI have responded to your comment by adding copy of server_version in \r\nmerge_fdw_options.\r\n \r\n> 3) In add_foreign_grouping_paths() it seems there's no need for\r\n> additional argument, we can look at extra->patype. Also Assert() in\r\n> add_foreign_grouping_paths() will fire in --enable-cassert build.\r\nI have fixed according to your comment.\r\n\r\n> 4) Why do you modify lookup_agg_function() signature? I don't see tests,\r\n> showing that it's neccessary. Perhaps, more precise function naming\r\n> should be used instead?\r\nI realized that there is no need of modification lookup_agg_function().\r\nInstead, I use LookupFuncName().\r\n\r\n> 5) In tests:\r\n> - Why version_num does have \"name\" type in\r\n> f_alter_server_version() function?\r\n> - You modify server_version option of 'loopback' server, but\r\n> don't reset it after test. 
This could affect further tests.\r\n> - \"It's unsafe to push down partial aggregates with distinct\"\r\n> in postgres_fdw.sql:3002 seems to be misleading.\r\n> 3001\r\n> 3002 -- It's unsafe to push down partial aggregates with distinct\r\n> 3003 SELECT f_alter_server_version('loopback', 'set', -1);\r\nI have fixed according to your comment.\r\n\r\n> 6) While looking at it, could cause a crash with something like\r\nI have fixed this problem by using LookupFuncName() instead of lookup_agg_function.\r\n\r\nThe following is readme of v13.\r\n--readme of Partial aggregates push down v13\r\n1. interface\r\n1) pg_aggregate\r\nThere are the following additional columns.\r\na) partialaggfn\r\n data type : regproc.\r\n default value: zero(means invalid).\r\n description : This field refers to the special aggregate function(then we call\r\n this partialaggfunc)\r\n corresponding to aggregation function(then we call src) which has aggfnoid.\r\n partialaggfunc is used for partial aggregation pushdown by postgres_fdw.\r\n The following are the differences between the src and the special aggregate function.\r\n difference1) result type\r\n The result type is same as the src's transtype if the src's transtype\r\n is not internal.\r\n Otherwise the result type is bytea.\r\n difference2) final func\r\n The final func does not exist if the src's transtype is not internal.\r\n Otherwise the final func returns serialized value.\r\n For example, there is a partialaggfunc avg_p_int4 which corresponds to avg(int4)\r\n whose aggtranstype is _int4.\r\n The result value of avg_p_int4 is a float8 array which consists of count and \r\n summation. avg_p_int4 does not have finalfunc.\r\n For another example, there is a partialaggfunc avg_p_int8 which corresponds to \r\n avg(int8) whose aggtranstype is internal.\r\n The result value of avg_p_int8 is a bytea serialized array which consists of count \r\n and summation. 
avg_p_int8 has finalfunc int8_avg_serialize, which is the serialize function\r\n of avg(int8). This field is zero if there is no partialaggfunc.\r\n\r\nb) partialagg_minversion\r\n data type : int4.\r\n default value: zero (means current version).\r\n description : This field is the minimum PostgreSQL server version which has the\r\n partialaggfunc. This field is used for checking the compatibility of the partialaggfunc.\r\n\r\nThe above fields are valid in tuples for the builtin avg, sum, min, max, count.\r\nThere are additional records which correspond to partialaggfuncs for avg, sum, min, max, count.\r\n\r\n2) pg_proc\r\nThere are additional records which correspond to partialaggfuncs for avg, sum, min, max, count.\r\n\r\n3) postgres_fdw\r\npostgres_fdw has an additional foreign server option server_version. server_version is an\r\ninteger value which means the remote server version number. The default value of server_version\r\nis zero. server_version is used for checking the compatibility of the partialaggfunc.\r\n\r\n2. feature\r\nPartial aggregation pushdown is fine when either of the following conditions is true.\r\n condition1) the aggregate function does not have internal aggtranstype and has no aggfinalfn.\r\n condition2) the following two conditions are both true.\r\n  condition2-1) partialaggfn is valid.\r\n  condition2-2) server_version is not less than partialagg_minversion.\r\nIn these cases, postgres_fdw pushes down the partialaggfunc instead of the src.\r\npostgres_fdw can push down partial aggregation of an aggregate function which has internal\r\naggtranstype or has an aggfinalfn if the function is one of avg, sum(int8), sum(numeric).\r\n\r\nFor example, we issue \"select avg_p_int4(c) from t\" instead of \"select avg(c) from t\"\r\nin the above example.\r\n\r\npostgres_fdw can push down every aggregate function which supports partial aggregation\r\nif you add a partialaggfunc corresponding to the aggregate function with the CREATE AGGREGATE command.\r\n\r\n3. 
sample commands in psql\r\n\\c postgres\r\ndrop database tmp;\r\ncreate database tmp;\r\n\\c tmp\r\ncreate extension postgres_fdw;\r\ncreate server server_01 foreign data wrapper postgres_fdw options(host 'localhost', dbname 'tmp', server_version '160000', async_capable 'true'); \r\ncreate user mapping for postgres server server_01 options(user 'postgres', password 'postgres'); \r\ncreate server server_02 foreign data wrapper postgres_fdw options(host 'localhost', dbname 'tmp', server_version '160000', async_capable 'true'); \r\ncreate user mapping for postgres server server_02 options(user 'postgres', password 'postgres');\r\n\r\ncreate table t(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval) partition by list (type);\r\n\r\ncreate table t1(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval); \r\ncreate table t2(dt timestamp, id int4, name text, total int4, val float4, type int4, span interval);\r\n\r\ntruncate table t1;\r\ntruncate table t2;\r\ninsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 1, 1.1, 1, cast('1 seconds' as interval) from generate_series(1, 100000, 1) t; \r\ninsert into t1 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 2, 2.1, 1, cast('2 seconds' as interval) from generate_series(1, 100000, 1) t; \r\ninsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 1, 1.1, 2, cast('1 seconds' as interval) from generate_series(1, 100000, 1) t; \r\ninsert into t2 select timestamp'2020-01-01' + cast(t || ' seconds' as interval), t % 100, 'hoge' || t, 2, 2.1, 2, cast('2 seconds' as interval) from generate_series(1, 100000, 1) t;\r\n\r\ncreate foreign table f_t1 partition of t for values in (1) server server_01 options(table_name 't1'); \r\ncreate foreign table f_t2 partition of t for values in (2) server server_02 options(table_name 't2');\r\n\r\nset 
enable_partitionwise_aggregate = on; \r\nexplain (verbose, costs off) select avg(total::int4), avg(total::int8) from t; \r\nselect avg(total::int4), avg(total::int8) from t;\r\n--\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 30 Nov 2022 03:10:22 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi, Yuki.\n\n1) In previous version of the patch aggregates, which had partialaggfn, \nwere ok to push down. And it was a definite sign that aggregate can be \npushed down. Now we allow pushing down an aggregate, which prorettype is \nnot internal and aggfinalfn is not defined. Is it safe for all \nuser-defined (or builtin) aggregates, even if they are generally \nshippable? Aggcombinefn is executed locally and we check that aggregate \nfunction itself is shippable. Is it enough? Perhaps, we could use \npartialagg_minversion (like aggregates with partialagg_minversion == -1 \nshould not be pushed down) or introduce separate explicit flag?\n\n2) Do we really have to look at pg_proc in partial_agg_ok() and \ndeparseAggref()? Perhaps, looking at aggtranstype is enough?\n\n3) I'm not sure if CREATE AGGREGATE tests with invalid \nPARTIALAGGFUNC/PARTIALAGG_MINVERSION should be in postgres_fdw tests or \nbetter should be moved to src/test/regress/sql/create_aggregate.sql, as \nthey are not specific to postgres_fdw\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 30 Nov 2022 11:12:24 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\n\n> 1) In previous version of the patch aggregates, which had partialaggfn, were ok\n> to push down. 
And it was a definite sign that aggregate can be pushed down.\r\n> \r\n> the following comments continue.\r\nNow we allow pushing down an aggregate, which prorettype is not internal and\r\naggfinalfn is not defined. Is it safe for all user-defined (or builtin) aggregates,\r\neven if they are generally shippable? Aggcombinefn is executed locally and we\r\ncheck that aggregate function itself is shippable. Is it enough? Perhaps, we\r\ncould use partialagg_minversion (like aggregates with partialagg_minversion\r\n== -1 should not be pushed down) or introduce separate explicit flag?\r\n\r\n2) Do we really have to look at pg_proc in partial_agg_ok() and\r\ndeparseAggref()? Perhaps, looking at aggtranstype is enough?\r\n\r\n3) I'm not sure if CREATE AGGREGATE tests with invalid\r\nPARTIALAGGFUNC/PARTIALAGG_MINVERSION should be in postgres_fdw tests or\r\nbetter should be moved to src/test/regress/sql/create_aggregate.sql, as\r\nthey are not specific to postgres_fdw\r\n\r\n-- \r\nBest regards,\r\nAlexander Pyhalov,\r\nPostgres Professional", "msg_date": "Wed, 30 Nov 2022 11:12:24 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\n\n> 1) In previous version of the patch aggregates, which had partialaggfn, were ok\n> to push down. And it was a definite sign that aggregate can be pushed down.\n> Now we allow pushing down an aggregate, which prorettype is not internal and\n> aggfinalfn is not defined. Is it safe for all user-defined (or builtin) aggregates,\n> even if they are generally shippable? Aggcombinefn is executed locally and we\n> check that aggregate function itself is shippable. Is it enough? Perhaps, we\n> could use partialagg_minversion (like aggregates with partialagg_minversion\n> == -1 should not be pushed down) or introduce separate explicit flag?\nIn what case is partial aggregate pushdown unsafe for an aggregate which does not have internal aggtranstype\n and has no aggfinalfn?\nBy reading [1], I believe that if the aggcombinefn of such an aggregate receives the return values of the original\n aggregate functions from each remote server, then it must produce the same value that would have resulted\n from scanning all the input in a single operation.\n\n> 2) Do we really have to look at pg_proc in partial_agg_ok() and\n> deparseAggref()? Perhaps, looking at aggtranstype is enough?\nYou are right. I fixed this according to your comment.\n\n> 3) I'm not sure if CREATE AGGREGATE tests with invalid\n> PARTIALAGGFUNC/PARTIALAGG_MINVERSION should be in postgres_fdw\n> tests or better should be moved to src/test/regress/sql/create_aggregate.sql,\n> as they are not specific to postgres_fdw\nThank you. 
I moved these tests to src/test/regress/sql/create_aggregate.sql.\n\n[1] https://www.postgresql.org/docs/15/xaggr.html#XAGGR-PARTIAL-AGGREGATES\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 30 Nov 2022 10:01:41 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote 2022-11-30 13:01:\n> Hi Mr.Pyhalov.\n> \n>> 1) In previous version of the patch aggregates, which had \n>> partialaggfn, were ok\n>> to push down. And it was a definite sign that aggregate can be pushed \n>> down.\n>> Now we allow pushing down an aggregate, which prorettype is not \n>> internal and\n>> aggfinalfn is not defined. Is it safe for all user-defined (or \n>> builtin) aggregates,\n>> even if they are generally shippable? Aggcombinefn is executed locally \n>> and we\n>> check that aggregate function itself is shippable. Is it enough? \n>> Perhaps, we\n>> could use partialagg_minversion (like aggregates with \n>> partialagg_minversion\n>> == -1 should not be pushed down) or introduce separate explicit flag?\n> In what case is partial aggregate pushdown unsafe for an aggregate which\n> does not have internal aggtranstype\n> and has no aggfinalfn?\n> By reading [1], I believe that if the aggcombinefn of such an aggregate\n> receives the return values of the original\n> aggregate functions from each remote server, then it must produce the same value\n> that would have resulted\n> from scanning all the input in a single operation.\n> \n\nOne more issue I started to think about - now we don't check \npartialagg_minversion for \"simple\" aggregates at all. Is that correct? It \nseems that, for example, we could try to push down bit_or(int8) to old \nservers, but it didn't exist, for example, in 8.4. 
I think it's a \nbroader issue (it would also be the case already if we push down \naggregates) and shouldn't be fixed here. But there is an issue - \nis_shippable() is too optimistic.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 30 Nov 2022 16:30:24 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote 2022-11-30 13:01:\n\n>> 2) Do we really have to look at pg_proc in partial_agg_ok() and\n>> deparseAggref()? Perhaps, looking at aggtranstype is enough?\n> You are right. I fixed this according to your comment.\n> \n\npartial_agg_ok() still looks at pg_proc.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:30:45 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\n\n> One more issue I started to think about - now we don't check\n> partialagg_minversion for \"simple\" aggregates at all. Is that correct? It seems that,\n> for example, we could try to push down bit_or(int8) to old servers, but it didn't\n> exist, for example, in 8.4. I think it's a broader issue (it would also be the case\n> already if we push down\n> aggregates) and shouldn't be fixed here. 
But there is an issue -\n> is_shippable() is too optimistic.\nI think it is correct for now.\nF.38.7 of [1] says \"A limitation however is that postgres_fdw generally assumes that \nimmutable built-in functions and operators are safe to send to the remote server for \nexecution, if they appear in a WHERE clause for a foreign table.\" and says that we can \navoid this limitation by rewriting the query.\nIt looks like postgres_fdw follows this policy in the case of UPPERREL_GROUP_AGG aggregate pushdown.\nIf an aggregate does not have internal aggtranstype and has no aggfinalfn,\nits partialaggfn is equal to itself.\nSo I think that it is adequate to follow this policy in the case of partial aggregate pushdown\n for such aggregates.\n\n> >> 2) Do we really have to look at pg_proc in partial_agg_ok() and\n> >> deparseAggref()? Perhaps, looking at aggtranstype is enough?\n> > You are right. I fixed this according to your comment.\n> >\n> \n> partial_agg_ok() still looks at pg_proc.\nSorry for taking up your time. I fixed it.\n\n[1] https://www.postgresql.org/docs/current/postgres-fdw.html\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Thu, 1 Dec 2022 02:23:28 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote 2022-12-01 05:23:\n> Hi Mr.Pyhalov.\n> \nHi.\n\nAttaching minor fixes. 
I haven't proof-read all comments (but perhaps, \nthey need attention from some native speaker).\n\nTested it with queries from \nhttps://github.com/swarm64/s64da-benchmark-toolkit, works as expected.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Thu, 01 Dec 2022 19:36:12 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\n> Attaching minor fixes. I haven't proof-read all comments (but perhaps, they\r\n> need attention from some native speaker).\r\nThank you. I fixed according to your patch.\r\nAnd I have proof-read all comments and messages.\r\n\r\n> Tested it with queries from\r\n> https://github.com/swarm64/s64da-benchmark-toolkit, works as expected.\r\nThank you for the additional tests.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 5 Dec 2022 02:03:49 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi,\n\nOn 2022-12-05 02:03:49 +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Attaching minor fixes. I haven't proof-read all comments (but perhaps, they\n> > need attention from some native speaker).\n> Thank you. 
I fixed according to your patch.\n> And I fixed have proof-read all comments and messages.\n\ncfbot complains about some compiler warnings when building with clang:\nhttps://cirrus-ci.com/task/6606268580757504\n\ndeparse.c:3459:22: error: equality comparison with extraneous parentheses [-Werror,-Wparentheses-equality]\n if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~\ndeparse.c:3459:22: note: remove extraneous parentheses around the comparison to silence this warning\n if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n ~ ^ ~\ndeparse.c:3459:22: note: use '=' to turn this equality comparison into an assignment\n if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n ^~\n =\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 10:59:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Freund.\n\n> cfbot complains about some compiler warnings when building with clang:\n> https://cirrus-ci.com/task/6606268580757504\n> \n> deparse.c:3459:22: error: equality comparison with extraneous parentheses\n> [-Werror,-Wparentheses-equality]\n> if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~\n> deparse.c:3459:22: note: remove extraneous parentheses around the\n> comparison to silence this warning\n> if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> ~ ^ ~\n> deparse.c:3459:22: note: use '=' to turn this equality comparison into an\n> assignment\n> if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> ^~\n> =\nI fixed this error.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Thu, 15 Dec 2022 22:23:05 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Dec 15, 2022 at 10:23:05PM +0000, 
Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Freund.\n> \n> > cfbot complains about some compiler warnings when building with clang:\n> > https://cirrus-ci.com/task/6606268580757504\n> > \n> > deparse.c:3459:22: error: equality comparison with extraneous parentheses\n> > [-Werror,-Wparentheses-equality]\n> > if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> > ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~\n> > deparse.c:3459:22: note: remove extraneous parentheses around the\n> > comparison to silence this warning\n> > if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> > ~ ^ ~\n> > deparse.c:3459:22: note: use '=' to turn this equality comparison into an\n> > assignment\n> > if ((node->aggsplit == AGGSPLIT_SIMPLE)) {\n> > ^~\n> > =\n> I fixed this error.\n\nConsidering we only have a week left before feature freeze, I wanted to\nreview the patch from this commitfest item:\n\n\thttps://commitfest.postgresql.org/42/4019/\n\nThe most recent patch attached.\n\nThis feature has been in development since 2021, and it is something\nthat will allow new workloads for Postgres, specifically data warehouse\nsharding workloads.\n\nWe currently allow parallel aggregates when the table is on the same\nmachine, and we allow partitionwise aggregates on FDWs only with GROUP BY\nkeys matching partition keys. The first is possible since we can share\ndata structures between background workers, and the second is possible\nbecause if the GROUP BY includes the partition key, we are really just\nappending aggregate rows, not combining aggregate computations.\n\nWhat we can't do without this patch is to push aggregates that require\npartial aggregate computations (no partition key GROUP BY) to FDW\npartitions because we don't have a clean way to pass such information\nfrom the remote FDW server to the finalize backend. I think that is\nwhat this patch does.\n\nFirst, am I correct? 
Second, how far away is this from being committable\nand/or what work needs to be done to get it committable, either for PG 16\nor 17?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.", "msg_date": "Thu, 30 Mar 2023 19:41:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian\n\n> First, am I correct?\nYes, you are correct. This patch uses new special aggregate functions for partial aggregates\n(hereafter called partialaggfuncs).\n\n> Second, how far away is this from being committable\n> and/or what work needs to be done to get it committable, either for PG 16 or 17?\nI believe there are three items: it is not clear whether 1. and 2. are necessary; 3. is clearly necessary.\nI would like to hear the opinions of the development community on whether or not 1. and 2. need to be addressed.\n\n1. Making partialaggfunc user-defined function\nIn v17, I make partialaggfuncs built-in functions.\nBecause of this approach, v17 changes the specification of the BKI file and pg_aggregate.\nFor now, partialaggfuncs are needed only by postgres_fdw, which is just an extension of PostgreSQL.\nIn the future, when revising the specifications for BKI files and pg_aggregate while modifying existing PostgreSQL functions,\nit will be necessary to align them with this patch's changes.\nI am concerned that this may be undesirable.\nSo I am thinking that v17 should be modified to make partialaggfuncs user-defined functions.\n\n2. 
Automation of creating definition of partialaggfuncs\nIn developing v17, I manually created the definitions of partialaggfuncs for avg, min, max, sum, count.\nI am concerned that this may be undesirable.\nSo I am thinking that v17 should be modified to automate creating the definitions of partialaggfuncs\nfor all built-in aggregate functions.\n\n3. Documentation\nI need to add an explanation of partialaggfunc to the documents on postgres_fdw and other places.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Fri, 31 Mar 2023 05:49:21 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Mar 31, 2023 at 05:49:21AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Momjian\n> \n> > First, am I correct?\n> Yes, you are correct. This patch uses new special aggregate functions for partial aggregates\n> (hereafter called partialaggfuncs).\n\nFirst, my apologies for not addressing this sooner. I was so focused on\nmy own tasks that I didn't realize this very important patch was not\ngetting attention. I will try my best to get it into PG 17.\n\nWhat amazes me is that you didn't need to create _any_ actual aggregate\nfunctions. Rather, you just needed to hook existing functions into the\naggregate tables for partial FDW execution.\n\n> > Second, how far away is this from being committable\n> > and/or what work needs to be done to get it committable, either for PG 16 or 17?\n> I believe there are three items: it is not clear whether 1. and 2. are necessary; 3. is clearly necessary.\n> I would like to hear the opinions of the development community on whether or not 1. and 2. need to be addressed.\n> \n> 1. 
Making partialaggfunc user-defined function\n> In v17, I make partialaggfuncs built-in functions.\n> Because of this approach, v17 changes the specification of the BKI file and pg_aggregate.\n> For now, partialaggfuncs are needed only by postgres_fdw, which is just an extension of PostgreSQL.\n> In the future, when revising the specifications for BKI files and pg_aggregate while modifying existing PostgreSQL functions,\n> it will be necessary to align them with this patch's changes.\n> I am concerned that this may be undesirable.\n> So I am thinking that v17 should be modified to make partialaggfuncs user-defined functions.\n\nI think we have three possible cases for aggregates pushdown to FDWs:\n\n1) Postgres built-in aggregate functions\n2) Postgres user-defined & extension aggregate functions\n3) aggregate function calls to non-PG FDWs\n\nYour patch handles #1 by checking that the FDW Postgres version is the\nsame as the calling Postgres version. However, it doesn't check for\nextension versions, and frankly, I don't see how we could implement that\ncleanly without significant complexity.\n\nI suggest we remove the version check requirement --- instead just\ndocument that the FDW Postgres version should be the same or newer than\nthe calling Postgres server --- that way, we can assume that whatever is\nin the system catalogs of the caller is in the receiving side. We\nshould add a GUC to turn off this optimization for cases where the FDW\nPostgres version is older than the caller. This handles case 1-2.\n\nFor case 3, I don't even know how much pushdown those do of _any_\naggregates to non-PG servers, let alone parallel FDW ones. Does anyone\nknow the details?\n\n> 2. 
Automation of creating definition of partialaggfuncs\n> In development of v17, I manually create definition of partialaggfuncs for avg, min, max, sum, count.\n> I am concerned that this may be undesirable.\n> So I am thinking that v17 should be modified to automate creating definition of partialaggfuncs\n> for all built-in aggregate functions.\n\nAre there any other builtin functions that need this? I think we can\njust provide documention for extensions on how to do this.\n\n> 3. Documentation\n> I need add explanation of partialaggfunc to documents on postgres_fdw and other places.\n\nI can help with that once we decide on the above.\n\nI think 'partialaggfn' should be named 'aggpartialfn' to match other\ncolumns in pg_aggregate.\n\nI am confused by these changes to pg_aggegate:\n\n+{ aggfnoid => 'sum_p_int8', aggtransfn => 'int8_avg_accum',\n+ aggfinalfn => 'int8_avg_serialize', aggcombinefn => 'int8_avg_combine',\n+ aggserialfn => 'int8_avg_serialize', aggdeserialfn => 'int8_avg_deserialize',\n+ aggtranstype => 'internal', aggtransspace => '48' },\n\n...\n\n+{ aggfnoid => 'sum_p_numeric', aggtransfn => 'numeric_avg_accum',\n+ aggfinalfn => 'numeric_avg_serialize', aggcombinefn => 'numeric_avg_combine',\n+ aggserialfn => 'numeric_avg_serialize',\n+ aggdeserialfn => 'numeric_avg_deserialize',\n+ aggtranstype => 'internal', aggtransspace => '128' },\n\nWhy are these marked as 'sum' but use 'avg' functions?\n\nIt would be good to explain exactly how this is diffent from background\nworker parallelism.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.", "msg_date": "Thu, 6 Apr 2023 21:59:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian\n\n> First, my apologies for not addressing this sooner. 
I was so focused on my\n> own tasks that I didn't realize this very important patch was not getting\n> attention. I will try my best to get it into PG 17.\nThank you very much for your comments. \nI will improve this patch for PG17.\nI believe that this patch will help us use PostgreSQL's built-in sharding for OLAP.\n\n> What amazes me is that you didn't need to create _any_ actual aggregate\n> functions. Rather, you just needed to hook existing functions into the\n> aggregate tables for partial FDW execution.\nYes. This patch enables partial aggregate pushdown using\nonly existing functions which belong to existing aggregate functions\nand are needed by parallel query (such as the state transition function and serialization function).\nThis patch does not need new types of functions belonging to aggregate functions\nand does not need new functions belonging to existing aggregate functions.\n\n> I suggest we remove the version check requirement --- instead just document\n> that the FDW Postgres version should be the same or newer than the calling\n> Postgres server --- that way, we can assume that whatever is in the system\n> catalogs of the caller is in the receiving side. \nThanks for the comment. I will modify this patch according to your comment.\n\n> We should add a GUC to turn off\n> this optimization for cases where the FDW Postgres version is older than the\n> caller. This handles case 1-2.\nThanks for the advice here too.\nI thought it would be more appropriate to add a foreign server option of\npostgres_fdw rather than adding a GUC.\nWould you mind if I ask you what you think about it?\n\n> > 2. 
Automation of creating definition of partialaggfuncs In developing\n> > v17, I manually created the definitions of partialaggfuncs for avg, min, max, sum,\n> count.\n> > I am concerned that this may be undesirable.\n> > So I am thinking that v17 should be modified to automate creating\n> > the definitions of partialaggfuncs for all built-in aggregate functions.\n> \n> Are there any other builtin functions that need this? I think we can just\n> provide documentation for extensions on how to do this.\nFor practical purposes, it is sufficient\nif partial aggregates for the above functions can be pushed down.\nI think you are right, it would be sufficient to document how to achieve\n partial aggregate pushdown for other built-in functions.\n\n> > 3. Documentation\n> > I need to add an explanation of partialaggfunc to the documents on postgres_fdw and\n> other places.\n> \n> I can help with that once we decide on the above.\nThank you. In the next version of this patch, I will add documents on postgres_fdw\nand other places. \n\n> I think 'partialaggfn' should be named 'aggpartialfn' to match other columns in\n> pg_aggregate.\nThanks for the comment. I will modify this patch according to your comment.\n\n> For case 3, I don't even know how much pushdown those do of _any_\n> aggregates to non-PG servers, let alone parallel FDW ones. 
Does anyone\n> know the details?\nTo allow partial aggregate pushdown for non-PG FDWs,\nI think we need to add pushdown logic to their FDWs for each function.\nFor example, we need to add the logic avg() -> sum()/count() to their FDWs for avg.\nTo allow parallel partial aggregates by non-PG FDWs,\nI think we need to add FDW Routines for Asynchronous Execution to their FDWs[1].\n\n> I am confused by these changes to pg_aggregate:\n> \n> +{ aggfnoid => 'sum_p_int8', aggtransfn => 'int8_avg_accum',\n> + aggfinalfn => 'int8_avg_serialize', aggcombinefn =>\n> +'int8_avg_combine',\n> + aggserialfn => 'int8_avg_serialize', aggdeserialfn =>\n> +'int8_avg_deserialize',\n> + aggtranstype => 'internal', aggtransspace => '48' },\n> \n> ...\n> \n> +{ aggfnoid => 'sum_p_numeric', aggtransfn => 'numeric_avg_accum',\n> + aggfinalfn => 'numeric_avg_serialize', aggcombinefn =>\n> +'numeric_avg_combine',\n> + aggserialfn => 'numeric_avg_serialize',\n> + aggdeserialfn => 'numeric_avg_deserialize',\n> + aggtranstype => 'internal', aggtransspace => '128' },\n> \n> Why are these marked as 'sum' but use 'avg' functions?\nThe reason is that sum(int8)/sum(numeric) share some functions with avg(int8)/avg(numeric),\nand sum_p_int8 is the aggpartialfn of sum(int8) and sum_p_numeric is the aggpartialfn of sum(numeric).\n\n--Part of avg(int8) in BKI file in PostgreSQL15.0[2].\n{ aggfnoid => 'avg(int8)', aggtransfn => 'int8_avg_accum',\n aggfinalfn => 'numeric_poly_avg', aggcombinefn => 'int8_avg_combine',\n aggserialfn => 'int8_avg_serialize', aggdeserialfn => 'int8_avg_deserialize',\n aggmtransfn => 'int8_avg_accum', aggminvtransfn => 'int8_avg_accum_inv',\n aggmfinalfn => 'numeric_poly_avg', aggtranstype => 'internal',\n aggtransspace => '48', aggmtranstype => 'internal', aggmtransspace => '48' },\n--\n\n--Part of sum(int8) in BKI file in PostgreSQL15.0[2].\n{ aggfnoid => 'sum(int8)', aggtransfn => 'int8_avg_accum',\n aggfinalfn => 'numeric_poly_sum', aggcombinefn => 'int8_avg_combine',\n aggserialfn => 
'int8_avg_serialize', aggdeserialfn => 'int8_avg_deserialize',\n aggmtransfn => 'int8_avg_accum', aggminvtransfn => 'int8_avg_accum_inv',\n aggmfinalfn => 'numeric_poly_sum', aggtranstype => 'internal',\n aggtransspace => '48', aggmtranstype => 'internal', aggmtransspace => '48' },\n--\n\n[1] https://www.postgresql.org/docs/15/fdw-callbacks.html#FDW-CALLBACKS-ASYNC\n[2] https://github.com/postgres/postgres/blob/REL_15_0/src/include/catalog/pg_aggregate.dat\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Fri, 7 Apr 2023 09:25:52 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Apr 7, 2023 at 09:25:52AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Momjian\n> \n> > First, my apologies for not addressing this sooner. I was so focused on my\n> > own tasks that I didn't realize this very important patch was not getting\n> > attention. I will try my best to get it into PG 17.\n> Thank you very much for your comments. \n> I will improve this patch for PG17.\n> I believe that this patch will help us use PostgreSQL's built-in sharding for OLAP.\n\nAgreed! Again, my apologies for not helping with this _much_ sooner. \nYou have done amazing work here.\n\n> > What amazes me is that you didn't need to create _any_ actual aggregate\n> > functions. Rather, you just needed to hook existing functions into the\n> > aggregate tables for partial FDW execution.\n> Yes. 
This patch enables partial aggregate pushdown using \n> only existing functions which belong to existing aggregate function\n> and are needed by parallel query(such as state transition function and serialization function).\n> This patch does not need new types of function belonging to aggregate functions\n> and does not need new functions belonging to existing aggregate functions.\n\nVery nice.\n\n> > I suggest we remove the version check requirement --- instead just document\n> > that the FDW Postgres version should be the same or newer than the calling\n> > Postgres server --- that way, we can assume that whatever is in the system\n> > catalogs of the caller is in the receiving side. \n> Thanks for the comment. I will modify this patch according to your comment.\n> \n> > We should add a GUC to turn off\n> > this optimization for cases where the FDW Postgres version is older than the\n> > caller. This handles case 1-2.\n> Thanks for the advice here too.\n> I thought it would be more appropriate to add a foregin server option of \n> postgres_fdw rather than adding GUC. \n> Would you mind if I ask you what you think about it?\n\nI like the GUC idea because it gives administrators a single place to\ncheck if the feature is enabled. However, I can imagine cases where you\nmight have multiple remote FDW servers and some might be older than the\nsending server.\n\nWhat I don't want is an error-prone setup where administrators have to\nremember what the per-server settings are. Based on your suggestion,\nlet's allow CREATE SERVER to have an option 'enable_async_aggregate' (is\nthat the right name?), which defaults to 'true' for _all_ servers, even\nthose that don't support async aggregates. With that, all FDW servers\nare enabled by default, and if the FDW extension supports async\naggregates, they will automatically be pushed down and will report an\nerror only if the remote FDW is too old to support it.\n\n> > > 2. 
Automation of creating definition of partialaggfuncs In development\n> > > of v17, I manually create definition of partialaggfuncs for avg, min, max, sum,\n> > count.\n> > > I am concerned that this may be undesirable.\n> > > So I am thinking that v17 should be modified to automate creating\n> > > definition of partialaggfuncs for all built-in aggregate functions.\n> > \n> > Are there any other builtin functions that need this? I think we can just\n> > provide documention for extensions on how to do this.\n> For practical purposes, it is sufficient \n> if partial aggregate for the above functions can be pushed down.\n> I think you are right, it would be sufficient to document how to achieve\n> partial aggregate pushdown for other built-in functions.\n\nUh, we actually want the patch to implement partial aggregate pushdown\nfor all builtin data types that can support it. Is that done? I think\nit is only extension aggregates, which we do not control, that need this\ndocumentation.\n\n> > > 3. Documentation\n> > > I need add explanation of partialaggfunc to documents on postgres_fdw and\n> > other places.\n> > \n> > I can help with that once we decide on the above.\n> Thank you. In the next verion of this patch, I will add documents on postgres_fdw\n> and other places. \n\nGood.\n\n> > I think 'partialaggfn' should be named 'aggpartialfn' to match other columns in\n> > pg_aggregate.\n> Thanks for the comment. I will modify this patch according to your comment.\n> \n> > For case 3, I don't even know how much pushdown those do of _any_\n> > aggregates to non-PG servers, let along parallel FDW ones. 
Does anyone\n> > know the details?\n> To allow partial aggregate pushdown for non-PG FDWs,\n> I think we need to add pushdown logic to their FDWs for each function.\n> For example, we need to add logic avg() -> sum()/count() to their FDWs for avg.\n> To allow parallel partial aggregate by non-PG FDWs,\n> I think we need to add FDW Routines for Asynchronous Execution to their FDWs[1].\n\nOkay, I think we can just implement this for 1-2 and let extensions\nworry about 3.\n\n> > I am confused by these changes to pg_aggegate:\n> > \n> > +{ aggfnoid => 'sum_p_int8', aggtransfn => 'int8_avg_accum',\n> > + aggfinalfn => 'int8_avg_serialize', aggcombinefn =>\n> > +'int8_avg_combine',\n> > + aggserialfn => 'int8_avg_serialize', aggdeserialfn =>\n> > +'int8_avg_deserialize',\n> > + aggtranstype => 'internal', aggtransspace => '48' },\n> > \n> > ...\n> > \n> > +{ aggfnoid => 'sum_p_numeric', aggtransfn => 'numeric_avg_accum',\n> > + aggfinalfn => 'numeric_avg_serialize', aggcombinefn =>\n> > +'numeric_avg_combine',\n> > + aggserialfn => 'numeric_avg_serialize',\n> > + aggdeserialfn => 'numeric_avg_deserialize',\n> > + aggtranstype => 'internal', aggtransspace => '128' },\n> > \n> > Why are these marked as 'sum' but use 'avg' functions?\n> This reason is that sum(int8)/sum(numeric) shares some functions with avg(int8)/avg(numeric),\n> and sum_p_int8 is aggpartialfn of sum(int8) and sum_p_numeric is aggpartialfn of sum(numeric).\n\nAh, I see this now, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. 
They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Fri, 7 Apr 2023 21:50:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> What I don't want is an error-prone setup where administrators have to\n> remember what the per-server settings are. Based on your suggestion,\n> let's allow CREATE SERVER to have an option 'enable_async_aggregate' (is\n> that the right name?), which defaults to 'true' for _all_ servers, even\n> those that don't support async aggregates.\n\nUh, what? Why would we not be able to tell from the remote server's\nversion number whether it has this ability?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 21:55:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Apr 7, 2023 at 09:55:00PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > What I don't want is an error-prone setup where administrators have to\n> > remember what the per-server settings are. Based on your suggestion,\n> > let's allow CREATE SERVER to have an option 'enable_async_aggregate' (is\n> > that the right name?), which defaults to 'true' for _all_ servers, even\n> > those that don't support async aggregates.\n> \n> Uh, what? 
Why would we not be able to tell from the remote server's\n> version number whether it has this ability?\n\nThat was covered here:\n\n\thttps://www.postgresql.org/message-id/ZC95C0%2BPVhVP3iax%40momjian.us\n\n\tI think we have three possible cases for aggregate pushdown to FDWs:\n\n\t1) Postgres built-in aggregate functions\n\t2) Postgres user-defined & extension aggregate functions\n\t3) aggregate functions calls to non-PG FDWs\n\n\tYour patch handles #1 by checking that the FDW Postgres version is the\n-->\tsame as the calling Postgres version. However, it doesn't check for\n-->\textension versions, and frankly, I don't see how we could implement that\n-->\tcleanly without significant complexity.\n\nThe issue is not a mismatch of postgres_fdw versions but the extension\nversions and whether the partial aggregate functions exist on the remote\nside, e.g., something like a PostGIS upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Fri, 7 Apr 2023 22:34:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Apr 7, 2023 at 09:55:00PM -0400, Tom Lane wrote:\n>> Uh, what? Why would we not be able to tell from the remote server's\n>> version number whether it has this ability?\n\n> The issue is not a mismatch of postgres_fdw versions but the extension\n> versions and whether the partial aggregate functions exist on the remote\n> side, e.g., something like a PostGIS upgrade.\n\npostgres_fdw has no business pushing down calls to non-builtin functions\nunless the user has explicitly authorized that via the existing\nwhitelisting mechanism. 
I think you're reinventing the wheel,\nand not very well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 22:44:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Apr 7, 2023 at 10:44:09PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, Apr 7, 2023 at 09:55:00PM -0400, Tom Lane wrote:\n> >> Uh, what? Why would we not be able to tell from the remote server's\n> >> version number whether it has this ability?\n> \n> > The issue is not a mismatch of postgres_fdw versions but the extension\n> > versions and whether the partial aggregate functions exist on the remote\n> > side, e.g., something like a PostGIS upgrade.\n> \n> postgres_fdw has no business pushing down calls to non-builtin functions\n> unless the user has explicitly authorized that via the existing\n> whitelisting mechanism. I think you're reinventing the wheel,\n> and not very well.\n\nThe patch has you assign an option at CREATE AGGREGATE time if it can do\npush down, and postgres_fdw checks that. What whitelisting mechanism\nare you talking about? async_capable?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Fri, 7 Apr 2023 22:53:53 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Apr 7, 2023 at 10:53:53PM -0400, Bruce Momjian wrote:\n> On Fri, Apr 7, 2023 at 10:44:09PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Fri, Apr 7, 2023 at 09:55:00PM -0400, Tom Lane wrote:\n> > >> Uh, what? 
Why would we not be able to tell from the remote server's\n> > >> version number whether it has this ability?\n> > \n> > > The issue is not a mismatch of postgres_fdw versions but the extension\n> > > versions and whether the partial aggregate functions exist on the remote\n> > > side, e.g., something like a PostGIS upgrade.\n> > \n> > postgres_fdw has no business pushing down calls to non-builtin functions\n> > unless the user has explicitly authorized that via the existing\n> > whitelisting mechanism. I think you're reinventing the wheel,\n> > and not very well.\n> \n> The patch has you assign an option at CREATE AGGREGATE time if it can do\n> push down, and postgres_fdw checks that. What whitelisting mechanism\n> are you talking about? async_capable?\n\nFYI, in the patch the CREATE AGGREGATE option is called PARTIALAGGFUNC.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Fri, 7 Apr 2023 23:04:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 2023-04-07 22:53:53 -0400, Bruce Momjian wrote:\n> On Fri, Apr 7, 2023 at 10:44:09PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Fri, Apr 7, 2023 at 09:55:00PM -0400, Tom Lane wrote:\n> > >> Uh, what? Why would we not be able to tell from the remote server's\n> > >> version number whether it has this ability?\n> > \n> > > The issue is not a mismatch of postgres_fdw versions but the extension\n> > > versions and whether the partial aggregate functions exist on the remote\n> > > side, e.g., something like a PostGIS upgrade.\n> > \n> > postgres_fdw has no business pushing down calls to non-builtin functions\n> > unless the user has explicitly authorized that via the existing\n> > whitelisting mechanism. 
I think you're reinventing the wheel,\n> > and not very well.\n> \n> The patch has you assign an option at CREATE AGGREGATE time if it can do\n> push down, and postgres_fdw checks that. What whitelisting mechanism\n> are you talking about? async_capable?\n\nextensions (string)\n\n This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. This option can only be specified for foreign servers, not per-table.\n\n When using the extensions option, it is the user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly.\n\n\n", "msg_date": "Fri, 7 Apr 2023 21:16:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Apr 7, 2023 at 09:16:14PM -0700, Andres Freund wrote:\n> On 2023-04-07 22:53:53 -0400, Bruce Momjian wrote:\n> > > postgres_fdw has no business pushing down calls to non-builtin functions\n> > > unless the user has explicitly authorized that via the existing\n> > > whitelisting mechanism. I think you're reinventing the wheel,\n> > > and not very well.\n> > \n> > The patch has you assign an option at CREATE AGGREGATE time if it can do\n> > push down, and postgres_fdw checks that. What whitelisting mechanism\n> > are you talking about? async_capable?\n> \n> extensions (string)\n> \n> This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. 
This option can only be specified for foreign servers, not per-table.\n> \n> When using the extensions option, it is the user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly.\n\nOkay, this is very helpful --- it is exactly the issue we are dealing\nwith --- how can we know if partial aggregate functions exist on the\nremote server. (I knew I was going to need API help on this.)\n\nSo, let's remove the PARTIALAGG_MINVERSION option from the patch and\njust make it automatic --- we push down builtin partial aggregates if\nthe remote server is the same or newer _major_ version than the sending\nserver. For extensions, if people have older extensions on the same or\nnewer foreign servers, they can adjust 'extensions' above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Sat, 8 Apr 2023 10:15:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Sat, Apr 8, 2023 at 10:15:40AM -0400, Bruce Momjian wrote:\n> On Fri, Apr 7, 2023 at 09:16:14PM -0700, Andres Freund wrote:\n> > extensions (string)\n> > \n> > This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. This option can only be specified for foreign servers, not per-table.\n> > \n> > When using the extensions option, it is the user's responsibility that the listed extensions exist and behave identically on both the local and remote servers.
Otherwise, remote queries may fail or behave unexpectedly.\n> \n> Okay, this is very helpful --- it is exactly the issue we are dealing\n> with --- how can we know if partial aggregate functions exists on the\n> remote server. (I knew I was going to need API help on this.)\n> \n> So, let's remove the PARTIALAGG_MINVERSION option from the patch and\n> just make it automatic --- we push down builtin partial aggregates if\n> the remote server is the same or newer _major_ version than the sending\n> server. For extensions, if people have older extensions on the same or\n> newer foreign servers, they can adjust 'extensions' above.\n\nLooking further, I don't see any cases where we check if a builtin\nfunction added in a major release also exists on the foreign server, so\nmaybe we don't need any checks but just need a mention in the release\nnotes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Sat, 8 Apr 2023 12:18:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr. Momjian, Mr. Lane, Mr. Freund.\n\nThank you for the advice.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> > > > 2. Automation of creating definition of partialaggfuncs In\n> > > > development of v17, I manually create definition of\n> > > > partialaggfuncs for avg, min, max, sum,\n> > > count.\n> > > > I am concerned that this may be undesirable.\n> > > > So I am thinking that v17 should be modified to automate creating\n> > > > definition of partialaggfuncs for all built-in aggregate functions.\n> > >\n> > > Are there any other builtin functions that need this?
I think we\n> > > can just provide documention for extensions on how to do this.\n> > For practical purposes, it is sufficient \n> > if partial aggregate for the above functions can be pushed down.\n> > I think you are right, it would be sufficient to document how to\n> > achieve partial aggregate pushdown for other built-in functions.\n> \n> Uh, we actually want the patch to implement partial aggregate pushdown for all\n> builtin data types that can support it. Is that done? I think it is only extension\n> aggregates, which we do not control, that need this documentation.\nThe last version of this patch can't push down partial aggregates for all builtin aggregate functions that can support it.\nI will improve this patch to push down partial aggregates for all builtin aggregate functions\nthat can support it.\n\nThere is one more thing I would like your opinion on.\nAs the major version of PostgreSQL increases, it is possible that\nnew builtin aggregate functions will be added to newer PostgreSQL versions.\nThis patch assumes that aggpartialfns definitions exist in BKI files.\nDue to this assumption, someone should add aggpartialfns definitions of new builtin aggregate functions to BKI files.\nThere are two possible ways to address this issue. Would way1 be sufficient?\nOr would way2 be preferable?\n way1) Adding documentation on how to add these definitions to BKI files\n way2) Improving this patch to automatically add these definitions to BKI files by some tool such as initdb.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> On Fri, Apr 7, 2023 at 09:16:14PM -0700, Andres Freund wrote:\n> > On 2023-04-07 22:53:53 -0400, Bruce Momjian wrote:\n> > > > postgres_fdw has no business pushing down calls to non-builtin\n> > > > functions unless the user has explicitly authorized that via the\n> > > > existing whitelisting mechanism. 
I think you're reinventing the\n> > > > wheel, and not very well.\n> > >\n> > > The patch has you assign an option at CREATE AGGREGATE time if it\n> > > can do push down, and postgres_fdw checks that. What whitelisting\n> > > mechanism are you talking about? async_capable?\n> >\n> > extensions (string)\n> >\n> > This option is a comma-separated list of names of PostgreSQL\n> extensions that are installed, in compatible versions, on both the local and\n> remote servers. Functions and operators that are immutable and belong to a\n> listed extension will be considered shippable to the remote server. This option\n> can only be specified for foreign servers, not per-table.\n> >\n> > When using the extensions option, it is the user's responsibility that the\n> listed extensions exist and behave identically on both the local and remote\n> servers. Otherwise, remote queries may fail or behave unexpectedly.\n> \n> Okay, this is very helpful --- it is exactly the issue we are dealing with --- how\n> can we know if partial aggregate functions exists on the remote server. (I\n> knew I was going to need API help on this.)\n> \n> So, let's remove the PARTIALAGG_MINVERSION option from the patch and just\n> make it automatic --- we push down builtin partial aggregates if the remote\n> server is the same or newer _major_ version than the sending server. For\n> extensions, if people have older extensions on the same or newer foreign\n> servers, they can adjust 'extensions' above.\nOkay, I understand. 
I will remove the PARTIALAGG_MINVERSION option from the patch\nand I will add a check whether aggpartialfn depends on some extension which\nis contained in the extensions list of the postgres_fdw foreign server.\nIn the next version of this patch,\nwe can push down a partial aggregate for a user-defined aggregate function only \nwhen the function passes this check.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Mon, 10 Apr 2023 01:18:37 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Apr 10, 2023 at 01:18:37AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Uh, we actually want the patch to implement partial aggregate pushdown for all\n> > builtin data types that can support it. Is that done? I think it is only extension\n> > aggregates, which we do not control, that need this documentation.\n> The last version of this patch can't pushdown partial aggregate for all builtin aggregate functions that can support it.\n> I will improve this patch to pushdown partial aggregate for all builtin aggregate functions\n> that can support it.\n> \n> There is one more thing I would like your opinion on.\n> As the major version of PostgreSQL increase, it is possible that\n> the new builtin aggregate functions are added to the newer PostgreSQL.\n> This patch assume that aggpartialfns definitions exist in BKI files.\n> Due to this assumption, someone should add aggpartialfns definitions of new builtin aggregate functions to BKI files.\n> There are two possible ways to address this issue. 
Would the way1 be sufficient?\n> Or would way2 be preferable?\n> way1) Adding documentaion for how to add these definitions to BKI files\n> way2) Improving this patch to automatically add these definitions to BKI files by some tool such as initdb.\n\nI think documentation is sufficient. You already showed that someone\ncan do this with CREATE AGGREGATE for non-builtin aggregates.\n\n> > So, let's remove the PARTIALAGG_MINVERSION option from the patch and just\n> > make it automatic --- we push down builtin partial aggregates if the remote\n> > server is the same or newer _major_ version than the sending server. For\n> > extensions, if people have older extensions on the same or newer foreign\n> > servers, they can adjust 'extensions' above.\n> Okay, I understand. I will remove PARTIALAGG_MINVERSION option from the patch\n> and I will add check whether aggpartialfn depends on some extension which\n> is containd in extensions list of the postgres_fdw's foreign server.\n\nYes, good. Did we never push down aggregates before? I thought we\npushed down partitionwise aggregates already, and such a check should\nalready be done. If the check isn't there, it should be.\n\n> In the next version of this patch,\n> we can pushdown partial aggregate for an user-defined aggregate function only \n> when the function pass through this check.\n\nUnderstood.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. 
They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Thu, 13 Apr 2023 02:12:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Apr 13, 2023 at 02:12:44AM -0400, Bruce Momjian wrote:\n> > In the next version of this patch,\n> > we can pushdown partial aggregate for an user-defined aggregate function only \n> > when the function pass through this check.\n> \n> Understood.\n\nIn summary, we don't do any version check for built-in function\npushdown, so we don't need it for aggregates either. We check extension\nfunctions against the extension pushdown list, so we should be checking\nthis for partial aggregate pushdown, and for partition-wise aggregate\npushdown. If we don't do that last check already, it is a bug.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Thu, 13 Apr 2023 02:50:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian.\n\n> > There is one more thing I would like your opinion on.\n> > As the major version of PostgreSQL increase, it is possible that the\n> > new builtin aggregate functions are added to the newer PostgreSQL.\n> > This patch assume that aggpartialfns definitions exist in BKI files.\n> > Due to this assumption, someone should add aggpartialfns definitions of\n> new builtin aggregate functions to BKI files.\n> > There are two possible ways to address this issue. 
Would the way1 be\n> sufficient?\n> > Or would way2 be preferable?\n> > way1) Adding documentaion for how to add these definitions to BKI files\n> > way2) Improving this patch to automatically add these definitions to BKI\n> files by some tool such as initdb.\n> \n> I think documentation is sufficient. You already showed that someone can do\n> this with CREATE AGGREGATE for non-builtin aggregates.\nThank you for your opinion. I will modify this patch according to way1.\n\n> > > So, let's remove the PARTIALAGG_MINVERSION option from the patch and\n> > > just make it automatic --- we push down builtin partial aggregates\n> > > if the remote server is the same or newer _major_ version than the\n> > > sending server. For extensions, if people have older extensions on\n> > > the same or newer foreign servers, they can adjust 'extensions' above.\n> > Okay, I understand. I will remove PARTIALAGG_MINVERSION option from\n> > the patch and I will add check whether aggpartialfn depends on some\n> > extension which is containd in extensions list of the postgres_fdw's foreign\n> server.\n> \n> Yes, good. Did we never push down aggregates before? I thought we\n> pushed down partitionwise aggregates already, and such a check should\n> already be done. If the check isn't there, it should be.\nYes. The last version of this patch (and the original postgres_fdw) checks whether a\nuser-defined aggregate depends on some extension which is contained in 'extensions'.\nHowever, in the last version of this patch, there is no such check for \nthe aggpartialfn of a user-defined aggregate. So, I will add such a check to this patch. \nI think that this modification is easy to do. 
If we don't\n> do that last check already, it is a bug.\nI understood.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Thu, 13 Apr 2023 10:56:26 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Nov 30, 2022 at 3:12 AM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> 1) In previous version of the patch aggregates, which had partialaggfn,\n> were ok to push down. And it was a definite sign that aggregate can be\n> pushed down. Now we allow pushing down an aggregate, which prorettype is\n> not internal and aggfinalfn is not defined. Is it safe for all\n> user-defined (or builtin) aggregates, even if they are generally\n> shippable?\n\nI think that this is exactly the correct test. Here's how to think\nabout it: to perform an aggregate, you merge all the values into the\ntransition state, and then you apply the final function once at the\nend. So the process looks like this:\n\nTRANSITION_STATE_0 + VALUE_1 = TRANSITION_STATE_1\nTRANSITION_STATE_1 + VALUE_2 = TRANSITION_STATE_2\n...\nTRANSITION_STATE_N => RESULT\n\nHere, + represents applying the transition function and => represents\napplying the final function.\n\nIn the case of parallel query, we want every worker to be able to\nincorporate values into its own transition states and then merge all\nthe transition states at the end. That's a problem, because the\ntransition function expects a transition state and a value, not two\ntransition states. So we invented the idea of a \"combine\" function to\nsolve this problem. A combine function takes two transition states\nand produces a new transition state. 
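To make the data flow concrete, here is a toy model of the transition/combine/final pipeline for avg over integers (hypothetical Python, purely illustrative; the real functions are C-language catalog entries like the pg_aggregate.dat excerpt quoted earlier in the thread):

```python
# Toy model of partial aggregation for avg(bigint): the transition
# state is a (count, sum) pair.

def transfn(state, value):
    # transition function: fold one input value into the state
    count, total = state
    return (count + 1, total + value)

def combinefn(a, b):
    # combine function: merge two transition states into one
    return (a[0] + b[0], a[1] + b[1])

def finalfn(state):
    # final function: turn the transition state into the result
    count, total = state
    return total / count

values = [1, 2, 3, 4, 5, 6]

# Serial aggregation: one transition state, finalized once at the end.
state = (0, 0)
for v in values:
    state = transfn(state, v)
serial_result = finalfn(state)

# Partial aggregation: two "workers" each build their own state; the
# leader combines the states and applies the final function once.
w1 = (0, 0)
for v in values[:3]:
    w1 = transfn(w1, v)
w2 = (0, 0)
for v in values[3:]:
    w2 = transfn(w2, v)
parallel_result = finalfn(combinefn(w1, w2))

assert serial_result == parallel_result == 3.5
```

The assertion holds precisely because combining states and finalizing once is equivalent to the serial pass.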
That allows each worker to create\nan initially empty transition state, merge a bunch of values into it,\nand then pass the result back to the leader, which can combine all the\ntransition states using the combine function, and then apply the final\nfunction at the end.\n\nThe same kind of idea works here. If we want to push down an entire\naggregate, there's no problem, provided the remote side supports it:\njust push down the whole operation and get the result. But if we want\nto push down part of the aggregate, then what we want to get back is a\ntransition value that we can then combine with other values (using the\ntransition function) or other transition states (using the combine\nfunction) locally. That's tricky, because there's no SQL syntax to ask\nthe remote side to give us the transition value rather than the final\nvalue. I think we would need to add that to solve this problem in its\nfull generality. However, in the special case where there's no final\nfunction, the problem goes away, because then a transition value and a\nresult are identical. If we ask for a result, we can treat it as a\ntransition value, and there's no problem.\n\nInternal values are a problem. Generally, you don't see internal as\nthe return type for an aggregate, because then the aggregate couldn't\nbe called by the user. An internal value can't be returned. However,\nit's pretty common to see an aggregate that has an internal value as a\ntransition type, and something else as the result type. 
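avg(int8) is a concrete instance of this pattern: the transition state PostgreSQL keeps is an internal C struct (roughly a count plus running sums), while the declared result type is numeric. A rough, hypothetical Python stand-in for that shape:

```python
# Hypothetical stand-in for an aggregate with an "internal" transition
# state but an ordinary result type, loosely modeled on avg(int8).
# (Illustrative only; the real state is a C struct with no SQL type.)

class Int8AvgState:
    """In-memory-only state: nothing like this can be returned to a
    client or shipped over the wire without a serialization step."""
    def __init__(self):
        self.count = 0
        self.total = 0

def avg_transfn(state, value):
    # roughly what a transition function such as int8_avg_accum does
    state.count += 1
    state.total += value
    return state

def avg_finalfn(state):
    # final function: internal state -> ordinary, returnable value
    return state.total / state.count

s = Int8AvgState()
for v in (10, 20, 30, 40):
    s = avg_transfn(s, v)
assert avg_finalfn(s) == 25.0  # the result is an ordinary number,
# but the state object itself has no wire representation
```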
In such cases,\neven if we had some syntax telling the remote side to send the\ntransition value rather than the final value, it would not be\nsufficient, because the internal value still couldn't be transmitted.\nThis problem also arises for parallel query, where we want to move\ntransition values between processes within a single database cluster.\nWe solved that problem using aggserialfn and aggdeserialfn.\naggserialfn converts an internal transition value (which can't be\nmoved between processes) into a bytea, and aggdeserialfn does the\nreverse. Maybe we would adopt the same solution here: our syntax that\ntells the remote side to give us the transition value rather than the\nfinal value could also tell the remote side to serialize it to bytea\nif it's an internal type. However, if we did this, we'd have to be\nsure that our deserialization functions were pretty well hardened\nagainst unexpected or even malicious input, because who knows whether\nthat remote server is really going to send us a bytea in the format\nthat we're expecting to get?\n\nAnyway, for the present patch, I think that testing whether there's a\nfinal function is the right thing, and testing whether the return type\nis internal doesn't hurt. If we want to extend this to other cases in\nthe future, then I think we need syntax to ask the remote side for the\nunfinalized aggregate, like SELECT UNFINALIZED MAX(a) FROM t1, or\nwhatever. I'm not sure what the best concrete SQL syntax is - probably\nnot that.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Apr 2023 12:01:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Apr 13, 2023 at 10:56:26AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Yes, good. Did we never push down aggregates before? 
I thought we\n> > pushed down partitionwise aggregates already, and such a check should\n> > already be done. If the check isn't there, it should be.\n> Yes. The last version of this patch(and original postgres_fdw) checks if\n> user-defined aggregate depends some extension which is contained in 'extensions'.\n> But, in the last version of this patch, there is no such check for \n> aggpartialfn of user-defined aggregate. So, I will add such check to this patch. \n> I think that this modification is easy to do . \n\nGood, so our existing code is correct and the patch just needs\nadjustment.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Fri, 14 Apr 2023 02:54:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr. Bruce, hackers.\n\nI updated the patch.\nThe following is a list of comments received on the previous version of the patch\nand my updates to them in this version of the patch.\n\n[comment1]\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Saturday, April 8, 2023 10:50 AM\n> Uh, we actually want the patch to implement partial aggregate pushdown for all\n> builtin data types that can support it.\n\nI improved the patch to push partial aggregates down for all builtin aggregates that \nsupport it.\n\n[comment2]\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Thursday, April 13, 2023 3:51 PM\n> In summary, we don't do any version check for built-in function pushdown, so\n> we don't need it for aggregates either.
We check extension functions against\n> the extension pushdown list, so we should be checking this for partial\n> aggregate pushdown, and for partition-wise aggregate pushdown.\n\nI removed partialagg_minversion from pg_aggregate and removed the version \ncheck for partial aggregate pushdown by it. postgres_fdw assumes that every \nbuilt-in aggregate function has its aggpartialfunc on remote server.\npostgres_fdw assumes that a user-defined aggregate function has its \naggpartialfunc on remote server only when the user-defined aggregate function \nand the aggpartialfunc of it belong to some extension that's listed in the \nforeign server's extensions option. \n\n[comment3]\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Saturday, April 8, 2023 10:50 AM\n> > > > 3. Documentation\n> > > > I need add explanation of partialaggfunc to documents on\n> > > > postgres_fdw and\n> > > other places.\n> > >\n> > > I can help with that once we decide on the above.\n> > Thank you. In the next verion of this patch, I will add documents on\n> > postgres_fdw and other places.\n> \n> Good.\n\nI appended description for partial aggregate pushdown feature by postgres_fdw\nto existing documents.\nThe following is a list of sgml files that have been appended and the\ncontents of the additions.\n postgres-fdw.sgml : Description about this partial aggregate pushdown feature\n and definition of aggpartialfunc.\n * Unlike existing aggregate pushdown feature, this partial aggregate\n pushdown feature is a one of the built-in sharding features in PostgreSQL.\n So I added a section about built-in sharding feature in PostgreSQL, and in that\n section I added a description of this partial aggregate pushdown feature.\n In this document, a description of the built-in sharding feature in PostgreSQL\n is based on [1].\n xaggr.sgml :Partial aggregate pushdown feature for user-defined\n aggregate functions.\n create_aggregate.sgml:Description about additional parameters for\n partial aggregate 
pushdown feature.\n catalogs.sgml :Description about aggpartialfn column.\n\n[comment4]\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Thursday, April 13, 2023 3:13 PM\n> > There is one more thing I would like your opinion on.\n> > As the major version of PostgreSQL increase, it is possible that the\n> > new builtin aggregate functions are added to the newer PostgreSQL.\n> > This patch assume that aggpartialfns definitions exist in BKI files.\n> > Due to this assumption, someone should add aggpartialfns definitions of\n> new builtin aggregate functions to BKI files.\n> > There are two possible ways to address this issue. Would the way1 be\n> sufficient?\n> > Or would way2 be preferable?\n> > way1) Adding documentaion for how to add these definitions to BKI files\n> > way2) Improving this patch to automatically add these definitions to BKI\n> files by some tool such as initdb.\n> \n> I think documentation is sufficient. You already showed that someone can do\n> this with CREATE AGGREGATE for non-builtin aggregates.\n\nThe update addressing comment3 also addresses this comment.\nIf a new aggregate function is added in the future, \nthe definition of aggpartialfunc in postgres-fdw.sgml\nhelps a postgres_fdw developer add a new aggpartialfunc corresponding to the aggregate\nfunction to existing BKI files.\nFor user-defined aggregate functions, the descriptions in xaggr.sgml and\ncreate_aggregate.sgml help a user create an aggpartialfunc corresponding to\na user-defined aggregate function.\n\nIn addition, I added a validation mechanism to determine whether an\naggpartialfunc corresponding to an aggregate function is correctly created.\nThe aggpartialfunc information for the built-in aggregate functions is registered\nin the system catalog pg_aggregate at the time the database cluster is created\nfrom BKI files.\nSo I added this validation process to the regression test of postgres_fdw.\n\nHowever, I am concerned that it might be more appropriate to add this 
validation\nprocess to the build process or initdb, rather than to the regression test.\nI would appreciate comments from the PostgreSQL community on this point.\nFor aggpartialfunc for user-defined functions,\nI added this validation process to pg_aggregate.c.\n\n[comment5]\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Friday, April 7, 2023 11:00 AM\n> I think 'partialaggfn' should be named 'aggpartialfn' to match other columns in\n> pg_aggregate.\n\nFixed.\n\n[1] PostgreSQL wiki, WIP PostgreSQL Sharding\nhttps://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Fri, 2 Jun 2023 03:54:06 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-06-02 06:54:\n> Hi Mr.Bruce, hackers.\n> \n> I updated the patch.\n> The following is a list of comments received on the previous version\n> of the patch\n> and my update to them in this version of the patch.\n> \n\nHi.\n\nI've looked through the last version of the patch.\n\nHave found one issue -\n\nsrc/backend/catalog/pg_aggregate.c\n\n585 if(strcmp(strVal(linitial(aggpartialfnName)), \naggName) == 0){\n586 if(((aggTransType != INTERNALOID) && \n(finalfn != InvalidOid))\n587 || ((aggTransType == \nINTERNALOID) && (finalfn != serialfn)))\n588 elog(ERROR, \"%s is not its own \naggpartialfunc\", aggName);\n589 } else {\n\nHere string comparison of aggName and aggpartialfnName looks very \nsuspicious - it seems you should compare oids, not names (in this case,\nlikely oids of transition function and partial aggregation function). 
\nThe fact that aggregate name matches partial aggregation function name\nis not enough to make any decisions.\n\n\nIn documentation\n\ndoc/src/sgml/postgres-fdw.sgml:\n\n 930 <filename>postgres_fdw</filename> attempts to optimize remote \nqueries to reduce\n 931 the amount of data transferred from foreign servers. This is \ndone by\n 932 sending query <literal>WHERE</literal> clauses and ggregate \nexpressions\n 933 to the remote server for execution, and by not retrieving table \ncolumns that\n 934 are not needed for the current query.\n 935 To reduce the risk of misexecution of queries,\n 936 <literal>WHERE</literal> clauses and ggregate expressions are \nnot sent to\n 937 the remote server unless they use only data types, operators, \nand functions\n 938 that are built-in or belong to an extension that's listed in the \nforeign\n 939 server's <literal>extensions</literal> option.\n 940 Operators and functions in such clauses must\n 941 be <literal>IMMUTABLE</literal> as well.\n\nthere are misprints in lines 932 and 936 - missing \"a\" in \"aggregate\" \nexpressions.\n\nNote that after these changes \"select sum()\" will fail for certain \ncases, when remote server version is not the latest. In other cases we \ntried\nto preserve compatibility. Should we have a switch for a foreign server \nto turn this optimization off? Or do we just state that users\nshould use different workarounds if remote server version doesn't match \nlocal one?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 05 Jun 2023 12:00:27 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Jun 5, 2023 at 12:00:27PM +0300, Alexander Pyhalov wrote:\n> Note that after these changes \"select sum()\" will fail for certain cases,\n> when remote server version is not the latest. In other cases we tried\n> to preserve compatibility. 
Should we have a switch for a foreign server to\n> turn this optimization off? Or do we just state that users\n> should use different workarounds if remote server version doesn't match\n> local one?\n\nWe covered this in April in this and previous emails:\n\n\thttps://www.postgresql.org/message-id/ZDGTza4rovCa%2BN3d%40momjian.us\n\nWe don't check the version number for _any_ builtin functions so why\nwould we need to check for aggregate pushdown? Yes, these will be new\nfunctions in PG 17, we have added functions regularly in major releases\nand have never heard reports of problems about that.\n\nThis patch will filter pushdown based on the FDW extension whitelist:\n\n\thttps://www.postgresql.org/message-id/20230408041614.wfasmdm46bupbif4%40awork3.anarazel.de\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 5 Jun 2023 12:26:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian писал 2023-06-05 19:26:\n> On Mon, Jun 5, 2023 at 12:00:27PM +0300, Alexander Pyhalov wrote:\n>> Note that after these changes \"select sum()\" will fail for certain \n>> cases,\n>> when remote server version is not the latest. In other cases we tried\n>> to preserve compatibility. Should we have a switch for a foreign \n>> server to\n>> turn this optimization off? Or do we just state that users\n>> should use different workarounds if remote server version doesn't \n>> match\n>> local one?\n> \n> We covered this in April in this and previous emails:\n> \n> \thttps://www.postgresql.org/message-id/ZDGTza4rovCa%2BN3d%40momjian.us\n> \n> We don't check the version number for _any_ builtin functions so why\n> would we need to check for aggregate pushdown? 
Yes, these will be new\n> functions in PG 17, we have added functions regularly in major releases\n> and have never heard reports of problems about that.\n> \nHi.\n\nI've seen this message. But introduction of new built-in function will \nbreak requests to old servers\nonly if this new function is used in the request (this means that query \nchanges). However, this patch\nchanges the behavior of old queries, which worked prior to update. This \nseems to be different to me.\nAlso I see that in connection.c (configure_remote_session()), we care \nabout old PostgreSQL versions.\nAnd now we make querying them more tricky. Is it consistent? Do you \nthink that\nenable_partitionwise_aggregate is a good enough protection in this \ncases?\n\nIn documentation I see\n\n\n\"F.38.7. Cross-Version Compatibility\npostgres_fdw can be used with remote servers dating back to PostgreSQL \n8.3. Read-only capability is available back to 8.1. A limitation however \nis that postgres_fdw generally assumes that immutable built-in functions \nand operators are safe to send to the remote server for execution, if \nthey appear in a WHERE clause for a foreign table. Thus, a built-in \nfunction that was added since the remote server's release might be sent \nto it for execution, resulting in “function does not exist” or a similar \nerror. 
This type of failure can be worked around by rewriting the query, \nfor example by embedding the foreign table reference in a sub-SELECT \nwith OFFSET 0 as an optimization fence, and placing the problematic \nfunction or operator outside the sub-SELECT.\"\n\nLikely, this paragraph should be expanded to state that partition-wise \naggregation for many functions can fail to work with old foreign \nservers.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 05 Jun 2023 21:14:46 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Jun 2, 2023 at 03:54:06AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Bruce, hackers.\n> \n> I updated the patch.\n> The following is a list of comments received on the previous version of the patch\n> and my update to them in this version of the patch.\n\nThis thread started in October 2021 so I would like to explain what this\nfeature adds.\n\nBasically for partitions made up of postgres_fdw tables, there are four\npossible optimizations:\n\n1. Pruning, 3 stages, see slide 30 here:\n\n\thttps://momjian.us/main/writings/pgsql/partitioning.pdf#page=30\n\n2. Parallelism across partitions, see slide 38 here:\n\n\thttps://momjian.us/main/writings/pgsql/beyond.pdf#page=38\n\n3. Pushdown of partition-wise joins and aggregates, see slide 43 here:\n\n\thttps://momjian.us/main/writings/pgsql/partitioning.pdf#page=43\n\n4. Pushdown of aggregates that aren't partition-wise\n\nAs far as I know, over the years we have accomplished all of these\nitems, except for #4. #3 involves aggregates where the GROUP BY or\nJOINed tables match the partition keys.\n\nNumber 4 involves things like a SUM or COUNT that does not match the\npartition key, or has no groupings at all.\n\n#3 is easier than #4 since we just need to pass _rows_ back from the\nforeign servers. 
#4 is more complex because _partial_ count/sum, or\neven average values must be passed from the foreign servers to the\nrequesting server.\n\nThe good news is that we already have partial aggregate support as part\nof our parallel aggregate feature, see:\n\n\thttps://momjian.us/main/writings/pgsql/beyond.pdf#page=38\n\nWhat the patch does is to expand the existing partial aggregate code to\nallow partial aggregate results to pass from the foreign servers to the\nrequesting server. This feature will be very useful for data warehouse\nqueries that need to compute aggregate across partitions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 5 Jun 2023 20:10:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for your always thoughtful review.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Monday, June 5, 2023 6:00 PM\r\n> Have found one issue -\r\n> \r\n> src/backend/catalog/pg_aggregate.c\r\n> \r\n> 585 if(strcmp(strVal(linitial(aggpartialfnName)),\r\n> aggName) == 0){\r\n> 586 if(((aggTransType != INTERNALOID) &&\r\n> (finalfn != InvalidOid))\r\n> 587 || ((aggTransType ==\r\n> INTERNALOID) && (finalfn != serialfn)))\r\n> 588 elog(ERROR, \"%s is not its own\r\n> aggpartialfunc\", aggName);\r\n> 589 } else {\r\n> \r\n> Here string comparison of aggName and aggpartialfnName looks very\r\n> suspicios - it seems you should compare oids, not names (in this case,\r\n> likely oids of transition function and partial aggregation function).\r\n> The fact that aggregate name matches partial aggregation function name\r\n> is not a enough to make any decisions.\r\n\r\nI see no problem with this string comparison. 
Here is the reason.\r\nThe intent of this code is to determine whether the user specifies \r\nthe new aggregate function whose aggpartialfunc is itself.\r\nFor two aggregate functions,\r\nif the argument list and function name match, then the two aggregate functions must match.\r\nBy definition of aggpartialfunc,\r\nevery aggregate function and its aggpartialfn must have the same argument list.\r\nThus, if aggpartialfnName and aggName are equal as strings,\r\nI think it is correct to determine that the user is specifying \r\nthe new aggregate function whose aggpartialfunc is itself.\r\n\r\nHowever, since the document does not state these intentions,\r\nI think your suspicions are valid.\r\nTherefore, I have added a specification to the document reflecting the above intentions.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Monday, June 5, 2023 6:00 PM\r\n> In documentation\r\n> \r\n> doc/src/sgml/postgres-fdw.sgml:\r\n> \r\n> 930 <filename>postgres_fdw</filename> attempts to optimize remote\r\n> queries to reduce\r\n> 931 the amount of data transferred from foreign servers. 
This is\r\n> done by\r\n> 932 sending query <literal>WHERE</literal> clauses and ggregate\r\n> expressions\r\n> 933 to the remote server for execution, and by not retrieving table\r\n> columns that\r\n> 934 are not needed for the current query.\r\n> 935 To reduce the risk of misexecution of queries,\r\n> 936 <literal>WHERE</literal> clauses and ggregate expressions are\r\n> not sent to\r\n> 937 the remote server unless they use only data types, operators,\r\n> and functions\r\n> 938 that are built-in or belong to an extension that's listed in the\r\n> foreign\r\n> 939 server's <literal>extensions</literal> option.\r\n> 940 Operators and functions in such clauses must\r\n> 941 be <literal>IMMUTABLE</literal> as well.\r\n> \r\n> there are misprints in lines 932 and 936 - missing \"a\" in \"aggregate\"\r\n> expressions.\r\n\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Monday, June 5, 2023 6:00 PM\r\n> Note that after these changes \"select sum()\" will fail for certain\r\n> cases, when remote server version is not the latest. In other cases we\r\n> tried\r\n> to preserve compatibility. Should we have a switch for a foreign server\r\n> to turn this optimization off? Or do we just state that users\r\n> should use different workarounds if remote server version doesn't match\r\n> local one?\r\n\r\nIt is the latter.\r\nI added description about the above limitation to F.38.6. Built-in sharding in PostgreSQL and\r\nF.38.8 Cross-Version Compatibility of doc/src/sgml/postgres-fdw.sgml.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Tuesday, June 6, 2023 3:15 AM\r\n> Bruce Momjian писал 2023-06-05 19:26:\r\n> > On Mon, Jun 5, 2023 at 12:00:27PM +0300, Alexander Pyhalov wrote:\r\n> >> Note that after these changes \"select sum()\" will fail for certain\r\n> >> cases, when remote server version is not the latest. In other cases\r\n> >> we tried to preserve compatibility. 
Should we have a switch for a\r\n> >> foreign server to turn this optimization off? Or do we just state\r\n> >> that users should use different workarounds if remote server version\r\n> >> doesn't match local one?\r\n> >\r\n> > We covered this in April in this and previous emails:\r\n> >\r\n> >\r\n> \thttps://www.postgresql.org/message-id/ZDGTza4rovCa%2BN3d%40\r\n> momjian.us\r\n> >\r\n> > We don't check the version number for _any_ builtin functions so why\r\n> > would we need to check for aggregate pushdown? Yes, these will be new\r\n> > functions in PG 17, we have added functions regularly in major\r\n> > releases and have never heard reports of problems about that.\r\n> >\r\n> Hi.\r\n> \r\n> I've seen this message. But introduction of new built-in function will break\r\n> requests to old servers only if this new function is used in the request (this\r\n> means that query changes). However, this patch changes the behavior of old\r\n> queries, which worked prior to update. This seems to be different to me.\r\n\r\nYou are right.\r\nHowever, for now, partial aggregates pushdown is mainly used when using built-in sharding in PostgreSQL.\r\nI believe when using built-in sharding in PostgreSQL, the version of the primary node server and\r\nthe version of the remote server will usually be the same.\r\nSo I think it would be sufficient to include in the documentation a note about such problem\r\nand a workaround for them.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Tue, 6 Jun 2023 03:08:50 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-06-06 06:08:\n> Hi Mr.Pyhalov.\n> \n> Thank you for your always thoughtful review.\n> \n>> From: Alexander Pyhalov 
<a.pyhalov@postgrespro.ru>\n>> Sent: Monday, June 5, 2023 6:00 PM\n>> Have found one issue -\n>> \n>> src/backend/catalog/pg_aggregate.c\n>> \n>> 585 if(strcmp(strVal(linitial(aggpartialfnName)),\n>> aggName) == 0){\n>> 586 if(((aggTransType != INTERNALOID) &&\n>> (finalfn != InvalidOid))\n>> 587 || ((aggTransType ==\n>> INTERNALOID) && (finalfn != serialfn)))\n>> 588 elog(ERROR, \"%s is not its own\n>> aggpartialfunc\", aggName);\n>> 589 } else {\n>> \n>> Here string comparison of aggName and aggpartialfnName looks very\n>> suspicios - it seems you should compare oids, not names (in this case,\n>> likely oids of transition function and partial aggregation function).\n>> The fact that aggregate name matches partial aggregation function name\n>> is not a enough to make any decisions.\n> \n> I see no problem with this string comparison. Here is the reason.\n> The intent of this code is, to determine whether the user specifies\n> the new aggregate function whose aggpartialfunc is itself.\n> For two aggregate functions,\n> If the argument list and function name match, then the two aggregate\n> functions must match.\n> By definition of aggpartialfunc,\n> every aggregate function and its aggpartialfn must have the same \n> argument list.\n> Thus, if aggpartialfnName and aggName are equal as strings,\n> I think it is correct to determine that the user is specifying\n> the new aggregate function whose aggpartialfunc is itself.\n> \n> However, since the document does not state these intentions\n> I think your suspicions are valid.\n> Therefore, I have added a specification to the document reflecting the\n> above intentions.\n> \n\nHi. 
Let me explain.\n\nLook at this example, taken from test.\n\nCREATE AGGREGATE udf_avg_p_int4(int4) (\n sfunc = int4_avg_accum,\n stype = _int8,\n combinefunc = int4_avg_combine,\n initcond = '{0,0}'\n);\nCREATE AGGREGATE udf_sum(int4) (\n sfunc = int4_avg_accum,\n stype = _int8,\n finalfunc = int8_avg,\n combinefunc = int4_avg_combine,\n initcond = '{0,0}',\n aggpartialfunc = udf_avg_p_int4\n);\n\nNow, let's create another aggregate.\n\n# create schema test ;\ncreate aggregate test.udf_avg_p_int4(int4) (\n sfunc = int4_avg_accum,\n stype = _int8,\n finalfunc = int8_avg,\n combinefunc = int4_avg_combine,\n initcond = '{0,0}',\n aggpartialfunc = udf_avg_p_int4\n);\nERROR: udf_avg_p_int4 is not its own aggpartialfunc\n\nWhat's the difference between test.udf_avg_p_int4(int4) aggregate and \nudf_sum(int4)? They are essentially the same, but second one can't be \ndefined.\n\nAlso note difference:\n\n# CREATE AGGREGATE udf_sum(int4) (\n sfunc = int4_avg_accum,\n stype = _int8,\n finalfunc = int8_avg,\n combinefunc = pg_catalog.int4_avg_combine,\n initcond = '{0,0}',\n aggpartialfunc = udf_avg_p_int4\n);\nCREATE AGGREGATE\n\n# CREATE AGGREGATE udf_sum(int4) (\n sfunc = int4_avg_accum,\n stype = _int8,\n finalfunc = int8_avg,\n combinefunc = pg_catalog.int4_avg_combine,\n initcond = '{0,0}',\n aggpartialfunc = public.udf_avg_p_int4\n);\nERROR: aggpartialfnName is invalid\n\nIt seems that assumption about aggpartialfnName - that it's a \nnon-qualified name is incorrect. 
And if we use qualified names, we can't \ncompare just names, likely we should compare oids.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 06 Jun 2023 07:19:01 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for comments.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Tuesday, June 6, 2023 1:19 PM\r\n> >> Have found one issue -\r\n> >>\r\n> >> src/backend/catalog/pg_aggregate.c\r\n> >>\r\n> >> 585 if(strcmp(strVal(linitial(aggpartialfnName)),\r\n> >> aggName) == 0){\r\n> >> 586 if(((aggTransType != INTERNALOID) &&\r\n> >> (finalfn != InvalidOid))\r\n> >> 587 || ((aggTransType ==\r\n> >> INTERNALOID) && (finalfn != serialfn)))\r\n> >> 588 elog(ERROR, \"%s is not its own\r\n> >> aggpartialfunc\", aggName);\r\n> >> 589 } else {\r\n> >>\r\n> >> Here string comparison of aggName and aggpartialfnName looks very\r\n> >> suspicios - it seems you should compare oids, not names (in this\r\n> >> case, likely oids of transition function and partial aggregation function).\r\n> >> The fact that aggregate name matches partial aggregation function\r\n> >> name is not a enough to make any decisions.\r\n> >\r\n> > I see no problem with this string comparison. 
Here is the reason.\r\n> > The intent of this code is, to determine whether the user specifies\r\n> > the new aggregate function whose aggpartialfunc is itself.\r\n> > For two aggregate functions,\r\n> > If the argument list and function name match, then the two aggregate\r\n> > functions must match.\r\n> > By definition of aggpartialfunc,\r\n> > every aggregate function and its aggpartialfn must have the same\r\n> > argument list.\r\n> > Thus, if aggpartialfnName and aggName are equal as strings, I think it\r\n> > is correct to determine that the user is specifying the new aggregate\r\n> > function whose aggpartialfunc is itself.\r\n> >\r\n> > However, since the document does not state these intentions I think\r\n> > your suspicions are valid.\r\n> > Therefore, I have added a specification to the document reflecting the\r\n> > above intentions.\r\n> >\r\n> \r\n> Hi. Let me explain.\r\n> \r\n> Look at this example, taken from test.\r\n> \r\n> CREATE AGGREGATE udf_avg_p_int4(int4) (\r\n> sfunc = int4_avg_accum,\r\n> stype = _int8,\r\n> combinefunc = int4_avg_combine,\r\n> initcond = '{0,0}'\r\n> );\r\n> CREATE AGGREGATE udf_sum(int4) (\r\n> sfunc = int4_avg_accum,\r\n> stype = _int8,\r\n> finalfunc = int8_avg,\r\n> combinefunc = int4_avg_combine,\r\n> initcond = '{0,0}',\r\n> aggpartialfunc = udf_avg_p_int4\r\n> );\r\n> \r\n> Now, let's create another aggregate.\r\n> \r\n> # create schema test ;\r\n> create aggregate test.udf_avg_p_int4(int4) (\r\n> sfunc = int4_avg_accum,\r\n> stype = _int8,\r\n> finalfunc = int8_avg,\r\n> combinefunc = int4_avg_combine,\r\n> initcond = '{0,0}',\r\n> aggpartialfunc = udf_avg_p_int4\r\n> );\r\n> ERROR: udf_avg_p_int4 is not its own aggpartialfunc\r\n> \r\n> What's the difference between test.udf_avg_p_int4(int4) aggregate and\r\n> udf_sum(int4)? 
They are essentially the same, but second one can't be\r\n> defined.\r\n> \r\n> Also note difference:\r\n> \r\n> # CREATE AGGREGATE udf_sum(int4) (\r\n> sfunc = int4_avg_accum,\r\n> stype = _int8,\r\n> finalfunc = int8_avg,\r\n> combinefunc = pg_catalog.int4_avg_combine,\r\n> initcond = '{0,0}',\r\n> aggpartialfunc = udf_avg_p_int4\r\n> );\r\n> CREATE AGGREGATE\r\n> \r\n> # CREATE AGGREGATE udf_sum(int4) (\r\n> sfunc = int4_avg_accum,\r\n> stype = _int8,\r\n> finalfunc = int8_avg,\r\n> combinefunc = pg_catalog.int4_avg_combine,\r\n> initcond = '{0,0}',\r\n> aggpartialfunc = public.udf_avg_p_int4 );\r\n> ERROR: aggpartialfnName is invalid\r\n> \r\n> It seems that assumption about aggpartialfnName - that it's a non-qualified\r\n> name is incorrect. And if we use qualified names, we can't compare just names,\r\n> likely we should compare oids.\r\n\r\nThanks for the explanation.\r\nI understand that the method of comparing two function name strings is incorrect.\r\nInstead, I added the parameter isaggpartialfunc indicating whether the aggregate\r\nfunction and its aggpartialfunc are the same or different.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Tue, 6 Jun 2023 12:31:20 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-06-06 15:31:\n> Thanks for the explanation.\n> I understand that the method of comparing two function name strings is\n> incorrect.\n> Instead, I added the parameter isaggpartialfunc indicating whether the \n> aggregate\n> function and its aggpartialfunc are the same or different.\n\nHi.\n\nThis seems to be more robust, but the interface became more strange.\nI'm not sure what to do with it. Some ideas I had to avoid introducing \nthis parameter. 
Not sure I like any of them.\n\n1) You can use QualifiedNameGetCreationNamespace() for aggpartialfnName \nand still compare namespace and function name for it and aggName, \naggNamespace.\nSeems to be not ideal, but avoids introducing new parameters.\n\n2) You can lookup for partial aggregate function after ProcedureCreate() \nin AggregateCreate(), if it wasn't found at earlier stages. If it is the \naggregate itself - check it. If it's still not found, error out. Also \nseems to be a bit ugly - you leave uncommitted garbage for vacuum in \ncatalogue.\n\n\nAnother issue - the patch misses recording dependency between \naggpartialfn and aggregate procedure.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 07 Jun 2023 12:47:01 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\n\nThank you for comments.\n\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\n> Sent: Wednesday, June 7, 2023 6:47 PM\n> This seems to be more robust, but the interface became more strange.\n> I'm not sure what to do with it. Some ideas I had to avoid introducing this\n> parameter. Not sure I like any of them.\n> \n> 1) You can use QualifiedNameGetCreationNamespace() for aggpartialfnName\n> and still compare namespace and function name for it and aggName,\n> aggNamespace.\n> Seems to be not ideal, but avoids introducing new parameters.\n> \n> 2) You can lookup for partial aggregate function after ProcedureCreate() in\n> AggregateCreate(), if it wasn't found at earlier stages. If it is the aggregate itself\n> - check it. If it's still not found, error out. 
Also seems to be a bit ugly - you leave\n> uncommitted garbage for vacuum in catalogue.\nThank you for suggesting alternatives.\nThe disadvantages of alternative 2) appear to be undesirable, \nso I have modified it according to alternative 1).\n\n> Another issue - the patch misses recording dependency between aggpartialfn\n> and aggregate procedure.\nI added code to record dependencies between aggpartialfn\nand aggregate procedure, similar to the code for functions such as combinefunc.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 7 Jun 2023 23:08:54 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-06-08 02:08:\n>> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\n>> Sent: Wednesday, June 7, 2023 6:47 PM\n>> This seems to be more robust, but the interface became more strange.\n>> I'm not sure what to do with it. Some ideas I had to avoid introducing \n>> this\n>> parameter. Not sure I like any of them.\n>> \n>> 1) You can use QualifiedNameGetCreationNamespace() for \n>> aggpartialfnName\n>> and still compare namespace and function name for it and aggName,\n>> aggNamespace.\n>> Seems to be not ideal, but avoids introducing new parameters.\n>> \n>> 2) You can lookup for partial aggregate function after \n>> ProcedureCreate() in\n>> AggregateCreate(), if it wasn't found at earlier stages. If it is the \n>> aggregate itself\n>> - check it. If it's still not found, error out. 
Also seems to be a bit \n>> ugly - you leave\n>> uncommitted garbage for vacuum in catalogue.\n> Thank you for suggesting alternatives.\n> The disadvantages of alternative 2) appear to be undesirable,\n> I have modified it according to alternative 1)\n> \n>> Another issue - the patch misses recording dependency between \n>> aggpartialfn\n>> and aggregate procedure.\n> I added code to record dependencys between aggpartialfn\n> and aggregate procedure, similar to the code for functions such as \n> combinefunc.\n> \n\nHi.\n\nLooks better. The only question I have is should we record dependency \nbetween procOid and aggpartialfn if aggpartialfn == procOid.\n\nAlso it seems new code likely should be run through pgindent.\n\ndoc/src/sgml/postgres-fdw.sgml:\n\n+ For <literal>WHERE</literal> clauses,\n+ <literal>JOIN</literal> clauses, this sending is active if\n+ conditions in <xref \nlinkend=\"postgres-fdw-remote-query-optimization\"/>\n+ hold and <varname>enable_partitionwise_join</varname> is true(this \ncondition\n+ is need for only <literal>JOIN</literal> clauses).\n+ For aggregate expressions, this sending is active if conditions in\n\nNo space between \"true\" and \"(\" in \"is true(this condition\".\n\nSome sentences in documentation, like one starting with\n\"For aggregate expressions, this sending is active if conditions in...\"\nseem to be too long, but I'm not the best man to read out documentation.\n\nIn \"Built-in sharding in PostgreSQL\" term \"shard\" doesn't have a \ndefinition.\n\nBy the way, I'm not sure that \"sharding\" documentation belongs to this \npatch,\nat least it needs a review from native speaker.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Thu, 08 Jun 2023 10:39:30 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for comments.\r\n\r\n> From: Alexander Pyhalov 
<a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> Looks better. The only question I have is should we record dependency\r\n> between procOid and aggpartialfn if aggpartialfn == procOid.\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> Also it seems new code likely should be run through pgindent.\r\nDone.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> doc/src/sgml/postgres-fdw.sgml:\r\n> \r\n> + For <literal>WHERE</literal> clauses,\r\n> + <literal>JOIN</literal> clauses, this sending is active if\r\n> + conditions in <xref\r\n> linkend=\"postgres-fdw-remote-query-optimization\"/>\r\n> + hold and <varname>enable_partitionwise_join</varname> is true(this\r\n> condition\r\n> + is need for only <literal>JOIN</literal> clauses).\r\n> + For aggregate expressions, this sending is active if conditions in\r\n> \r\n> No space between \"true\" and \"(\" in \"is true(this condition\".\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> Some sentences in documentation, like one starting with \"For aggregate\r\n> expressions, this sending is active if conditions in...\"\r\n> seem to be too long, but I'm not the best man to read out documentation.\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> In \"Built-in sharding in PostgreSQL\" term \"shard\" doesn't have a definition.\r\nI have removed the sentence you pointed out.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Thursday, June 8, 2023 4:40 PM\r\n> By the way, I'm not sure that \"sharding\" documentation belongs to this patch, at\r\n> least it needs a review from native speaker.\r\nI removed general description of sharding.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D 
Center Mitsubishi Electric Corporation", "msg_date": "Fri, 9 Jun 2023 11:38:37 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi.\n\n+ An aggregate function, called the partial aggregate function for \npartial aggregate\n+ that corresponding to the aggregate function, is defined on the \nprimary node and\n+ the <filename>postgres_fdw</filename> node.\n\nSomething is clearly wrong here.\n\n+ When using built-in sharding feature in PostgreSQL is used,\n\nAnd here.\n\nOverall the code looks good to me, but I suppose that documentation \nneeds further review from some native speaker.\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Fri, 09 Jun 2023 17:09:42 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Jun 6, 2023 at 03:08:50AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > I've seen this message. But introduction of new built-in function will break\n> > requests to old servers only if this new function is used in the request (this\n> > means that query changes). However, this patch changes the behavior of old\n> > queries, which worked prior to update. 
This seems to be different to me.\n> \n> You are right.\n> However, for now, partial aggregates pushdown is mainly used when using built-in sharding in PostgreSQL.\n> I believe when using built-in sharding in PostgreSQL, the version of the primary node server and\n> the version of the remote server will usually be the same.\n> So I think it would be sufficient to include in the documentation a note about such problem\n> and a workaround for them.\n\nI agree that this feature is designed for built-in sharding, but it is\npossible people could be using aggregates on partitions backed by\nforeign tables without sharding. Adding a requirement for non-sharding\nsetups to need PG 17+ servers might be unreasonable.\n\nLooking at previous release note incompatibilities, we don't normally\nchange non-administrative functions in a way that causes errors if run\non older servers. Based on Alexander's observations, I wonder if we\nneed to re-add the postgres_fdw option to control partial aggregate\npushdown, and default it to enabled.\n\nIf we ever add more function breakage we might need more postgres_fdw\noptions. Fortunately, such changes are rare.\n\nYuki, thank you for writing and updating this patch, and Alexander,\nthank you for helping with this patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 9 Jun 2023 12:44:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Bruce, Mr.Pyhalov, hackers.\n\nThank you for comments. I will try to respond to both of your comments as follows.\nI plan to start revising the patch next week. 
If you have any comments on the following\nresponses, I would appreciate it if you could give them to me this week.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Saturday, June 10, 2023 1:44 AM\n> I agree that this feature is designed for built-in sharding, but it is possible people could be using aggregates on partitions\n> backed by foreign tables without sharding. Adding a requirement for non-sharding setups to need PG 17+ servers might\n> be unreasonable.\nIndeed, it is possible to use the partial aggregate pushdown feature for purposes other than sharding.\nThe description of the section \"F.38.6. Built-in sharding in PostgreSQL\" assumes the use of\nbuilt-in sharding and will be modified to eliminate this assumption.\nThe title of this section should be changed to something like \"Aggregate on partitioned table\".\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Saturday, June 10, 2023 1:44 AM\n> Looking at previous release note incompatibilities, we don't normally change non-administrative functions in a way that\n> causes errors if run on older servers. Based on Alexander's observations, I wonder if we need to re-add the postgres_fdw\n> option to control partial aggregate pushdown, and default it to enabled.\n> \n> If we ever add more function breakage we might need more postgres_fdw options. Fortunately, such changes are rare.\n\nI understand what the problem is. 
I will put a mechanism maintaining compatibility into the patch.\nI believe there are three approaches.\nApproach 1-1 is preferable because it does not require additional options for postgres_fdw.\nI will revise the patch according to Approach 1-1, unless otherwise commented.\n\nApproach1:\nI ensure that postgres_fdw retrieves the version of each remote server\nand does not push down partial aggregates if the server version is less than 17.\nThere are two approaches to obtaining remote server versions.\nApproach1-1: postgres_fdw connects to a remote server and uses PQserverVersion().\nApproach1-2: Adding a postgres_fdw option about a remote server version (like \"server_version\").\n\nApproach2:\nAdding a postgres_fdw option that controls whether partial aggregate pushdown is enabled\n(like enable_partial_aggregate_pushdown).\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Mon, 12 Jun 2023 08:51:30 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Jun 12, 2023 at 08:51:30AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Bruce, Mr.Pyhalov, hackers.\n> \n> Thank you for comments. I will try to respond to both of your comments as follows.\n> I plan to start revising the patch next week. If you have any comments\n> on the following responses, I would appreciate it if you could give them to me this week.\n> \n> > From: Bruce Momjian <bruce@momjian.us>\n> > Sent: Saturday, June 10, 2023 1:44 AM I agree that this feature is\n> > designed for built-in sharding, but it is possible people could be\n> > using aggregates on partitions backed by foreign tables without\n> > sharding. 
Adding a requirement for non-sharding setups to need PG 17+ servers might\n> > be unreasonable.\n> Indeed, it is possible to use partial aggregate pushdown feature for purposes other than sharding.\n> The description of the section \"F.38.6. Built-in sharding in PostgreSQL\" assumes the use of\n> Built-in sharding and will be modified to eliminate this assumption.\n> The title of this section should be changed to something like \"Aggregate on partitioned table\".\n\nSounds good.\n\n> > From: Bruce Momjian <bruce@momjian.us>\n> > Sent: Saturday, June 10, 2023 1:44 AM\n> > Looking at previous release note incompatibilities, we don't normally change non-administrative functions in a way that\n> > causes errors if run on older servers. Based on Alexander's observations, I wonder if we need to re-add the postgres_fdw\n> > option to control partial aggregate pushdown, and default it to enabled.\n> > \n> > If we ever add more function breakage we might need more postgres_fdw options. Fortunately, such changes are rare.\n> \n> I understand what the problem is. I will put a mechanism maintaining compatibility into the patch.\n> I believe there are three approaches.\n> Approach 1-1 is preferable because it does not require additional options for postgres_fdw.\n> I will revise the patch according to Approach 1-1, unless otherwise commented.\n> \n> Approach1:\n> I ensure that postgres_fdw retrieves the version of each remote server\n> and does not partial aggregate pushd down if the server version is less than 17.\n> There are two approaches to obtaining remote server versions.\n> Approach1-1: postgres_fdw connects a remote server and use PQserverVersion().\n> Approach1-2: Adding a postgres_fdw option about a remote server version (like \"server_version\").\n> \n> Approach2:\n> Adding a postgres_fdw option for partial aggregate pushdown is enable or not\n> (like enable_partial_aggregate_pushdown).\n\nThese are good questions. 
Adding a postgres_fdw option called\nenable_partial_aggregate_pushdown helps make the purpose of the option\nclear, but remote_version can be used for future breakage as well.\n\nI think remote_version is the best idea, and in the documentation for the\noption, let's explicitly say it is useful to disable partial aggregates\npushdown on pre-PG 17 servers. If we need to use the option for other\ncases, we can just update the documentation. When the option is blank,\nthe default, everything is pushed down.\n\nI see remote_version as a logical addition to match our \"extensions\" option\nthat controls what extension functions can be pushed down.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 12 Jun 2023 09:37:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian.\n\nThank you for your advice.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Monday, June 12, 2023 10:38 PM\n> > I understand what the problem is. 
I will put a mechanism maintaining compatibility into the patch.\n> > I believe there are three approaches.\n> > Approach 1-1 is preferable because it does not require additional options for postgres_fdw.\n> > I will revise the patch according to Approach 1-1, unless otherwise commented.\n> >\n> > Approach1:\n> > I ensure that postgres_fdw retrieves the version of each remote server\n> > and does not partial aggregate pushd down if the server version is less than 17.\n> > There are two approaches to obtaining remote server versions.\n> > Approach1-1: postgres_fdw connects a remote server and use PQserverVersion().\n> > Approach1-2: Adding a postgres_fdw option about a remote server version (like \"server_version\").\n> >\n> > Approach2:\n> > Adding a postgres_fdw option for partial aggregate pushdown is enable\n> > or not (like enable_partial_aggregate_pushdown).\n> \n> These are good questions. Adding a postgres_fdw option called enable_partial_aggregate_pushdown helps make the\n> purpose of the option clear, but remote_version can be used for future breakage as well.\n> \n> I think remote_version is the best idea, and in the documention for the option, let's explcitly say it is useful to disable\n> partial aggreates pushdown on pre-PG 17 servers. If we need to use the option for other cases, we can just update the\n> documentation. 
When the option is blank, the default, everything is pushed down.\n> \n> I see remote_version as a logical addition to match our \"extensions\" option that controls what extension functions can be\n> pushed down.\n\nThank you for your perspective.\nSo, of the approaches I have presented, you think that approach 1-2 is\npreferable and that the option name remote_version is preferable?\nIndeed, the option of a remote version may have other uses.\nHowever, this information can be obtained by connecting to a remote server, so\nI'm concerned that some people may find this option redundant.\n\nIs the problem with approach 1-1 because the user cannot decide whether to include the compatibility check in the decision to do partial aggregate pushdown or not?\n# If Approach 1-1 is taken, the problem is that this feature cannot be used for all built-in aggregate functions\n# when the remote server is older than PG17.\nIf so, Approach 1-3 below seems more desirable.\nWould it be possible for us to hear your thoughts?\n\nApproach 1-3: We add a postgres_fdw option about a compatibility check for partial aggregate pushdown\n(like \"enable_aggpartialfunc_compatibility_check\"). This option is false, the default.\nWhen this option is true, postgres_fdw obtains the remote server version by connecting to the remote server and using PQserverVersion(). 
\nAnd if the remote server version is older than PG17, then the partial aggregate pushdown feature is disabled for all built-in aggregate functions.\nOtherwise the partial aggregate pushdown feature is enabled for all built-in aggregate functions.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Tue, 13 Jun 2023 02:18:15 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Jun 13, 2023 at 02:18:15AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Momjian.\n> \n> Thank you for your advice.\n> \n> > From: Bruce Momjian <bruce@momjian.us>\n> > Sent: Monday, June 12, 2023 10:38 PM\n> > > I understand what the problem is. 
Adding a postgres_fdw option called enable_partial_aggregate_pushdown helps make the\n> > purpose of the option clear, but remote_version can be used for future breakage as well.\n> > \n> > I think remote_version is the best idea, and in the documentation for the option, let's explicitly say it is useful to disable\n> > partial aggregates pushdown on pre-PG 17 servers. If we need to use the option for other cases, we can just update the\n> > documentation. When the option is blank, the default, everything is pushed down.\n> > \n> > I see remote_version as a logical addition to match our \"extensions\" option that controls what extension functions can be\n> > pushed down.\n> \n> Thank you for your perspective.\n> So, of the approaches I have presented, you think that approach 1-2 is\n> preferable and that the option name remote_version is preferable?\n> Indeed, the option of a remote version may have other uses.\n> However, this information can be obtained by connecting to a remote server, so\n> I'm concerned that some people may find this option redundant.\n> \n> Is the problem with approach 1-1 because the user cannot decide whether to include the compatibility check in the decision to do partial aggregate pushdown or not?\n> # If Approach 1-1 is taken, the problem is that this feature cannot be used for all built-in aggregate functions\n> # when the remote server is older than PG17.\n> If so, Approach 1-3 below seems more desirable.\n> Would it be possible for us to hear your thoughts?\n> \n> Approach 1-3: We add a postgres_fdw option about a compatibility check for partial aggregate pushdown\n> (like \"enable_aggpartialfunc_compatibility_check\"). This option is false, the default.\n> When this option is true, postgres_fdw obtains the remote server version by connecting to the remote server and using PQserverVersion(). 
\n> And if the remote server version is older than PG17, then the partial aggregate pushdown feature is disabled for all built-in aggregate functions.\n> Otherwise the partial aggregate pushdown feature is enabled for all built-in aggregate functions.\n\nApologies for the delay in my reply to this email. I looked into the\nexisting code and I found three things:\n\n1) PQserverVersion() just pulls the conn->sversion value from the\nexisting connection because pqSaveParameterStatus() pulls the\nserver_version sent by the backend; no need to issue SELECT version().\n\n2) postgres_fdw already has nine calls to GetConnection(), and only\nopens a connection if it already doesn't have one. Here is an example:\n\n\t/* Get the remote estimate */\n\tconn = GetConnection(fpinfo->user, false, NULL);\n\tget_remote_estimate(sql.data, conn, &rows, &width,\n\t\t\t &startup_cost, &total_cost);\n\tReleaseConnection(conn);\n\nTherefore, it seems like it would be near-zero cost to just call conn =\nGetConnection() and then PQserverVersion(conn), and ReleaseConnection().\nYou can then use the return value of PQserverVersion() to determine if\nyou can push down partial aggregates.\n\n3) Looking at postgresAcquireSampleRowsFunc(), I see this exact method\nused:\n\n conn = GetConnection(user, false, NULL);\n\n /* We'll need server version, so fetch it now. 
*/\n server_version_num = PQserverVersion(conn);\n\n ...\n\n if ((server_version_num < 95000) &&\n\t(method == ANALYZE_SAMPLE_SYSTEM ||\n\t method == ANALYZE_SAMPLE_BERNOULLI))\n\tereport(ERROR,\n\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t\t errmsg(\"remote server does not support TABLESAMPLE feature\")));\n\nI am sorry if you already knew all this, but I didn't.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 19 Jun 2023 20:42:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian wrote 2023-06-20 03:42:\n> Apologies for the delay in my reply to this email. I looked into the\n> existing code and I found three things:\n> \n> 1) PQserverVersion() just pulls the conn->sversion value from the\n> existing connection because pqSaveParameterStatus() pulls the\n> server_version sent by the backend; no need to issue SELECT version().\n> \n> 2) postgres_fdw already has nine calls to GetConnection(), and only\n> opens a connection if it already doesn't have one. Here is an example:\n> \n> \t/* Get the remote estimate */\n> \tconn = GetConnection(fpinfo->user, false, NULL);\n> \tget_remote_estimate(sql.data, conn, &rows, &width,\n> \t\t\t &startup_cost, &total_cost);\n> \tReleaseConnection(conn);\n> \n> Therefore, it seems like it would be near-zero cost to just call conn =\n> GetConnection() and then PQserverVersion(conn), and \n> ReleaseConnection().\n> You can then use the return value of PQserverVersion() to determine if\n> you can push down partial aggregates.\n> \n\nHi.\nCurrently we don't get a remote connection while planning if \nuse_remote_estimate is not set.\nSuch a change would require getting a remote connection in the planner, not in the \nexecutor.\nThis can lead to change of behavior (like errors in explain when user \nmapping is wrong - e.g. 
bad password is specified).\nAlso this potentially can lead to establishing connections even when \nplan node is not actually used\n(like extreme example - select sum(score) from t limit 0).\nI'm not saying we shouldn't do it - just hint at possible consequences.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 20 Jun 2023 09:59:11 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Jun 20, 2023 at 09:59:11AM +0300, Alexander Pyhalov wrote:\n> > Therefore, it seems like it would be near-zero cost to just call conn =\n> > GetConnection() and then PQserverVersion(conn), and ReleaseConnection().\n> > You can then use the return value of PQserverVersion() to determine if\n> > you can push down partial aggregates.\n> \n> Hi.\n> Currently we don't get a remote connection while planning if\n> use_remote_estimate is not set.\n> Such a change would require getting a remote connection in the planner, not in the\n> executor.\n> This can lead to change of behavior (like errors in explain when user\n> mapping is wrong - e.g. bad password is specified).\n> Also this potentially can lead to establishing connections even when plan\n> node is not actually used\n> (like extreme example - select sum(score) from t limit 0).\n> I'm not saying we shouldn't do it - just hint at possible consequences.\n\nAgreed. I noticed it was doing FDW connections during optimization, but\ndidn't see the postgres_fdw option that would turn it off. \nInterestingly, it is disabled by default.\n\nAfter considering the options, I think we should have a postgres_fdw\noption called \"planner_version_check\", and default that false. When\nfalse, a remote server version check will not be performed during\nplanning and partial aggregates will always be considered. 
When\ntrue, a version check will be performed during planning and partial\naggregate pushdown disabled for pre-PG 17 foreign servers during the\nquery.\n\nIf we want to be more specific, we can call it\n\"check_partial_aggregate_support\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 21 Jun 2023 11:43:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian, Mr.Pyhalov, hackers.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Thursday, June 22, 2023 12:44 AM\n> On Tue, Jun 20, 2023 at 09:59:11AM +0300, Alexander Pyhalov wrote:\n> > > Therefore, it seems like it would be near-zero cost to just call\n> > > conn =\n> > > GetConnection() and then PQserverVersion(conn), and ReleaseConnection().\n> > > You can then use the return value of PQserverVersion() to determine\n> > > if you can push down partial aggregates.\n> >\n> > Hi.\n> > Currently we don't get remote connection while planning if\n> > use_remote_estimate is not set.\n> > Such change would require to get remote connection in planner, not in\n> > executor.\n> > This can lead to change of behavior (like errors in explain when user\n> > mapping is wrong - e.g. bad password is specified).\n> > Also this potentially can lead to establishing connections even when\n> > plan node is not actually used (like extreme example - select\n> > sum(score) from t limit 0).\n> > I'm not saying we shouldn't do it - just hint at possible consequences.\n> \n> Agreed. I noticed it was doing FDW connections during optimization, but didn't see the postgres_fdw option that would\n> turn it off.\n> Interestingly, it is disabled by default.\n> \n> After considering the options, I think we should have a postgres_fdw option called \"planner_version_check\", and default\n> that false. 
When false, a remote server version check will not be performed during planning and partial aggregates will\n> always be considered. When true, a version check will be performed during planning and partial aggregate pushdown\n> disabled for pre-PG 17 foreign servers during the query.\n> \n> If we want to be more specific, we can call it \"check_partial_aggregate_support\".\nThank you both for your advice.\nWe will address the compatibility issues as follows.\n\nApproach1-3:\nI will add a postgres_fdw option \"check_partial_aggregate_support\".\nThis option is false, default.\nOnly if this option is true, postgres_fdw connects to the remote server and gets the version of the remote server.\nAnd if the version of the remote server is less than PG17, then partial aggregate pushdown to the remote server is disabled.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Thu, 22 Jun 2023 05:23:33 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Jun 22, 2023 at 05:23:33AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Approach1-3:\n> I will add a postgres_fdw option \"check_partial_aggregate_support\".\n> This option is false, default.\n> Only if this option is true, postgres_fdw connects to the remote server and gets the version of the remote server.\n> And if the version of the remote server is less than PG17, then partial 
aggregate push down to the remote server is disable.\n\nGreat!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 22 Jun 2023 07:39:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Bruce, Mr.Pyhalov, hackers.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Monday, June 12, 2023 10:38 PM\n> \n> On Mon, Jun 12, 2023 at 08:51:30AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Hi Mr.Bruce, Mr.Pyhalov, hackers.\n> >\n> > Thank you for comments. I will try to respond to both of your comments as follows.\n> > I plan to start revising the patch next week. If you have any comments\n> > on the following respondences, I would appreciate it if you could give them to me this week.\n> >\n> > > From: Bruce Momjian <bruce@momjian.us>\n> > > Sent: Saturday, June 10, 2023 1:44 AM I agree that this feature is\n> > > designed for built-in sharding, but it is possible people could be\n> > > using aggregates on partitions backed by foreign tables without\n> > > sharding. Adding a requirement for non-sharding setups to need PG 17+ servers might be unreasonable.\n> > Indeed, it is possible to use partial aggregate pushdown feature for purposes other than sharding.\n> > The description of the section \"F.38.6. 
Built-in sharding in\n> > PostgreSQL\" assumes the use of Built-in sharding and will be modified to eliminate this assumption.\n> > The title of this section should be changed to something like \"Aggregate on partitioned table\".\n> \n> Sounds good.\nI have modified documents according to the above policy.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Thursday, June 22, 2023 8:39 PM\n> On Thu, Jun 22, 2023 at 05:23:33AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Approach1-3:\n> > I will add a postgres_fdw option \"check_partial_aggregate_support\".\n> > This option is false, default.\n> > Only if this option is true, postgres_fdw connect to the remote server and get the version of the remote server.\n> > And if the version of the remote server is less than PG17, then partial aggregate push down to the remote server is\n> disable.\n> \n> Great!\nI have modified the program except for the point \"if the version of the remote server is less than PG17\".\nInstead, we have addressed the following.\n\"If check_partial_aggregate_support is true and the remote server version is older than the local server\nversion, postgres_fdw does not assume that the partial aggregate function is on the remote server unless\nthe partial aggregate function and the aggregate function match.\"\nThe reason for this is to maintain compatibility with any aggregate function that does not support partial\naggregate in one version of V1 (V1 is PG17 or higher), even if the next version supports partial aggregate.\nFor example, string_agg does not support partial aggregation in PG15, but it will support partial aggregation\nin PG16.\n\nWe have not been able to add a test for the case where the remote server version is older than the\nlocal server version to the regression test. 
Is there any way to add such tests to the existing regression\ntests?\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 10 Jul 2023 07:35:27 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-07-10 10:35:\n> I have modified the program except for the point \"if the version of\n> the remote server is less than PG17\".\n> Instead, we have addressed the following.\n> \"If check_partial_aggregate_support is true and the remote server\n> version is older than the local server\n> version, postgres_fdw does not assume that the partial aggregate\n> function is on the remote server unless\n> the partial aggregate function and the aggregate function match.\"\n> The reason for this is to maintain compatibility with any aggregate\n> function that does not support partial\n> aggregate in one version of V1 (V1 is PG17 or higher), even if the\n> next version supports partial aggregate.\n> For example, string_agg does not support partial aggregation in PG15,\n> but it will support partial aggregation\n> in PG16.\n> \n\nHi.\n\n1) In foreign_join_ok() should we set fpinfo->user if \nfpinfo->check_partial_aggregate_support is set like it's done for \nfpinfo->use_remote_estimate? It seems we can end up with fpinfo->user = \nNULL if use_remote_estimate is not set.\n\n2) It seems we found an additional issue with the original patch, which is \npresent in the current one. I'm attaching a patch which seems to fix it, but \nI'm not quite sure about it.\n\n\n> We have not been able to add a test for the case where the remote\n> server version is older than the\n> local server version to the regression test. 
Is there any way to add\n> such tests to the existing regression\n> tests?\n> \n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Fri, 14 Jul 2023 16:40:16 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Friday, July 14, 2023 10:40 PM\r\n> 1) In foreign_join_ok() should we set fpinfo->user if\r\n> fpinfo->check_partial_aggregate_support is set like it's done for \r\n> fpinfo->use_remote_estimate? It seems we can end up with fpinfo->user \r\n> fpinfo->=\r\n> NULL if use_remote_estimate is not set.\r\nYou are right. I will modify this patch according to your advice.\r\nThank you for advice.\r\n\r\n> 2) It seeems we found an additional issue with original patch, which \r\n> is present in current one. I'm attaching a patch which seems to fix \r\n> it, but I'm not quite sure in it.\r\nThank you for pointing out the issue.\r\nIf a query's group-by clause contains variable based expression(not variable)\r\nand the query's select clause contains another expression,\r\nthe partial aggregate could be unsafe to push down.\r\n\r\nAn example of such queries:\r\nSELECT (b/2)::numeric, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b/2\r\n\r\nYour patch disables partial aggregate pushdown for such queries.\r\nI'll see if we can modify the patch to safely do a partial aggregate pushdown for such queries as well.\r\nSuch a query expects the variable in the select clause expression to be included in the target of the grouped rel\r\n(let see make_partial_grouping_target), \r\nbut the original groupby clause has no reference to this variable,\r\nthis seems to be the direct cause(let see foreign_grouping_ok). 
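To make the intended semantics concrete, such a pushdown can be modeled outside the planner (a Python sketch, not the planner code): each partition evaluates the GROUP BY expression b/2 per row, ships one partial avg(a) state per group, and the local server merges the states that share a group key.

```python
# Conceptual model of partial aggregation with an expression group key
# (GROUP BY b/2): each partition groups rows by the expression and ships
# a partial avg(a) state per group; the local server merges states per key.

def remote_partial(rows):
    groups = {}
    for a, b in rows:
        key = b // 2                      # the GROUP BY expression
        s, n = groups.get(key, (0, 0))
        groups[key] = (s + a, n + 1)      # partial state for avg(a)
    return groups

def merge(partition_states):
    merged = {}
    for states in partition_states:
        for key, (s, n) in states.items():
            ms, mn = merged.get(key, (0, 0))
            merged[key] = (ms + s, mn + n)
    return merged

part1 = [(1, 0), (2, 1), (3, 2)]          # (a, b) rows on one server
part2 = [(4, 0), (5, 3)]                  # (a, b) rows on another server
merged = merge([remote_partial(part1), remote_partial(part2)])
result = {key: s / n for key, (s, n) in merged.items()}
```

The sketch also shows why a Var used only inside an output expression has to travel in the partial target: the final step cannot recompute it from the merged states alone.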
\r\nI will examine whether a safe pushdown can be achieved by matching the\r\ngroupby clause information referenced by foreign_grouping_ok with the grouped rel target information.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Tue, 18 Jul 2023 01:35:53 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov, hackers.\n\nI have made the following three modifications about this patch.\n\n1)\n> <Fujii.Yuki@df.MitsubishiElectric.co.jp>\n> Sent: Tuesday, July 18, 2023 10:36 AM\n> > From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\n> > Sent: Friday, July 14, 2023 10:40 PM\n> > 1) In foreign_join_ok() should we set fpinfo->user if\n> > fpinfo->check_partial_aggregate_support is set like it's done for\n> > fpinfo->use_remote_estimate? It seems we can end up with fpinfo->user\n> > fpinfo->=\n> > NULL if use_remote_estimate is not set.\n> You are right. I will modify this patch according to your advice.\n> Thank you for advice.\nDone.\n\n2)\n> <Fujii.Yuki@df.MitsubishiElectric.co.jp>\n> Sent: Tuesday, July 18, 2023 10:36 AM\n> > 2) It seeems we found an additional issue with original patch, which\n> > is present in current one. 
I'm attaching a patch which seems to fix\n> > it, but I'm not quite sure in it.\n> Thank you for pointing out the issue.\n> If a query's group-by clause contains variable based expression(not variable)\n> and the query's select clause contains another expression,\n> the partial aggregate could be unsafe to push down.\n>\n> An example of such queries:\n> SELECT (b/2)::numeric, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b/2\n>\n> Your patch disables partial aggregate pushdown for such queries.\n> I'll see if we can modify the patch to safely do a partial aggregate pushdown for such queries as well.\n> Such a query expects the variable in the select clause expression to be included in the target of the grouped rel\n> (let see make_partial_grouping_target),\n> but the original groupby clause has no reference to this variable,\n> this seems to be the direct cause(let see foreign_grouping_ok).\n> I will examine whether a safe pushdown can be achieved by matching the\n> groupby clause information referenced by foreign_grouping_ok with the grouped rel target information.\nI modified the patch to safely do a partial aggregate pushdown for such queries as well\n by matching the groupby clause information referenced by foreign_grouping_ok with the grouped rel target information.\n\n3)\nI modified the patch to safely do a partial aggregate pushdown for queries which contain having clauses.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 19 Jul 2023 00:43:38 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-07-19 03:43:\n> Hi Mr.Pyhalov, hackers.\n\n> 3)\n> I modified the patch to safely do a partial aggregate pushdown for\n> queries which contain having clauses.\n> \n\nHi.\nSorry, 
but I don't see how it could work.\nFor example, the attached test returns wrong result:\n\nCREATE FUNCTION f() RETURNS INT AS $$\nbegin\n return 10;\nend\n$$ LANGUAGE PLPGSQL;\n\nSELECT b, sum(a) FROM pagg_tab GROUP BY b HAVING sum(a) < f() ORDER BY \n1;\n b | sum\n----+-----\n 0 | 0\n 10 | 0\n 20 | 0\n 30 | 0\n 40 | 0\n+(5 rows)\n\nIn fact the above query should have returned 0 rows, as\n\nSELECT b, sum(a) FROM pagg_tab GROUP BY b ORDER BY 1;\n b | sum\n----+------\n 0 | 600\n 1 | 660\n 2 | 720\n 3 | 780\n 4 | 840\n 5 | 900\n 6 | 960\n 7 | 1020\n 8 | 1080\n 9 | 1140\n 10 | 600\n 11 | 660\n 12 | 720\n....\nshows no such rows.\n\nOr, on the same data\n\nSELECT b, sum(a) FROM pagg_tab GROUP BY b HAVING sum(a) > 660 ORDER BY \n1;\n\nYou'll get 0 rows.\n\nBut\nSELECT b, sum(a) FROM pagg_tab GROUP BY b;\n b | sum\n----+------\n 42 | 720\n 29 | 1140\n 4 | 840\n 34 | 840\n 41 | 660\n 0 | 600\n 40 | 600\ngives.\n\nThe issue is that you can't calculate \"partial\" having. You should \ncompare full aggregate in filter, but it's not possible on the level of \none partition.\nAnd you have this in plans\n\n Finalize GroupAggregate\n Output: pagg_tab.b, avg(pagg_tab.a), max(pagg_tab.a), count(*)\n Group Key: pagg_tab.b\n Filter: (sum(pagg_tab.a) < 700)\n -> Sort\n Output: pagg_tab.b, (PARTIAL avg(pagg_tab.a)), (PARTIAL \nmax(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\n Sort Key: pagg_tab.b\n -> Append\n -> Foreign Scan\n Output: pagg_tab.b, (PARTIAL avg(pagg_tab.a)), \n(PARTIAL max(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\n Filter: ((PARTIAL sum(pagg_tab.a)) < 700) !!!! 
\n<--- here we can't compare anything yet, sum is incomplete.\n Relations: Aggregate on (public.fpagg_tab_p1 \npagg_tab)\n Remote SQL: SELECT b, avg_p_int4(a), max(a), \ncount(*), sum(a) FROM public.pagg_tab_p1 GROUP BY 1\n -> Foreign Scan\n Output: pagg_tab_1.b, (PARTIAL avg(pagg_tab_1.a)), \n(PARTIAL max(pagg_tab_1.a)), (PARTIAL count(*)), (PARTIAL \nsum(pagg_tab_1.a))\n Filter: ((PARTIAL sum(pagg_tab_1.a)) < 700)\n Relations: Aggregate on (public.fpagg_tab_p2 \npagg_tab_1)\n Remote SQL: SELECT b, avg_p_int4(a), max(a), \ncount(*), sum(a) FROM public.pagg_tab_p2 GROUP BY 1\n -> Foreign Scan\n Output: pagg_tab_2.b, (PARTIAL avg(pagg_tab_2.a)), \n(PARTIAL max(pagg_tab_2.a)), (PARTIAL count(*)), (PARTIAL \nsum(pagg_tab_2.a))\n Filter: ((PARTIAL sum(pagg_tab_2.a)) < 700)\n Relations: Aggregate on (public.fpagg_tab_p3 \npagg_tab_2)\n Remote SQL: SELECT b, avg_p_int4(a), max(a), \ncount(*), sum(a) FROM public.pagg_tab_p3 GROUP BY 1\n\nIn foreign_grouping_ok()\n6586 if (IsA(expr, Aggref))\n6587 {\n6588 if (partial)\n6589 {\n6590 mark_partial_aggref((Aggref \n*) expr, AGGSPLIT_INITIAL_SERIAL);\n6591 continue;\n6592 }\n6593 else if (!is_foreign_expr(root, \ngrouped_rel, expr))\n6594 return false;\n6595\n6596 tlist = add_to_flat_tlist(tlist, \nlist_make1(expr));\n6597 }\n\nat least you shouldn't do anything with expr, if is_foreign_expr() \nreturned false. If we restrict pushing down queries with havingQuals, \nI'm not quite sure how Aggref can appear in local_conds.\n\nAs for changes in planner.c (setGroupClausePartial()) I have several \nquestions.\n\n1) Why don't we add non_group_exprs to pathtarget->exprs when \npartial_target->exprs is not set?\n\n2) We replace extra->partial_target->exprs with partial_target->exprs \nafter processing. 
Why are we sure that after this tleSortGroupRef is \ncorrect?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Thu, 20 Jul 2023 13:23:44 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "When it is valid to filter based on a HAVING clause predicate, it should already have been converted into a WHERE clause predicate, except in the special case of an LIMIT TO .k .. ORDER BY case where the HAVING clause predicate can be determined approximately after having found k fully qualified tuples and then that predicate is successively tightened as more qualified records are found.\r\n\r\n*that*, by the way, is a very powerful optimization.\r\n\r\n /Jim F\r\n\r\nOn 7/20/23, 6:24 AM, \"Alexander Pyhalov\" <a.pyhalov@postgrespro.ru <mailto:a.pyhalov@postgrespro.ru>> wrote:\r\n\r\n\r\nCAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n\r\n\r\n\r\nFujii.Yuki@df.MitsubishiElectric.co.jp <mailto:Fujii.Yuki@df.MitsubishiElectric.co.jp> писал 2023-07-19 03:43:\r\n> Hi Mr.Pyhalov, hackers.\r\n\r\n\r\n> 3)\r\n> I modified the patch to safely do a partial aggregate pushdown for\r\n> queries which contain having clauses.\r\n>\r\n\r\n\r\nHi.\r\nSorry, but I don't see how it could work.\r\nFor example, the attached test returns wrong result:\r\n\r\n\r\nCREATE FUNCTION f() RETURNS INT AS $$\r\nbegin\r\nreturn 10;\r\nend\r\n$$ LANGUAGE PLPGSQL;\r\n\r\n\r\nSELECT b, sum(a) FROM pagg_tab GROUP BY b HAVING sum(a) < f() ORDER BY\r\n1;\r\nb | sum\r\n----+-----\r\n0 | 0\r\n10 | 0\r\n20 | 0\r\n30 | 0\r\n40 | 0\r\n+(5 rows)\r\n\r\n\r\nIn fact the above query should have returned 0 rows, as\r\n\r\n\r\nSELECT b, sum(a) FROM pagg_tab GROUP BY b ORDER BY 1;\r\nb | sum\r\n----+------\r\n0 | 600\r\n1 | 660\r\n2 | 720\r\n3 | 780\r\n4 | 
840\r\n5 | 900\r\n6 | 960\r\n7 | 1020\r\n8 | 1080\r\n9 | 1140\r\n10 | 600\r\n11 | 660\r\n12 | 720\r\n....\r\nshows no such rows.\r\n\r\n\r\nOr, on the same data\r\n\r\n\r\nSELECT b, sum(a) FROM pagg_tab GROUP BY b HAVING sum(a) > 660 ORDER BY\r\n1;\r\n\r\n\r\nYou'll get 0 rows.\r\n\r\n\r\nBut\r\nSELECT b, sum(a) FROM pagg_tab GROUP BY b;\r\nb | sum\r\n----+------\r\n42 | 720\r\n29 | 1140\r\n4 | 840\r\n34 | 840\r\n41 | 660\r\n0 | 600\r\n40 | 600\r\ngives.\r\n\r\n\r\nThe issue is that you can't calculate \"partial\" having. You should\r\ncompare full aggregate in filter, but it's not possible on the level of\r\none partition.\r\nAnd you have this in plans\r\n\r\n\r\nFinalize GroupAggregate\r\nOutput: pagg_tab.b, avg(pagg_tab.a), max(pagg_tab.a), count(*)\r\nGroup Key: pagg_tab.b\r\nFilter: (sum(pagg_tab.a) < 700)\r\n-> Sort\r\nOutput: pagg_tab.b, (PARTIAL avg(pagg_tab.a)), (PARTIAL\r\nmax(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\r\nSort Key: pagg_tab.b\r\n-> Append\r\n-> Foreign Scan\r\nOutput: pagg_tab.b, (PARTIAL avg(pagg_tab.a)),\r\n(PARTIAL max(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\r\nFilter: ((PARTIAL sum(pagg_tab.a)) < 700) !!!!\r\n<--- here we can't compare anything yet, sum is incomplete.\r\nRelations: Aggregate on (public.fpagg_tab_p1\r\npagg_tab)\r\nRemote SQL: SELECT b, avg_p_int4(a), max(a),\r\ncount(*), sum(a) FROM public.pagg_tab_p1 GROUP BY 1\r\n-> Foreign Scan\r\nOutput: pagg_tab_1.b, (PARTIAL avg(pagg_tab_1.a)),\r\n(PARTIAL max(pagg_tab_1.a)), (PARTIAL count(*)), (PARTIAL\r\nsum(pagg_tab_1.a))\r\nFilter: ((PARTIAL sum(pagg_tab_1.a)) < 700)\r\nRelations: Aggregate on (public.fpagg_tab_p2\r\npagg_tab_1)\r\nRemote SQL: SELECT b, avg_p_int4(a), max(a),\r\ncount(*), sum(a) FROM public.pagg_tab_p2 GROUP BY 1\r\n-> Foreign Scan\r\nOutput: pagg_tab_2.b, (PARTIAL avg(pagg_tab_2.a)),\r\n(PARTIAL max(pagg_tab_2.a)), (PARTIAL count(*)), (PARTIAL\r\nsum(pagg_tab_2.a))\r\nFilter: ((PARTIAL sum(pagg_tab_2.a)) < 
700)\r\nRelations: Aggregate on (public.fpagg_tab_p3\r\npagg_tab_2)\r\nRemote SQL: SELECT b, avg_p_int4(a), max(a),\r\ncount(*), sum(a) FROM public.pagg_tab_p3 GROUP BY 1\r\n\r\n\r\nIn foreign_grouping_ok()\r\n6586 if (IsA(expr, Aggref))\r\n6587 {\r\n6588 if (partial)\r\n6589 {\r\n6590 mark_partial_aggref((Aggref\r\n*) expr, AGGSPLIT_INITIAL_SERIAL);\r\n6591 continue;\r\n6592 }\r\n6593 else if (!is_foreign_expr(root,\r\ngrouped_rel, expr))\r\n6594 return false;\r\n6595\r\n6596 tlist = add_to_flat_tlist(tlist,\r\nlist_make1(expr));\r\n6597 }\r\n\r\n\r\nat least you shouldn't do anything with expr, if is_foreign_expr()\r\nreturned false. If we restrict pushing down queries with havingQuals,\r\nI'm not quite sure how Aggref can appear in local_conds.\r\n\r\n\r\nAs for changes in planner.c (setGroupClausePartial()) I have several\r\nquestions.\r\n\r\n\r\n1) Why don't we add non_group_exprs to pathtarget->exprs when\r\npartial_target->exprs is not set?\r\n\r\n\r\n2) We replace extra->partial_target->exprs with partial_target->exprs\r\nafter processing. 
Why are we sure that after this tleSortGroupRef is\r\ncorrect?\r\n\r\n\r\n--\r\nBest regards,\r\nAlexander Pyhalov,\r\nPostgres Professional\r\n\r\n\r\n\r\n", "msg_date": "Tue, 1 Aug 2023 20:25:55 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Jul 10, 2023 at 07:35:27AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > > I will add a postgres_fdw option \"check_partial_aggregate_support\".\n> > > This option is false, default.\n> > > Only if this option is true, postgres_fdw connect to the remote server and get the version of the remote server.\n> > > And if the version of the remote server is less than PG17, then partial aggregate push down to the remote server is\n> > disable.\n> > \n> > Great!\n> I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n> Instead, we have addressed the following.\n> \"If check_partial_aggregate_support is true and the remote server version is older than the local server\n> version, postgres_fdw does not assume that the partial aggregate function is on the remote server unless\n> the partial aggregate function and the aggregate function match.\"\n> The reason for this is to maintain compatibility with any aggregate function that does not support partial\n> aggregate in one version of V1 (V1 is PG17 or higher), even if the next version supports partial aggregate.\n> For example, string_agg does not support partial aggregation in PG15, but it will support partial aggregation\n> in PG16.\n\nJust to clarify, I think you are saying:\n\n\tIf check_partial_aggregate_support is true and the remote server\n\tversion is older than the local server version, postgres_fdw\n\tchecks if the partial aggregate function exists on the remote\n\tserver during planning and only uses it if it does.\n\nI tried to phrase it in a positive way, and mentioned the plan 
time\ndistinction. Also, I am sorry I was away for most of July and am just\ngetting to this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 7 Aug 2023 14:30:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Bruce, Mr.Pyhalov, Mr.Finnerty, hackers.\n\nThank you for your valuable comments. I sincerely apologize for the very late reply.\nHere is a response to your comments or a fix to the patch.\n\nTuesday, August 8, 2023 at 3:31 Bruce Momjian\n> > I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n> > Instead, we have addressed the following.\n> > \"If check_partial_aggregate_support is true and the remote server version is older than the local server\n> > version, postgres_fdw does not assume that the partial aggregate function is on the remote server unless\n> > the partial aggregate function and the aggregate function match.\"\n> > The reason for this is to maintain compatibility with any aggregate function that does not support partial\n> > aggregate in one version of V1 (V1 is PG17 or higher), even if the next version supports partial aggregate.\n> > For example, string_agg does not support partial aggregation in PG15, but it will support partial aggregation\n> > in PG16.\n>\n> Just to clarify, I think you are saying:\n>\n> If check_partial_aggregate_support is true and the remote server\n> version is older than the local server version, postgres_fdw\n> checks if the partial aggregate function exists on the remote\n> server during planning and only uses it if it does.\n>\n> I tried to phrase it in a positive way, and mentioned the plan time\n> distinction. Also, I am sorry I was away for most of July and am just\n> getting to this.\nThanks for your comment. 
In the documentation, the description of check_partial_aggregate_support is as follows\n(please see postgres-fdw.sgml).\n--\ncheck_partial_aggregate_support (boolean)\nOnly if this option is true, during query planning, postgres_fdw connects to the remote server and checks whether the remote server version is older than the local server version. If so, postgres_fdw assumes that for each built-in aggregate function, the partial aggregate function is not defined on the remote server unless the partial aggregate function and the aggregate function match. The default is false.\n--\n\nThursday, 20 July 2023 19:23 Alexander Pyhalov <a.pyhalov@postgrespro.ru>:\n> Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-07-19 03:43:\n> > Hi Mr.Pyhalov, hackers.\n> >\n> > 3)\n> > I modified the patch to safely do a partial aggregate pushdown for\n> > queries which contain having clauses.\n> >\n>\n> Hi.\n> Sorry, but I don't see how it could work.\nWe apologize for any inconvenience caused.\nThanks to Pyhalov's and Jim's comments, I have realized that I made a fundamental mistake regarding pushdown of the HAVING clause and the difficulty of achieving it when performing partial aggregate pushdown.\nSo I removed the code that pushes down the HAVING clause when performing partial aggregate pushdown.\n\nThursday, 20 July 2023 19:23 Alexander Pyhalov <a.pyhalov@postgrespro.ru>:\n> As for changes in planner.c (setGroupClausePartial()) I have several\n> questions.\n>\n> 1) Why don't we add non_group_exprs to pathtarget->exprs when\n> partial_target->exprs is not set?\n>\n> 2) We replace extra->partial_target->exprs with partial_target->exprs\n> after processing. Why are we sure that after this tleSortGroupRef is\n> correct?\nResponse to 1)\nThe code you pointed out was unnecessary. 
I have removed this code.\nAlso, the process of adding a PlaceHolderVar's expr to partial_target was missing.\nSo I fixed this.\n\nResponse to 2)\nThe procedure for building extra->groupClausePartial and extra->partial_target \nin make_partial_grouping_target in this patch is as follows.\nSTEP1. From grouping_target->exprs, extract the Aggrefs, Vars and PlaceHolderVars that are not included in any Aggref.\nSTEP2. setGroupClausePartial sets a copy of the original groupClause in extra->groupClausePartial\nand a copy of the original partial_target in extra->partial_target.\nSTEP3. setGroupClausePartial adds the Vars and PlaceHolderVars from STEP1 to partial_target.\nThe sortgroupref value assigned to each added entry of partial_target->sortgrouprefs is set to\n(the maximum value of the existing sortgrouprefs) + 1.\nsetGroupClausePartial also adds a SortGroupClause sgc whose sgc->tleSortGroupRef\nmatches that sortgroupref to the GroupClause list.\nSTEP4. add_new_columns_to_pathtarget adds the Aggrefs from STEP1 to partial_target.\n\nDue to STEP2, the list of tleSortGroupRefs set in extra->groupClausePartial contains no duplicates.\nAlso, each sortgroupref added to extra->partial_target matches the corresponding \ntleSortGroupRef added to extra->groupClausePartial.\nSo these tleSortGroupRefs are correct.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 25 Sep 2023 03:18:13 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Sep 25, 2023 at 03:18:13AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Bruce, Mr.Pyhalov, Mr.Finnerty, hackers.\n> \n> Thank you for your valuable comments. 
I sincerely apologize for the very late reply.\n> Here is a response to your comments or a fix to the patch.\n> \n> Tuesday, August 8, 2023 at 3:31 Bruce Momjian\n> > > I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n> > > Instead, we have addressed the following.\n> > > \"If check_partial_aggregate_support is true and the remote server version is older than the local server\n> > > version, postgres_fdw does not assume that the partial aggregate function is on the remote server unless\n> > > the partial aggregate function and the aggregate function match.\"\n> > > The reason for this is to maintain compatibility with any aggregate function that does not support partial\n> > > aggregate in one version of V1 (V1 is PG17 or higher), even if the next version supports partial aggregate.\n> > > For example, string_agg does not support partial aggregation in PG15, but it will support partial aggregation\n> > > in PG16.\n> >\n> > Just to clarify, I think you are saying:\n> >\n> > If check_partial_aggregate_support is true and the remote server\n> > version is older than the local server version, postgres_fdw\n> > checks if the partial aggregate function exists on the remote\n> > server during planning and only uses it if it does.\n> >\n> > I tried to phrase it in a positive way, and mentioned the plan time\n> > distinction. Also, I am sorry I was away for most of July and am just\n> > getting to this.\n> Thanks for your comment. In the documentation, the description of check_partial_aggregate_support is as follows\n> (please see postgres-fdw.sgml).\n> --\n> check_partial_aggregate_support (boolean)\n> Only if this option is true, during query planning, postgres_fdw connects to the remote server and check if the remote server version is older than the local server version. 
If so, postgres_fdw assumes that for each built-in aggregate function, the partial aggregate function is not defined on the remote server unless the partial aggregate function and the aggregate function match. The default is false.\n> --\n\nMy point is that there are three behaviors:\n\n\t* false - no check\n\t* true, remote version >= sender - no check\n\t* true, remote version < sender - check\n\nHere is your code:\n\n\t+ * Check that a built-in aggpartialfunc exists on the remote server. If\n\t+ * check_partial_aggregate_support is false, we assume the partial aggregate\n\t+ * function exists on the remote server. Otherwise we assume the partial\n\t+ * aggregate function exists on the remote server only if the remote server\n\t+ * version is not less than the local server version.\n\t+ */\n\t+static bool\n\t+is_builtin_aggpartialfunc_shippable(Oid aggpartialfn, PgFdwRelationInfo *fpinfo)\n\t+{\n\t+ bool shippable = true;\n\t+\n\t+ if (fpinfo->check_partial_aggregate_support)\n\t+ {\n\t+ if (fpinfo->remoteversion == 0)\n\t+ {\n\t+ PGconn *conn = GetConnection(fpinfo->user, false, NULL);\n\t+\n\t+ fpinfo->remoteversion = PQserverVersion(conn);\n\t+ }\n\t+ if (fpinfo->remoteversion < PG_VERSION_NUM)\n\t+ shippable = false;\n\t+ }\n\t+ return shippable;\n\t+}\n\nI think this needs to be explained in the docs. I am ready to adjust\nthe patch to improve the wording whenever you are ready. 
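For reference, the version check above reduces to a small predicate (a simplified Python sketch with illustrative names; the additional case where the partial aggregate function and the aggregate function match is omitted here):

```python
# Simplified model of the version check in is_builtin_aggpartialfunc_shippable
# (illustrative names only; the real check is C code inside postgres_fdw).

LOCAL_VERSION_NUM = 170000  # stands in for PG_VERSION_NUM, for illustration

def partial_aggregate_shippable(check_partial_aggregate_support, remote_version):
    # false - no check: the partial aggregate is assumed to exist remotely
    if not check_partial_aggregate_support:
        return True
    # true, remote version >= local - shippable without further checking
    # true, remote version < local  - treated as not shippable in this sketch
    return remote_version >= LOCAL_VERSION_NUM
```

The remote version would be fetched once per connection (as the C code does with PQserverVersion) and cached, so the predicate stays cheap at plan time.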
Should I do it\nnow and post an updated version for you to use?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Sep 2023 18:30:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Bruce.\n\nTuesday, September 26, 2023 7:31 Bruce Momjian\n> On Mon, Sep 25, 2023 at 03:18:13AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Hi Mr.Bruce, Mr.Pyhalov, Mr.Finnerty, hackers.\n> >\n> > Thank you for your valuable comments. I sincerely apologize for the very late reply.\n> > Here is a response to your comments or a fix to the patch.\n> >\n> > Tuesday, August 8, 2023 at 3:31 Bruce Momjian\n> > > > I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n> > > > Instead, we have addressed the following.\n> > > > \"If check_partial_aggregate_support is true and the remote server\n> > > > version is older than the local server version, postgres_fdw does\n> > > > not assume that the partial aggregate function is on the remote server unless the partial aggregate function and the\n> aggregate function match.\"\n> > > > The reason for this is to maintain compatibility with any\n> > > > aggregate function that does not support partial aggregate in one version of V1 (V1 is PG17 or higher), even if the\n> next version supports partial aggregate.\n> > > > For example, string_agg does not support partial aggregation in\n> > > > PG15, but it will support partial aggregation in PG16.\n> > >\n> > > Just to clarify, I think you are saying:\n> > >\n> > > If check_partial_aggregate_support is true and the remote server\n> > > version is older than the local server version, postgres_fdw\n> > > checks if the partial aggregate function exists on the remote\n> > > server during planning and only uses it if it 
does.\n> > >\n> > > I tried to phrase it in a positive way, and mentioned the plan time\n> > > distinction. Also, I am sorry I was away for most of July and am\n> > > just getting to this.\n> > Thanks for your comment. In the documentation, the description of\n> > check_partial_aggregate_support is as follows (please see postgres-fdw.sgml).\n> > --\n> > check_partial_aggregate_support (boolean) Only if this option is true,\n> > during query planning, postgres_fdw connects to the remote server and check if the remote server version is older than\n> the local server version. If so, postgres_fdw assumes that for each built-in aggregate function, the partial aggregate\n> function is not defined on the remote server unless the partial aggregate function and the aggregate function match. The\n> default is false.\n> > --\n> \n> My point is that there are three behaviors:\n> \n> * false - no check\n> * true, remote version >= sender - no check\n> * true, remove version < sender - check\n> \n> Here is your code:\n> \n> \t+ * Check that a buit-in aggpartialfunc exists on the remote server. If\n> \t+ * check_partial_aggregate_support is false, we assume the partial aggregate\n> \t+ * function exsits on the remote server. Otherwise we assume the partial\n> \t+ * aggregate function exsits on the remote server only if the remote server\n> \t+ * version is not less than the local server version.\n> \t+ */\n> \t+static bool\n> \t+is_builtin_aggpartialfunc_shippable(Oid aggpartialfn, PgFdwRelationInfo *fpinfo)\n> \t+{\n> \t+ bool shippable = true;\n> \t+\n> \t+ if (fpinfo->check_partial_aggregate_support)\n> \t+ {\n> \t+ if (fpinfo->remoteversion == 0)\n> \t+ {\n> \t+ PGconn *conn = GetConnection(fpinfo->user, false, NULL);\n> \t+\n> \t+ fpinfo->remoteversion = PQserverVersion(conn);\n> \t+ }\n> \t+ if (fpinfo->remoteversion < PG_VERSION_NUM)\n> \t+ shippable = false;\n> \t+ }\n> \t+ return shippable;\n> \t+}\n> \n> I think this needs to be explained in the docs. 
I am ready to adjust the patch to improve the wording whenever you are\n> ready. Should I do it now and post an updated version for you to use?\nThe following explanation was omitted from the documentation, so I added it.\n> * false - no check\n> * true, remove version < sender - check\nI have responded to your comment, but if there is a problem with the wording, could you please suggest a correction?\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n> -----Original Message-----\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Tuesday, September 26, 2023 7:31 AM\n> To: Fujii Yuki/藤井 雄規(MELCO/情報総研 DM最適G) <Fujii.Yuki@df.MitsubishiElectric.co.jp>\n> Cc: Alexander Pyhalov <a.pyhalov@postgrespro.ru>; Finnerty, Jim <jfinnert@amazon.com>; PostgreSQL-development\n> <pgsql-hackers@postgresql.org>; Andres Freund <andres@anarazel.de>; Tom Lane <tgl@sss.pgh.pa.us>; Tomas\n> Vondra <tomas.vondra@enterprisedb.com>; Julien Rouhaud <rjuju123@gmail.com>; Daniel Gustafsson\n> <daniel@yesql.se>; Ilya Gladyshev <i.gladyshev@postgrespro.ru>\n> Subject: Re: Partial aggregates pushdown\n> \n> On Mon, Sep 25, 2023 at 03:18:13AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Hi Mr.Bruce, Mr.Pyhalov, Mr.Finnerty, hackers.\n> >\n> > Thank you for your valuable comments. 
I sincerely apologize for the very late reply.\n> > Here is a response to your comments or a fix to the patch.\n> >\n> > Tuesday, August 8, 2023 at 3:31 Bruce Momjian\n> > > > I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n> > > > Instead, we have addressed the following.\n> > > > \"If check_partial_aggregate_support is true and the remote server\n> > > > version is older than the local server version, postgres_fdw does\n> > > > not assume that the partial aggregate function is on the remote server unless the partial aggregate function and the\n> aggregate function match.\"\n> > > > The reason for this is to maintain compatibility with any\n> > > > aggregate function that does not support partial aggregate in one version of V1 (V1 is PG17 or higher), even if the\n> next version supports partial aggregate.\n> > > > For example, string_agg does not support partial aggregation in\n> > > > PG15, but it will support partial aggregation in PG16.\n> > >\n> > > Just to clarify, I think you are saying:\n> > >\n> > > If check_partial_aggregate_support is true and the remote server\n> > > version is older than the local server version, postgres_fdw\n> > > checks if the partial aggregate function exists on the remote\n> > > server during planning and only uses it if it does.\n> > >\n> > > I tried to phrase it in a positive way, and mentioned the plan time\n> > > distinction. Also, I am sorry I was away for most of July and am\n> > > just getting to this.\n> > Thanks for your comment. In the documentation, the description of\n> > check_partial_aggregate_support is as follows (please see postgres-fdw.sgml).\n> > --\n> > check_partial_aggregate_support (boolean) Only if this option is true,\n> > during query planning, postgres_fdw connects to the remote server and check if the remote server version is older than\n> the local server version. 
If so, postgres_fdw assumes that for each built-in aggregate function, the partial aggregate\n> function is not defined on the remote server unless the partial aggregate function and the aggregate function match. The\n> default is false.\n> > --\n> \n> My point is that there are three behaviors:\n> \n> \t* false - no check\n> \t* true, remote version >= sender - no check\n> \t* true, remote version < sender - check\n> \n> Here is your code:\n> \n> \t+ * Check that a built-in aggpartialfunc exists on the remote server. If\n> \t+ * check_partial_aggregate_support is false, we assume the partial aggregate\n> \t+ * function exists on the remote server. Otherwise we assume the partial\n> \t+ * aggregate function exists on the remote server only if the remote server\n> \t+ * version is not less than the local server version.\n> \t+ */\n> \t+static bool\n> \t+is_builtin_aggpartialfunc_shippable(Oid aggpartialfn, PgFdwRelationInfo *fpinfo)\n> \t+{\n> \t+ bool shippable = true;\n> \t+\n> \t+ if (fpinfo->check_partial_aggregate_support)\n> \t+ {\n> \t+ if (fpinfo->remoteversion == 0)\n> \t+ {\n> \t+ PGconn *conn = GetConnection(fpinfo->user, false, NULL);\n> \t+\n> \t+ fpinfo->remoteversion = PQserverVersion(conn);\n> \t+ }\n> \t+ if (fpinfo->remoteversion < PG_VERSION_NUM)\n> \t+ shippable = false;\n> \t+ }\n> \t+ return shippable;\n> \t+}\n> \n> I think this needs to be explained in the docs. I am ready to adjust the patch to improve the wording whenever you are\n> ready. 
Should I do it now and post an updated version for you to use?\n> \n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.", "msg_date": "Tue, 26 Sep 2023 06:26:25 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-09-25 06:18:\n> Hi Mr.Bruce, Mr.Pyhalov, Mr.Finnerty, hackers.\n> \n> Thank you for your valuable comments. I sincerely apologize for the\n> very late reply.\n> Here is a response to your comments or a fix to the patch.\n> \n> Tuesday, August 8, 2023 at 3:31 Bruce Momjian\n>> > I have modified the program except for the point \"if the version of the remote server is less than PG17\".\n>> > Instead, we have addressed the following.\n>> > \"If check_partial_aggregate_support is true and the remote server version is older than the local server\n>> > version, postgres_fdw does not assume that the partial aggregate function is on the remote server unless\n>> > the partial aggregate function and the aggregate function match.\"\n>> > The reason for this is to maintain compatibility with any aggregate function that does not support partial\n>> > aggregate in one version of V1 (V1 is PG17 or higher), even if the next version supports partial aggregate.\n>> > For example, string_agg does not support partial aggregation in PG15, but it will support partial aggregation\n>> > in PG16.\n>> \n>> Just to clarify, I think you are saying:\n>> \n>> If check_partial_aggregate_support is true and the remote \n>> server\n>> version is older than the local server version, postgres_fdw\n>> checks if the partial aggregate function exists on the remote\n>> server during planning and only uses it if it does.\n>> \n>> I tried to phrase it in a positive way, and mentioned the plan time\n>> 
distinction. Also, I am sorry I was away for most of July and am just\n>> getting to this.\n> Thanks for your comment. In the documentation, the description of\n> check_partial_aggregate_support is as follows\n> (please see postgres-fdw.sgml).\n> --\n> check_partial_aggregate_support (boolean)\n> Only if this option is true, during query planning, postgres_fdw\n> connects to the remote server and check if the remote server version\n> is older than the local server version. If so, postgres_fdw assumes\n> that for each built-in aggregate function, the partial aggregate\n> function is not defined on the remote server unless the partial\n> aggregate function and the aggregate function match. The default is\n> false.\n> --\n> \n> Thursday, 20 July 2023 19:23 Alexander Pyhalov \n> <a.pyhalov@postgrespro.ru>:\n>> Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-07-19 03:43:\n>> > Hi Mr.Pyhalov, hackers.\n>> \n>> > 3)\n>> > I modified the patch to safely do a partial aggregate pushdown for\n>> > queries which contain having clauses.\n>> >\n>> \n>> Hi.\n>> Sorry, but I don't see how it could work.\n> We apologize for any inconvenience caused.\n> Thanks to Pyhalov's and Jim's comments, I have realized that I have\n> made a fundamental mistake regarding the pushdown of the HAVING clause\n> and the difficulty of achieving it performing Partial aggregate\n> pushdown.\n> So, I removed the codes about pushdown of the HAVING clause performing\n> Partial aggregate pushdown.\n> \n> Thursday, 20 July 2023 19:23 Alexander Pyhalov \n> <a.pyhalov@postgrespro.ru>:\n>> As for changes in planner.c (setGroupClausePartial()) I have several\n>> questions.\n>> \n>> 1) Why don't we add non_group_exprs to pathtarget->exprs when\n>> partial_target->exprs is not set?\n>> \n>> 2) We replace extra->partial_target->exprs with partial_target->exprs\n>> after processing. Why are we sure that after this tleSortGroupRef is\n>> correct?\n> Response to 1)\n> The code you pointed out was unnecessary. 
I have removed this code.\n> Also, the process of adding PlaceHolderVar's expr to partial_target was \n> missing.\n> So I fixed this.\n> \n> Response to 2)\n> The procedure for making extra->groupClausePartial and \n> extra->partial_target\n> in make_partial_grouping_target in this patch is as follows.\n> STEP1. From grouping_target->exprs, extract Aggref, Var and\n> Placeholdervar that are not included in Aggref.\n> STEP2. setGroupClausePartial sets the copy of original groupClause to\n> extra->groupClausePartial\n> and sets the copy of original partial_target to extra->partial_target.\n> STEP3. setGroupClausePartial adds Var and Placeholdervar in STEP1 to\n> partial_target.\n> The sortgroupref value to be added to partial_target->sortgrouprefs \n> is set to\n> (the maximum value of the existing sortgrouprefs) + 1.\n> setGroupClausePartial adds a sortgroupclause sgc whose\n> sgc->tlesortgroupref\n> matches that sortgroupref to the GroupClause list.\n> STEP4. add_new_columns_to_pathtarget adds STEP1's Aggref to \n> partial_target.\n> \n> Due to STEP2, the list of tlesortgrouprefs set in\n> extra->groupClausePartial is not duplicated.\n\nDo you mean that extra->partial_target->sortgrouprefs is not replaced, \nand so we preserve tlesortgroupref numbers?\nI'm suspicious about rewriting extra->partial_target->exprs with \npartial_target->exprs - I'm still not sure why we\n don't lose information, added by add_column_to_pathtarget() to \nextra->partial_target->exprs?\n\nAlso look at the following example.\n\nEXPLAIN VERBOSE SELECT count(*) , (b/2)::numeric FROM pagg_tab GROUP BY \nb/2 ORDER BY 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Sort (cost=511.35..511.47 rows=50 width=44)\n Output: (count(*)), ((((pagg_tab.b / 2)))::numeric), ((pagg_tab.b / \n2))\n Sort Key: (count(*))\n -> Finalize HashAggregate (cost=509.06..509.94 rows=50 width=44)\n Output: count(*), (((pagg_tab.b / 
2)))::numeric, ((pagg_tab.b / \n2))\n Group Key: ((pagg_tab.b / 2))\n -> Append (cost=114.62..506.06 rows=600 width=16)\n -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n Output: ((pagg_tab.b / 2)), (PARTIAL count(*)), \npagg_tab.b\n Relations: Aggregate on (public.fpagg_tab_p1 \npagg_tab)\n Remote SQL: SELECT (b / 2), count(*), b FROM \npublic.pagg_tab_p1 GROUP BY 1, 2\n -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n Output: ((pagg_tab_1.b / 2)), (PARTIAL count(*)), \npagg_tab_1.b\n Relations: Aggregate on (public.fpagg_tab_p2 \npagg_tab_1)\n Remote SQL: SELECT (b / 2), count(*), b FROM \npublic.pagg_tab_p2 GROUP BY 1, 2\n -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n Output: ((pagg_tab_2.b / 2)), (PARTIAL count(*)), \npagg_tab_2.b\n Relations: Aggregate on (public.fpagg_tab_p3 \npagg_tab_2)\n Remote SQL: SELECT (b / 2), count(*), b FROM \npublic.pagg_tab_p3 GROUP BY 1, 2\n\nNote that group by is still deparsed incorrectly.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 26 Sep 2023 16:15:14 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Sep 26, 2023 at 06:26:25AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Bruce.\n> > I think this needs to be explained in the docs. I am ready to adjust the patch to improve the wording whenever you are\n> > ready. 
Should I do it now and post an updated version for you to use?\n> The following explanation was omitted from the documentation, so I added it.\n> > * false - no check\n> > * true, remove version < sender - check\n> I have responded to your comment, but if there is a problem with the wording, could you please suggest a correction?\n\nI like your new wording, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 26 Sep 2023 10:16:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian, Mr.Pyhalov.\n\nTuesday, 26 September 2023 22:15 Alexander Pyhalov <a.pyhalov@postgrespro.ru>:\n> Do you mean that extra->partial_target->sortgrouprefs is not replaced,\n> and so we preserve tlesortgroupref numbers?\nYes, that is correct.\n\n> I'm suspicious about rewriting extra->partial_target->exprs with\n> partial_target->exprs - I'm still not sure why we\n> don't we loose information, added by add_column_to_pathtarget() to\n> extra->partial_target->exprs?\n>\n> Also look at the following example.\n>\n> EXPLAIN VERBOSE SELECT count(*) , (b/2)::numeric FROM pagg_tab GROUP BY\n> b/2 ORDER BY 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------\n> Sort (cost=511.35..511.47 rows=50 width=44)\n> Output: (count(*)), ((((pagg_tab.b / 2)))::numeric), ((pagg_tab.b /\n> 2))\n> Sort Key: (count(*))\n> -> Finalize HashAggregate (cost=509.06..509.94 rows=50 width=44)\n> Output: count(*), (((pagg_tab.b / 2)))::numeric, ((pagg_tab.b /\n> 2))\n> Group Key: ((pagg_tab.b / 2))\n> -> Append (cost=114.62..506.06 rows=600 width=16)\n> -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n> Output: ((pagg_tab.b / 2)), (PARTIAL count(*)),\n> pagg_tab.b\n> Relations: Aggregate on (public.fpagg_tab_p1\n> 
pagg_tab)\n> Remote SQL: SELECT (b / 2), count(*), b FROM\n> public.pagg_tab_p1 GROUP BY 1, 2\n> -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n> Output: ((pagg_tab_1.b / 2)), (PARTIAL count(*)),\n> pagg_tab_1.b\n> Relations: Aggregate on (public.fpagg_tab_p2\n> pagg_tab_1)\n> Remote SQL: SELECT (b / 2), count(*), b FROM\n> public.pagg_tab_p2 GROUP BY 1, 2\n> -> Foreign Scan (cost=114.62..167.69 rows=200 width=16)\n> Output: ((pagg_tab_2.b / 2)), (PARTIAL count(*)),\n> pagg_tab_2.b\n> Relations: Aggregate on (public.fpagg_tab_p3\n> pagg_tab_2)\n> Remote SQL: SELECT (b / 2), count(*), b FROM\n> public.pagg_tab_p3 GROUP BY 1, 2\n>\n> Note that group by is still deparsed incorrectly.\nThank you for your comments. You are right.\nIt is a mistake to rewrite extra->partial_target->exprs with partial_target->exprs.\nI fixed this point.\n\nSeptember 26, 2023 (Tue) 23:16 Bruce Momjian <bruce@momjian.us>.\n> On Tue, Sep 26, 2023 at 06:26:25AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Hi Mr.Bruce.\n> > > I think this needs to be explained in the docs. I am ready to adjust the patch to improve the wording whenever you are\n> > > ready. 
Should I do it now and post an updated version for you to use?\n> > The following explanation was omitted from the documentation, so I added it.\n> > > * false - no check\n> > > * true, remove version < sender - check\n> > I have responded to your comment, but if there is a problem with the wording, could you please suggest a correction?\n>\n> I like your new wording, thanks.\nThanks.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Tue, 26 Sep 2023 22:35:28 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp писал 2023-09-27 01:35:\n> Hi Mr.Momjian, Mr.Pyhalov.\n> \n> Tuesday, 26 September 2023 22:15 Alexander Pyhalov \n> <a.pyhalov@postgrespro.ru>:\n>> Do you mean that extra->partial_target->sortgrouprefs is not replaced,\n>> and so we preserve tlesortgroupref numbers?\n> Yes, that is correct.\n> \n>> I'm suspicious about rewriting extra->partial_target->exprs with\n>> partial_target->exprs - I'm still not sure why we\n>> don't we loose information, added by add_column_to_pathtarget() to\n>> extra->partial_target->exprs?\n>> \n\nHi.\n\nIn postgres_fdw.sql\n\n\"Partial aggregates are unsafe to push down having clause when there are \npartial aggregates\" - this comment likely should be fixed.\n\nSome comments should be added to setGroupClausePartial() and to \nmake_partial_grouping_target() - especially why setGroupClausePartial()\nis called prior to add_new_columns_to_pathtarget().\n\nI'm not sure that I like this mechanics of adding sort group clauses - \nit seems we do in core additional work, which is of use only for\none extension, but at least it seems to be working.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 27 Sep 2023 15:36:34 +0300", 
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for your comments.\r\n> In postgres_fdw.sql\r\n>\r\n> \"Partial aggregates are unsafe to push down having clause when there are\r\n> partial aggregates\" - this comment likely should be fixed.\r\nFixed.\r\n\r\n> Some comments should be added to setGroupClausePartial() and to\r\n> make_partial_grouping_target() - especially why setGroupClausePartial()\r\n> is called prior to add_new_columns_to_pathtarget().\r\nI have added comments to setGroupClausePartial() and to make_partial_grouping_target().\r\n\r\n> I'm not sure that I like this mechanics of adding sort group clauses -\r\n> it seems we do in core additional work, which is of use only for\r\n> one extension, but at least it seems to be working.\r\nWe cannot deparse the original sort group clauses and pathtarget\r\nwhen performing partial aggregate pushdown by any FDWs.\r\nSo I think the additional sort group clauses and pathtarget are\r\nneeded by any FDWs, not only 
postgres_fdw.\n> \n\nHi.\nIt seems to me that *fdw postfixes don't clarify things, but just make \nnaming more ugly.\n\n+ * Adding these Vars and PlaceHolderVars to PathTarget,\n+ * FDW cannot deparse this by the original List of SortGroupClauses.\n+ * So, before this adding process,\n+ * setGroupClausePartial generates another Pathtarget and another\n+ * List of SortGroupClauses for FDW.\n\nIt seems that something like:\n\n/*\n * Modified PathTarget cannot be used by FDW as-is to deparse this \nstatement.\n * So, before modifying PathTarget, setGroupClausePartial generates\n * another Pathtarget and another List of SortGroupClauses\n * to make deparsing possible.\n */\n\nsounds better.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Thu, 28 Sep 2023 09:15:43 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Pyhalov.\r\n\r\nAlexander Pyhalov <a.pyhalov@postgrespro.ru>\r\nThursday, September 28, 2023 3:16 PM\r\n> It seems to me that *fdw postfixes don't clarify things, but just make naming more ugly.\r\nI have removed *fdw postfixes.\r\n\r\n> + * Adding these Vars and PlaceHolderVars to PathTarget,\r\n> + * FDW cannot deparse this by the original List of SortGroupClauses.\r\n> + * So, before this adding process,\r\n> + * setGroupClausePartial generates another Pathtarget and another\r\n> + * List of SortGroupClauses for FDW.\r\n> \r\n> It seems that something like:\r\n> \r\n> /*\r\n> * Modified PathTarget cannot be used by FDW as-is to deparse this statement.\r\n> * So, before modifying PathTarget, setGroupClausePartial generates\r\n> * another Pathtarget and another List of SortGroupClauses\r\n> * to make deparsing possible.\r\n> */\r\n> \r\n> sounds better.\r\nThank you for the suggested modifications. 
I have modified it according to your suggestion.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Thu, 28 Sep 2023 07:21:10 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi hackers.\r\n\r\nBecause there is a regression in pg_dump.c, I fixed it.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 18 Oct 2023 05:22:34 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Oct 18, 2023 at 05:22:34AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi hackers.\n> \n> Because there is a regression in pg_dump.c, I fixed it.\n\nFujii-san, to get this patch closer to finished, can I modify this\nversion of the patch to improve some wording and post an updated version\nyou can use for future changes?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 23 Oct 2023 12:55:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian.\n\n> Fujii-san, to get this patch closer to finished, can I modify this version of the patch to improve some wording and post an\n> updated version you can use for future changes?\nYes, I greatly appreciate your offer.\nI would very much appreciate your modifications.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Tue, 24 Oct 2023 
00:12:41 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Oct 24, 2023 at 12:12:41AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr.Momjian.\n> \n> > Fujii-san, to get this patch closer to finished, can I modify this version of the patch to improve some wording and post an\n> > updated version you can use for future changes?\n> Yes, I greatly appreciate your offer.\n> I would very much appreciate your modifications.\n\nI am almost done updating the patch, but I got stuck on how the feature\nis supposed to work. This documentation sentence is where I got\nconfused:\n\n\t<varlistentry>\n\t <term><literal>check_partial_aggregate_support</literal> (<type>boolean</type>)</term>\n\t <listitem>\n\t <para>\n\t If this option is false, <filename>postgres_fdw</filename> assumes\n\t that for each built-in aggregate function,\n\t the partial aggregate function is defined on the remote server\n\t without checking the remote server version.\n\t If this option is true, during query planning,\n\t <filename>postgres_fdw</filename> connects to the remote server\n\t and checks if the remote server version is older than the local server version.\n\t If so,\n\t <filename>postgres_fdw</filename>\n-->\t assumes that for each built-in aggregate function, the partial aggregate function is not defined\n-->\t on the remote server unless the partial aggregate function and the aggregate\n-->\t function match.\n\t Otherwise <filename>postgres_fdw</filename> assumes that for each built-in aggregate function,\n\t the partial aggregate function is defined on the remote server.\n\t The default is <literal>false</literal>.\n\t </para>\n\t </listitem>\n\t</varlistentry>\n\nWhat does that marked sentence mean? What is match? Are one or both of\nthese remote? 
It sounds like you are checking the local aggregate\nagainst the remote partial aggregate, but I don't see any code that does\nthis in the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 25 Oct 2023 18:08:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Thursday, October 26, 2023 7:08 AM\n> I am almost done updating the patch, but I got stuck on how the feature is supposed to work. This documentation\n> sentence is where I got\n> confused:\n> \n> \t<varlistentry>\n> \t <term><literal>check_partial_aggregate_support</literal> (<type>boolean</type>)</term>\n> \t <listitem>\n> \t <para>\n> \t If this option is false, <filename>postgres_fdw</filename> assumes\n> \t that for each built-in aggregate function,\n> \t the partial aggregate function is defined on the remote server\n> \t without checking the remote server version.\n> \t If this option is true, during query planning,\n> \t <filename>postgres_fdw</filename> connects to the remote server\n> \t and checks if the remote server version is older than the local server version.\n> \t If so,\n> \t <filename>postgres_fdw</filename>\n> -->\t assumes that for each built-in aggregate function, the partial aggregate function is not defined\n> -->\t on the remote server unless the partial aggregate function and the aggregate\n> -->\t function match.\n> \t Otherwise <filename>postgres_fdw</filename> assumes that for each built-in aggregate function,\n> \t the partial aggregate function is defined on the remote server.\n> \t The default is <literal>false</literal>.\n> \t </para>\n> \t </listitem>\n> \t</varlistentry>\n> \n> What does that marked sentence mean? What is match? Are one or both of these remote? 
It sounds like you are\n> checking the local aggregate against the remote partial aggregate, but I don't see any code that does this in the patch.\nThis sentence means that\n\"If the partial aggregate function has the same OID as the aggregate function,\nthen postgres_fdw assumes that for each built-in aggregate function, the partial aggregate function is not defined\n on the remote server.\"\n\"Match\" means that the partial aggregate function has the same OID as the aggregate function in the local server.\nBut, in v30, there is no code which checks whether the partial aggregate function has the same OID as the aggregate function in the local server.\nSo I modified the code of is_builtin_aggpartialfunc_shippable().\nAlso, I modified the wording in postgres-fdw.sgml.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Thu, 26 Oct 2023 11:11:09 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Oct 26, 2023 at 11:11:09AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > \t and checks if the remote server version is older than the local server version.\n> > \t If so,\n> > \t <filename>postgres_fdw</filename>\n> > -->\t assumes that for each built-in aggregate function, the partial aggregate function is not defined\n> > -->\t on the remote server unless the partial aggregate function and the aggregate\n> > -->\t function match.\n> > \t Otherwise <filename>postgres_fdw</filename> assumes that for each built-in aggregate function,\n> > \t the partial aggregate function is defined on the remote server.\n> > \t The default is <literal>false</literal>.\n> > \t </para>\n> > \t </listitem>\n> > \t</varlistentry>\n> > \n> > What does that marked sentence mean? What is match? Are one or both of these remote? 
It sounds like you are\n> > checking the local aggregate against the remote partial aggregate, but I don't see any code that does this in the patch.\n> This sentence means that\n> \"If the partial aggregate function has the same OID as the aggregate function,\n> then postgres_fdw assumes that for each built-in aggregate function, the partial aggregate function is not defined\n> on the remote server.\"\n> \"Match\" means that the partial aggregate function has the same OID as the aggregate function in local server.\n> But, in v30, there is no code which checks the partial aggregate function has the same OID as the aggregate function in local server.\n> So I modified the code of is_builtin_aggpartialfunc_shippable().\n> Also, I modified wording postgres-fdw.sgml.\n\nYes, that is what I needed. Attached is a modification of your v31\npatch (the most recent) that mostly improves the documentation and\ncomments. What else needs to be done before committers start to review\nthis?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 26 Oct 2023 15:43:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Momjian.\n\nThank you for your improvement.\nAs a matter of detail, I think that the areas marked below are erroneous.\n\n--\n+ Pushdown causes aggregate function cals to send partial aggregate\n ^\n+ function calls to the remote server. If the partial aggregate\n+ function doesn't doesn't exist on the remote server, it causes\n ^^^^^^^\n--\n\n> What else needs to be done before committers start to review\n> this?\nThere are no others. 
May I make a new version of v31 with your\nsuggested improvements for the committer's review?\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Fri, 27 Oct 2023 02:44:42 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Fri, Oct 27, 2023 at 02:44:42AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Momjian.\n> \n> Thank you for your improvement.\n> As a matter of detail, I think that the areas marked below are erroneous.\n> \n> --\n> + Pushdown causes aggregate function cals to send partial aggregate\n> ^\n> + function calls to the remote server. If the partial aggregate\n> + function doesn't doesn't exist on the remote server, it causes\n> ^^^^^^^\n> --\n\nAgreed. Do you want to fix that on your vesion? I don't have any more\nimprovements to make.\n\n> > What else needs to be done before committers start to review\n> > this?\n> There are no others. May I make a new version of v31 with your\n> suggested improvements for the committer's review?\n\nYes, please. I think the updated docs will help people understand how\nthe patch works.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 26 Oct 2023 23:06:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Friday, October 27, 2023 12:06 PM\n> > Thank you for your improvement.\n> > As a matter of detail, I think that the areas marked below are erroneous.\n> >\n> > --\n> > + Pushdown causes aggregate function cals to send partial aggregate\n> > ^\n> > + function calls to the remote server. 
If the partial aggregate\n> > + function doesn't doesn't exist on the remote server, it causes\n> > ^^^^^^^\n> > --\n> \n> Agreed. Do you want to fix that on your vesion? I don't have any more improvements to make.\nYes, I have fixed that in v32(Attached).\n\n> > > What else needs to be done before committers start to review this?\n> > There are no others. May I make a new version of v31 with your\n> > suggested improvements for the committer's review?\n> \n> Yes, please. I think the updated docs will help people understand how the patch works.\nThank you. v32(Attached) is tha updated version.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Fri, 27 Oct 2023 05:32:48 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Hackers.\n\nI have rebased this patch.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 13 Nov 2023 05:48:47 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Hackers.\n\nIn postgres_fdw.sql, I have corrected the output format for floating point numbers\nby extra_float_digits.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 13 Nov 2023 08:25:48 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Nov 13, 2023 at 3:26 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> In postgres_fdw.sql, I 
have corrected the output format for floating point numbers\n> by extra_float_digits.\n\nLooking at this, I find that it's not at all clear to me how the\npartial aggregate function is defined. Let's look at what we have for\ndocumentation:\n\n+ <para>\n+ Paraemter <literal>AGGPARTIALFUNC</literal> optionally defines a\n+ partial aggregate function used for partial aggregate pushdown; see\n+ <xref linkend=\"xaggr-partial-aggregates\"/> for details.\n+ </para>\n\n+ Partial aggregate function (zero if none).\n+ See <xref linkend=\"partial-aggregate-pushdown\"/> for the definition\n+ of partial aggregate function.\n\n+ Partial aggregate pushdown is an optimization for queries that contains\n+ aggregate expressions for a partitioned table across one or more remote\n+ servers. If multiple conditions are met, partial aggregate function\n\n+ When partial aggregate pushdown is used for aggregate expressions,\n+ remote queries replace aggregate function calls with partial\n+ aggregate function calls. If the data type of the state value is not\n\nBut there's no definition of what the behavior of the function is\nanywhere that I can see, not even in <sect2\nid=\"partial-aggregate-pushdown\">. Everywhere it only describes how the\npartial aggregate function is used, not what it is supposed to do.\n\nLooking at the changes in pg_aggregate.dat, it seems like the partial\naggregate function is a second aggregate defined in a way that mostly\nmatches the original, except that (1) if the original final function\nwould have returned a data type other than internal, then the final\nfunction is removed; and (2) if the original final function would have\nreturned a value of internal type, then the final function is the\nserialization function of the original aggregate. I think that's a\nreasonable definition, but the documentation and code comments need to\nbe a lot clearer.\n\nI do have a concern about this, though. It adds a lot of bloat. 
It\nadds a whole lot of additional entries to pg_aggregate, and every new\naggregate we add in the future will require a bonus entry for this,\nand it needs a bunch of new pg_proc entries as well. One idea that\nI've had in the past is to instead introduce syntax that just does\nthis, without requiring a separate aggregate definition in each case.\nFor example, maybe instead of changing string_agg(whatever) to\nstring_agg_p_text_text(whatever), you can say PARTIAL_AGGREGATE\nstring_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or\nsomething. Then all aggregates could be treated in a generic way. I'm\nnot completely sure that's better, but I think it's worth considering.\n\nI think that the control mechanism needs some thought. Right now,\nthere are two possible behaviors: either we assume that the local and\nremote sides are the same unconditionally, or we assume that they're\nthe same if the remote side is a new enough version. I do like having\nthose behaviors available, but I wonder if we need to do something\nbetter or different. What if somebody wants to push down a\nnon-built-in aggregate, for example? I realize that we don't have\ngreat solutions to the problem of knowing which functions are\npush-downable in general, and I don't know that partial aggregation\nneeds to be any better than anything else, but it's probably worth\ncomparing and contrasting the approach we take here with the\napproaches we've taken in other, similar cases. From that point of\nview, I think check_partial_aggregate_support is a novelty: we don't\ndo those kinds of checks in other cases, AFAIK. But on the other hand,\nthere is the 'extensions' argument to postgres_fdw.\n\nI don't think the patch does a good job explaining why HAVING,\nDISTINCT, and ORDER BY are a problem. 
It seems to me that HAVING\nshouldn't really be a problem, because HAVING is basically a WHERE\nclause that occurs after aggregation is complete, and whether or not\nthe aggregation is safe shouldn't depend on what we're going to do\nwith the value afterward. The HAVING clause can't necessarily be\npushed to the remote side, but I don't see how or why it could make\nthe aggregate itself unsafe to push down. DISTINCT and ORDER BY are a\nlittle trickier: if we pushed down DISTINCT, we'd still have to\nre-DISTINCT-ify when combining locally, and if we pushed down ORDER\nBY, we'd have to do a merge pass to combine the returned values unless\nwe could prove that the partitions were non-overlapping ranges that\nwould be visited in the correct order. Although that all sounds\ndoable, I think it's probably a good thing that the current patch\ndoesn't try to handle it -- this is complicated already. But it should\nexplain why it's not handling it and maybe even a bit about how it\ncould be handling in the future, rather than just saying \"well, this\nkind of thing is not safe.\" The trouble with that explanation is that\nit does nothing to help the reader understand whether the thing in\nquestion is *fundamentally* unsafe or whether we just don't have the\nright code to make it work.\n\nTypo: Paraemter\n\nI'm so sorry to keep complaining about comments, but I think the\ncomments in src/backend/optimizer are very far from being adequate.\nThey are strictly formulaic and don't really explain anything. For\nexample, I see that the patch adds a partial_target to\nGroupPathExtraData, but how do I understand the reason why we now need\na second pathtarget beside the one that already exists? Certainly not\nfrom the comments in setGroupClausePartial, because there basically\naren't any. True, there's a header comment, but it just says we\ngenerate this thing, not WHY we generate this thing. 
There's nothing\nmeaningful to be found in src/include/nodes/pathnodes.h about why\nwe're doing this, either.\n\nAnd this problem really extends throughout the patch: comments are\nmostly short and just describe what the code does, not WHY it does\nthat. And the WHY is really the important part. Otherwise we will not\nbe able to maintain this code going forward.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:51:33 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Nov 20, 2023 at 03:51:33PM -0500, Robert Haas wrote:\n> On Mon, Nov 13, 2023 at 3:26 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> > In postgres_fdw.sql, I have corrected the output format for floating point numbers\n> > by extra_float_digits.\n> \n> Looking at this, I find that it's not at all clear to me how the\n> partial aggregate function is defined. Let's look at what we have for\n> documentation:\n> \n> + <para>\n> + Paraemter <literal>AGGPARTIALFUNC</literal> optionally defines a\n> + partial aggregate function used for partial aggregate pushdown; see\n> + <xref linkend=\"xaggr-partial-aggregates\"/> for details.\n> + </para>\n> \n> + Partial aggregate function (zero if none).\n> + See <xref linkend=\"partial-aggregate-pushdown\"/> for the definition\n> + of partial aggregate function.\n> \n> + Partial aggregate pushdown is an optimization for queries that contains\n> + aggregate expressions for a partitioned table across one or more remote\n> + servers. If multiple conditions are met, partial aggregate function\n> \n> + When partial aggregate pushdown is used for aggregate expressions,\n> + remote queries replace aggregate function calls with partial\n> + aggregate function calls. 
If the data type of the state value is not\n> \n> But there's no definition of what the behavior of the function is\n> anywhere that I can see, not even in <sect2\n> id=\"partial-aggregate-pushdown\">. Everywhere it only describes how the\n> partial aggregate function is used, not what it is supposed to do.\n\nYes, I had to figure that out myself, and I was wondering how much\ndetail to have in our docs vs README files vs. C comments. I think we\nshould put more details somewhere.\n\n> Looking at the changes in pg_aggregate.dat, it seems like the partial\n> aggregate function is a second aggregate defined in a way that mostly\n> matches the original, except that (1) if the original final function\n> would have returned a data type other than internal, then the final\n> function is removed; and (2) if the original final function would have\n> returned a value of internal type, then the final function is the\n> serialization function of the original aggregate. I think that's a\n> reasonable definition, but the documentation and code comments need to\n> be a lot clearer.\n\nAgreed. I wasn't sure enough about this to add it when I was reviewing\nthe patch.\n\n> I do have a concern about this, though. It adds a lot of bloat. It\n> adds a whole lot of additional entries to pg_aggregate, and every new\n> aggregate we add in the future will require a bonus entry for this,\n> and it needs a bunch of new pg_proc entries as well. One idea that\n> I've had in the past is to instead introduce syntax that just does\n> this, without requiring a separate aggregate definition in each case.\n> For example, maybe instead of changing string_agg(whatever) to\n> string_agg_p_text_text(whatever), you can say PARTIAL_AGGREGATE\n> string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or\n> something. Then all aggregates could be treated in a generic way. 
I'm\n> not completely sure that's better, but I think it's worth considering.\n\nSo use an SQL keyword to indicate a pushdown call? We could then\nautomate the behavior rather than requiring special catalog functions?\n\n> I think that the control mechanism needs some thought. Right now,\n> there are two possible behaviors: either we assume that the local and\n> remote sides are the same unconditionally, or we assume that they're\n> the same if the remote side is a new enough version. I do like having\n> those behaviors available, but I wonder if we need to do something\n> better or different. What if somebody wants to push down a\n> non-built-in aggregate, for example? I realize that we don't have\n\nIt does allow specification of extensions that can be pushed down.\n\n> great solutions to the problem of knowing which functions are\n> push-downable in general, and I don't know that partial aggregation\n> needs to be any better than anything else, but it's probably worth\n> comparing and contrasting the approach we take here with the\n> approaches we've taken in other, similar cases. From that point of\n> view, I think check_partial_aggregate_support is a novelty: we don't\n> do those kinds of checks in other cases, AFAIK. But on the other hand,\n> there is the 'extensions' argument to postgres_fdw.\n\nRight. I am not sure how to improve what the patch does.\n\n> I don't think the patch does a good job explaining why HAVING,\n> DISTINCT, and ORDER BY are a problem. It seems to me that HAVING\n> shouldn't really be a problem, because HAVING is basically a WHERE\n> clause that occurs after aggregation is complete, and whether or not\n> the aggregation is safe shouldn't depend on what we're going to do\n> with the value afterward. The HAVING clause can't necessarily be\n> pushed to the remote side, but I don't see how or why it could make\n> the aggregate itself unsafe to push down.
DISTINCT and ORDER BY are a\n> little trickier: if we pushed down DISTINCT, we'd still have to\n> re-DISTINCT-ify when combining locally, and if we pushed down ORDER\n> BY, we'd have to do a merge pass to combine the returned values unless\n> we could prove that the partitions were non-overlapping ranges that\n> would be visited in the correct order. Although that all sounds\n> doable, I think it's probably a good thing that the current patch\n> doesn't try to handle it -- this is complicated already. But it should\n> explain why it's not handling it and maybe even a bit about how it\n> could be handling in the future, rather than just saying \"well, this\n> kind of thing is not safe.\" The trouble with that explanation is that\n> it does nothing to help the reader understand whether the thing in\n> question is *fundamentally* unsafe or whether we just don't have the\n> right code to make it work.\n\nMakes sense.\n\n> Typo: Paraemter\n> \n> I'm so sorry to keep complaining about comments, but I think the\n> comments in src/backend/optimizer are very far from being adequate.\n> They are strictly formulaic and don't really explain anything. For\n> example, I see that the patch adds a partial_target to\n> GroupPathExtraData, but how do I understand the reason why we now need\n> a second pathtarget beside the one that already exists? Certainly not\n> from the comments in setGroupClausePartial, because there basically\n> aren't any. True, there's a header comment, but it just says we\n> generate this thing, not WHY we generate this thing. There's nothing\n> meaningful to be found in src/include/nodes/pathnodes.h about why\n> we're doing this, either.\n> \n> And this problem really extends throughout the patch: comments are\n> mostly short and just describe what the code does, not WHY it does\n> that. And the WHY is really the important part. Otherwise we will not\n> be able to maintain this code going forward.\n\nUnderstood. I wish I knew enough to add them myself. 
I can help if\nsomeone can supply the details.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 20 Nov 2023 17:48:42 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Nov 20, 2023 at 5:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I do have a concern about this, though. It adds a lot of bloat. It\n> > adds a whole lot of additional entries to pg_aggregate, and every new\n> > aggregate we add in the future will require a bonus entry for this,\n> > and it needs a bunch of new pg_proc entries as well. One idea that\n> > I've had in the past is to instead introduce syntax that just does\n> > this, without requiring a separate aggregate definition in each case.\n> > For example, maybe instead of changing string_agg(whatever) to\n> > string_agg_p_text_text(whatever), you can say PARTIAL_AGGREGATE\n> > string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or\n> > something. Then all aggregates could be treated in a generic way. I'm\n> > not completely sure that's better, but I think it's worth considering.\n>\n> So use an SQL keyword to indicates a pushdown call? We could then\n> automate the behavior rather than requiring special catalog functions?\n\nRight. It would require more infrastructure in the parser, planner,\nand executor, but it would be infinitely reusable instead of needing a\nnew thing for every aggregate. I think that might be better, but to be\nhonest I'm not totally sure.\n\n> > I don't think the patch does a good job explaining why HAVING,\n> > DISTINCT, and ORDER BY are a problem. 
It seems to me that HAVING\n> > shouldn't really be a problem, because HAVING is basically a WHERE\n> > clause that occurs after aggregation is complete, and whether or not\n> > the aggregation is safe shouldn't depend on what we're going to do\n> > with the value afterward. The HAVING clause can't necessarily be\n> > pushed to the remote side, but I don't see how or why it could make\n> > the aggregate itself unsafe to push down. DISTINCT and ORDER BY are a\n> > little trickier: if we pushed down DISTINCT, we'd still have to\n> > re-DISTINCT-ify when combining locally, and if we pushed down ORDER\n> > BY, we'd have to do a merge pass to combine the returned values unless\n> > we could prove that the partitions were non-overlapping ranges that\n> > would be visited in the correct order. Although that all sounds\n> > doable, I think it's probably a good thing that the current patch\n> > doesn't try to handle it -- this is complicated already. But it should\n> > explain why it's not handling it and maybe even a bit about how it\n> > could be handling in the future, rather than just saying \"well, this\n> > kind of thing is not safe.\" The trouble with that explanation is that\n> > it does nothing to help the reader understand whether the thing in\n> > question is *fundamentally* unsafe or whether we just don't have the\n> > right code to make it work.\n>\n> Makes sense.\n\nActually, I think I was wrong about this. We can't handle ORDER BY or\nDISTINCT because we can't distinct-ify or order after we've already\npartially aggregated. At least not in general, and not without\nadditional aggregate support functions. So what I said above was wrong\nwith respect to those. Or so I believe, anyway. 
But I still don't see\nwhy HAVING should be a problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Nov 2023 12:16:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Nov 21, 2023 at 12:16:41PM -0500, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 5:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I do have a concern about this, though. It adds a lot of bloat. It\n> > > adds a whole lot of additional entries to pg_aggregate, and every new\n> > > aggregate we add in the future will require a bonus entry for this,\n> > > and it needs a bunch of new pg_proc entries as well. One idea that\n> > > I've had in the past is to instead introduce syntax that just does\n> > > this, without requiring a separate aggregate definition in each case.\n> > > For example, maybe instead of changing string_agg(whatever) to\n> > > string_agg_p_text_text(whatever), you can say PARTIAL_AGGREGATE\n> > > string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or\n> > > something. Then all aggregates could be treated in a generic way. I'm\n> > > not completely sure that's better, but I think it's worth considering.\n> >\n> > So use an SQL keyword to indicates a pushdown call? We could then\n> > automate the behavior rather than requiring special catalog functions?\n> \n> Right. It would require more infrastructure in the parser, planner,\n> and executor, but it would be infinitely reusable instead of needing a\n> new thing for every aggregate. I think that might be better, but to be\n> honest I'm not totally sure.\n\nIt would make it automatic. I guess we need to look at how big the\npatch is to do it.\n\n> > > I don't think the patch does a good job explaining why HAVING,\n> > > DISTINCT, and ORDER BY are a problem. 
It seems to me that HAVING\n> > > shouldn't really be a problem, because HAVING is basically a WHERE\n> > > clause that occurs after aggregation is complete, and whether or not\n> > > the aggregation is safe shouldn't depend on what we're going to do\n> > > with the value afterward. The HAVING clause can't necessarily be\n> > > pushed to the remote side, but I don't see how or why it could make\n> > > the aggregate itself unsafe to push down. DISTINCT and ORDER BY are a\n> > > little trickier: if we pushed down DISTINCT, we'd still have to\n> > > re-DISTINCT-ify when combining locally, and if we pushed down ORDER\n> > > BY, we'd have to do a merge pass to combine the returned values unless\n> > > we could prove that the partitions were non-overlapping ranges that\n> > > would be visited in the correct order. Although that all sounds\n> > > doable, I think it's probably a good thing that the current patch\n> > > doesn't try to handle it -- this is complicated already. But it should\n> > > explain why it's not handling it and maybe even a bit about how it\n> > > could be handling in the future, rather than just saying \"well, this\n> > > kind of thing is not safe.\" The trouble with that explanation is that\n> > > it does nothing to help the reader understand whether the thing in\n> > > question is *fundamentally* unsafe or whether we just don't have the\n> > > right code to make it work.\n> >\n> > Makes sense.\n> \n> Actually, I think I was wrong about this. We can't handle ORDER BY or\n> DISTINCT because we can't distinct-ify or order after we've already\n> partially aggregated. At least not in general, and not without\n> additional aggregate support functions. So what I said above was wrong\n> with respect to those. Or so I believe, anyway. 
But I still don't see\n> why HAVING should be a problem.\n\nThis should probably be documented in the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 21 Nov 2023 15:34:19 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Robert Haas писал 2023-11-21 20:16:\n\n>> > I don't think the patch does a good job explaining why HAVING,\n>> > DISTINCT, and ORDER BY are a problem. It seems to me that HAVING\n>> > shouldn't really be a problem, because HAVING is basically a WHERE\n>> > clause that occurs after aggregation is complete, and whether or not\n>> > the aggregation is safe shouldn't depend on what we're going to do\n>> > with the value afterward. The HAVING clause can't necessarily be\n>> > pushed to the remote side, but I don't see how or why it could make\n>> > the aggregate itself unsafe to push down. DISTINCT and ORDER BY are a\n>> > little trickier: if we pushed down DISTINCT, we'd still have to\n>> > re-DISTINCT-ify when combining locally, and if we pushed down ORDER\n>> > BY, we'd have to do a merge pass to combine the returned values unless\n>> > we could prove that the partitions were non-overlapping ranges that\n>> > would be visited in the correct order. Although that all sounds\n>> > doable, I think it's probably a good thing that the current patch\n>> > doesn't try to handle it -- this is complicated already. 
But it should\n>> > explain why it's not handling it and maybe even a bit about how it\n>> > could be handling in the future, rather than just saying \"well, this\n>> > kind of thing is not safe.\" The trouble with that explanation is that\n>> > it does nothing to help the reader understand whether the thing in\n>> > question is *fundamentally* unsafe or whether we just don't have the\n>> > right code to make it work.\n>> \n>> Makes sense.\n> \n> Actually, I think I was wrong about this. We can't handle ORDER BY or\n> DISTINCT because we can't distinct-ify or order after we've already\n> partially aggregated. At least not in general, and not without\n> additional aggregate support functions. So what I said above was wrong\n> with respect to those. Or so I believe, anyway. But I still don't see\n> why HAVING should be a problem.\n\nHi. HAVING is also a problem. Consider the following query\n\nSELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down to\nforeign server as HAVING needs full aggregate result, but foreign server\ndon't know it.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 22 Nov 2023 09:32:58 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr. Haas, hackers.\r\n\r\nThank you for your thoughtful comments.\r\n\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 21, 2023 5:52 AM\r\n> I do have a concern about this, though. It adds a lot of bloat. It adds a whole lot of additional entries to pg_aggregate, and\r\n> every new aggregate we add in the future will require a bonus entry for this, and it needs a bunch of new pg_proc entries\r\n> as well. 
One idea that I've had in the past is to instead introduce syntax that just does this, without requiring a separate\r\n> aggregate definition in each case.\r\n> For example, maybe instead of changing string_agg(whatever) to string_agg_p_text_text(whatever), you can say\r\n> PARTIAL_AGGREGATE\r\n> string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or something. Then all aggregates could be treated\r\n> in a generic way. I'm not completely sure that's better, but I think it's worth considering.\r\nI believe this comment addresses a fundamental aspect of the approach.\r\nSo, firstly, could we discuss whether we should fundamentally reconsider the approach?\r\n\r\nThe approach adopted in this patch is as follows.\r\nApproach 1: Adding partial aggregation functions to the catalogs(pg_aggregate, pg_proc)\r\n\r\nThe approach proposed by Mr.Haas is as follows.\r\nApproach 2: Adding a keyword to the SQL syntax to indicate partial aggregation requests\r\n\r\nThe amount of code required to implement Approach 2 has not been investigated,\r\nbut comparing Approach 1 and Approach 2 in other aspects, \r\nI believe they each have the following advantages and disadvantages. \r\n\r\n1. Approach 1\r\n(1) Advantages\r\n(a) No need to change the SQL syntax\r\n(2) Disadvantages\r\n(a) Catalog bloat\r\nAs Mr.Haas pointed out, the catalog will bloat by adding partial aggregation functions (e.g. avg_p_int8(int8)) \r\nfor each individual aggregate function (e.g. 
avg(int8)) in pg_aggregate and pg_proc (theoretically doubling the size).\r\nSome PostgreSQL developers and users may find this uncomfortable.\r\n(b) Increase in manual procedures\r\nDevelopers of new aggregate functions (both built-in and user-defined) need to manually add the partial aggregation\r\nfunctions when defining the aggregate functions.\r\nHowever, the procedure for adding partial aggregation functions for a certain aggregate function can be automated,\r\nso this problem can be resolved by improving the patch.\r\nThe automation method involves the core part (AggregateCreate() and related functions) that executes\r\nthe CREATE AGGREGATE command for user-defined functions.\r\nFor built-in functions, it involves generating the initial data for the pg_aggregate catalog and pg_proc catalog from pg_aggregate.dat and pg_proc.dat\r\n(using the genbki.pl script and related scripts).\r\n\r\n2. Approach 2\r\n(1) Advantages\r\n(a) No need to add partial aggregate functions to the catalogs for each aggregation\r\n(2) Disadvantages\r\n(a) Need to add non-standard keywords to the SQL syntax.\r\n\r\nI did not choose Approach2 because I was not confident that the disadvantage mentioned in 2.(2)(a)\r\nwould be accepted by the PostgreSQL development community.\r\nIf it is accepted, I think Approach 2 is smarter.\r\nCould you please provide your opinion on which\r\napproach is preferable after comparing these two approaches?\r\nIf we cannot say anything without comparing the amount of source code, as Mr.Momjian mentioned,\r\nwe need to estimate the amount of source code required to implement Approach2.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n \r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Wed, 22 Nov 2023 10:16:16 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { 
"msg_contents": "On Wed, Nov 22, 2023 at 10:16:16AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> 2. Approach 2\n> (1) Advantages\n> (a) No need to add partial aggregate functions to the catalogs for each aggregation\n> (2) Disadvantages\n> (a) Need to add non-standard keywords to the SQL syntax.\n> \n> I did not choose Approach2 because I was not confident that the disadvantage mentioned in 2.(2)(a)\n> would be accepted by the PostgreSQL development community.\n> If it is accepted, I think Approach 2 is smarter.\n> Could you please provide your opinion on which\n> approach is preferable after comparing these two approaches?\n\nI didn't know #2 was possible, but given the great number of catalog\nentries, doing it in the SQL grammar seems cleaner to me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 Nov 2023 16:16:02 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Momjian, Mr.Haas, hackers.\r\n\r\n> From: Bruce Momjian <bruce@momjian.us>\r\n> Sent: Thursday, November 23, 2023 6:16 AM\r\n> On Wed, Nov 22, 2023 at 10:16:16AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\r\n> > 2. 
Approach 2\r\n> > (1) Advantages\r\n> > (a) No need to add partial aggregate functions to the catalogs for\r\n> > each aggregation\r\n> > (2) Disadvantages\r\n> > (a) Need to add non-standard keywords to the SQL syntax.\r\n> >\r\n> > I did not choose Approach2 because I was not confident that the\r\n> > disadvantage mentioned in 2.(2)(a) would be accepted by the PostgreSQL development community.\r\n> > If it is accepted, I think Approach 2 is smarter.\r\n> > Could you please provide your opinion on which approach is preferable\r\n> > after comparing these two approaches?\r\n> \r\n> I didn't know #2 was possible, but given the great number of catalog entries, doing it in the SQL grammar seems cleaner\r\n> to me.\r\nThank you for comments. Yes, I understand.\r\n\r\n> From: Bruce Momjian <bruce@momjian.us>\r\n> Sent: Wednesday, November 22, 2023 5:34 AM\r\n> On Tue, Nov 21, 2023 at 12:16:41PM -0500, Robert Haas wrote:\r\n> > On Mon, Nov 20, 2023 at 5:48 PM Bruce Momjian <bruce@momjian.us> wrote:\r\n> > > > I do have a concern about this, though. It adds a lot of bloat. It\r\n> > > > adds a whole lot of additional entries to pg_aggregate, and every\r\n> > > > new aggregate we add in the future will require a bonus entry for\r\n> > > > this, and it needs a bunch of new pg_proc entries as well. One\r\n> > > > idea that I've had in the past is to instead introduce syntax that\r\n> > > > just does this, without requiring a separate aggregate definition in each case.\r\n> > > > For example, maybe instead of changing string_agg(whatever) to\r\n> > > > string_agg_p_text_text(whatever), you can say PARTIAL_AGGREGATE\r\n> > > > string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or\r\n> > > > something. Then all aggregates could be treated in a generic way.\r\n> > > > I'm not completely sure that's better, but I think it's worth considering.\r\n> > >\r\n> > > So use an SQL keyword to indicates a pushdown call? 
We could then\r\n> > > automate the behavior rather than requiring special catalog functions?\r\n> >\r\n> > Right. It would require more infrastructure in the parser, planner,\r\n> > and executor, but it would be infinitely reusable instead of needing a\r\n> > new thing for every aggregate. I think that might be better, but to be\r\n> > honest I'm not totally sure.\r\n> \r\n> It would make it automatic. I guess we need to look at how big the patch is to do it.\r\nI will investigate specifically which parts of the PostgreSQL source code need to be modified and how big the patch will be if you take this approach.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Mon, 27 Nov 2023 07:04:07 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Nov 22, 2023 at 1:32 AM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> Hi. HAVING is also a problem. Consider the following query\n>\n> SELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down to\n> foreign server as HAVING needs full aggregate result, but foreign server\n> don't know it.\n\nI don't see it that way. What we would push to the foreign server\nwould be something like SELECT count(a) FROM t. Then, after we get the\nresults back and combine the various partial counts locally, we would\nlocally evaluate the HAVING clause afterward. 
That is, partial\naggregation is a barrier to pushing down HAVING clause itself, but it\ndoesn't preclude pushing down the aggregation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:07:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Nov 22, 2023 at 5:16 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> I did not choose Approach2 because I was not confident that the disadvantage mentioned in 2.(2)(a)\n> would be accepted by the PostgreSQL development community.\n> If it is accepted, I think Approach 2 is smarter.\n> Could you please provide your opinion on which\n> approach is preferable after comparing these two approaches?\n> If we cannot say anything without comparing the amount of source code, as Mr.Momjian mentioned,\n> we need to estimate the amount of source code required to implement Approach2.\n\nI've had the same concern, that approach #2 would draw objections, so\nI think you were right to be cautious about it. I don't think it is a\nwonderful approach in all ways, but I do think that it is superior to\napproach #1. If we add dedicated support to the grammar, it is mostly\na one-time effort, and after that, there should not be much need for\nanyone to be concerned about it. If we instead add extra aggregates,\nthen that generates extra work every time someone writes a patch that\nadds a new aggregate to core. I have a difficult time believing that\nanyone will prefer an approach that involves an ongoing maintenance\neffort of that type over one that doesn't.\n\nOne point that seems to me to be of particular importance is that if\nwe add new aggregates, there is a risk that some future aggregate\nmight do that incorrectly, so that the main aggregate works, but the\nsecondary aggregate created for this feature does not work. 
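To keep the mechanics concrete, here is a purely illustrative Python sketch (not postgres_fdw code; all names and data are invented) of the division of labor under discussion: each remote server returns only a partial state, the local side combines and finalizes the states, and any HAVING-style filter runs only after the combine:

```python
# Illustrative sketch (not postgres_fdw code): how partially aggregated
# results from several remote servers are combined locally.

shards = [
    [1, 2, 3],        # rows of column "a" on remote server 1
    [4, 5],           # rows on remote server 2
    [6, 7, 8, 9],     # rows on remote server 3
]

# Each remote server would run something like
#   SELECT count(a), sum(a) FROM t
# and return its partial state, not a finished value.
partials = [(len(rows), sum(rows)) for rows in shards]

# The local side combines the partial states...
total_count = sum(c for c, _ in partials)
total_sum = sum(s for _, s in partials)

# ...finalizes the aggregate (avg = sum / count)...
avg = total_sum / total_count

# ...and only then evaluates a HAVING-style filter locally, which is
# why HAVING need not block pushing down the aggregation itself.
having_passes = total_count > 5

print(total_count, avg, having_passes)  # → 9 5.0 True
```

The combine-then-finalize shape is the same under either proposal in this thread; the two approaches differ only in how the remote query spells the partial-aggregate call.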
That seems\nlike it would be very frustrating for future code authors so I'd like\nto avoid the risk as much as we can.\n\nAlso, I want to make one other point here about security and\nreliability. Right now, there is no way for a user to feed arbitrary\ndata to a deserialization function. Since serialization and\ndeserialization functions are only used in the context of parallel\nquery, we always know that the data fed to the deserialization\nfunction must have come from the serialization function on the same\nmachine. Nor can users call the deserialization function directly with\narbitrary data of their own choosing, because users cannot call\nfunctions that take or return internal. But with this system, it\nbecomes possible to feed arbitrary data to a deserialization function.\nThe user could redefine the function on the remote side so that it\nproduces arbitrary data of their choosing, and the local\ndeserialization function will ingest it.\n\nThat's potentially quite a significant problem. Consider for example\nthat numericvar_deserialize() does no validity checking on any of the\nweight, sign, or dscale, but not all values for those fields are\nlegal. Right now that doesn't matter, but if you can feed arbitrary\ndata to that function, then it is. I don't know exactly what the\nconsequences are if you can get that function to spit out a NumericVar\nwith values outside the normal legal range. What can you do then?\nStore a bogus numeric on disk? Crash the server? Worst case, some\nproblem like this could be a security issue allowing for escalation to\nsuperuser; more likely, it would be a crash bug, corrupt your\ndatabase, or lead to unexpected and strange error messages.\n\nUnfortunately, I have the unpleasant suspicion that most internal-type\naggregates will be affected by this problem. Consider, for example,\nstring_agg_deserialize(). Generally, strings are one of the\nleast-constrained data types, so you might hope that this function\nwould be OK. 
But it doesn't look very promising. The first int4 in the\nserialized representation is the cursor, which would have to be\nbounds-checked, lest someone provide a cursor that falls outside the\nbounds of the StringInfo and, maybe, cause a reference to an arbitrary\nmemory location. Then the rest of the representation is the actual\ndata, which could be anything. This function is used for both bytea\nand for text, and for bytea, letting the payload be anything is OK.\nBut for text, the supplied data shouldn't contain an embedded zero\nbyte, or otherwise be invalid in the server encoding. If it were, that\nwould provide a vector to inject invalidly encoded data into the\ndatabase. This feature can't be allowed to do that.\n\nWhat could be a solution to this class of problems? One option is to\njust give up on supporting this feature for internal-type aggregates\nfor now. That's easy enough to do, and just means we have less\nfunctionality, but it's sad because that's functionality we'd like to\nhave. Another approach is to add necessary sanity checks to the\nrelevant deserialization functions, but that seems a little hard to\nget right, and it would slow down parallel query cases which are\nprobably going to be more common than the use of this feature. I think\nthe slowdown might be significant, too. A third option is to change\nthose aggregates in some way, like giving them a transition function\nthat operates on some data type other than internal, but there again\nwe have to be careful of slowdowns. A final option is to rethink the\ninfrastructure in some way, like having a way to serialize to\nsomething other than bytea, for which we already have input functions\nwith adequate checks. For instance, if string_agg_serialize() produced\na record containing an integer column and a text or bytea column, we\ncould attempt to ingest that record on the other side and presumably\nthe right things would happen in the case of any invalid data. 
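To illustrate the kind of checking at stake, here is a hypothetical Python sketch (not PostgreSQL code) of ingesting such a cursor-plus-payload state from an untrusted source:

```python
# Hypothetical sketch of validating a string_agg-style partial state
# (a cursor plus the accumulated payload) when it can arrive from an
# untrusted remote. Not PostgreSQL code; UTF-8 stands in for the
# server encoding.

def ingest_string_agg_state(cursor, payload, is_text):
    # Bounds-check the cursor: in the real bytea representation an
    # out-of-range cursor could be abused to reference arbitrary memory.
    if not 0 <= cursor <= len(payload):
        raise ValueError("cursor out of bounds")
    if is_text:
        # text (unlike bytea) must not smuggle in embedded NUL bytes
        # and must be valid in the server encoding.
        if b"\x00" in payload:
            raise ValueError("embedded NUL byte in text state")
        payload.decode("utf-8")  # raises UnicodeDecodeError if invalid
    return cursor, payload
```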
But I'm\nnot quite sure what infrastructure would be required to make this kind\nof idea work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 15:03:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Also, I want to make one other point here about security and\n> reliability. Right now, there is no way for a user to feed arbitrary\n> data to a deserialization function. Since serialization and\n> deserialization functions are only used in the context of parallel\n> query, we always know that the data fed to the deserialization\n> function must have come from the serialization function on the same\n> machine. Nor can users call the deserialization function directly with\n> arbitrary data of their own choosing, because users cannot call\n> functions that take or return internal. But with this system, it\n> becomes possible to feed arbitrary data to a deserialization function.\n\nOuch. That is absolutely horrid --- we have a lot of stuff that\ndepends on users not being able to get at \"internal\" values, and\nit sounds like the current proposal breaks all of that.\n\nQuite aside from security concerns, there is no justification for\nassuming that the \"internal\" values used on one platform/PG version\nare identical to those used on another. So if the idea is to\nship back \"internal\" values from the remote server to the local one,\nI think it's basically impossible to make that work.\n\nEven if the partial-aggregate serialization value isn't \"internal\"\nbut some more-narrowly-defined type, it is still an internal\nimplementation detail of the aggregate. You have no right to assume\nthat the remote server implements the aggregate the same way the\nlocal one does. 
If we start making such an assumption then we'll\nbe unable to revise the implementation of an aggregate ever again.\n\nTBH, I think this entire proposal is dead in the water. Which is\nsad from a performance standpoint, but I can't see any way that\nwe would not regret shipping a feature that makes such assumptions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 15:59:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, Nov 27, 2023 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even if the partial-aggregate serialization value isn't \"internal\"\n> but some more-narrowly-defined type, it is still an internal\n> implementation detail of the aggregate. You have no right to assume\n> that the remote server implements the aggregate the same way the\n> local one does. If we start making such an assumption then we'll\n> be unable to revise the implementation of an aggregate ever again.\n>\n> TBH, I think this entire proposal is dead in the water. Which is\n> sad from a performance standpoint, but I can't see any way that\n> we would not regret shipping a feature that makes such assumptions.\n\nI think it's ridiculous to just hold our breath and pretend like this\nfeature isn't needed -- it's at least half a decade overdue. We engage\nin endless hand-wringing over local-remote symmetry in cases where\nother systems seem to effortlessly make that assumption and then get\non with building new features. It's not that I disagree with the\nconcern; we're *already* doing stuff that is unprincipled in a bunch\nof different areas and that could and occasionally does cause queries\nthat push things to the remote side to return wrong answers, and I\nhate that. But the response to that can't be to refuse to add new\nfeatures and maybe rip out the features we already have. 
Users don't\nlike it when pushdown causes queries to return wrong answers, but they\nlike it even less when the pushdown doesn't happen in the first place\nand the query runs until the heat death of the universe. I'm not\nentirely sure what the right design ideas are here, but giving up and\nrefusing to add features ought to be completely off the table.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 16:10:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 27, 2023 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> TBH, I think this entire proposal is dead in the water. Which is\n>> sad from a performance standpoint, but I can't see any way that\n>> we would not regret shipping a feature that makes such assumptions.\n\n> I think it's ridiculous to just hold our breath and pretend like this\n> feature isn't needed -- it's at least half a decade overdue. We engage\n> in endless hand-wringing over local-remote symmetry in cases where\n> other systems seem to effortlessly make that assumption and then get\n> on with building new features.\n\nWell, one of the founding principles of postgres_fdw was to be able\nto talk to PG servers that are not of the same version as yours.\nIf we break that in the name of performance, we are going to have\na lot of unhappy users. 
Even the ones who do get the benefit of\nthe speedup are going to be unhappy when it breaks because they\ndidn't upgrade local and remote at exactly the same time.\n\nJust because we'd like to have it doesn't make the patch workable\nin the real world.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 16:23:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "First of all, that last email of mine was snippy, and I apologize for it. Sorry.\n\nThat said:\n\nOn Mon, Nov 27, 2023 at 4:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, one of the founding principles of postgres_fdw was to be able\n> to talk to PG servers that are not of the same version as yours.\n> If we break that in the name of performance, we are going to have\n> a lot of unhappy users. Even the ones who do get the benefit of\n> the speedup are going to be unhappy when it breaks because they\n> didn't upgrade local and remote at exactly the same time.\n\nI agree with this.\n\n> Just because we'd like to have it doesn't make the patch workable\n> in the real world.\n\nAnd also with this in concept - I'd like to plan arbitrarily\ncomplicated queries perfectly and near-instantly, and then execute\nthem at faster-than-light speed, but we can't. However, I don't\nunderstand the fatalism with respect to the feature at hand. As I said\nbefore, it's not like no other product has made this work. Sure, some\nof those products may not have the extensible system of data types\nthat we do, or may not care about cross-version communication, but\nthose don't seem like good enough reasons to just immediately give up.\n\nTBH, I suspect even some PG forks have made this work, like maybe PGXC\nor PGXL, although I don't know for certain. 
We might not like the\ntrade-offs they made to get there, but we haven't even talked through\npossible design ideas yet, so it seems way too early to give up.\n\nOne of the things that I think is a problem in this area is that the\nways we have to configure FDW connections are just not very rich.\nWe're trying to cram everything into a set of strings that can be\nattached to the foreign server or the user mapping, but that's not a\nvery good fit for something like how all the local SQL functions that\nmight exist map onto all of the remote SQL functions that might exist.\nNow you might well say that we don't want the act of configuring a\nforeign data wrapper to be insanely complicated, and I would agree\nwith that. But, on the other hand, as Larry Wall once said, a good\nprogramming language makes simple things simple and complicated things\npossible. I think our current configuration system is only\naccomplishing the first of those goals.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 18:50:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Nov 28, 2023 at 5:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> TBH, I suspect even some PG forks have made this work, like maybe PGXC\n> or PGXL, although I don't know for certain. We might not like the\n> trade-offs they made to get there, but we haven't even talked through\n> possible design ideas yet, so it seems way too early to give up.\n\nIf my memory serves me right, PGXC implemented partial aggregation\nonly when the output of partial aggregate was a SQL data type\n(non-Internal, non-Unknown). But I may be wrong. But at that time,\nJSONB wasn't there or wasn't that widespread.\n\nProblem with Internal is it's just a binary string whose content can\nchange across version and which can be interpreted differently across\ndifferent versions. 
There is no metadata in it to know how to
interpret it. We can add that metadata to JSONB. The result of partial
aggregate can be sent as a JSONB. If the local server finds the JSONB
familiar it will construct the right partial aggregate value otherwise
it will throw an error. If there's a way to even avoid that error (by
looking at server version etc.) the error can be avoided too. But
JSONB leaves very little chance that the value will be interpreted
wrong. Downside is we are tying PARTIAL's output to be JSONB thus
tying SQL syntax with a data type.

Does that look acceptable?

-- 
Best Wishes,
Ashutosh Bapat


", "msg_date": "Tue, 28 Nov 2023 15:53:58 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Nov 28, 2023 at 5:24 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> If my memory serves me right, PGXC implemented partial aggregation\n> only when the output of partial aggregate was a SQL data type\n> (non-Internal, non-Unknown). But I may be wrong. But at that time,\n> JSONB wasn't there or wasn't that widespread.\n>\n> Problem with Internal is it's just a binary string whose content can\n> change across version and which can be interpreted differently across\n> different versions. There is no metadata in it to know how to\n> interpret it. We can add that metadata to JSONB. The result of partial\n> aggregate can be sent as a JSONB. If the local server finds the JSONB\n> familiar it will construct the right partial aggregate value otherwise\n> it will throw an error. If there's a way to even avoid that error (by\n> looking at server version etc.) the error can be avoided too. But\n> JSONB leaves very little chance that the value will be interpreted\n> wrong.
Downside is we are tying PARTIAL's output to be JSONB thus\n> tying SQL syntax with a data type.\n>\n> Does that look acceptable?\n\nIf somebody had gone to the trouble of making this work, and had done\na good job, I wouldn't vote against it, but in a vacuum, I'm not sure\nit's the best design. The problem in my view is that working with JSON\nis not actually very pleasant. It's super-easy to generate, and\nsuper-easy for humans to read. But parsing and validating it is a\npain. You basically have to have two parsers, one to do syntactical\nvalidation and then a second one to ensure that the structure of the\ndocument and the contents of each item are as expected. See\nparse_manifest.c for an example of what I mean by that. Now, if we add\nnew code, it can reuse the JSON parser we've already got, so it's not\nthat you need to write a new JSON parser for every new application of\nJSON, but the semantic validator (a la parse_manifest.c) isn't\nnecessarily any less code than a whole new parser for a bespoke\nformat.\n\nTo make that a bit more concrete, for something like string_agg(), is\nit easier to write a validator for the existing deserialization\nfunction that accepts a bytea blob, or to write a validator for a JSON\nblob that we could be passing instead? My suspicion is that the former\nis less work and easier to verify, but it's possible I'm wrong about\nthat and they're more or less equal. I don't really see any way that\nthe JSON thing is straight-up better; at best it's a toss-up in terms\nof amount of code. 
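For illustration only, a semantic validator for a JSON-carried avg()-style state might look like the following hypothetical sketch (invented field names, not anything from the patch); note how little of the work the JSON parser itself does:

```python
import json

# Hypothetical sketch: even after json.loads() has done the syntactic
# work, a partial-aggregate state shipped as JSON still needs a
# semantic validator much like one for a bespoke binary format.

def validate_avg_state(blob):
    doc = json.loads(blob)                       # syntactic check
    if set(doc) != {"version", "count", "sum"}:  # structural check
        raise ValueError("unexpected structure")
    if doc["version"] != 1:
        raise ValueError("unknown state version")
    count, total = doc["count"], doc["sum"]
    if not isinstance(count, int) or count < 0:  # value checks
        raise ValueError("bad count")
    if not isinstance(total, (int, float)):
        raise ValueError("bad sum")
    return count, total
```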
Now somebody could still make an argument that they\nwould like JSON better because it would be more useful for some\npurpose other than this feature, and that is fine, but here I'm just\nthinking about this feature in particular.\n\nMy personal suspicion is that the easiest way to support internal-type\naggregates here is to convert them to use an array or record type as a\ntransition state instead, or maybe serialize the internal state to one\nof those things instead of to bytea. I suspect that would allow us to\nleverage more of our existing validation infrastructure than using\nJSON or sticking with bytea. But I'm certainly amenable to other\npoints of view. I'm not trying to pretend that my gut feeling is\nnecessarily correct; I'm just explaining what I currently think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 07:44:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Nov 27, 2023 at 4:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Well, one of the founding principles of postgres_fdw was to be able\n> > to talk to PG servers that are not of the same version as yours.\n> > If we break that in the name of performance, we are going to have\n> > a lot of unhappy users. Even the ones who do get the benefit of\n> > the speedup are going to be unhappy when it breaks because they\n> > didn't upgrade local and remote at exactly the same time.\n> \n> I agree with this.\n\n+1. 
We do want to continue to make this work- to the extent possible.\nI don't think there's any problem with saying that when talking to an\nolder server, you don't get the same capabilities as you do when talking\nto a newer server.\n\n> > Just because we'd like to have it doesn't make the patch workable\n> > in the real world.\n> \n> And also with this in concept - I'd like to plan arbitrarily\n> complicated queries perfectly and near-instantly, and then execute\n> them at faster-than-light speed, but we can't. However, I don't\n> understand the fatalism with respect to the feature at hand. As I said\n> before, it's not like no other product has made this work. Sure, some\n> of those products may not have the extensible system of data types\n> that we do, or may not care about cross-version communication, but\n> those don't seem like good enough reasons to just immediately give up.\n\nCertainly there are other projects out there which are based on PG that\nhave managed to make this work and work really quite well.\n\n> TBH, I suspect even some PG forks have made this work, like maybe PGXC\n> or PGXL, although I don't know for certain. We might not like the\n> trade-offs they made to get there, but we haven't even talked through\n> possible design ideas yet, so it seems way too early to give up.\n\nYes, Citus[1] and Greenplum[2], to just name two.\n\nI certainly understand the concern around the security of this and would\nhave thought the approach we'd use would be to not just take internal\nstate and pass it along but rather to provide a way for aggregates to\nopt-in to supporting this and have them serialize/deserialize with\nnew dedicated functions that have appropriate checks to avoid bad things\nhappening. 
That could also be versioned, perhaps, if we feel that's\nnecessary (I'm a bit skeptical, but it would hopefully address the\nconcern about different versions having different data that they want to\npass along).\n\n> One of the things that I think is a problem in this area is that the\n> ways we have to configure FDW connections are just not very rich.\n\nAgreed.\n\n> We're trying to cram everything into a set of strings that can be\n> attached to the foreign server or the user mapping, but that's not a\n> very good fit for something like how all the local SQL functions that\n> might exist map onto all of the remote SQL functions that might exist.\n> Now you might well say that we don't want the act of configuring a\n> foreign data wrapper to be insanely complicated, and I would agree\n> with that. But, on the other hand, as Larry Wall once said, a good\n> programming language makes simple things simple and complicated things\n> possible. I think our current configuration system is only\n> accomplishing the first of those goals.\n\nWe've already got issues in this area with extensions- there's no way\nfor a user to say what version of an extension exists on the remote side\nand no way for an extension to do anything different based on that\ninformation. 
Perhaps we could work on a solution to both of these\nissues, but at the least I don't see holding back on this effort for a\nproblem that already exists but which we've happily accepted because of\nthe benefit it provides, like being able to push-down postgis bounding\nbox conditionals to allow for indexed lookups.\n\nThanks,\n\nStephen\n\n[1]: https://docs.citusdata.com/en/v11.1/develop/reference_sql.html\n[2]: https://postgresconf.org/conferences/Beijing/program/proposals/implementation-of-distributed-aggregation-in-greenplum", "msg_date": "Tue, 28 Nov 2023 09:24:31 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Haas, hackers.\r\n\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 28, 2023 5:03 AM\r\n> Also, I want to make one other point here about security and reliability. Right now, there is no way for a user to feed\r\n> arbitrary data to a deserialization function. Since serialization and deserialization functions are only used in the context of\r\n> parallel query, we always know that the data fed to the deserialization function must have come from the serialization\r\n> function on the same machine. Nor can users call the deserialization function directly with arbitrary data of their own\r\n> choosing, because users cannot call functions that take or return internal. But with this system, it becomes possible to\r\n> feed arbitrary data to a deserialization function.\r\n> The user could redefine the function on the remote side so that it produces arbitrary data of their choosing, and the local\r\n> deserialization function will ingest it.\r\n> \r\n> That's potentially quite a significant problem. Consider for example that numericvar_deserialize() does no validity\r\n> checking on any of the weight, sign, or dscale, but not all values for those fields are legal. 
Right now that doesn't matter,\r\n> but if you can feed arbitrary data to that function, then it is. I don't know exactly what the consequences are if you can get\r\n> that function to spit out a NumericVar with values outside the normal legal range. What can you do then?\r\n> Store a bogus numeric on disk? Crash the server? Worst case, some problem like this could be a security issue allowing for\r\n> escalation to superuser; more likely, it would be a crash bug, corrupt your database, or lead to unexpected and strange\r\n> error messages.\r\n> \r\n> Unfortunately, I have the unpleasant suspicion that most internal-type aggregates will be affected by this problem.\r\n> Consider, for example, string_agg_deserialize(). Generally, strings are one of the least-constrained data types, so you\r\n> might hope that this function would be OK. But it doesn't look very promising. The first int4 in the serialized representation\r\n> is the cursor, which would have to be bounds-checked, lest someone provide a cursor that falls outside the bounds of the\r\n> StringInfo and, maybe, cause a reference to an arbitrary memory location. Then the rest of the representation is the actual\r\n> data, which could be anything. This function is used for both bytea and for text, and for bytea, letting the payload be\r\n> anything is OK.\r\n> But for text, the supplied data shouldn't contain an embedded zero byte, or otherwise be invalid in the server encoding. If\r\n> it were, that would provide a vector to inject invalidly encoded data into the database. This feature can't be allowed to do\r\n> that.\r\nI completely overlooked this issue. I should have considered the risks of sending raw state values or serialized state\r\ndata directly from remote to local. I apologize.\r\n\r\n> What could be a solution to this class of problems? One option is to just give up on supporting this feature for internal-type\r\n> aggregates for now. 
That's easy enough to do, and just means we have less functionality, but it's sad because that's\r\n> functionality we'd like to have. Another approach is to add necessary sanity checks to the relevant deserialization\r\n> functions, but that seems a little hard to get right, and it would slow down parallel query cases which are probably going to\r\n> be more common than the use of this feature. I think the slowdown might be significant, too. A third option is to change\r\n> those aggregates in some way, like giving them a transition function that operates on some data type other than internal,\r\n> but there again we have to be careful of slowdowns. A final option is to rethink the infrastructure in some way, like having\r\n> a way to serialize to something other than bytea, for which we already have input functions with adequate checks. For\r\n> instance, if string_agg_serialize() produced a record containing an integer column and a text or bytea column, we could\r\n> attempt to ingest that record on the other side and presumably the right things would happen in the case of any invalid\r\n> data. But I'm not quite sure what infrastructure would be required to make this kind of idea work.\r\nThank you very much for providing a direction towards resolving this issue.\r\nAs you have suggested as the last option, it seems that expanding the current mechanism of the aggregation\r\nfunction is the only choice. It may take some time, but I will consider specific solutions.\r\n\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 28, 2023 4:08 AM\r\n> On Wed, Nov 22, 2023 at 1:32 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:\r\n> > Hi. HAVING is also a problem. Consider the following query\r\n> >\r\n> > SELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down to\r\n> > foreign server as HAVING needs full aggregate result, but foreign\r\n> > server don't know it.\r\n> \r\n> I don't see it that way. 
What we would push to the foreign server would be something like SELECT count(a) FROM t. Then,\r\n> after we get the results back and combine the various partial counts locally, we would locally evaluate the HAVING clause\r\n> afterward. That is, partial aggregation is a barrier to pushing down HAVING clause itself, but it doesn't preclude pushing\r\n> down the aggregation.\r\nI understand what the problem is. I will try to fix it in the next version.\r\n\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 28, 2023 5:03 AM\r\n> On Wed, Nov 22, 2023 at 5:16 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > I did not choose Approach2 because I was not confident that the\r\n> > disadvantage mentioned in 2.(2)(a) would be accepted by the PostgreSQL development community.\r\n> > If it is accepted, I think Approach 2 is smarter.\r\n> > Could you please provide your opinion on which approach is preferable\r\n> > after comparing these two approaches?\r\n> > If we cannot say anything without comparing the amount of source code,\r\n> > as Mr.Momjian mentioned, we need to estimate the amount of source code required to implement Approach2.\r\n> \r\n> I've had the same concern, that approach #2 would draw objections, so I think you were right to be cautious about it. I\r\n> don't think it is a wonderful approach in all ways, but I do think that it is superior to approach #1. If we add dedicated\r\n> support to the grammar, it is mostly a one-time effort, and after that, there should not be much need for anyone to be\r\n> concerned about it. If we instead add extra aggregates, then that generates extra work every time someone writes a patch\r\n> that adds a new aggregate to core. 
I have a difficult time believing that anyone will prefer an approach that involves an\r\n> ongoing maintenance effort of that type over one that doesn't.\r\n> \r\n> One point that seems to me to be of particular importance is that if we add new aggregates, there is a risk that some\r\n> future aggregate might do that incorrectly, so that the main aggregate works, but the secondary aggregate created for this\r\n> feature does not work. That seems like it would be very frustrating for future code authors so I'd like to avoid the risk as\r\n> much as we can.\r\nAre you concerned about the hassle and potential human errors of manually adding new partial\r\naggregation functions, rather than the catalog becoming bloated?\r\nThe process of creating partial aggregation functions from aggregation functions can be automated,\r\nso I believe this issue can be resolved. However, automating it may increase the size of the patch\r\neven more, so overall, approach#2 might be better.\r\nTo implement approach #2, it would be necessary to investigate how much additional code is required.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Wed, 6 Dec 2023 08:41:21 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Dec 6, 2023 at 3:41 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> Are you concerned about the hassle and potential human errors of manually adding new partial\n> aggregation functions, rather than the catalog becoming bloated?\n\nI'm concerned about both.\n\n> The process of creating partial aggregation functions from aggregation functions can be automated,\n> so I believe this issue can be resolved. 
However, automating it may increase the size of the patch\n> even more, so overall, approach#2 might be better.\n> To implement approach #2, it would be necessary to investigate how much additional code is required.\n\nYes. Unfortunately I fear that there is quite a lot of work left to do\nhere in order to produce a committable feature. To me it seems\nnecessary to conduct an investigation of approach #2. If the result of\nthat investigation is that nothing major stands in the way of approach\n#2, then I think we should adopt it, which is more work. In addition,\nthe problems around transmitting serialized bytea blobs between\nmachines that can't be assumed to fully trust each other will need to\nbe addressed in some way, which seems like it will require a good deal\nof design work, forming some kind of consensus, and then\nimplementation work to follow. In addition to that there may be some\nsmall problems that need to be solved at a detail level, such as the\nHAVING issue. I think the last category won't be too hard to sort out,\nbut that still leaves two really major areas to address.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 08:25:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr.Haas.\r\n\r\n> -----Original Message-----\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Wednesday, December 6, 2023 10:25 PM\r\n> On Wed, Dec 6, 2023 at 3:41 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > Are you concerned about the hassle and potential human errors of\r\n> > manually adding new partial aggregation functions, rather than the catalog becoming bloated?\r\n> \r\n> I'm concerned about both.\r\nUnderstood. 
Thank you for your response.\r\n\r\n> > The process of creating partial aggregation functions from aggregation\r\n> > functions can be automated, so I believe this issue can be resolved.\r\n> > However, automating it may increase the size of the patch even more, so overall, approach#2 might be better.\r\n> > To implement approach #2, it would be necessary to investigate how much additional code is required.\r\n> \r\n> Yes. Unfortunately I fear that there is quite a lot of work left to do here in order to produce a committable feature. To me it\r\n> seems necessary to conduct an investigation of approach #2. If the result of that investigation is that nothing major\r\n> stands in the way of approach #2, then I think we should adopt it, which is more work. In addition, the problems around\r\n> transmitting serialized bytea blobs between machines that can't be assumed to fully trust each other will need to be\r\n> addressed in some way, which seems like it will require a good deal of design work, forming some kind of consensus, and\r\n> then implementation work to follow. In addition to that there may be some small problems that need to be solved at a\r\n> detail level, such as the HAVING issue. I think the last category won't be too hard to sort out, but that still leaves two really\r\n> major areas to address.\r\nYes, I agree with you. It is clear that further investigation and discussion are still needed. \r\nI would be grateful if we can resolve this issue gradually. I would also like to continue the discussion if possible in the future.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n", "msg_date": "Thu, 7 Dec 2023 00:10:58 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: [CAUTION!! 
freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Dec 6, 2023 at 7:11 PM Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> I would be grateful if we can resolve this issue gradually. I would also like to continue the discussion if possible in the future.\n\nI think that would be good. Thanks for your work on this. It is a hard problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 09:56:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CAUTION!! freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Dec 7, 2023 at 09:56:08AM -0500, Robert Haas wrote:\n> On Wed, Dec 6, 2023 at 7:11 PM Fujii.Yuki@df.MitsubishiElectric.co.jp\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> > I would be grateful if we can resolve this issue gradually. I would also like to continue the discussion if possible in the future.\n> \n> I think that would be good. Thanks for your work on this. It is a hard problem.\n\nAgreed. First, Robert is right that this feature is long overdue. It\nmight not help many of our existing workloads, but it opens us up to\nhandling new, larger workloads.\n\nSecond, the patch already has a mechanism to check the remote server\nversion to see if it is the same or newer. Here is the version check\ndocumentation patch:\n\n\tcheck_partial_aggregate_support (boolean)\n\t\n\tIf this option is false, <filename>postgres_fdw</filename> always\n\tuses partial aggregate pushdown by assuming that each built-in\n\taggregate function has a partial aggregate function defined on\n\tthe remote server. 
If this option is true, local aggregates\n\twhose partial computation function references itself are assumed\n\tto exist on the remote server.\tIf not, during query planning,\n\t<filename>postgres_fdw</filename> will connect to the remote\n\tserver and retrieve the remote server version.\tIf the remote\n\tversion is the same or newer, partial aggregate functions will be\n\tassumed to exist. If older, <filename>postgres_fdw</filename>\n\tchecks that the remote server has a matching partial aggregate\n\tfunction before performing partial aggregate pushdown.\tThe default\n\tis <literal>false</literal>.\n\nThere is also an extension list that specifies which extension-owned\nfunctions can be pushed down; from the doc patch:\n\n\tTo reduce the risk of misexecution of queries, WHERE clauses and\n\taggregate expressions are not sent to the remote server unless they\n\tonly use data types, operators, and functions that are built-in\n\tor belong to an extension that is listed in the foreign server's\n\t<literal>extensions</literal> option.\n\nThird, we already have a way of creating records for tables:\n\n\tSELECT pg_language FROM pg_language;\n\t pg_language\n\t-------------------------------------------\n\t (12,internal,10,f,f,0,0,2246,)\n\t (13,c,10,f,f,0,0,2247,)\n\t (14,sql,10,f,t,0,0,2248,)\n\t (13576,plpgsql,10,t,t,13573,13574,13575,)\n\nAnd we do have record input functionality:\n\n\tCREATE TABLE test (x int, language pg_language);\n\t\n\tINSERT INTO test SELECT 0, pg_language FROM pg_language;\n\t\n\tSELECT * FROM test;\n\t x | language\n\t---+-------------------------------------------\n\t 0 | (12,internal,10,f,f,0,0,2246,)\n\t 0 | (13,c,10,f,f,0,0,2247,)\n\t 0 | (14,sql,10,f,t,0,0,2248,)\n\t 0 | (13576,plpgsql,10,t,t,13573,13574,13575,)\n\t(4 rows)\n\nHowever, functions don't have pre-created records, and internal\nfunctions don't seem to have an SQL-defined structure, but as I remember\nthe internal aggregate functions all take the same internal structure,\nso I 
guess we only need one fixed input and one output that would\noutput/input such records. Performance might be an issue, but at this\npoint let's just implement this and measure the overhead since there are\nfew/any(?) other viable options.\n\nFourth, going with #2 where we do the pushdown using an SQL keyword also\nallows extensions to automatically work, while requiring partial\naggregate functions for every non-partial aggregate will require work\nfor extensions, and potentially lead to more version mismatch issues.\n\nFinally, I am now concerned that this will not be able to be in PG 17,\nwhich I was hoping for.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 7 Dec 2023 16:12:18 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [CAUTION!! freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Dec 7, 2023 at 4:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Second, the patch already has a mechanism to check the remote server\n> version to see if it is the same or newer. Here is the version check\n> documentation patch:\n\nRight. This feature can certainly be implemented in a\nbackward-compatible way. I'm not sure that we have as much control\nover what does and does not get pushed down as we really want here,\nbut it's completely possible to do this in a way that doesn't break\nother use cases.\n\n> However, functions don't have pre-created records, and internal\n> functions don't see to have an SQL-defined structure, but as I remember\n> the internal aggregate functions all take the same internal structure,\n> so I guess we only need one fixed input and one output that would\n> output/input such records. Performance might be an issue, but at this\n> point let's just implement this and measure the overhead since there are\n> few/any(?) 
other viable options.\n\nIMHO records will be the easiest approach, but it will be some work to try it.\n\n> Fourth, going with #2 where we do the pushdown using an SQL keyword also\n> allows extensions to automatically work, while requiring partial\n> aggregate functions for every non-partial aggregate will require work\n> for extensions, and potentially lead to more version mismatch issues.\n\nYeah.\n\n> Finally, I am now concerned that this will not be able to be in PG 17,\n> which I was hoping for.\n\nGetting it ready to ship by March seems very difficult. I'm not saying\nit couldn't happen, but I think more likely it won't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Dec 2023 09:24:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CAUTION!! freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, 7 Dec 2023 at 05:41, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n>\n> Hi Mr.Haas.\n>\n> > -----Original Message-----\n> > From: Robert Haas <robertmhaas@gmail.com>\n> > Sent: Wednesday, December 6, 2023 10:25 PM\n> > On Wed, Dec 6, 2023 at 3:41 AM Fujii.Yuki@df.MitsubishiElectric.co.jp\n> > <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> > > Are you concerned about the hassle and potential human errors of\n> > > manually adding new partial aggregation functions, rather than the catalog becoming bloated?\n> >\n> > I'm concerned about both.\n> Understood. Thank you for your response.\n>\n> > > The process of creating partial aggregation functions from aggregation\n> > > functions can be automated, so I believe this issue can be resolved.\n> > > However, automating it may increase the size of the patch even more, so overall, approach#2 might be better.\n> > > To implement approach #2, it would be necessary to investigate how much additional code is required.\n> >\n> > Yes. 
Unfortunately I fear that there is quite a lot of work left to do here in order to produce a committable feature. To me it\n> > seems necessary to conduct an investigation of approach #2. If the result of that investigation is that nothing major\n> > stands in the way of approach #2, then I think we should adopt it, which is more work. In addition, the problems around\n> > transmitting serialized bytea blobs between machines that can't be assumed to fully trust each other will need to be\n> > addressed in some way, which seems like it will require a good deal of design work, forming some kind of consensus, and\n> > then implementation work to follow. In addition to that there may be some small problems that need to be solved at a\n> > detail level, such as the HAVING issue. I think the last category won't be too hard to sort out, but that still leaves two really\n> > major areas to address.\n> Yes, I agree with you. It is clear that further investigation and discussion are still needed.\n> I would be grateful if we can resolve this issue gradually. I would also like to continue the discussion if possible in the future.\n\nThanks for all the efforts on this patch. I have changed the status of\nthe commitfest entry to \"Returned with Feedback\" as there is still\nsome work to get this patch out. Feel free to continue the discussion\nand add a new entry when the patch is in a reviewable shape.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 07:27:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CAUTION!! freemail] Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Mr.Haas, hackers.\r\n\r\nI apologize for the significant delay since my last post.\r\nI have conducted investigations and considerations regarding the remaining tasks as follows.\r\nWould it be possible for you to review them?\r\nIn particular, could you please confirm if the approach mentioned in 1. is acceptable? 
\r\nIf there are no issues with the direction outlined in 1., I plan to make a simple prototype based on this approach.\r\n\r\n1. Transmitting state value safely between machines\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Wednesday, December 6, 2023 10:25 PM\r\n> the problems around transmitting\r\n> serialized bytea blobs between machines that can't be assumed to fully trust each other will need to be addressed in some\r\n> way, which seems like it will require a good deal of design work, forming some kind of consensus, and then implementation\r\n> work to follow. \r\nI have considered methods for safely transmitting state values between different machines.\r\nI have taken into account the version policy of PostgreSQL (5 years of support) and the major version release cycle over the past 10 years (1 year), and as a result, I have made the assumption that transmission is allowed only when the difference between the local version and the remote version is 5 or less.\r\nI believe that by adding new components, \"export function\" and \"import function\", to the aggregate functions, and further introducing a new SQL keyword to the query syntax of aggregate expressions, we can address this issue.\r\nIf the version of the local server is higher than or equal to the version of the remote server, the proposed method can be simplified. The export version mentioned later in (1) would not be necessary. 
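[Editorial sketch, not part of the original message: the version-window rule described above — transmission allowed only when the local and remote major versions differ by at most five, with the older server dictating the export format — could look like the following. The function name and the -1 "pushdown not possible" convention are invented for illustration and are not part of the proposed patch.]

```c
/*
 * Hypothetical sketch of the version rule described above.  Derived from
 * PostgreSQL's ~5-year support policy and yearly major releases.
 */
#define EXPORT_SUPPORT_WINDOW 5

int
export_version_for(int local_version, int remote_version)
{
    int         diff = local_version - remote_version;

    if (diff < 0)
        diff = -diff;

    /* Transmission is allowed only within the support window. */
    if (diff > EXPORT_SUPPORT_WINDOW)
        return -1;              /* partial aggregate pushdown not possible */

    /*
     * The exported state value must be readable by the receiving (local)
     * server, so the older of the two major versions dictates the export
     * format.
     */
    return (local_version < remote_version) ? local_version : remote_version;
}
```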
Furthermore, if the version of the local server matches the version of the remote server, the proposed method can be further simplified.\r\nI would appreciate your input on reasonable assumptions regarding the differences in versions between the local server and the remote server.\r\nI will explain the specifications of the export function, import function, the new SQL keyword for aggregate expressions, and the behavior of query processing for partial aggregation separately.\r\n(1) Export Function Specification\r\nThis function is another final function for partial aggregate.\r\nThis function converts the state value that represents the result of partial aggregation into a format that can be read by the local server. \r\nThis function is called instead of the existing finalfunc during the final stage of aggregation when performing partial aggregation.\r\nThe conversion process described above will be referred to as \"export\".\r\nThe argument of an export function is the version of the server that will receive the return value.\r\nHereafter, this version will be referred to as the export version.\r\nThe concept of an export version is necessary to handle cases where the version of the local server is smaller than the version of the remote server.\r\nThe return value of the export function is the transformed state value, and its data type is bytea.\r\nFor backward compatibility, the developer of the export function must ensure that the export can be performed for major versions up to five versions prior to the major version of PostgreSQL that the export function is being developed for.\r\nFor built-in functions, I believe it is necessary to allow for the possibility of not developing the export functionality for specific versions in the future (due to reasons such as development burden) after the export function is developed for a certain version.\r\nTo achieve this, for built-in functions, we will add a column to the pg_aggregate catalog that indicates the presence 
or absence of export functionality for each major version, including the major version being developed and the previous five major versions. This column will be named safety_export_versions and will have a data type of boolean[6].\r\nFor user-defined functions, we will refer to the extensions option and add an external server option called safety_export_extensions, which will maintain a list of extensions that include only the aggregate functions that can be exported to the local server version.\r\n\r\n(2) Import Function Specification\r\nThe import function is a function that performs validity checks on the exported data and converts it into a state value. The process of this conversion is referred to as importing.\r\nThe import function is called from postgres_fdw in the local server.\r\nThe arguments of the import function are the exported data and the export version.\r\nThe return value of the import function is a state value that can be read on the local server.\r\nThe import function will terminate with an error if the validity check determines that the exported result cannot be read on the local server.\r\nFor backward compatibility, developers of the import function must ensure that it can be imported if the export version is up to five versions prior to their own version.\r\n\r\n(3) The new SQL keyword for aggregate expressions\r\nThe local server passes the instructions for partial aggregation and the export version to the remote server using SQL keywords. 
The syntax specification is as follows:\r\naggregate_function(PARTIAL_AGGREGATE(export_version) expr)\r\n\r\nHere, PARTIAL_AGGREGATE is a keyword that indicates partial aggregation, and export_version is a string constant that indicates the export version.\r\n\r\n(4) The behavior of query processing for partial aggregation\r\nI will illustrate the flow of query processing using the example query \"select aggfunc(c) from t\".\r\nIn the following explanation, the major version of the remote server will be referred to as remote_version, and the major version of the local server will be referred to as local_version.\r\nSTEP1. Checking the feasibility of partial aggregation pushdown on the local server\r\n(i) Retrieving the remote_version\r\nThe postgres_fdw connects to the remote server and retrieves the remote_version.\r\n(ii) Checking the versions\r\nThe postgres_fdw determines whether the difference between local_version and remote_version is within 5. If the difference is 6 or more, it is determined that partial aggregation pushdown is not possible.\r\n(iii) Checking the import function\r\nThe postgres_fdw checks the pg_aggregate catalog to see if there is an import function for aggfunc. If there is none, it is determined that partial aggregation pushdown is not possible.\r\n(iv) Checking the export function\r\nIf aggfunc is a built-in function, the postgres_fdw checks the pg_aggregate catalog. It checks if there is a version number export_version that satisfies the conditions local_version >= export_version >= local_version-5 and if there is an export function available for that version. If the version number export_version does not exist, it is determined that partial aggregation pushdown is not possible. This check is only performed if local_version >= remote_version.\r\nIf aggfunc is a user-defined function, the postgres_fdw checks if the extension on which aggfunc depends is included in export_safety_extensions. 
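[Editorial sketch, not part of the original message: taken together, checks (ii)-(iv) of STEP1 above amount to a decision procedure like the following. Every name below is invented for illustration; the actual patch would consult pg_aggregate and the foreign-server options rather than a caller-filled struct.]

```c
#include <stdbool.h>

/* Hypothetical summary of what STEP1 learns about one aggregate. */
typedef struct AggPushdownInfo
{
    bool        is_builtin;            /* built-in aggregate? */
    bool        has_import_func;       /* (iii) import function exists locally */
    bool        export_func_available; /* (iv) usable export version found */
    bool        extension_marked_safe; /* (iv) listed in safety_export_extensions */
} AggPushdownInfo;

bool
partial_agg_pushdown_ok(int local_version, int remote_version,
                        const AggPushdownInfo *info)
{
    int         diff = local_version - remote_version;

    /* (ii) the two major versions must differ by at most five */
    if (diff < -5 || diff > 5)
        return false;

    /* (iii) the local server must be able to import the state value */
    if (!info->has_import_func)
        return false;

    /*
     * (iv) the remote server must be able to export the state value.  For
     * built-in aggregates this check only applies when the local server is
     * the same version as, or newer than, the remote server.
     */
    if (info->is_builtin)
        return local_version < remote_version || info->export_func_available;

    return info->extension_marked_safe;
}
```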
If it is not included, it is determined that partial aggregation pushdown is not possible.\r\n\r\nSTEP2. Sending a remote query on the local server\r\nThe query containing the keyword indicating partial aggregation is sent to the remote server. The remote query for the sample query would be as follows: \r\n\"select aggfunc(PARTIAL_AGGREGATE(export_version) c) from t\"\r\n\r\nSTEP3. Executing the remote query on the remote server\r\nThe remote server performs partial aggregation for aggfunc. Instead of calling the final function at the last stage of aggregation, the remote server calls the export function with export_version and generates the return value of the partial aggregation.\r\n\r\nSTEP4. Receiving the result of the remote query on the local server\r\nThe postgres_fdw passes the export_version and the return value of STEP3 to the import function of aggfunc and receives the state value. The postgres_fdw then passes the received state value to the executor of the local server.\r\n\r\n2. The approach of adding SQL keywords\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 21, 2023 5:52 AM\r\n> I do have a concern about this, though. It adds a lot of bloat. It adds a whole lot of additional entries to pg_aggregate, and\r\n> every new aggregate we add in the future will require a bonus entry for this, and it needs a bunch of new pg_proc entries as\r\n> well. One idea that I've had in the past is to instead introduce syntax that just does this, without requiring a separate\r\n> aggregate definition in each case.\r\n> For example, maybe instead of changing string_agg(whatever) to string_agg_p_text_text(whatever), you can say\r\n> PARTIAL_AGGREGATE\r\n> string_agg(whatever) or string_agg(PARTIAL_AGGREGATE whatever) or something. Then all aggregates could be treated\r\n> in a generic way. 
I'm not completely sure that's better, but I think it's worth considering.\r\nI have prototyped an approach using SQL keywords for the patch that does not include the functionality of Step 1. Please find the prototype attached as a file.\r\n# I apologize for not including sufficient comments, documentation, and tests in the prototype. Please understand.\r\nMainly, it seems that we can address this by adding handling for the new SQL keywords in the parser and making modifications to the finalize process for aggregation in the executor.\r\nAs pointed out by Mr.Haas, it has been realized that the code can be significantly simplified.\r\nThe additional lines of code, excluding documentation and tests, are as follows.\r\nAdding new aggregate functions approach(approach #1): 1,069\r\nAdding new SQL keyword approach(approach #2): 318\r\nAs mentioned in 1., I plan to modify the patch by adding SQL keywords in the future.\r\n\r\n3. Fixing the behavior when the HAVING clause is present\r\n> From: Robert Haas <robertmhaas@gmail.com>\r\n> Sent: Tuesday, November 28, 2023 4:08 AM\r\n> \r\n> On Wed, Nov 22, 2023 at 1:32 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:\r\n> > Hi. HAVING is also a problem. Consider the following query\r\n> >\r\n> > SELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down to\r\n> > foreign server as HAVING needs full aggregate result, but foreign\r\n> > server don't know it.\r\n> \r\n> I don't see it that way. What we would push to the foreign server would be something like SELECT count(a) FROM t. Then,\r\n> after we get the results back and combine the various partial counts locally, we would locally evaluate the HAVING clause\r\n> afterward. 
That is, partial aggregation is a barrier to pushing down HAVING clause itself, but it doesn't preclude pushing\r\n> down the aggregation.\r\nI have made modifications in the attached patch to ensure that when the HAVING clause is present, the HAVING clause is executed locally while the partial aggregations are pushed down.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Thu, 22 Feb 2024 07:20:45 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi.\n\nFujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-02-22 10:20:\n> Hi. Mr.Haas, hackers.\n> \n> I apologize for the significant delay since my last post.\n> I have conducted investigations and considerations regarding the \n> remaining tasks as follows.\n> Would it be possible for you to review them?\n> In particular, could you please confirm if the approach mentioned in 1. \n> is acceptable?\n> If there are no issues with the direction outlined in 1., I plan to \n> make a simple prototype based on this approach.\n> \n> 1. 
Transmitting state value safely between machines\n>> From: Robert Haas <robertmhaas@gmail.com>\n>> Sent: Wednesday, December 6, 2023 10:25 PM\n>> the problems around transmitting\n>> serialized bytea blobs between machines that can't be assumed to fully \n>> trust each other will need to be addressed in some\n>> way, which seems like it will require a good deal of design work, \n>> forming some kind of consensus, and then implementation\n>> work to follow.\n> I have considered methods for safely transmitting state values between \n> different machines.\n> I have taken into account the version policy of PostgreSQL (5 years of \n> support) and the major version release cycle over the past 10 years (1 \n> year), and as a result, I have made the assumption that transmission is \n> allowed only when the difference between the local version and the \n> remote version is 5 or less.\n> I believe that by adding new components, \"export function\" and \"import \n> function\", to the aggregate functions, and further introducing a new \n> SQL keyword to the query syntax of aggregate expressions, we can \n> address this issue.\n> If the version of the local server is higher than or equal to the \n> version of the remote server, the proposed method can be simplified. \n> The export version mentioned later in (1) would not be necessary. 
\n> Furthermore, if the version of the local server matches the version of \n> the remote server, the proposed method can be further simplified.\n> I would appreciate your input on reasonable assumptions regarding the \n> differences in versions between the local server and the remote server.\n> I will explain the specifications of the export function, import \n> function, the new SQL keyword for aggregate expressions, and the \n> behavior of query processing for partial aggregation separately.\n> (1) Export Function Specification\n> This function is another final function for partial aggregate.\n> This function converts the state value that represents the result of \n> partial aggregation into a format that can be read by the local server.\n> This function is called instead of the existing finalfunc during the \n> final stage of aggregation when performing partial aggregation.\n> The conversion process described above will be referred to as \"export\".\n> The argument of an export function is the version of the server that \n> will receive the return value.\n> Hereafter, this version will be referred to as the export version.\n> The concept of an export version is necessary to handle cases where the \n> version of the local server is smaller than the version of the remote \n> server.\n> The return value of the export function is the transformed state value, \n> and its data type is bytea.\n> For backward compatibility, the developer of the export function must \n> ensure that the export can be performed for major versions up to five \n> versions prior to the major version of PostgreSQL that the export \n> function is being developed for.\n> For built-in functions, I believe it is necessary to allow for the \n> possibility of not developing the export functionality for specific \n> versions in the future (due to reasons such as development burden) \n> after the export function is developed for a certain version.\n> To achieve this, for built-in functions, we will 
add a column to the \n> pg_aggregate catalog that indicates the presence or absence of export \n> functionality for each major version, including the major version being \n> developed and the previous five major versions. This column will be \n> named safety_export_versions and will have a data type of boolean[6].\n> For user-defined functions, we will refer to the extensions option and \n> add an external server option called safety_export_extensions, which \n> will maintain a list of extensions that include only the aggregate \n> functions that can be exported to the local server version.\n> ...\n\nI honestly think that achieving cross-version compatibility in this way \nputs a significant burden on developers. Can we instead always use the \nmore or less universal export and import function to fix possible issues \nwith binary representations on different architectures and just refuse \nto push down partial aggregates on server version mismatch? At least at \nthe first step?\n\n> \n> 3. Fixing the behavior when the HAVING clause is present\n>> From: Robert Haas <robertmhaas@gmail.com>\n>> Sent: Tuesday, November 28, 2023 4:08 AM\n>> \n>> On Wed, Nov 22, 2023 at 1:32 AM Alexander Pyhalov \n>> <a.pyhalov@postgrespro.ru> wrote:\n>> > Hi. HAVING is also a problem. Consider the following query\n>> >\n>> > SELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down to\n>> > foreign server as HAVING needs full aggregate result, but foreign\n>> > server don't know it.\n>> \n>> I don't see it that way. What we would push to the foreign server \n>> would be something like SELECT count(a) FROM t. Then,\n>> after we get the results back and combine the various partial counts \n>> locally, we would locally evaluate the HAVING clause\n>> afterward. 
That is, partial aggregation is a barrier to pushing down \n>> HAVING clause itself, but it doesn't preclude pushing\n>> down the aggregation.\n> I have made modifications in the attached patch to ensure that when the \n> HAVING clause is present, the HAVING clause is executed locally while \n> the partial aggregations are pushed down.\n> \n> \n\nSorry, I don't see how it works. When we have partial aggregates and \nhaving clause, foreign_grouping_ok() returns false and \nadd_foreign_grouping_paths() adds no paths.\nI'm not saying it's necessary to fix this in the first patch version.\n\nexplain verbose select sum(a) from pagg_tab having sum(a)>10;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2282.49..2282.50 rows=1 width=8)\n Output: sum(pagg_tab.a)\n Filter: (sum(pagg_tab.a) > 10)\n -> Append (cost=760.81..2282.48 rows=3 width=8)\n -> Partial Aggregate (cost=760.81..760.82 rows=1 width=8)\n Output: PARTIAL sum(pagg_tab.a)\n -> Foreign Scan on public.fpagg_tab_p1 pagg_tab \n(cost=100.00..753.50 rows=2925 width=4)\n Output: pagg_tab.a\n Remote SQL: SELECT a FROM public.pagg_tab_p1\n -> Partial Aggregate (cost=760.81..760.82 rows=1 width=8)\n Output: PARTIAL sum(pagg_tab_1.a)\n -> Foreign Scan on public.fpagg_tab_p2 pagg_tab_1 \n(cost=100.00..753.50 rows=2925 width=4)\n Output: pagg_tab_1.a\n Remote SQL: SELECT a FROM public.pagg_tab_p2\n -> Partial Aggregate (cost=760.81..760.82 rows=1 width=8)\n Output: PARTIAL sum(pagg_tab_2.a)\n -> Foreign Scan on public.fpagg_tab_p3 pagg_tab_2 \n(cost=100.00..753.50 rows=2925 width=4)\n Output: pagg_tab_2.a\n Remote SQL: SELECT a FROM public.pagg_tab_p3\n\n\nAlso I have some minor notices on the code.\n\ncontrib/postgres_fdw/deparse.c: comment before appendFunctionName() has \ngone, this seems to be wrong.\n\nIn finalize_aggregate()\n\n1079 /*\n1080 * Apply the agg's finalfn if one is provided, else return \ntransValue.\n1081 
*/\n\nComment should be updated to note behavior for agg_partial aggregates.\n\n1129 else if (peragg->aggref->agg_partial\n1130 && (peragg->aggref->aggtranstype == \nINTERNALOID)\n1131 && OidIsValid(peragg->serialfn_oid))\n\nIn this if branch, should we check just for peragg->aggref->agg_partial \nand peragg->aggref->aggtranstype == INTERNALOID? It seems that if \nperagg->aggref->aggtranstype == INTERNALOID and there's no\nserialfn_oid, it's likely an error (and one should be generated).\n\nOverall patch seems nicer. Will look at it more this week.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 28 Feb 2024 16:43:07 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Mr.Pyhalov.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Wednesday, February 28, 2024 10:43 PM\r\n> > 1. Transmitting state value safely between machines\r\n> >> From: Robert Haas <robertmhaas@gmail.com>\r\n> >> Sent: Wednesday, December 6, 2023 10:25 PM the problems around\r\n> >> transmitting serialized bytea blobs between machines that can't be\r\n> >> assumed to fully trust each other will need to be addressed in some\r\n> >> way, which seems like it will require a good deal of design work,\r\n> >> forming some kind of consensus, and then implementation work to\r\n> >> follow.\r\n> > I have considered methods for safely transmitting state values between\r\n> > different machines.\r\n> > I have taken into account the version policy of PostgreSQL (5 years of\r\n> > support) and the major version release cycle over the past 10 years (1\r\n> > year), and as a result, I have made the assumption that transmission\r\n> > is allowed only when the difference between the local version and the\r\n> > remote version is 5 or less.\r\n> > I believe that by adding new components, \"export function\" and \"import\r\n> > function\", to the 
aggregate functions, and further introducing a new\r\n> > SQL keyword to the query syntax of aggregate expressions, we can\r\n> > address this issue.\r\n> >\r\n ...\r\n> \r\n> I honestly think that achieving cross-version compatibility in this way puts a significant burden on developers. Can we\r\n> instead always use the more or less universal export and import function to fix possible issues with binary representations\r\n> on different architectures and just refuse to push down partial aggregates on server version mismatch? At least at the first\r\n> step?\r\nThank you for your comment. I agree with your point that the proposed method would impose a significant burden on developers. In order to ensure cross-version compatibility, it is necessary to impose constraints on the format of the state values exchanged between servers, which would indeed burden developers. As you mentioned, I think that it is realistic to allow partial aggregation pushdown only when coordinating between the same versions in the first step.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Wednesday, February 28, 2024 10:43 PM\r\n> > 3. Fixing the behavior when the HAVING clause is present\r\n> >> From: Robert Haas <robertmhaas@gmail.com>\r\n> >> Sent: Tuesday, November 28, 2023 4:08 AM\r\n> >>\r\n> >> On Wed, Nov 22, 2023 at 1:32 AM Alexander Pyhalov\r\n> >> <a.pyhalov@postgrespro.ru> wrote:\r\n> >> > Hi. HAVING is also a problem. Consider the following query\r\n> >> >\r\n> >> > SELECT count(a) FROM t HAVING count(a) > 10 - we can't push it down\r\n> >> > to foreign server as HAVING needs full aggregate result, but\r\n> >> > foreign server don't know it.\r\n> >>\r\n> >> I don't see it that way. What we would push to the foreign server\r\n> >> would be something like SELECT count(a) FROM t. Then, after we get\r\n> >> the results back and combine the various partial counts locally, we\r\n> >> would locally evaluate the HAVING clause afterward. 
That is, partial\r\n> >> aggregation is a barrier to pushing down HAVING clause itself, but it\r\n> >> doesn't preclude pushing down the aggregation.\r\n> > I have made modifications in the attached patch to ensure that when\r\n> > the HAVING clause is present, the HAVING clause is executed locally\r\n> > while the partial aggregations are pushed down.\r\n> >\r\n> >\r\n> \r\n> Sorry, I don't see how it works. When we have partial aggregates and having clause, foreign_grouping_ok() returns false and\r\n> add_foreign_grouping_paths() adds no paths.\r\n> I'm not saying it's necessary to fix this in the first patch version.\r\nOur sincere apologies. I had attached an older version before this modification.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Wednesday, February 28, 2024 10:43 PM\r\n> contrib/postgres_fdw/deparse.c: comment before appendFunctionName() has gone, this seems to be wrong.\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Wednesday, February 28, 2024 10:43 PM\r\n> In finalize_aggregate()\r\n> \r\n> 1079 /*\r\n> 1080 * Apply the agg's finalfn if one is provided, else return\r\n> transValue.\r\n> 1081 */\r\n> \r\n> Comment should be updated to note behavior for agg_partial aggregates.\r\nFixed.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Wednesday, February 28, 2024 10:43 PM\r\n> In this if branch, should we check just for peragg->aggref->agg_partial and peragg->aggref->aggtranstype ==\r\n> INTERNALOID? 
It seems that if\r\n> peragg->aggref->aggtranstype == INTERNALOID and there's no\r\n> serialfn_oid, it's likely an error (and one should be generated).\r\nAs you pointed out, I have made modifications to the source code so that it terminates with an error if serialfn is invalid.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Sat, 16 Mar 2024 02:28:50 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Sat, Mar 16, 2024 at 02:28:50AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi. Mr.Pyhalov.\n>\n> > From: Alexander Pyhalov <a.pyhalov@postgrespro.ru> Sent: Wednesday,\n> > February 28, 2024 10:43 PM\n> > > 1. Transmitting state value safely between machines\n> > >> From: Robert Haas <robertmhaas@gmail.com> Sent: Wednesday,\n> > >> December 6, 2023 10:25 PM the problems around transmitting\n> > >> serialized bytea blobs between machines that can't be assumed to\n> > >> fully trust each other will need to be addressed in some way,\n> > >> which seems like it will require a good deal of design work,\n> > >> forming some kind of consensus, and then implementation work to\n> > >> follow.\n> > > I have considered methods for safely transmitting state values\n> > > between different machines. I have taken into account the version\n> > > policy of PostgreSQL (5 years of support) and the major version\n> > > release cycle over the past 10 years (1 year), and as a result, I\n> > > have made the assumption that transmission is allowed only when\n> > > the difference between the local version and the remote version\n> > > is 5 or less. 
I believe that by adding new components, \"export\n> > > function\" and \"import function\", to the aggregate functions, and\n> > > further introducing a new SQL keyword to the query syntax of\n> > > aggregate expressions, we can address this issue.\n> >\n> > I honestly think that achieving cross-version compatibility in\n> > this way puts a significant burden on developers. Can we instead\n> > always use the more or less universal export and import function\n> > to fix possible issues with binary representations on different\n> > architectures and just refuse to push down partial aggregates on\n> > server version mismatch? At least at the first step?\n>\n> Thank you for your comment. I agree with your point that the proposed\n> method would impose a significant burden on developers. In order\n> to ensure cross-version compatibility, it is necessary to impose\n> constraints on the format of the state values exchanged between\n> servers, which would indeed burden developers. As you mentioned, I\n> think that it is realistic to allow partial aggregation pushdown only\n> when coordinating between the same versions in the first step.\n\nThe current patch has:\n\n if ((OidIsValid(aggform->aggfinalfn) ||\n (aggform->aggtranstype == INTERNALOID)) &&\n fpinfo->check_partial_aggregate_support)\n {\n if (fpinfo->remoteversion == 0)\n {\n PGconn *conn = GetConnection(fpinfo->user, false, NULL);\n\n fpinfo->remoteversion = PQserverVersion(conn);\n }\n\n if (fpinfo->remoteversion < PG_VERSION_NUM)\n partial_agg_ok = false;\n }\n\nIt uses check_partial_aggregate_support, which defaults to false,\nmeaning partial aggregates will be pushed down with no version check by\ndefault. If set to true, pushdown will happen if the remote server is\nthe same version or newer, which seems acceptable to me.\n\nFYI, the patch is much smaller now. 
:-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:29:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> The current patch has:\n\n> if ((OidIsValid(aggform->aggfinalfn) ||\n> (aggform->aggtranstype == INTERNALOID)) &&\n> fpinfo->check_partial_aggregate_support)\n> {\n> if (fpinfo->remoteversion == 0)\n> {\n> PGconn *conn = GetConnection(fpinfo->user, false, NULL);\n\n> fpinfo->remoteversion = PQserverVersion(conn);\n> }\n\n> if (fpinfo->remoteversion < PG_VERSION_NUM)\n> partial_agg_ok = false;\n> }\n\n> It uses check_partial_aggregate_support, which defaults to false,\n> meaning partial aggregates will be pushed down with no version check by\n> default. If set to true, pushdown will happen if the remote server is\n> the same version or newer, which seems acceptable to me.\n\nI'd like to vociferously protest both of those decisions.\n\n\"No version check by default\" means \"unsafe by default\", which is not\nproject style in general and is especially not so for postgres_fdw.\nWe have tried very hard for years to ensure that postgres_fdw will\nwork with a wide range of remote server versions, and generally been\nextremely conservative about what we think will work (example:\ncollations); but this patch seems ready to throw that principle away.\n\nAlso, surely \"remoteversion < PG_VERSION_NUM\" is backwards. 
What\nthis would mean is that nobody can ever change a partial aggregate's\nimplementation, because that would break queries issued from older\nservers (that couldn't know about the change) to newer ones.\n\nRealistically, I think it's fairly unsafe to try aggregate pushdown\nto anything but the same PG major version; otherwise, you're buying\ninto knowing which aggregates have partial support in which versions,\nas well as predicting the future about incompatible state changes.\nEven that isn't bulletproof --- e.g, maybe somebody wasn't careful\nabout endianness-independence of the serialized partial state, making\nit unsafe to ship --- so there had better be a switch whereby the user\ncan disable it.\n\nMaybe we could define a three-way setting:\n\n* default: push down partial aggs only to same major PG version\n* disable: don't push down, period\n* force: push down regardless of remote version\n\nWith the \"force\" setting, it's the user's responsibility not to\nissue any remote-able aggregation that would be unsafe to push\ndown. This is still a pretty crude tool: I can foresee people\nwanting to have per-aggregate control over things, especially\nextension-supplied aggregates. But it'd do for starters.\n\nI'm not super thrilled by the fact that the patch contains zero\nuser-facing documentation, even though it's created new SQL syntax,\nnot to mention a new postgres_fdw option. 
I assume this means that\nnobody thinks it's anywhere near ready to commit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 17:29:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Mar 19, 2024 at 05:29:07PM -0400, Tom Lane wrote:\n> I'd like to vociferously protest both of those decisions.\n> \n> \"No version check by default\" means \"unsafe by default\", which is not\n> project style in general and is especially not so for postgres_fdw.\n> We have tried very hard for years to ensure that postgres_fdw will\n> work with a wide range of remote server versions, and generally been\n> extremely conservative about what we think will work (example:\n> collations); but this patch seems ready to throw that principle away.\n> \n> Also, surely \"remoteversion < PG_VERSION_NUM\" is backwards. What\n> this would mean is that nobody can ever change a partial aggregate's\n> implementation, because that would break queries issued from older\n> servers (that couldn't know about the change) to newer ones.\n\nWell it is the origin server that is issuing the PUSHDOWN syntax, so an\nolder origin server should be able to push to a newer remote server.\n\n> Realistically, I think it's fairly unsafe to try aggregate pushdown\n> to anything but the same PG major version; otherwise, you're buying\n> into knowing which aggregates have partial support in which versions,\n> as well as predicting the future about incompatible state changes.\n\nYes, incompatible state changes would be a problem with an older origin\nserver with a newer remote server setup.\n\nIf we require matching versions, we must accept that upgrades will\nrequire more downtime.\n\n> Even that isn't bulletproof --- e.g, maybe somebody wasn't careful\n> about endianness-independence of the serialized partial state, making\n> it unsafe to ship --- so there had better be a switch whereby the user\n> can 
disable it.\n\nMakes sense. I was also wondering how a user would know whether the\npushdown is happening, or not.\n\n> Maybe we could define a three-way setting:\n> \n> * default: push down partial aggs only to same major PG version\n> * disable: don't push down, period\n> * force: push down regardless of remote version\n\nWhat would be the default? If it is the first one, it requires a\nremote version check on first use in the session.\n\n> With the \"force\" setting, it's the user's responsibility not to\n> issue any remote-able aggregation that would be unsafe to push\n> down. This is still a pretty crude tool: I can foresee people\n> wanting to have per-aggregate control over things, especially\n> extension-supplied aggregates. But it'd do for starters.\n\nWe have the postgres_fdw extensions option to control function pushdown\nto extensions.\n\n> I'm not super thrilled by the fact that the patch contains zero\n> user-facing documentation, even though it's created new SQL syntax,\n> not to mention a new postgres_fdw option. I assume this means that\n> nobody thinks it's anywhere near ready to commit.\n\nPrevious versions of the patch had docs since I know I worked on\nimproving them. I am not sure what happened to them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 19 Mar 2024 19:09:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Mr.Momjian, Mr.Lane, Mr.Haas, hackers.\n\nI apologize for any misunderstanding regarding the context of the attached patch and\nthe points on which I requested a review. Could you please allow me to clarify?\n\nIn the review around early December 2023, I received the following three issues pointed out by Mr.Haas[1].\n1. Transmitting state value safely between machines\n2. 
Making the patch clearer by adding SQL keywords\n3. Fixing the behavior when the HAVING clause is present\n\nIn the email sent on February 22, 2024[2], I provided an update on the progress made in addressing these issues.\nRegarding issue 1, I have only provided a proposed solution in the email and have not started the programming. \nTherefore, the latest patch is not in a commit-ready state. As mentioned later, we have also temporarily reverted the changes made to the documentation.\nBefore proceeding with the programming, I would like to discuss the proposed solution with the community and seek consensus.\nIf it is necessary to have source code in order to discuss, I can create a simple prototype so that I can receive your feedback.\nWould you be able to provide your opinions on it?\n\nRegarding issue 2., I have confirmed that creating a prototype allows us to address the issue and clear the patch.\nIn this prototype creation, the main purpose was to verify if the patch can be cleared and significant revisions were made to the previous version.\nTherefore, I have removed all the document differences.\nI have submitted a patch [3] that includes the fixes for issue 3. to the patch that was posted in [2].\nRegarding the proposed solution for issue 1, unlike the patch posted in [3], \nwe have a policy of not performing partial aggregation pushdown if we cannot guarantee compatibility and safety.\nThe latest patch in [3] is a POC patch. The patch that Mr. 
Momjian reviewed is this.\nIf user-facing documentation is needed for this POC patch, it can be added.\n\nI apologize for the lack of explanation regarding this positioning, which may have caused misunderstandings regarding the patch posted in [3].\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYCrtOvk2f32qQKZV%3DjNL35tandf2A2Dp_2F5ASuiG1BA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/TYAPR01MB5514F0CBD9CD4F84A261198195562%40TYAPR01MB5514.jpnprd01.prod.outlook.com\n[3] https://www.postgresql.org/message-id/TYAPR01MB55141D18188AC86ADCE35FCB952F2%40TYAPR01MB5514.jpnprd01.prod.outlook.com\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:37:50 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:37:50AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi. Mr.Momjian, Mr.Lane, Mr.Haas, hackers.\n> \n> I apologize for any misunderstanding regarding the context of the attached patch and\n> the points on which I requested a review. Could you please allow me to clarify?\n> \n> In the review around early December 2023, I received the following three issues pointed out by Mr.Haas[1].\n> 1. Transmitting state value safely between machines\n> 2. Making the patch clearer by adding SQL keywords\n> 3. Fixing the behavior when the HAVING clause is present\n> \n> In the email sent on February 22, 2024[2], I provided an update on the progress made in addressing these issues.\n> Regarding issue 1, I have only provided a proposed solution in the email and have not started the programming. \n> Therefore, the latest patch is not in a commit-ready state. 
As mentioned later, we have also temporarily reverted the changes made to the documentation.\n> Before proceeding with the programming, I would like to discuss the proposed solution with the community and seek consensus.\n> If it is necessary to have source code in order to discuss, I can create a simple prototype so that I can receive your feedback.\n> Would you be able to provide your opinions on it?\n> \n> Regarding issue 2., I have confirmed that creating a prototype allows us to address the issue and clear the patch.\n> In this prototype creation, the main purpose was to verify if the patch can be cleared and significant revisions were made to the previous version.\n> Therefore, I have removed all the document differences.\n> I have submitted a patch [3] that includes the fixes for issue 3. to the patch that was posted in [2].\n> Regarding the proposed solution for issue 1, unlike the patch posted in [3], \n> we have a policy of not performing partial aggregation pushdown if we cannot guarantee compatibility and safety.\n> The latest patch in [3] is a POC patch. The patch that Mr. Momjian reviewed is this.\n> If user-facing documentation is needed for this POC patch, it can be added.\n> \n> I apologize for the lack of explanation regarding this positioning, which may have caused misunderstandings regarding the patch posted in [3].\n\nThat makes sense. Let's get you answers to those questions first before\nyou continue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 21 Mar 2024 18:01:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-03-16 05:28:\n> Hi. Mr.Pyhalov.\n>> >>\n>> >> I don't see it that way. 
What we would push to the foreign server\n>> >> would be something like SELECT count(a) FROM t. Then, after we get\n>> >> the results back and combine the various partial counts locally, we\n>> >> would locally evaluate the HAVING clause afterward. That is, partial\n>> >> aggregation is a barrier to pushing down HAVING clause itself, but it\n>> >> doesn't preclude pushing down the aggregation.\n>> > I have made modifications in the attached patch to ensure that when\n>> > the HAVING clause is present, the HAVING clause is executed locally\n>> > while the partial aggregations are pushed down.\n>> >\n>> >\n>> \n>> Sorry, I don't see how it works. When we have partial aggregates and \n>> having clause, foreign_grouping_ok() returns false and\n>> add_foreign_grouping_paths() adds no paths.\n>> I'm not saying it's necessary to fix this in the first patch version.\n> Our sincere apologies. I had attached an older version before this \n> modification.\n> \n\nHi.\n\nIn foreign_grouping_ok() having qual is added to local conds here:\n\n6635 if (is_foreign_expr(root, grouped_rel, \nexpr) && !partial)\n6636 fpinfo->remote_conds = \nlappend(fpinfo->remote_conds, rinfo);\n6637 else\n6638 fpinfo->local_conds = \nlappend(fpinfo->local_conds, rinfo);\n6639 }\n6640 }\n\n\nThis is incorrect. 
If you look at plan for query in postgres_fdw.sql\n\n\n-- Partial aggregates are safe to push down when there is a HAVING \nclause\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT b, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b HAVING \nsum(a) < 700 ORDER BY 1;\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize GroupAggregate\n Output: pagg_tab.b, avg(pagg_tab.a), max(pagg_tab.a), count(*)\n Group Key: pagg_tab.b\n Filter: (sum(pagg_tab.a) < 700)\n -> Sort\n Output: pagg_tab.b, (PARTIAL avg(pagg_tab.a)), (PARTIAL \nmax(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\n Sort Key: pagg_tab.b\n -> Append\n -> Foreign Scan\n Output: pagg_tab.b, (PARTIAL avg(pagg_tab.a)), \n(PARTIAL max(pagg_tab.a)), (PARTIAL count(*)), (PARTIAL sum(pagg_tab.a))\n Filter: ((sum(pagg_tab.a)) < 700)\n Relations: Aggregate on (public.fpagg_tab_p1 \npagg_tab)\n Remote SQL: SELECT b, avg(PARTIAL_AGGREGATE a), \nmax(a), count(*), sum(a), sum(a) FROM public.pagg_tab_p1 GROUP BY 1\n -> Foreign Scan\n Output: pagg_tab_1.b, (PARTIAL avg(pagg_tab_1.a)), \n(PARTIAL max(pagg_tab_1.a)), (PARTIAL count(*)), (PARTIAL \nsum(pagg_tab_1.a))\n Filter: ((sum(pagg_tab_1.a)) < 700)\n Relations: Aggregate on (public.fpagg_tab_p2 \npagg_tab_1)\n Remote SQL: SELECT b, avg(PARTIAL_AGGREGATE a), \nmax(a), count(*), sum(a), sum(a) FROM public.pagg_tab_p2 GROUP BY 1\n -> Foreign Scan\n Output: pagg_tab_2.b, (PARTIAL avg(pagg_tab_2.a)), \n(PARTIAL max(pagg_tab_2.a)), (PARTIAL count(*)), (PARTIAL \nsum(pagg_tab_2.a))\n Filter: ((sum(pagg_tab_2.a)) < 700)\n Relations: Aggregate on (public.fpagg_tab_p3 \npagg_tab_2)\n Remote SQL: SELECT b, avg(PARTIAL_AGGREGATE a), \nmax(a), count(*), sum(a), sum(a) FROM public.pagg_tab_p3 GROUP BY 1\n\n\nYou can see that filter is applied before append. The result is correct \nonly by chance, as sum in every partition is actually < 700. 
If you \nlower this bound, let's say, to 200, you'll start getting wrong results \nas data is filtered prior to aggregation.\n\nIt seems, however, that in partial case you should just avoid pulling \nconditions from having qual at all, all filters will be applied on upper \nlevel. Something like\n\ndiff --git a/contrib/postgres_fdw/postgres_fdw.c \nb/contrib/postgres_fdw/postgres_fdw.c\nindex 42eb17ae7c0..54918b9f1a4 100644\n--- a/contrib/postgres_fdw/postgres_fdw.c\n+++ b/contrib/postgres_fdw/postgres_fdw.c\n@@ -6610,7 +6610,7 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo \n*grouped_rel,\n * Classify the pushable and non-pushable HAVING clauses and \nsave them in\n * remote_conds and local_conds of the grouped rel's fpinfo.\n */\n- if (extra->havingQual)\n+ if (extra->havingQual && !partial)\n {\n foreach(lc, (List *) extra->havingQual)\n {\n@@ -6632,7 +6632,7 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo \n*grouped_rel,\n \n grouped_rel->relids,\n \n NULL,\n \n NULL);\n- if (is_foreign_expr(root, grouped_rel, expr) && \n!partial)\n+ if (is_foreign_expr(root, grouped_rel, expr))\n fpinfo->remote_conds = \nlappend(fpinfo->remote_conds, rinfo);\n else\n fpinfo->local_conds = \nlappend(fpinfo->local_conds, rinfo);\n\n>> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\n>> Sent: Wednesday, February 28, 2024 10:43 PM\n>> contrib/postgres_fdw/deparse.c: comment before appendFunctionName() \n>> has gone, this seems to be wrong.\n> Fixed.\n> \n>> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\n>> Sent: Wednesday, February 28, 2024 10:43 PM\n>> In finalize_aggregate()\n>> \n>> 1079 /*\n>> 1080 * Apply the agg's finalfn if one is provided, else \n>> return\n>> transValue.\n>> 1081 */\n>> \n>> Comment should be updated to note behavior for agg_partial aggregates.\n> Fixed.\n\nComment in nodeAgg.c seems to be strange:\n\n1079 /*\n1080 * If the agg's finalfn is provided and PARTIAL_AGGREGATE \nkeyword is\n1081 * not specified, apply the agg's finalfn.\n1082 
* If PARTIAL_AGGREGATE keyword is specified and the \ntransValue type\n1083 * is internal, apply the agg's serialfn. In this case, if \nthe agg's\n1084 * serialfn must not be invalid. Otherwise return \ntransValue.\n1085 */\n\nLikely, you mean:\n\n... In this case the agg's serialfn must not be invalid...\n\n\nLower, in the same file, please, correct error message:\n\n1136 if(!OidIsValid(peragg->serialfn_oid))\n1137 elog(ERROR, \"serialfunc is note provided \nfor partial aggregate\");\n\nit should be \"serialfunc is not provided for partial aggregate\"\n\nAlso something is wrong with the following test:\n\n SELECT /* aggregate <> partial aggregate */\n array_agg(c_int4array), array_agg(b),\n avg(b::int2), avg(b::int4), avg(b::int8), avg(c_interval),\n avg(b::float4), avg(b::float8),\n corr(b::float8, (b * b)::float8),\n covar_pop(b::float8, (b * b)::float8),\n covar_samp(b::float8, (b * b)::float8),\n regr_avgx((2 * b)::float8, b::float8),\n.....\n\nIts results have changed since last patch. Do they depend on daylight \nsaving time?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:00:51 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Alexander Pyhalov wrote on 2024-03-25 10:00:\n> Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-03-16 05:28:\n>> Hi. Mr.Pyhalov.\n>>> >>\n>>> >> I don't see it that way. What we would push to the foreign server\n>>> >> would be something like SELECT count(a) FROM t. Then, after we get\n>>> >> the results back and combine the various partial counts locally, we\n>>> >> would locally evaluate the HAVING clause afterward. 
That is, partial\n>>> >> aggregation is a barrier to pushing down HAVING clause itself, but it\n>>> >> doesn't preclude pushing down the aggregation.\n>>> > I have made modifications in the attached patch to ensure that when\n>>> > the HAVING clause is present, the HAVING clause is executed locally\n>>> > while the partial aggregations are pushed down.\n>>> >\n>>> >\n>>> \n>>> Sorry, I don't see how it works. When we have partial aggregates and \n>>> having clause, foreign_grouping_ok() returns false and\n>>> add_foreign_grouping_paths() adds no paths.\n>>> I'm not saying it's necessary to fix this in the first patch version.\n>> Our sincere apologies. I had attached an older version before this \n>> modification.\n>> \n\nHi.\nFound one more problem. You can fire partial aggregate over partitioned \ntable, but convert_combining_aggrefs() will make non-partial copy, which \nleads to\n'variable not found in subplan target list' error.\n\nAttaching fixed version. Also I've added changes related to HAVING \nprocessing.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 26 Mar 2024 14:33:21 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Alexander Pyhalov wrote on 2024-03-26 14:33:\n> Alexander Pyhalov wrote on 2024-03-25 10:00:\n>> Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-03-16 05:28:\n>>> Hi. Mr.Pyhalov.\n>>>> >>\n>>>> >> I don't see it that way. What we would push to the foreign server\n>>>> >> would be something like SELECT count(a) FROM t. Then, after we get\n>>>> >> the results back and combine the various partial counts locally, we\n>>>> >> would locally evaluate the HAVING clause afterward. 
That is, partial\n>>>> >> aggregation is a barrier to pushing down HAVING clause itself, but it\n>>>> >> doesn't preclude pushing down the aggregation.\n>>>> > I have made modifications in the attached patch to ensure that when\n>>>> > the HAVING clause is present, the HAVING clause is executed locally\n>>>> > while the partial aggregations are pushed down.\n>>>> >\n>>>> >\n>>>> \n>>>> Sorry, I don't see how it works. When we have partial aggregates and \n>>>> having clause, foreign_grouping_ok() returns false and\n>>>> add_foreign_grouping_paths() adds no paths.\n>>>> I'm not saying it's necessary to fix this in the first patch \n>>>> version.\n>>> Our sincere apologies. I had attached an older version before this \n>>> modification.\n>>> \n>> \n\nHi.\n\nThere was an issue in the previous patch version - setGroupClausePartial() \nlooked at root->parse->groupClause, not at root->processed_groupClause.\nFixed and added test to cover this.\n\n\nAlso denied partial aggregates pushdown on server version mismatch. \nShould check_partial_aggregate_support be 'true' by default?\n\nI'm not sure what to do with current grammar - it precludes partial \ndistinct aggregates. I understand that it's currently impossible to have \npartial aggregation for distinct aggregates - but is it worth having \nsuch a restriction at grammar level?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Fri, 29 Mar 2024 17:46:31 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Mr. 
Pyhalov.\r\n\r\nSorry for the late reply.\r\nThank you for your modification and detailed review.\r\nI attach a fixed patch, which has not yet been rebased.\r\n\r\nMonday, 25 March 2024 16:01 Alexander Pyhalov <a.pyhalov@postgrespro.ru>:.\r\n> Comment in nodeAgg.c seems to be strange:\r\n>\r\n> 1079 /*\r\n> 1080 * If the agg's finalfn is provided and PARTIAL_AGGREGATE\r\n> keyword is\r\n> 1081 * not specified, apply the agg's finalfn.\r\n> 1082 * If PARTIAL_AGGREGATE keyword is specified and the\r\n> transValue type\r\n> 1083 * is internal, apply the agg's serialfn. In this case, if\r\n> the agg's\r\n> 1084 * serialfn must not be invalid. Otherwise return\r\n> transValue.\r\n> 1085 */\r\n>\r\n> Likely, you mean:\r\n>\r\n> ... In this case the agg's serialfn must not be invalid...\r\nFixed.\r\n\r\n> Lower, in the same file, please, correct error message:\r\n>\r\n> 1136 if(!OidIsValid(peragg->serialfn_oid))\r\n> 1137 elog(ERROR, \"serialfunc is note provided\r\n> for partial aggregate\");\r\n>\r\n> it should be \"serialfunc is not provided for partial aggregate\"\r\nFixed.\r\n\r\n> Also something is wrong with the following test:\r\n>\r\n> SELECT /* aggregate <> partial aggregate */\r\n> array_agg(c_int4array), array_agg(b),\r\n> avg(b::int2), avg(b::int4), avg(b::int8), avg(c_interval),\r\n> avg(b::float4), avg(b::float8),\r\n> corr(b::float8, (b * b)::float8),\r\n> covar_pop(b::float8, (b * b)::float8),\r\n> covar_samp(b::float8, (b * b)::float8),\r\n> regr_avgx((2 * b)::float8, b::float8),\r\n> .....\r\n>\r\n> Its results have changed since last patch. Do they depend on daylight\r\n> saving time?\r\nYou are right. In my environment, TimeZone is set to 'PST8PDT'\r\nwith which timetz values depend on daylight saving time.\r\nChanged TimeZone to 'UTC' in this test.\r\n\r\n> You can see that filter is applied before append. The result is correct\r\n> only by chance, as sum in every partition is actually < 700. 
If you\r\n> lower this bound, let's say, to 200, you'll start getting wrong results\r\n> as data is filtered prior to aggregation.\r\n>\r\n> It seems, however, that in partial case you should just avoid pulling\r\n> conditions from having qual at all, all filters will be applied on upper\r\n> level. Something like\r\nThank you for your modification.\r\n\r\n> Found one more problem. You can fire partial aggregate over partitioned\r\n> table, but convert_combining_aggrefs() will make non-partial copy, which\r\n> leads to\r\n> 'variable not found in subplan target list' error.\r\nThanks for the correction as well.\r\nAs you pointed out,\r\nthe original patch certainly had the potential to cause problems.\r\nHowever, I could not actually reproduce the problem in cases such as the following.\r\n\r\n Settings:\r\n t(c1, c2) is a partitioned table whose partition key is c1.\r\n t1, t2 are partitions of t and are partitioned tables.\r\n t11, t12: partitions of t1 and foreign tables of postgres_fdw.\r\n t21, t22: partitions of t2 and foreign tables of postgres_fdw.\r\n Query:\r\n select c2 / 2, sum(c1) from t group by c2 / 2 order by 1\r\n\r\nIf you have a reproducible example, I would like to add it to\r\nthe regression test.\r\nDo you have a reproducible example?\r\n\r\n> Also denied partial agregates pushdown on server version mismatch.\r\n> Should check_partial_aggregate_support be 'true' by default?\r\nCould we discuss this point after we determine how to transfer state values?\r\nIf we determine this point, we can easily determine whether check_partial_aggregate_support should be 'true' by default.\r\n\r\n> I'm not sure what to do with current grammar - it precludes partial\r\n> distinct aggregates. 
I understand that it's currently impossible to have\r\n> partial aggregation for distinct agregates -but does it worth to have\r\n> such restriction at grammar level?\r\nIf partial aggregation for distinct aggregates becomes possible in the future,\r\nI see no problem with the policy of accepting new SQL keywords,\r\nsuch as \"PARTIAL_AGGREGATE DISTINCT\".\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Mon, 27 May 2024 21:30:59 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-05-28 00:30:\n> Hi Mr. Pyhalov.\n> \nHi.\n\n>> Found one more problem. You can fire partial aggregate over \n>> partitioned\n>> table, but convert_combining_aggrefs() will make non-partial copy, \n>> which\n>> leads to\n>> 'variable not found in subplan target list' error.\n> Thanks for the correction as well.\n> As you pointed out,\n> the original patch certainly had the potential to cause problems.\n> However, I could not actually reproduce the problem in cases such as \n> the following.\n> \n> Settings:\n> t(c1, c2) is a patitioned table whose partition key is c1.\n> t1, t2 are patitions of t and are partitioned table.\n> t11, t12: partitions of t1 and foreign table of postgres_fdw.\n> t21, t22: partitions of t2 and foreign table of postgres_fdw.\n> Query:\n> select c2 / 2, sum(c1) from t group by c2 / 2 order by 1\n> \n> If you have a reproducible example, I would like to add it to\n> the regression test.\n> Do you have a reproducible example?\n> \n\nThe fix was to set child_agg->agg_partial to orig_agg->agg_partial in \nconvert_combining_aggrefs(), it's already in the patch,\nas well as the example - without this fix\n\n-- Check partial aggregate over partitioned table\nEXPLAIN (VERBOSE, 
COSTS OFF)\nSELECT avg(PARTIAL_AGGREGATE a), avg(a) FROM pagg_tab;\n\nfails with\n\nERROR: variable not found in subplan target list\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 28 May 2024 08:45:04 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Fujii.Yuki@df.MitsubishiElectric.co.jp wrote on 2024-05-28 00:30:\n> Hi Mr. Pyhalov.\n> \n> Sorry for the late reply.\n> Thank you for your modification and detailed review.\n> I attach a fixed patch, have been not yet rebased.\n> \n> Monday, 25 March 2024 16:01 Alexander Pyhalov \n> <a.pyhalov@postgrespro.ru>:.\n>> Comment in nodeAgg.c seems to be strange:\n>> \n>> 1079 /*\n>> 1080 * If the agg's finalfn is provided and PARTIAL_AGGREGATE\n>> keyword is\n>> 1081 * not specified, apply the agg's finalfn.\n>> 1082 * If PARTIAL_AGGREGATE keyword is specified and the\n>> transValue type\n>> 1083 * is internal, apply the agg's serialfn. In this case, if\n>> the agg's\n>> 1084 * serialfn must not be invalid. Otherwise return\n>> transValue.\n>> 1085 */\n>> \n>> Likely, you mean:\n>> \n>> ... 
In this case the agg'ss serialfn must not be invalid...\n> Fixed.\n> \n>> Lower, in the same file, please, correct error message:\n>> \n>> 1136 if(!OidIsValid(peragg->serialfn_oid))\n>> 1137 elog(ERROR, \"serialfunc is note provided\n>> for partial aggregate\");\n>> \n>> it should be \"serialfunc is not provided for partial aggregate\"\n> Fixed.\n> \n>> Also something is wrong with the following test :\n>> \n>> SELECT /* aggregate <> partial aggregate */\n>> array_agg(c_int4array), array_agg(b),\n>> avg(b::int2), avg(b::int4), avg(b::int8), avg(c_interval),\n>> avg(b::float4), avg(b::float8),\n>> corr(b::float8, (b * b)::float8),\n>> covar_pop(b::float8, (b * b)::float8),\n>> covar_samp(b::float8, (b * b)::float8),\n>> regr_avgx((2 * b)::float8, b::float8),\n>> .....\n>> \n>> Its results have changed since last patch. Do they depend on daylight\n>> saving time?\n> You are right. In my environment, TimeZone is set to 'PST8PDT'\n> with which timetz values depends on daylight saving time.\n> Changed TimeZone to 'UTC' in this test.\n> \n>> You can see that filter is applied before append. The result is \n>> correct\n>> only by chance, as sum in every partition is actually < 700. If you\n>> lower this bound, let's say, to 200, you'll start getting wrong \n>> results\n>> as data is filtered prior to aggregation.\n>> \n>> It seems, however, that in partial case you should just avoid pulling\n>> conditions from having qual at all, all filters will be applied on \n>> upper\n>> level. Something like\n> Thank you for your modification.\n> \n>> Found one more problem. 
You can fire partial aggregate over \n>> partitioned\n>> table, but convert_combining_aggrefs() will make non-partial copy, \n>> which\n>> leads to\n>> 'variable not found in subplan target list' error.\n> Thanks for the correction as well.\n> As you pointed out,\n> the original patch certainly had the potential to cause problems.\n> However, I could not actually reproduce the problem in cases such as \n> the following.\n> \n> Settings:\n> t(c1, c2) is a patitioned table whose partition key is c1.\n> t1, t2 are patitions of t and are partitioned table.\n> t11, t12: partitions of t1 and foreign table of postgres_fdw.\n> t21, t22: partitions of t2 and foreign table of postgres_fdw.\n> Query:\n> select c2 / 2, sum(c1) from t group by c2 / 2 order by 1\n> \n> If you have a reproducible example, I would like to add it to\n> the regression test.\n> Do you have a reproducible example?\n> \n>> Also denied partial agregates pushdown on server version mismatch.\n>> Should check_partial_aggregate_support be 'true' by default?\n> Could we discuss this point after we determine how to transfer state \n> values?\n> If we determine this point, we can easly determine whether \n> check_partial_aggregate_support shold be 'true' by default.\n> \n>> I'm not sure what to do with current grammar - it precludes partial\n>> distinct aggregates. I understand that it's currently impossible to \n>> have\n>> partial aggregation for distinct agregates -but does it worth to have\n>> such restriction at grammar level?\n> If partial aggregation for distinct agregates becomes possible in the \n> future,\n> I see no problem with the policy of accepting new SQL keywords,\n> such as \"PARTIL_AGGREGATE DISTINCT\".\n\nBTW, there's I have an issue with test results in the last version of \nthe patch. 
Attaching regression diffs.\nI have partial sum over c_interval instead of sum(c_interval).\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 28 May 2024 08:57:39 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Mon, May 27, 2024 at 09:30:59PM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Mr. Pyhalov.\n> \n> Sorry for the late reply.\n> Thank you for your modification and detailed review.\n> I attach a fixed patch, have been not yet rebased.\n\nI know this patch was discussed at the Vancouver conference. What are\nthe open issues? I know of several:\n\n* add tests that were requested by Fujii-san and now posted by\n Alexander Pyhalov\n* Where is the documentation? I know the original patch had some, and\n I improved it, but it seems to be missing.\n* Passes unsafe binary data from the foreign server.\n\nCan someone show me where that last item is in the patch, and why can't\nwe just pass back values cast to text?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 4 Jun 2024 13:12:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Bruce, Robert, Tom, Alexander, hackers.\n\nApologies for the late response. \nThanks to Bruce, I had the opportunity to have individual discussions with Robert and Tom at PGConf.dev 2024.\nI'm truly grateful to Bruce, Robert, and Tom for their time. \nI've attached the presentation slides used during our discussions. \nBelow is a summary of our conversations. \nAs my English is not perfect, there may be some misunderstandings. 
\nIf so, I apologize and would appreciate any corrections or comments.\n\n[Summary]\nBasically, I will build the prototype as described in the presentation slides.\nSpecifically, the prototype has the following two limitations.\n Limit1. The server versions of the coordinator and the worker must match.\n Limit2. Only a small subset of the built-in aggregate functions is supported\n (e.g. avg, sum, count).\n However, there are many built-in aggregate functions for which import and export functions are not necessary.\n (See p.18 in the presentation slides.)\n I will support these aggregate functions.\n\nIn this prototype, the following requirements must be satisfied.\n Requirement1. Ensure compatibility even if the server settings (e.g. encodings) of the coordinator and the worker differ. (with Tom)\n Requirement2. Consider the appropriate position of the new keyword \"PARTIAL_AGGREGATE\". (with Robert)\n Existing patch: Before the target expression. Ex. avg(PARTIAL_AGGREGATE c1)\n Ideal: Before the aggregate function. Ex. PARTIAL_AGGREGATE avg(c1)\n Requirement3. Consider how to avoid making the new keyword \"PARTIAL_AGGREGATE\" a reserved word. 
(with Robert)\n In the existing patch, \"PARTIAL_AGGREGATE\" is a reserved word.\n\nIn addition to the summary, \nI will add sufficient document and comments to the next patch as Bruce and Robert said.\nAnd, I will fix the problem as Alexander pointed out last week.\n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation", "msg_date": "Wed, 5 Jun 2024 00:14:45 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Jun 5, 2024 at 12:14:45AM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> I will add sufficient document and comments to the next patch as Bruce and Robert said.\n\nGreat, I am available to help improve the documentation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 4 Jun 2024 21:35:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Bruce Momjian wrote on 2024-06-04 20:12:\n> On Mon, May 27, 2024 at 09:30:59PM +0000, \n> Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n>> Hi Mr. Pyhalov.\n>> \n>> Sorry for the late reply.\n>> Thank you for your modification and detailed review.\n>> I attach a fixed patch, have been not yet rebased.\n> \n> I know this patch was discussed at the Vancouver conference. What are\n> the open issues? I know of several:\n> \n> * add tests that were requested by Fujii-san and now posted by\n> Alexander Pyhalov\n> * Where is the documentation? 
I know the original patch had some, and\n> I improved it, but it seems to be missing.\n> * Passes unsafe binary data from the foreign server.\n> \n> Can someone show me where that last item is in the patch, and why can't\n> we just pass back values cast to text?\n\nHi.\n\nIn finalize_aggregate() when we see partial aggregate with \nperagg->aggref->aggtranstype = INTERNALOID\nwe call aggregate's serialization function and return it as bytea.\n\nThe issue is that this internal representation isn't guaranteed to be \ncompatible between servers\nof different versions (or architectures?). So, likely, we instead should \nhave called some export function for aggregate\nand later - some import function on postgres_fdw side. It doesn't matter \nmuch, what this export function\ngenerates - text, json or some portable binary format,\n1) export/import functions should just \"understand\" it,\n2) it should be a stable representation.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 05 Jun 2024 08:19:04 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Jun 5, 2024 at 08:19:04AM +0300, Alexander Pyhalov wrote:\n> > * Passes unsafe binary data from the foreign server.\n> > \n> > Can someone show me where that last item is in the patch, and why can't\n> > we just pass back values cast to text?\n> \n> In finalize_aggregate() when we see partial aggregate with\n> peragg->aggref->aggtranstype = INTERNALOID\n> we call aggregate's serialization function and return it as bytea.\n> \n> The issue is that this internal representation isn't guaranteed to be\n> compatible between servers\n> of different versions (or architectures?). So, likely, we instead should\n> have called some export function for aggregate\n> and later - some import function on postgres_fdw side. 
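For illustration only, the sort of export/import pair I mean could look like this (a Python sketch with invented names and an invented "count,sum" text format, not actual postgres_fdw code):

```python
# Sketch only: invented names and an invented text format. The point is
# that the exported form is a stable, portable text representation of
# the partial state, not raw serialfn bytes.

def export_avg_state(state):
    # state is (count, sum); emit a version-independent text form
    count, total = state
    return f"{count},{total}"

def import_avg_state(text):
    count, total = text.split(",")
    return (int(count), float(total))

def combine_avg_states(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Two remote servers each export their partial state as text,
# the local side imports and combines them, then finalizes.
s1 = export_avg_state((3, 12.0))
s2 = export_avg_state((2, 8.0))
count, total = combine_avg_states(import_avg_state(s1),
                                  import_avg_state(s2))
average = total / count
```

The exact wire format is secondary; the import side just has to understand it, and it has to stay stable across versions.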
It doesn't matter\n> much, what this export function\n> generates - text, json or some portable binary format,\n> 1) export/import functions should just \"understand\" it,\n> 2) it should be a stable representation.\n\nOkay, so looking at the serialization output functions already defined, I\nsee many zeros, which I assume means just the base data type, and eight\nmore:\n\n\tSELECT DISTINCT aggserialfn from pg_aggregate WHERE aggserialfn::oid != 0;\n\t aggserialfn\n\t---------------------------\n\t numeric_avg_serialize\n\t string_agg_serialize\n\t array_agg_array_serialize\n\t numeric_serialize\n\t int8_avg_serialize\n\t array_agg_serialize\n\t interval_avg_serialize\n\t numeric_poly_serialize\n\nI realize we need to return the sum and count for average, so that makes\nsense.\n\nSo, we need import/export text representation for the partial aggregate\nmode for these eight, and call the base data type text import/export\nfunctions for the zero ones when in this mode?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 5 Jun 2024 10:04:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Bruce.\n\nSorry for the late. 
Thank you for comment.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Sent: Wednesday, June 5, 2024 11:04 PM\n> > > * Passes unsafe binary data from the foreign server.\n> > >\n> > > Can someone show me where that last item is in the patch, and why\n> > > can't we just pass back values cast to text?\n> >\n> > In finalize_aggregate() when we see partial aggregate with\n> > peragg->aggref->aggtranstype = INTERNALOID\n> > we call aggregate's serialization function and return it as bytea.\n> >\n> > The issue is that this internal representation isn't guaranteed to be\n> > compatible between servers of different versions (or architectures?).\n> > So, likely, we instead should have called some export function for\n> > aggregate and later - some import function on postgres_fdw side. It\n> > doesn't matter much, what this export function generates - text, json\n> > or some portable binary format,\n> > 1) export/import functions should just \"understand\" it,\n> > 2) it should be a stable representation.\n> \n> Okay, so looking at the serialization output functions already defined, I see many zeros, which I assume means just the base\n> data type, and eight\n> more:\n> \n> \tSELECT DISTINCT aggserialfn from pg_aggregate WHERE aggserialfn::oid != 0;\n> \t aggserialfn\n> \t---------------------------\n> \t numeric_avg_serialize\n> \t string_agg_serialize\n> \t array_agg_array_serialize\n> \t numeric_serialize\n> \t int8_avg_serialize\n> \t array_agg_serialize\n> \t interval_avg_serialize\n> \t numeric_poly_serialize\n> \n> I realize we need to return the sum and count for average, so that makes sense.\n> \n> So, we need import/export text representation for the partial aggregate mode for these eight, and call the base data type\n> text import/export functions for the zero ones when in this mode?\n\nI think that you are basically right.\nBut, I think, in a perfect world we should also add an import/export function for the following\ntwo category.\n\n Category1. 
Validation Check is needed for safety.\r\n For example, I think a validation check is needed for avg(float4),\r\n whose transition type is not internal. (See p.18 in [1])\r\n I plan to add import functions for avg and count (See p.18, p.19 in [1]).\r\n Category2. Transition type is a pseudo data type.\r\n Aggregate functions of this category need to accept many actual data types,\r\n including user-defined types. 
So I think that it is hard to implement import/export functions.\n> Consequently, I do not plan to support these category. (See p.19 in [1])\n\nHow about instead of trying to serialize the output of\nserialfn/deserialfn, instead we don't use the \"internal\" type and\ncreate actual types in pg_type for these transtypes? Then we can\nsimply use the in/out and recv/send functions of those types to\nserialize the values of the partial aggregate over the network.\nInstead of having to rely on serialfn/deserialfn to be network-safe\n(which they probably aren't).\n\nThat indeed still leaves the pseudo types. Since non of those\npseudotypes have a working in/recv function (they always error by\ndefinition), I agree that we can simply not support those.\n\nBasically that would mean that any aggregate with a non-internal and\nnon-pseudotype as a transtype could be used in this multi-node partial\naggregate pushdown.\n\n\n", "msg_date": "Tue, 11 Jun 2024 15:59:36 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Jelte,\r\n\r\nThank you for your comment!\r\n> From: Jelte Fennema-Nio <postgres@jeltef.nl>\r\n> Sent: Tuesday, June 11, 2024 11:00 PM\r\n> How about instead of trying to serialize the output of serialfn/deserialfn, instead we don't use the \"internal\" type and\r\n> create actual types in pg_type for these transtypes? Then we can simply use the in/out and recv/send functions of those\r\n> types to serialize the values of the partial aggregate over the network.\r\n> Instead of having to rely on serialfn/deserialfn to be network-safe (which they probably aren't).\r\n> \r\n> That indeed still leaves the pseudo types. 
Since non of those pseudotypes have a working in/recv function (they always\r\n> error by definition), I agree that we can simply not support those.\r\n> \r\n> Basically that would mean that any aggregate with a non-internal and non-pseudotype as a transtype could be used in this\r\n> multi-node partial aggregate pushdown.\r\nCould you please clarify what you mean?\r\nAre you referring to:\r\n Option 1: Modifying existing aggregate functions to minimize the use of internal state values.\r\n Option 2: Not supporting the push down of partial aggregates for functions with internal state values.\r\n Option 3: Something other than Option 1 and Option 2.\r\n\r\nThere are many aggregate functions with internal state values, so if we go with Option 1,\r\nwe might need to change a lot of existing code, like transition functions and finalize functions.\r\nAlso, I'm not sure how many existing aggregate functions can be modified this way.\r\n\r\nThere are also many popular functions with internal state values,\r\nlike sum(int8) and avg(int8)(see [1]), so I don't think Option2 would be acceptable.\r\n\r\nBest regards, Yuki Fujii\r\n\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n\r\n[1] https://github.com/postgres/postgres/blob/REL_16_STABLE/src/include/catalog/pg_aggregate.dat\r\n", "msg_date": "Wed, 12 Jun 2024 05:27:02 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Wed, 12 Jun 2024 at 07:27, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> Could you please clarify what you mean?\n> Are you referring to:\n> Option 1: Modifying existing aggregate functions to minimize the use of internal state values.\n> Option 2: Not supporting the push down of partial aggregates for functions with internal state values.\n\nBasically I mean both 
Option 1 and Option 2 together. i.e. once we do\noption 1, supporting partial aggregate pushdown for all important\naggregates with internal state values, then supporting pushdown of\ninternal state values becomes unnecessary.\n\n> There are many aggregate functions with internal state values, so if we go with Option 1,\n> we might need to change a lot of existing code, like transition functions and finalize functions.\n> Also, I'm not sure how many existing aggregate functions can be modified this way.\n\nThere are indeed 57 aggregate functions with internal state values:\n\n> SELECT count(*) from pg_aggregate where aggtranstype = 'internal'::regtype;\n count\n───────\n 57\n(1 row)\n\nBut there are only 26 different aggtransfns. And the most used 8 of\nthose cover 39 of those 57 aggregates.\n\n> SELECT\n sum (count) OVER(ORDER BY count desc, aggtransfn ROWS BETWEEN\nunbounded preceding and current row) AS cumulative_count\n , *\nFROM (\n SELECT\n count(*),\n aggtransfn\n from pg_aggregate\n where aggtranstype = 'internal'::regtype\n group by aggtransfn\n order by count(*) desc, aggtransfn\n);\n cumulative_count │ count │ aggtransfn\n──────────────────┼───────┼────────────────────────────────────────\n 7 │ 7 │ ordered_set_transition\n 13 │ 6 │ numeric_accum\n 19 │ 6 │ int2_accum\n 25 │ 6 │ int4_accum\n 31 │ 6 │ int8_accum\n 35 │ 4 │ ordered_set_transition_multi\n 37 │ 2 │ int8_avg_accum\n 39 │ 2 │ numeric_avg_accum\n 40 │ 1 │ array_agg_transfn\n 41 │ 1 │ json_agg_transfn\n 42 │ 1 │ json_object_agg_transfn\n 43 │ 1 │ jsonb_agg_transfn\n 44 │ 1 │ jsonb_object_agg_transfn\n 45 │ 1 │ string_agg_transfn\n 46 │ 1 │ bytea_string_agg_transfn\n 47 │ 1 │ array_agg_array_transfn\n 48 │ 1 │ range_agg_transfn\n 49 │ 1 │ multirange_agg_transfn\n 50 │ 1 │ json_agg_strict_transfn\n 51 │ 1 │ json_object_agg_strict_transfn\n 52 │ 1 │ json_object_agg_unique_transfn\n 53 │ 1 │ json_object_agg_unique_strict_transfn\n 54 │ 1 │ jsonb_agg_strict_transfn\n 55 │ 1 │ 
jsonb_object_agg_strict_transfn\n 56 │ 1 │ jsonb_object_agg_unique_transfn\n 57 │ 1 │ jsonb_object_agg_unique_strict_transfn\n(26 rows)\n\n\nAnd actually most of those don't have a serialfn, so they wouldn't be\nsupported by your suggested approach either. Looking at the\ndistribution of aggserialfns instead we see the following:\n\n> SELECT\n sum (count) OVER(ORDER BY count desc, aggserialfn ROWS BETWEEN\nunbounded preceding and current row) AS cumulative_count\n , *\nFROM (\n SELECT\n count(*),\n aggserialfn\n from pg_aggregate\n where\n aggtranstype = 'internal'::regtype\n AND aggserialfn != 0\n group by aggserialfn\n order by count(*) desc, aggserialfn\n);\n cumulative_count │ count │ aggserialfn\n──────────────────┼───────┼───────────────────────────\n 12 │ 12 │ numeric_serialize\n 24 │ 12 │ numeric_poly_serialize\n 26 │ 2 │ numeric_avg_serialize\n 28 │ 2 │ int8_avg_serialize\n 30 │ 2 │ string_agg_serialize\n 31 │ 1 │ array_agg_serialize\n 32 │ 1 │ array_agg_array_serialize\n(7 rows)\n\nSo there are only 7 aggserialfns, and thus at most 7 new postgres\ntypes that you would need to create to support the same aggregates as\nin your current proposal. But looking at the implementations of these\nserialize functions even that is an over-estimation: numeric_serialize\nand numeric_avg_serialize both serialize a NumericAggState, and\nnumeric_poly_serialize and int8_avg_serialize both serialize a\nPolyNumAggState. So probably a we could even do with only 5 types. And\nto be clear: only converting PolyNumAggState and NumericAggState to\nactual postgres types would already cover 28 out of the 32 aggregates.\nThat seems quite feasible to do.\n\nSo I agree it's probably more code than your current approach. At the\nvery least because you would need to implement in/out text\nserialization functions for these internal types that currently don't\nhave them. But I do think it would be quite a feasible amount. 
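To show the shape of what such in/out functions would have to do, here is a rough sketch (Python, with field names loosely modeled on NumericAggState and an invented text format; nothing here is actual PostgreSQL code):

```python
# Invented stand-in for a NumericAggState-like transition state with
# text input/output functions, so a partial aggregate value can travel
# over the wire like any other typed value.

class NumericAvgState:
    def __init__(self, n=0, sum_x=0.0):
        self.n = n          # number of accumulated rows
        self.sum_x = sum_x  # running sum

    def out(self):
        # analogous to a type output function: stable text form
        return f"(n={self.n},sumX={self.sum_x})"

    @classmethod
    def input(cls, text):
        # analogous to a type input function: parse the text form back
        fields = dict(kv.split("=") for kv in text.strip("()").split(","))
        return cls(int(fields["n"]), float(fields["sumX"]))

state = NumericAvgState(4, 10.0)
roundtrip = NumericAvgState.input(state.out())
```

Once a state round-trips through text like this, cross-version pushdown can use the in/out pair and same-version pushdown can additionally use a binary recv/send pair.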
And to\nclarify, I see a few benefits of using the approach that I'm\nproposing:\n\n1. So far aggserialfn and aggdeserialfn haven't been required to be\nnetwork safe at all. In theory extensions might reference shared\nmemory pointers with in them, that are valid for serialization within\nthe same postgres process tree but not outside of it. Or they might\nserialize to bytes in a way that does not work across different\nbigendian/littleendian systems, thus causing wrong aggregation\nresults. Never sending results of serialfn over the network solves\nthat issue.\n2. Partial aggregate pushdown across different postgres version could\nbe made to work by using the in/out functions instead of receive/send\nfunctions, to use the text based serialization format (which should be\nstable across versions)\n3. It seems nice to be able to get the text representation of all\nPARTIAL_AGGREGATE output for debugging purposes. With your approach I\nthink what currently happens is that it will show a bytea for when\nusing PARTIAL_AGGREGATE for avg(bigint) directly from psql.\n4. In my experience it's easier to get patches merged if they don't\nchange a lot at once and are useful by themselves. This way you could\nsplit your current patch up into multiple smaller patches, each of\nwhich could be merged separately (appart from b relying on a).\na. Introduce PARTIAL_AGGREGATE syntax for non-internal & non-pseudo types\nb. Start using PARTIAL_AGGREGATE for FDW pushdown\nc. Convert NumericAggState to non-internal\nd. Convert PolyNumAggState to non-internal\ne. Use non-internal for string_agg_serialize aggregates\nf. Use non-internal for array_agg_serialize\ng. Use non-internal for array_agg_array_serialize\n\n\nP.S. The problem described in benefit 1 could also be solved in your\napproach by adding a boolean opt in flag to CREATE AGGREGATE. 
e.g.\nCREATE AGGREGATE ( ..., NETWORKSAFESERIALFUNCS = true)\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:02:17 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Jelte, hackers.\r\n\r\nSorry for the late response.\r\nThank you for detailed and useful comments.\r\n\r\nOn Wed, Jun 12, 2024 at 5:02 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\r\n>\r\n> On Wed, 12 Jun 2024 at 07:27, Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > Could you please clarify what you mean?\r\n> > Are you referring to:\r\n> > Option 1: Modifying existing aggregate functions to minimize the use of internal state values.\r\n> > Option 2: Not supporting the push down of partial aggregates for functions with internal state values.\r\n>\r\n> Basically I mean both Option 1 and Option 2 together. i.e. once we do\r\n> option 1, supporting partial aggregate pushdown for all important\r\n> aggregates with internal state values, then supporting pushdown of\r\n> internal state values becomes unnecessary.\r\nUnderstood.\r\nHowever, there are points that I agree with and others that I don't agree with.\r\n\r\nI will show my opinion using one aggregate function avg(int8), whose transtype is internal.\r\n\r\nI agree that, in general, any remote server should transmit the state value to the local server using a format whose data type is a native data type and is not serialized, whenever possible. I call this format standard format along with [1].\r\nFor avg(int8), I think that it is rational that any remote server transmit the state value to the local server using the format whose data type is _numeric of count and sum.\r\nBefore your advice, I plan to use the standard format whose data type is text, like \"count=5 sum=13.3\". 
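To illustrate the _numeric-based standard format described above, here is a small sketch (Python, purely illustrative; the element order [count, sum] is only my assumption for the example, not patch code):

```python
from fractions import Fraction

# Each remote server returns its avg(int8) partial state as a pair of
# exact numerics [count, sum]; the local server combines element-wise
# and finalizes. Fraction stands in for the numeric type here.

def remote_partial_avg(values):
    return [Fraction(len(values)), Fraction(sum(values))]

def local_combine_and_finalize(partials):
    count = sum(p[0] for p in partials)
    total = sum(p[1] for p in partials)
    return total / count

p1 = remote_partial_avg([1, 2, 3])    # partial state [3, 6]
p2 = remote_partial_avg([10, 20])     # partial state [2, 30]
result = local_combine_and_finalize([p1, p2])
```

Because both elements are exact numerics, no precision is lost in transit, unlike a float-based text format.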
But now, I think that it is rational to use _numeric.\r\nI attached the POC patch(the second one) which supports avg(int8) whose standard format is _numeric type.\r\n\r\nHowever, I do not agree that I modify the internal transtype to the native data type. The reasons are the following three.\r\n1. Generality\r\nI believe we should develop a method that can theoretically apply to any aggregate function, even if we cannot implement it immediately. However, I find it exceptionally challenging to demonstrate that any internal transtype can be universally converted to a native data type for aggregate functions that are parallel-safe under the current parallel query feature. Specifically, proving this for user-defined aggregate functions presents a significant difficulty in my view. \r\nOn the other hand, I think that the usage of export and import functions can theoretically apply to any aggregate functions.\r\n\r\n2. Amount of codes.\r\nIt could need more codes.\r\n\r\n3. Concern about performance\r\nI'm concerned that altering the current internal data types could impact performance.\r\n\r\n> So I agree it's probably more code than your current approach. At the\r\n> very least because you would need to implement in/out text\r\n> serialization functions for these internal types that currently don't\r\n> have them. But I do think it would be quite a feasible amount. And to\r\n> clarify, I see a few benefits of using the approach that I'm\r\n> proposing:\r\n>\r\n> 1. So far aggserialfn and aggdeserialfn haven't been required to be\r\n> network safe at all. In theory extensions might reference shared\r\n> memory pointers with in them, that are valid for serialization within\r\n> the same postgres process tree but not outside of it. Or they might\r\n> serialize to bytes in a way that does not work across different\r\n> bigendian/littleendian systems, thus causing wrong aggregation\r\n> results. 
Never sending results of serialfn over the network solves\r\n> that issue.\r\nI know. In my proposal, the standard format is not seriarized data by serialfn, instead, is text or other native data type.\r\nJust to clarify, I'm writing this to avoid any potential misunderstanding.\r\n\r\n> 2. Partial aggregate pushdown across different postgres version could\r\n> be made to work by using the in/out functions instead of receive/send\r\n> functions, to use the text based serialization format (which should be\r\n> stable across versions)\r\nThank you for your advice. I agree that, as mentioned earlier, standard formats should generally use native data types whenever possible.\r\n\r\n> 3. It seems nice to be able to get the text representation of all\r\n> PARTIAL_AGGREGATE output for debugging purposes. With your approach I\r\n> think what currently happens is that it will show a bytea for when\r\n> using PARTIAL_AGGREGATE for avg(bigint) directly from psql.\r\nI may have caused some misunderstanding.\r\nTo clarify and prevent any potential misunderstanding: In my proposal, the standard format does not involve serialized data by serialfn; rather, it utilizes text or other native data types.\r\n\r\n> 4. In my experience it's easier to get patches merged if they don't\r\n> change a lot at once and are useful by themselves. This way you could\r\n> split your current patch up into multiple smaller patches, each of\r\n> which could be merged separately (appart from b relying on a).\r\n> a. Introduce PARTIAL_AGGREGATE syntax for non-internal & non-pseudo types\r\n> b. Start using PARTIAL_AGGREGATE for FDW pushdown\r\n> c. Convert NumericAggState to non-internal\r\n> d. Convert PolyNumAggState to non-internal\r\n> e. Use non-internal for string_agg_serialize aggregates\r\n> f. Use non-internal for array_agg_serialize\r\n> g. Use non-internal for array_agg_array_serialize\r\nI understand. 
Thank you for advice.\r\nBasically responding to your advice,\r\nfor now, I prepare two POC patches.\r\nThe first supports case a, currently covering only avg(int4) and other aggregate functions that do not require import or export functions, such as min, max, and count.\r\nThe second supports case b and commonly used functions like sum and avg. Currently, it only includes avg(int8).\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n\r\n[1] https://www.postgresql.org/message-id/attachment/160659/PGConfDev2024_Presentation_Aggregation_Scaleout_FDW_Sharding_20240531.pdf", "msg_date": "Sun, 23 Jun 2024 08:23:32 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Alexander, hackers.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Tuesday, May 28, 2024 2:45 PM\r\n> The fix was to set child_agg->agg_partial to orig_agg->agg_partial in\r\n> convert_combining_aggrefs(), it's already in the patch, as well as the example -\r\n> without this fix\r\nI've just realized that you've added the necessary tests. I forgot to respond, my apologies.\r\n\r\n> From: Alexander Pyhalov <a.pyhalov@postgrespro.ru>\r\n> Sent: Tuesday, May 28, 2024 2:58 PM\r\n> BTW, there's I have an issue with test results in the last version of the patch.\r\n> Attaching regression diffs.\r\n> I have partial sum over c_interval instead of sum(c_interval).\r\nI think the difference stems from the commit[1], which add serialization function\r\nto sum(interval). 
I will fix it.\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n\r\n[1] https://github.com/postgres/postgres/commit/519fc1bd9e9d7b408903e44f55f83f6db30742b7\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sun, 23 Jun 2024 08:25:42 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Sun, 23 Jun 2024 at 10:24, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> I attached the POC patch(the second one) which supports avg(int8) whose standard format is _numeric type.\n\nOkay, very interesting. So instead of defining the\nserialization/deserialization functions to text/binary, you serialize\nthe internal type to an existing non-internal type, which then in turn\ngets serialized to text. In the specific case of avg(int8) this is\ndone to an array of numeric (with length 2).\n\n> However, I do not agree that I modify the internal transtype to the native data type. The reasons are the following three.\n> 1. Generality\n> I believe we should develop a method that can theoretically apply to any aggregate function, even if we cannot implement it immediately. However, I find it exceptionally challenging to demonstrate that any internal transtype can be universally converted to a native data type for aggregate functions that are parallel-safe under the current parallel query feature. Specifically, proving this for user-defined aggregate functions presents a significant difficulty in my view.\n> On the other hand, I think that the usage of export and import functions can theoretically apply to any aggregate functions.\n\nThe only thing required when doing CREATE TYPE is having an INPUT and\nOUTPUT function for the type, which (de)serialize the type to text\nformat. 
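For example, roughly the following shell-type pattern is what I have in mind (all names are hypothetical, and the C-level input/output functions would still have to be written):

```sql
-- Hypothetical: give the internal transition state a named type.
CREATE TYPE state_poly_num_agg;         -- shell type first

CREATE FUNCTION state_poly_num_agg_in(cstring)
    RETURNS state_poly_num_agg
    AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION state_poly_num_agg_out(state_poly_num_agg)
    RETURNS cstring
    AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE state_poly_num_agg (
    INPUT          = state_poly_num_agg_in,
    OUTPUT         = state_poly_num_agg_out,
    INTERNALLENGTH = VARIABLE
);
```

The aggregate definitions would then reference state_poly_num_agg as their transition type instead of internal.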
As far as I can tell by definition that requirement should be\nfine for any aggregates that we can do partial aggregation pushdown\nfor. To clarify I'm not suggesting we should change any of the\ninternal representation of the type for the current internal\naggregates. I'm suggesting we create new native types (i.e. do CREATE\nTYPE) for those internal representations and then use the name of that\ntype instead of internal.\n\n> 2. Amount of codes.\n> It could need more codes.\n\nI think it would be about the same as your proposal. Instead of\nserializing to an intermediary existing type you serialize to string\nstraight away. I think it would probably be slightly less code\nactually, since you don't have to add code to handle the new\naggpartialexportfn and aggpartialimportfn columns.\n\n> 3. Concern about performance\n> I'm concerned that altering the current internal data types could impact performance.\n\nAs explained above in my proposal all the aggregation code would\nremain unchanged, only new native types will be added. Thus\nperformance won't be affected, because all aggregation code will be\nthe same. The only thing that's changed is that the internal type now\nhas a name and an INPUT and OUTPUT function.\n\n> I know. In my proposal, the standard format is not seriarized data by serialfn, instead, is text or other native data type.\n> Just to clarify, I'm writing this to avoid any potential misunderstanding.\n\nAh alright, that definitely clarifies the proposal. I was looking at\nthe latest patch file on the thread and that one was still using\nserialfn. Your new one indeed doesn't, so this is fine.\n\n> Basically responding to your advice,\n> for now, I prepare two POC patches.\n\nGreat! 
I definitely think this makes the review/discussion easier.\n\n> The first supports case a, currently covering only avg(int4) and other aggregate functions that do not require import or export functions, such as min, max, and count.\n\nNot a full review but some initial notes:\n\n1. Why does this patch introduce aggpartialpushdownsafe? I'd have\nexpected that any type with a non-pseudo/internal type as aggtranstype\nwould be safe to partially push down.\n2. It seems the int4_avg_import function shouldn't be part of this\npatch (but maybe of a future one).\n3. I think splitting this patch in two pieces would make it even\neasier to review: First adding support for the new PARTIAL_AGGREGATE\nkeyword (adds the new feature). Second, using PARTIAL_AGGREGATE in\npostgres_fdw (starts using the new feature). Citus would only need the\nfirst patch not the second one, so I think the PARTIAL_AGGREGATE\nfeature has merit to be added on its own, even without the\npostgres_fdw usage.\n4. Related to 3, I think it would be good to have some tests of\nPARTIAL_AGGREGATE that don't involve postgres_fdw at all. I also\nspotted some comments too that mention FDW, even though they apply to\nthe \"pure\" PARTIAL_AGGREGATE code.\n5. This comment now seems incorrect:\n- * Apply the agg's finalfn if one is provided, else return transValue.\n+ * If the agg's finalfn is provided and PARTIAL_AGGREGATE keyword is\n+ * not specified, apply the agg's finalfn.\n+ * If PARTIAL_AGGREGATE keyword is specified and the transValue type\n+ * is internal, apply the agg's serialfn. In this case the agg's\n+ * serialfn must not be invalid. Otherwise return transValue.\n\n6. 
These errors are not on purpose afaict (if they are a comment in\nthe test would be good to explain why)\n\n+SELECT b, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b ORDER BY 1;\n+ERROR: could not connect to server \"loopback\"\n+DETAIL: invalid connection option \"partial_aggregate_support\"\n\n\n> The second supports case b and commonly used functions like sum and avg. Currently, it only includes avg(int8).\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:08:50 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi. Jelte, hackers.\r\n\r\nThank you for your proposal and comments.\r\n\r\n> From: Jelte Fennema-Nio <postgres@jeltef.nl>\r\n> Sent: Monday, June 24, 2024 6:09 PM\r\n> > 1. Generality\r\n> > I believe we should develop a method that can theoretically apply to any\r\n> aggregate function, even if we cannot implement it immediately. However, I find\r\n> it exceptionally challenging to demonstrate that any internal transtype can be\r\n> universally converted to a native data type for aggregate functions that are\r\n> parallel-safe under the current parallel query feature. Specifically, proving this\r\n> for user-defined aggregate functions presents a significant difficulty in my\r\n> view.\r\n> > On the other hand, I think that the usage of export and import functions can\r\n> theoretically apply to any aggregate functions.\r\n> \r\n> The only thing required when doing CREATE TYPE is having an INPUT and\r\n> OUTPUT function for the type, which (de)serialize the type to text format. As\r\n> far as I can tell by definition that requirement should be fine for any aggregates\r\n> that we can do partial aggregation pushdown for. To clarify I'm not suggesting\r\n> we should change any of the internal representation of the type for the current\r\n> internal aggregates. I'm suggesting we create new native types (i.e. 
do CREATE\r\n> TYPE) for those internal representations and then use the name of that type\r\n> instead of internal.\r\nI see. I maybe got your proposal.\r\nRefer to your proposal, for avg(int8), \r\nI create a new native type like state_int8_avg\r\nwith the new typsend/typreceive functions\r\nand use them to transmit the state value, right?\r\n\r\nThat might seem to be a more fundamental solution\r\nbecause I can avoid adding export/import functions of my proposal,\r\nwhich are the new components of aggregate function.\r\nI have never considered the solution.\r\nI appreciate your proposal.\r\n\r\nHowever, I still have the following two questions.\r\n\r\n1. Not necessary components of new native types\r\nRefer to pg_type.dat, typinput and typoutput are required.\r\nI think that in your proposal they are not necessary,\r\nso waste. I think that it is not acceptable.\r\nHow can I resolve the problem? \r\n\r\n2. Many new native types\r\nI think that, basically, each aggregate function does need a new native type.\r\nFor example,\r\navg(int8), avg(numeric), and var_pop(int4) has the same transtype, PolyNumAggState.\r\nYou said that it is enough to add only one native type like state_poly_num_agg\r\nfor supporting them, right?\r\n\r\nBut the combine functions of them might have quite different expectation\r\non the data items of PolyNumAggState like\r\nthe range of N(means count) and the true/false of calcSumX2\r\n(the flag of calculating sum of squares).\r\nThe final functions of them have the similar expectation.\r\nSo, I think that, responded to your proposal,\r\neach of them need a native data type\r\nlike state_int8_avg, state_numeric_avg, for safety.\r\n\r\nAnd, we do need a native type for an aggregate function\r\nwhose transtype is not internal and not pseudo.\r\nFor avg(int4), the current transtype is _int8.\r\nHowever, I do need a validation check on the number of the array\r\nAnd the positiveness of count(the first element of the array).\r\nResponded to your 
proposal,\r\nI do need a new native type like state_int4_avg.\r\nConsequently, I think that, in response to your proposal, each\r\naggregate function would ultimately need a new native type\r\nwith typinput and typoutput.\r\nThat seems to need about the same amount of code and more catalog data,\r\nright?\r\n\r\n> > 2. Amount of codes.\r\n> > It could need more codes.\r\n> \r\n> I think it would be about the same as your proposal. Instead of serializing to an\r\n> intermediary existing type you serialize to string straight away. I think it would\r\n> probably be slightly less code actually, since you don't have to add code to\r\n> handle the new aggpartialexportfn and aggpartialimportfn columns.\r\n> \r\n> > 3. Concern about performance\r\n> > I'm concerned that altering the current internal data types could impact\r\n> performance.\r\n> \r\n> As explained above in my proposal all the aggregation code would remain\r\n> unchanged, only new native types will be added. Thus performance won't be\r\n> affected, because all aggregation code will be the same. The only thing that's\r\n> changed is that the internal type now has a name and an INPUT and OUTPUT\r\n> function.\r\nI got it. Thank you.\r\n\r\n> Not a full review but some initial notes:\r\nThank you. I don't have time today, so I'll answer after tomorrow.\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n", "msg_date": "Mon, 24 Jun 2024 13:03:04 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, 24 Jun 2024 at 15:03, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> I see. 
I maybe got your proposal.\n> Refer to your proposal, for avg(int8),\n> I create a new native type like state_int8_avg\n> with the new typsend/typreceive functions\n> and use them to transmit the state value, right?\n\nYes, that's roughly what I had in mind indeed.\n\n> That might seem to be a more fundamental solution\n> because I can avoid adding export/import functions of my proposal,\n> which are the new components of aggregate function.\n> I have never considered the solution.\n> I appreciate your proposal.\n\nThank you :)\n\n> However, I still have the following two questions.\n>\n> 1. Not necessary components of new native types\n> Refer to pg_type.dat, typinput and typoutput are required.\n> I think that in your proposal they are not necessary,\n> so waste. I think that it is not acceptable.\n> How can I resolve the problem?\n\nI think requiring typinput/typoutput is a benefit personally, because\nthat makes it possible to do PARTIAL_AGGREGATE pushdown to a different\nPG major version. Also it makes it easier to debug the partial\naggregate values when using psql/pgregress. So yes, it requires\nimplementing both binary (send/receive) and text (input/output)\nserialization, but it also has some benefits. And in theory you might\nbe able to skip implementing the binary serialization, and rely purely\non the text serialization to send partial aggregates between servers.\n\n> 2. Many new native types\n> I think that, basically, each aggregate function does need a new native type.\n> For example,\n> avg(int8), avg(numeric), and var_pop(int4) has the same transtype, PolyNumAggState.\n> You said that it is enough to add only one native type like state_poly_num_agg\n> for supporting them, right?\n\nYes, correct. 
That's what I had in mind.\n\n> But the combine functions of them might have quite different expectation\n> on the data items of PolyNumAggState like\n> the range of N(means count) and the true/false of calcSumX2\n> (the flag of calculating sum of squares).\n> The final functions of them have the similar expectation.\n> So, I think that, responded to your proposal,\n> each of them need a native data type\n> like state_int8_avg, state_numeric_avg, for safety.\n>\n> And, we do need a native type for an aggregate function\n> whose transtype is not internal and not pseudo.\n> For avg(int4), the current transtype is _int8.\n> However, I do need a validation check on the number of the array\n> And the positiveness of count(the first element of the array).\n> Responded to your proposal,\n> I do need a new native type like state_int4_avg.\n\nTo help avoid confusion let me try to restate what I think you mean\nhere: You're worried about someone passing in a bogus native type into\nthe final/combine functions and then getting crashes and/or wrong\nresults. With internal type people cannot do this because they cannot\nmanually call the combinefunc/finalfunc because the argument type is\ninternal. To solve this problem your suggestion is to make the type\nspecific to the specific aggregate such that send/receive or\ninput/output can validate the input as reasonable. But this would then\nmean that we have many native types (and also many\ndeserialize/serialize functions).\n\nAssuming that's indeed what you meant, that's an interesting thought,\nI didn't consider that much indeed. My thinking was that we only need\nto implement send/receive & input/output functions for these types,\nand even though their meaning is very different we can serialize them\nin the same way.\n\nAs you say though, something like that is already true for avg(int4)\ntoday. 
The way avg(int4) handles this issue is by doing some input\nvalidation for every call to its trans/final/combinefunc (see\nint4_avg_accum, int4_avg_combine, and int8_avg). It checks the length\nof the array there, but it doesn't check the positiveness of the\ncount. I think that makes sense. IMHO these functions only need to\nprotect against crashes (e.g. null pointer dereferences). But I don't\nthink there is a good reason for them to protect the user against\npassing in weird data. These functions aren't really meant to be\ncalled manually in the first place anyway, so if the user does that\nand they pass in weird data then I'm fine with them getting a weird\nresult back, even errors are fine (only crashes are not).\n\nSo as long as our input/output & send/receive functions for\nstate_poly_num_agg handle all the inconsistencies that could cause\ncrashes later on (which I think is pretty simple to do for\nPolyNumAggState), then I don't think we need state_int8_avg,\nstate_numeric_avg, etc.\n\n> > Not a full review but some initial notes:\n> Thank you. 
I don't have time today, so I'll answer after tomorrow.\n\nSure, no rush.\n\n\n", "msg_date": "Mon, 24 Jun 2024 22:49:13 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Jelte, hackers.\r\n\r\nThank you for explanations.\r\n\r\nActually, I have other tasks about \"PARTIAL_AGGREAGATE\" keyword\r\nto respond Requirement1 and Requirement2 in the following mail.\r\nhttps://www.postgresql.org/message-id/TYAPR01MB3088755F2281D41F5EEF06D495F92%40TYAPR01MB3088.jpnprd01.prod.outlook.com\r\n\r\nAfter that tasks, I plan to compare your proposal with mine seriously, with additional POC patches if necessary.\r\n\r\nI think that your proposal might seem to be a more fundamental solution.\r\nHowever, to be honest, so far, I don't perfectly get the benefits and impacts by stopping usage of internal types\r\ninstead using a native types, especially on handling memory contexts of existing deserialization functions and\r\non the amount of codes to be modified or added.\r\nThe followings are the answers with the knowledge I have right now.\r\n\r\n> From: Jelte Fennema-Nio <postgres@jeltef.nl>\r\n> Sent: Tuesday, June 25, 2024 5:49 AM\r\n> > However, I still have the following two questions.\r\n> >\r\n> > 1. Not necessary components of new native types Refer to pg_type.dat,\r\n> > typinput and typoutput are required.\r\n> > I think that in your proposal they are not necessary, so waste. I\r\n> > think that it is not acceptable.\r\n> > How can I resolve the problem?\r\n> \r\n> I think requiring typinput/typoutput is a benefit personally, because that makes it possible to do PARTIAL_AGGREGATE\r\n> pushdown to a different PG major version. Also it makes it easier to debug the partial aggregate values when using\r\n> psql/pgregress. So yes, it requires implementing both binary (send/receive) and text (input/output) serialization, but it also\r\n> has some benefits. 
And in theory you might be able to skip implementing the binary serialization, and rely purely on the text\r\n> serialization to send partial aggregates between servers.\r\nI see. It seems that adding new natives might make it easier to transmit the state values between local and remote have different major versions.\r\nHowever, in my opinion, we should be careful to support the case in which local and remote have different major versions,\r\nbecause the transtype of an aggregate function would may change in future major version due to\r\nsomething related to the implementation.\r\nActually, something like that occurs recently, see\r\nhttps://github.com/postgres/postgres/commit/519fc1bd9e9d7b408903e44f55f83f6db30742b7\r\nI think the transtype of an aggregate function quite more changeable than retype.\r\nConsequently, so far, I want to support the cases in which local and remote have the same major version.\r\nIf we try to resolve the limitation, it seems to need more additional codes.\r\n\r\nAnd, I'm afraid that adding typinput/typoutput bothers the developers.\r\nThey also have to create a new native types in addition to create their new aggregate functions.\r\nI wonder if this concern might outweigh the benefits for debugging.\r\nAnd, if skipping send/receive, they have to select only the text representation on\r\nthe transmission of the state value. I think it is narrow.\r\n\r\n> > 2. Many new native types\r\n> > I think that, basically, each aggregate function does need a new native type.\r\n> > For example,\r\n> > avg(int8), avg(numeric), and var_pop(int4) has the same transtype, PolyNumAggState.\r\n> > You said that it is enough to add only one native type like\r\n> > state_poly_num_agg for supporting them, right?\r\n> \r\n> Yes, correct. 
That's what I had in mind.\r\n> \r\n> > But the combine functions of them might have quite different\r\n> > expectation on the data items of PolyNumAggState like the range of\r\n> > N(means count) and the true/false of calcSumX2 (the flag of\r\n> > calculating sum of squares).\r\n> > The final functions of them have the similar expectation.\r\n> > So, I think that, responded to your proposal, each of them need a\r\n> > native data type like state_int8_avg, state_numeric_avg, for safety.\r\n> >\r\n> > And, we do need a native type for an aggregate function whose\r\n> > transtype is not internal and not pseudo.\r\n> > For avg(int4), the current transtype is _int8.\r\n> > However, I do need a validation check on the number of the array And\r\n> > the positiveness of count(the first element of the array).\r\n> > Responded to your proposal,\r\n> > I do need a new native type like state_int4_avg.\r\n> \r\n> To help avoid confusion let me try to restate what I think you mean\r\n> here: You're worried about someone passing in a bogus native type into the final/combine functions and then getting\r\n> crashes and/or wrong results. With internal type people cannot do this because they cannot manually call the\r\n> combinefunc/finalfunc because the argument type is internal. To solve this problem your suggestion is to make the type\r\n> specific to the specific aggregate such that send/receive or input/output can validate the input as reasonable. But this\r\n> would then mean that we have many native types (and also many deserialize/serialize functions).\r\nYes, right.\r\n\r\n> Assuming that's indeed what you meant, that's an interesting thought, I didn't consider that much indeed. My thinking was\r\n> that we only need to implement send/receive & input/output functions for these types, and even though their meaning is\r\n> very different we can serialize them in the same way.\r\n> \r\n> As you say though, something like that is already true for avg(int4) today. 
The way avg(int4) handles this issue is by doing\r\n> some input validation for every call to its trans/final/combinefunc (see int4_avg_accum, int4_avg_combine, and int8_avg).\r\n> It checks the length of the array there, but it doesn't check the positiveness of the count. I think that makes sense. IMHO\r\n> these functions only need to protect against crashes (e.g. null pointer dereferences). But I don't think there is a good reason\r\n> for them to protect the user against passing in weird data. These functions aren't really meant to be called manually in the\r\n> first place anyway, so if the user does that and they pass in weird data then I'm fine with them getting a weird result back,\r\n> even errors are fine (only crashes are not).\r\n> \r\n> So as long as our input/output & send/receive functions for state_poly_num_agg handle all the inconsistencies that could\r\n> cause crashes later on (which I think is pretty simple to do for PolyNumAggState), then I don't think we need state_int8_avg,\r\n> state_numeric_avg, etc.\r\nI see. 
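For instance, crash-focused validation of a deserialized state could look roughly like this self-contained sketch (SketchPolyNumAggState and sketch_state_is_safe are made-up stand-ins for illustration, not the actual PostgreSQL struct or API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for PolyNumAggState; the real struct lives in
 * src/backend/utils/adt/numeric.c and uses numeric accumulators. */
typedef struct SketchPolyNumAggState
{
    int64_t N;          /* number of accumulated input values */
    bool    calcSumX2;  /* are sums of squares being tracked? */
    double  sumX;       /* simplified: doubles instead of numerics */
    double  sumX2;
} SketchPolyNumAggState;

/*
 * Reject only states that are structurally unusable: a NULL pointer, or
 * a negative count that no valid serialization could produce. Odd but
 * structurally safe values are deliberately allowed through, matching
 * the "protect against crashes, not against weird data" principle.
 */
static bool
sketch_state_is_safe(const SketchPolyNumAggState *state)
{
    if (state == NULL)
        return false;
    if (state->N < 0)
        return false;
    return true;
}
```

A real receive function would mainly have to check that the message length matches what its header fields imply, since out-of-bounds reads are where actual crashes would come from.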
Certainly, it might be sufficient that receive functions perform validation checks to avoid crashes,\r\nand combine functions and final functions are responsible for avoiding crashes\r\nwithin the range of values of the data items of the native type.\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n", "msg_date": "Tue, 25 Jun 2024 06:33:07 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Tue, 25 Jun 2024 at 08:33, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> Actually, I have other tasks about \"PARTIAL_AGGREAGATE\" keyword\n> to respond Requirement1 and Requirement2 in the following mail.\n> https://www.postgresql.org/message-id/TYAPR01MB3088755F2281D41F5EEF06D495F92%40TYAPR01MB3088.jpnprd01.prod.outlook.com\n\nNo problem. I totally think it makes sense to focus on basic\nPARTIAL_AGGREGATE first. Which is also why I suggested splitting the\npatchset up in multiple patches. That way it's easier to get everyone\naligned on PARTIAL_AGGREGATE behaviour for non-internal transtypes,\nwhich would already be a huge improvement over the current situation\nin my opinion.\n\n> After that tasks, I plan to compare your proposal with mine seriously, with additional POC patches if necessary.\n\nSounds great! To be clear, I'm not sure which proposal is best. I\nmainly thought mine seemed interesting because it doesn't require\nadditional columns. But maybe the benefits that the extra columns in\nyour proposal bring are worth adding those extra columns.\n\n> I see. 
It seems that adding new natives might make it easier to transmit the state values between local and remote have different major versions.\n> However, in my opinion, we should be careful to support the case in which local and remote have different major versions,\n> because the transtype of an aggregate function would may change in future major version due to\n> something related to the implementation.\n> Actually, something like that occurs recently, see\n> https://github.com/postgres/postgres/commit/519fc1bd9e9d7b408903e44f55f83f6db30742b7\n> I think the transtype of an aggregate function quite more changeable than retype.\n> Consequently, so far, I want to support the cases in which local and remote have the same major version.\n> If we try to resolve the limitation, it seems to need more additional codes.\n\nHmm, that's a very good point. Indeed cross-major-version partial\naggregates pushdown would not be fully solved with this yet.\n\n> And, I'm afraid that adding typinput/typoutput bothers the developers.\n> They also have to create a new native types in addition to create their new aggregate functions.\n> I wonder if this concern might outweigh the benefits for debugging.\n> And, if skipping send/receive, they have to select only the text representation on\n> the transmission of the state value. I think it is narrow.\n\nI kinda agree with this argument. But really this same argument\napplies just as well for regular CREATE TYPE. Developers are forced to\nimplement typinput/typoutput, even though send/receive might really be\nenough for their usecase. So in a sense with your proposal, you give\ntranstypes a special status over regular types: i.e. transtypes are\nthe only types where only send/receive is necessary.\n\nSo that leaves me two questions:\n1. Maybe CREATE TYPE should allow types without input/output functions\nas long as send/receive are defined. For these types text\nrepresentation could fall back to the hex representation of bytea.\n2. 
If for some reason 1 is undesired, then why are transtypes so\nspecial. Why is it fine for them to only have send/receive functions\nand not for other types?\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:28:03 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi hackers.\r\n\r\nOn Wed, Jun 5, 2024 at 9:15?AM Fujii.Yuki@df.MitsubishiElectric.co.jp <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> Requirement2. Consider appropriate position of the new keyword \"PARTIAL_AGGREGATE\". (with Robert)\r\n> Existing patch: Before the target expression. Ex. avg(PARTIAL_AGGREGATE c1)\r\n> Ideal: Before the aggregate function. Ex. PARTIAL_AGGREGATE avg(c1)\r\n> Requirement3. Consider to avoid to make the new keyword \"PARTIAL_AGGREGATE\" become a reserved word. (with Robert)\r\n> In the existing patch, \"PARTIAL_AGGREGATE\" is a reserved word.\r\nI considered the above two requirement.\r\nBased on my research, there is no way to use PARTIAL_AGGREGATE in front of a function name without making it a reserved word.\r\n\r\nInstead, I can make PARTIAL_AGGREGATE an unreserved word by placing it after the FILTER clause, like avg(c1) FILTER (WHERE c2 > 0) PARTIAL_AGGREGATE, and by marking it as an ASLABEL word like FILTER.\r\nI attached the patch of the method.\r\nIf there are no objections, I would like to proceed with the method described above.\r\nI'd appreciate it if anyone comment the method.\r\n\r\nI have addressed several comments, though not all of them.\r\n\r\nOn Mon, Jun 24, 2024 at 6:09?PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\r\n> 4. Related to 3, I think it would be good to have some tests of\r\n> PARTIAL_AGGREGATE that don't involve postgres_fdw at all. I also\r\n> spotted some comments too that mention FDW, even though they apply to\r\n> the \"pure\" PARTIAL_AGGREGATE code.\r\n> 5. 
This comment now seems incorrect:\r\n> - * Apply the agg's finalfn if one is provided, else return transValue.\r\n> + * If the agg's finalfn is provided and PARTIAL_AGGREGATE keyword is\r\n> + * not specified, apply the agg's finalfn.\r\n> + * If PARTIAL_AGGREGATE keyword is specified and the transValue type\r\n> + * is internal, apply the agg's serialfn. In this case the agg's\r\n> + * serialfn must not be invalid. Otherwise return transValue.\r\n>\r\n> 6. These errors are not on purpose afaict (if they are a comment in\r\n> the test would be good to explain why)\r\n>\r\n> +SELECT b, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b ORDER BY 1;\r\n> +ERROR: could not connect to server \"loopback\"\r\n> +DETAIL: invalid connection option \"partial_aggregate_support\"\r\nFixed.\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation", "msg_date": "Sun, 30 Jun 2024 21:42:19 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "> From: Fujii Yuki <Fujii.Yuki@df.MitsubishiElectric.co.jp>\r\n> Sent: Monday, July 1, 2024 6:42 AM\r\n> Hi hackers.\r\n> \r\n> On Wed, Jun 5, 2024 at 9:15?AM Fujii.Yuki@df.MitsubishiElectric.co.jp <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > Requirement2. Consider appropriate position of the new keyword \"PARTIAL_AGGREGATE\". (with Robert)\r\n> > Existing patch: Before the target expression. Ex. avg(PARTIAL_AGGREGATE c1)\r\n> > Ideal: Before the aggregate function. Ex. PARTIAL_AGGREGATE avg(c1)\r\n> > Requirement3. Consider to avoid to make the new keyword \"PARTIAL_AGGREGATE\" become a reserved word. 
(with\r\n> Robert)\r\n> > In the existing patch, \"PARTIAL_AGGREGATE\" is a reserved word.\r\n> I considered the above two requirement.\r\n> Based on my research, there is no way to use PARTIAL_AGGREGATE in front of a function name without making it a\r\n> reserved word.\r\nWith this approach, I couldn't resolve the shift/reduce conflicts.\r\n", "msg_date": "Sun, 30 Jun 2024 22:07:26 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Sun, Jun 30, 2024 at 09:42:19PM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> On Mon, Jun 24, 2024 at 6:09 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> > 4. Related to 3, I think it would be good to have some tests of\n> > PARTIAL_AGGREGATE that don't involve postgres_fdw at all. I also\n> > spotted some comments too that mention FDW, even though they apply to\n> > the \"pure\" PARTIAL_AGGREGATE code.\n> > 5. This comment now seems incorrect:\n> > - * Apply the agg's finalfn if one is provided, else return transValue.\n> > + * If the agg's finalfn is provided and PARTIAL_AGGREGATE keyword is\n> > + * not specified, apply the agg's finalfn.\n> > + * If PARTIAL_AGGREGATE keyword is specified and the transValue type\n> > + * is internal, apply the agg's serialfn. In this case the agg's\n> > + * serialfn must not be invalid. Otherwise return transValue.\n> >\n> > 6. These errors are not on purpose afaict (if they are a comment in\n> > the test would be good to explain why)\n> >\n> > +SELECT b, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b ORDER BY 1;\n> > +ERROR: could not connect to server \"loopback\"\n> > +DETAIL: invalid connection option \"partial_aggregate_support\"\n> Fixed.\n\nIs there a reason the documentation is no longer a part of this patch? 
\nCan I help you keep it current?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 5 Jul 2024 13:35:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Jelte and hackers,\n\nI've reconsidered which of the following two approaches is the best.\n Approach1: Adding export/import functions to transmit state values.\n Approach 2: Adding native types which are equal to state values.\n\nIn my mind, Approach1 is superior. Therefore, if there are no objections this week, I plan to resume implementing Approach1 next week. I would appreciate it if anyone could discuss the topic with me or ask questions.\n\nI believe that while Approach1 has the extendability to support situations where local and remote major versions differ, Approach2 lacks this extendability. Additionally, it seems that Approach1 requires fewer additional lines of code compared to Approach2. I'm also concerned that Approach2 may cause the catalog pg_type to bloat.\n\nAlthough Approach2 offers the benefit of avoiding the addition of columns to pg_aggregate, I think this benefit is smaller than the advantages of Approach1 mentioned above.\n\nNext, I will present my complete comparison. The comparison points are as follows:\n 1. Extendability\n 2. Amount of codes\n 3. Catalog size\n 4. Developer burden\n 5. Additional columns to catalogs\n\n1. Extendability\nI believe it is crucial to support scenarios where the local and remote major versions may differ in the future (see the below).\n\nhttps://www.postgresql.org/message-id/4012625.1701120204%40sss.pgh.pa.us\n\nRegarding this aspect, I consider Approach1 superior to Approach2. 
The reason is that:\n・The data type of an aggregate function's state value may change with each major version increment.\n・In Approach1, by extending the export/import functionalities to include the major version in which the state value was created (refer to p.16 and p.17 of [1]), I can handle such situations.\n・On the other hand, it appears that Approach2 fundamentally lacks the capability to support these scenarios.\n\n2. Amount of codes\nRegarding this aspect, I find Approach1 to be better than Approach2.\nIn Approach1, developers only need to export/import functions and can use a standardized format for transmitting state values.\nIn Approach2, developers have two options:\n Option1: Adding typinput/typoutput and typsend/typreceive.\n Option2: Adding typinput/typoutput only.\nOption1 requires more lines of code, which may be seen as cumbersome by some developers.\nOption2 restricts developers to using only text representation for transmitting state values, which I consider limiting.\n\n3. Catalog size\nRegarding this point, I believe Approach1 is better than Approach2.\nIn Approach1, theoretically, it is necessary to add export/import functions to pg_proc for each aggregate.\nIn Approach2, theoretically, it is necessary to add typoutput/typinput functions (and typsend/typreceive if necessary) to pg_proc and add a native type to pg_type for each aggregate.\nI would like to emphasize that we should consider user-defined functions in addition to built-in aggregate functions.\nI think most developers prefer to avoid bloating catalogs, even if they may not be able to specify exact reasons.\nIn fact, in Robert's previous review, he expressed a similar concern (see below).\n\nhttps://www.postgresql.org/message-id/CA%2BTgmobvja%2Bjytj5zcEcYgqzOaeJiqrrJxgqDf1q%3D3k8FepuWQ%40mail.gmail.com\n\n4. 
Developer burden.\nRegarding this aspect, I believe Approach1 is better than Approach2.\nIn Approach1, developers have the following additional tasks:\n Task1-1: Create and define export/import functions.\n\nIn Approach2, developers have the following additional tasks:\n Task2-1: Create and define typoutput/input functions (and typesend/typreceive functions if necessary).\n Task2-2: Define a native type.\n\nApproach1 requires fewer additional tasks, although the difference may not be substantial.\n\n5. Additional columns to catalogs.\nRegarding this aspect, Approach2 is better than Approach1.\nApproach1 requires three additional columns in pg_aggregate, specifically the aggpartialpushdownsafe flag, export function reference, and import function reference.\nApproach2 does not require any additional columns in catalogs.\nHowever, over the past four years of discussions, no one has expressed concerns about additional columns in catalogs.\n\n[1] https://www.postgresql.org/message-id/attachment/160659/PGConfDev2024_Presentation_Aggregation_Scaleout_FDW_Sharding_20240531.pdf\n\nBest regards, Yuki Fujii\n--\nYuki Fujii\nInformation Technology R&D Center, Mitsubishi Electric Corporation\n\n\n", "msg_date": "Sun, 7 Jul 2024 21:46:31 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "Hi Bruce.\n\n> From: Bruce Momjian <bruce@momjian.us>\n> Is there a reason the documentation is no longer a part of this patch?\n> Can I help you keep it current?\n\nHere are the reasons:\n Reason1. The approach differs significantly from the previous patch that included documentation, shown below.\n https://www.postgresql.org/message-id/attachment/152086/0001-Partial-aggregates-push-down-v34.patch\n Reason2. I have two options for transmitting the state value and I'm evaluating which one is optimal.\n One is what I presented to you at PGConf.dev2024. 
The other is Jelte's.\n He listened to my talk at the conference and gave me some useful comments on hackers. I'm very grateful for that.\n Reason3. The implementation and tests have not been finished yet.\nRegarding Reason 2, I provided my conclusion in the previous message.\n\nMy plan for advancing the patch involves the following steps:\n Step1. Decide the approach on transmitting state value.\n Step2. Implement code (including comments) and tests to support a subset of aggregate functions.\n Specifically, I plan to support avg, sum, and other aggregate functions like min and max which don't need export/import functions.\n Step3. Add documentation.\n\nTo clarify my approach, should I proceed with Step 3 before Step2?\nI would appreciate your feedback on this.\n\nBest regards, Yuki Fujii\n--\nYuki Fujii\nInformation Technology R&D Center, Mitsubishi Electric Corporation\n\n\n", "msg_date": "Sun, 7 Jul 2024 21:52:27 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Sun, 7 Jul 2024 at 23:46, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> In my mind, Approach1 is superior. Therefore, if there are no objections this week, I plan to resume implementing Approach1 next week. I would appreciate it if anyone could discuss the topic with me or ask questions.\n\nHonestly, the more I think about this, the more I like Approach2. Not\nbecause I disagree with you about some of the limitations of\nApproach2, but because I'd rather see those limitations fixed in\nCREATE TYPE, instead of working around these limitations in CREATE\nAGGREGATE. That way more usages can benefit. Detailed explanation and\narguments below.\n\n> 1. 
Extendability\n> I believe it is crucial to support scenarios where the local and remote major versions may differ in the future (see the below).\n>\n> https://www.postgresql.org/message-id/4012625.1701120204%40sss.pgh.pa.us\n\n From my reading, Tom's concern is that different server versions\ncannot talk to each other anymore. So as long as this perf\noptimization is only enabled when server versions are the same, I\ndon't think there is a huge problem if we never implement this.\nHonestly, I even think making this optimization opt-in at the FDW\nserver creation level would already solve Tom's concern. I do agree\nthat it would be good if we could have cross version partial\naggregates though, so it's definitely something to consider.\n\n> Regarding this aspect, I consider Approach1 superior to Approach2. The reason is that:\n> ・The data type of an aggregate function's state value may change with each major version increment.\n> ・In Approach1, by extending the export/import functionalities to include the major version in which the state value was created (refer to p.16 and p.17 of [1]), I can handle such situations.\n> ・On the other hand, it appears that Approach2 fundamentally lacks the capability to support these scenarios.\n\nApproach 2 definitely has some cross-version capabilities, e.g.\njsonb_send includes a version. Such an approach can be used to solve a\nnewer coordinator talking to an older worker, if the transtypes are\nthe same.\n\nI personally don't think it's worth supporting this optimization for\nan older coordinator talking to a newer worker. Using the binary protocol\nto talk from an older server to a newer server doesn't work either.\n\nFinally, based on p.16 & p.17 it's unclear to me how cross-version\nwith different transtypes would work. That situation seems inherently\nincompatible to me.\n\n> 2. 
Amount of codes\n> Regarding this aspect, I find Approach1 to be better than Approach2.\n> In Approach1, developers only need to export/import functions and can use a standardized format for transmitting state values.\n> In Approach2, developers have two options:\n> Option1: Adding typinput/typoutput and typsend/typreceive.\n> Option2: Adding typinput/typoutput only.\n> Option1 requires more lines of code, which may be seen as cumbersome by some developers.\n> Option2 restricts developers to using only text representation for transmitting state values, which I consider limiting.\n\nIn my opinion this is your strongest argument for Approach1. But you\ndidn't answer my previous two questions yet:\n\nOn Tue, 25 Jun 2024 at 11:28, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> So that leaves me two questions:\n> 1. Maybe CREATE TYPE should allow types without input/output functions\n> as long as send/receive are defined. For these types text\n> representation could fall back to the hex representation of bytea.\n> 2. If for some reason 1 is undesired, then why are transtypes so\n> special. Why is it fine for them to only have send/receive functions\n> and not for other types?\n\nBasically: I agree with this argument, but I feel like this being a\nproblem for this usecase is probably a sign that we should take the\nsolution a step further and solve this at the CREATE TYPE level\ninstead of allowing people to hack around CREATE TYPE its limitations\njust for these partial aggregates.\n\n> 3. 
Catalog size\n> Regarding this point, I believe Approach1 is better than Approach2.\n> In Approach1, theoretically, it is necessary to add export/import functions to pg_proc for each aggregate.\n> In Approach2, theoretically, it is necessary to add typoutput/typinput functions (and typsend/typreceive if necessary) to pg_proc and add a native type to pg_type for each aggregate.\n> I would like to emphasize that we should consider user-defined functions in addition to built-in aggregate functions.\n> I think most developers prefer to avoid bloating catalogs, even if they may not be able to specify exact reasons.\n> In fact, in Robert's previous review, he expressed a similar concern (see below).\n>\n> https://www.postgresql.org/message-id/CA%2BTgmobvja%2Bjytj5zcEcYgqzOaeJiqrrJxgqDf1q%3D3k8FepuWQ%40mail.gmail.com\n\nSo, to summarize the difference (assuming we change CREATE TYPE to\nallow only typsend/typreceive): \"Approach 2 adds an additional pg_type\nentry per aggregate\"\n\nIMHO this is fine, especially since these types can usually be shared\nacross multiple aggregates. I think the main reason Robert expressed\nconcern before was the level of catalog bloat that was added in that\nversion of the proposal: It required a new function to be added to\nevery existing aggregate.\n\nBoth Approach1 and Approach2 only require new catalog entries to be\nadded for aggregates with internal transtypes. That means both\napproaches introduce significantly less bloat than the proposal that\nRobert expressed concern about.\n\n> 4. 
Developer burden.\n> Regarding this aspect, I believe Approach1 is better than Approach2.\n> In Approach1, developers have the following additional tasks:\n> Task1-1: Create and define export/import functions.\n>\n> In Approach2, developers have the following additional tasks:\n> Task2-1: Create and define typoutput/input functions (and typesend/typreceive functions if necessary).\n> Task2-2: Define a native type.\n>\n> Approach1 requires fewer additional tasks, although the difference may be not substantial.\n\nI think the difference here is so small that this isn't really an\nargument. Especially since you're skipping over Task-1-2: Use new\nexport/import functions in aggregate definition.\n\nWhich means the effective difference in SQL that needs to be typed is\nreally \"CREATE TYPE mytype\" (the arguments of the CREATE TYPE, would\nbe part of the aggregate definition in Approach1). In a sense this\nsame counter-argument applies to the catalog bloat section. The only\nextra data that gets to the catalog for Approach2 is the typename and\ntypeoid, the export/import vs send/receive functions just move between\npg_type and pg_aggregate.\n\n> 5. Additional columns to catalogs.\n> Regarding this aspect, Approach2 is better than Approach1.\n> Approach1 requires additional three columns in pg_aggregate, specifically the aggpartialpushdownsafe flag, export function reference, and import function reference.\n> Approach2 does not require any additional columns in catalogs.\n> However, over the past four years of discussions, no one has expressed concerns about additional columns in catalogs.\n\nI agree that the columns aren't a big deal. The main thing I don't\nlike is that now we have two ways of serializing types to be sent over\nthe network, one way for actual types and one way for transtypes of\ntype internal.\n\nFINALLY: I thought of one other advantage of using actual types is\nthat at the protocol level these types will be visible too. 
This\nallows for some extra safety checks to be performed on the coordinator\nin case the transtypes are not the same across servers, which might\nhappen with aggregates of extensions even if the PG server versions\nare the same, but the extension versions aren't. With Approach1 the\nonly way to detect this is the importfunc complaining, but that can\nonly fail with a vague error, like \"cannot parse message\". Instead of\nsaying something more useful, like \"expected type avg_int4_transtype,\ngot type avg_int4_transtype_v2\"\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:31:08 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Sun, 30 Jun 2024 at 23:42, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> Instead, I can make PARTIAL_AGGREGATE an unreserved word by placing it after the FILTER clause, like avg(c1) FILTER (WHERE c2 > 0) PARTIAL_AGGREGATE, and by marking it as an ASLABEL word like FILTER.\n> I attached the patch of the method.\n> If there are no objections, I would like to proceed with the method described above.\n> I'd appreciate it if anyone comment the method.\n\nI like this approach of using PARTIAL_AGGREGATE in the same place as\nthe FILTER clause.\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:58:41 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Sun, 7 Jul 2024 at 23:52, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> My plan for advancing the patch involves the following steps:\n> Step1. Decide the approach on transmitting state value.\n> Step2. 
Implement code (including comments) and tests to support a subset of aggregate functions.\n> Specifically, I plan to support avg, sum, and other aggregate functions like min and max which don't need\n> export/import functions.\n> Step3. Add documentations.\n>\n> To clarify my approach, should I proceed with Step 3 before Step2?\n\n(my opinion, Bruce might have a different one)\n\nI think it's good that you split the original patch in two:\n0001: non-internal partial aggregates\n0002: internal partial aggregates\n\nI think we're aligned on the general design of 0001. So I think now is\ndefinitely the time to include documentation there, so we can discuss\nthis patch in more detail, and move it forward.\n\nI think generally for 0002 it would also be useful to have\ndocumentation, I personally like reading it to understand the general\ndesign and then comparing that to the code. But I also understand that\nthe language differences between Japanese and English make writing\nsuch docs a significant effort for you. So I think it would be fine to\nskip docs for 0002 for now until we decide on the approach we want to\ntake for internal partial aggregates.\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:59:31 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Hi Jelte,\r\n\r\nThank you for the comments and advice.\r\n\r\n> From: Jelte Fennema-Nio <postgres@jeltef.nl>\r\n> Sent: Monday, July 8, 2024 5:31 PM\r\n> On Sun, 7 Jul 2024 at 23:46, Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > In my mind, Approach1 is superior. Therefore, if there are no objections this week, I plan to resume implementing\r\n> Approach1 next week. I would appreciate it if anyone could discuss the topic with me or ask questions.\r\n> \r\n> Honestly, the more I think about this, the more I like Approach2. 
Not because I disagree with you about some of the\r\n> limitations of Approach2, but because I'd rather see those limitations fixed in CREATE TYPE, instead of working around\r\n> these limitations in CREATE AGGREGATE. That way more usages can benefit. Detailed explanation and arguments below.\r\nFirstly, I may have jumped to conclusions too quickly. I apologize for that.\r\nI would appreciate it if we could clarify Approach 1 and Approach 2 more precisely so that we can proceed with the discussion.\r\n\r\nBefore we get into the details, let me break down the main differences between Approach 1 and Approach 2.\r\n\r\nThe best thing about Approach2 is that it lets us send state values using the existing data type system.\r\nI'm worried that if we choose Approach2, we might face some limits because we have to create new types.\r\nBut, we might be able to fix these limits if we look into it more. \r\n\r\nApproach1 doesn't make new types, so we can avoid these limits.\r\nBut, it means we have to make export/import functions that are similar to the typsend/typreceive functions.\r\nSo, we need to make sure whether we really need this method.\r\n\r\nIs this the right understanding?\r\n\r\n> From: Jelte Fennema-Nio <postgres@jeltef.nl>\r\n> Sent: Monday, July 8, 2024 5:59 PM\r\n> On Sun, 30 Jun 2024 at 23:42, Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > Instead, I can make PARTIAL_AGGREGATE an unreserved word by placing it after the FILTER clause, like avg(c1) FILTER\r\n> (WHERE c2 > 0) PARTIAL_AGGREGATE, and by marking it as an ASLABEL word like FILTER.\r\n> > I attached the patch of the method.\r\n> > If there are no objections, I would like to proceed with the method described above.\r\n> > I'd appreciate it if anyone comment the method.\r\n> \r\n> I like this approach of using PARTIAL_AGGREGATE in the same place as the FILTER clause.\r\nThank you for the comment.\r\n\r\n> On Sun, 7 Jul 2024 at 23:52, 
Fujii.Yuki@df.MitsubishiElectric.co.jp\r\n> <Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\r\n> > My plan for advancing the patch involves the following steps:\r\n> > Step1. Decide the approach on transmitting state value.\r\n> > Step2. Implement code (including comments) and tests to support a subset of aggregate functions.\r\n> > Specifically, I plan to support avg, sum, and other aggregate functions like min and max which don't need\r\n> export/import functions.\r\n> > Step3. Add documentations.\r\n> >\r\n> > To clarify my approach, should I proceed with Step 3 before Step2?\r\n> \r\n> (my opinion, Bruce might have a different one)\r\n> \r\n> I think it's good that you split the original patch in two:\r\n> 0001: non-internal partial aggregates\r\n> 0002: internal partial aggregates\r\n> \r\n> I think we're aligned on the general design of 0001. So I think now is definitely the time to include documentation there, so\r\n> we can discuss this patch in more detail, and move it forward.\r\n> \r\n> I think generally for 0002 it would also be useful to have documentation, I personally like reading it to understand the\r\n> general design and then comparing that to the code. But I also understand that the language differences between Japanese\r\n> and English, makes writing such docs a significant effort for you. 
So I think it would be fine to skip docs for 0002 for now\r\n> until we decide on the approach we want to take for internal partial aggregates.\r\nAt least for 0001, it seems like it would be a good idea to attach a document at this stage.\r\n\r\nBest regards, Yuki Fujii\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center, Mitsubishi Electric Corporation\r\n", "msg_date": "Mon, 8 Jul 2024 12:11:46 +0000", "msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>", "msg_from_op": false, "msg_subject": "RE: Partial aggregates pushdown" }, { "msg_contents": "On Mon, 8 Jul 2024 at 14:12, Fujii.Yuki@df.MitsubishiElectric.co.jp\n<Fujii.Yuki@df.mitsubishielectric.co.jp> wrote:\n> The best thing about Approach2 is that it lets us send state values using the existing data type system.\n> I'm worried that if we choose Approach2, we might face some limits because we have to create new types.\n> But, we might be able to fix these limits if we look into it more.\n>\n> Approach1 doesn't make new types, so we can avoid these limits.\n> But, it means we have to make export/import functions that are similar to the typsend/typreceive functions.\n> So, we need to make sure if we really need this method.\n>\n> Is this the right understanding?\n\nYeah, correct. To clarify my reasoning a bit more: IMHO, the main\ndownside of implementing Approach1 is that we then end up with two\ndifferent mechanisms to \"take data from memory and serialize it in a\nway in which it can be sent over the network\". I'd very much prefer if\nwe could have a single system responsible for that task. So if there's\nissues with the current system (e.g. having to implement\ntypinput/typoutput), then I'd rather address these problems in the\nexisting system. 
Only if that turns out to be impossible for some\nreason would I prefer Approach1.\n\nPersonally, even if Approach2 requires a bit more code, I'd\nstill prefer a single serialization system over having two\nserialization systems.\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:30:12 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "SUMMARY OF THREAD\n\nThe design of patch 0001 is agreed upon by everyone on the thread (so\nfar). This adds the PARTIAL_AGGREGATE label for aggregates, which will\ncause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\nfor pushdown of aggregates in postgres_fdw. In 0001 PARTIAL_AGGREGATE\nis only supported for aggregates with a non-internal/pseudo type as\nthe stype.\n\nThe design for patch 0002 is still under debate. This would expand on\nthe functionality added by adding support for PARTIAL_AGGREGATE for\naggregates with an internal stype. This is done by returning a byte\narray containing the bytes that the serialfunc of the aggregate\nreturns.\n\nA competing proposal for 0002 is to instead change aggregates to not\nuse an internal stype anymore, and create dedicated types. The main\ndownside here is that infunc and outfunc would need to be added for\ntext serialization, in addition to the binary serialization. 
An open\nquestion is: Can we change the requirements for CREATE TYPE, so that\ntypes can be created without infunc and outfunc?\n\nWHAT IS NEEDED?\n\nThe things needed for this patch are that docs need to be added, and\ndetailed code review needs to be done.\n\nFeedback from more people on the two competing proposals for 0002\nwould be very helpful in making a decision.\n\n\n", "msg_date": "Thu, 8 Aug 2024 13:48:49 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Sun, Jul 7, 2024 at 09:52:27PM +0000, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi Bruce.\n> \n> > From: Bruce Momjian <bruce@momjian.us>\n> > Is there a reason the documentation is no longer a part of this patch?\n> > Can I help you keep it current?\n> \n> Here are the reasons:\n> Reason1. The approach differs significantly from the previous patch that included documentation, the below.\n> https://www.postgresql.org/message-id/attachment/152086/0001-Partial-aggregates-push-down-v34.patch\n> Reason2. I have two options for transmitting the state value and I'm evaluating which one is optimal.\n> One is what I presented you in PGConf.dev2024. The other is Jelte's one.\n> He listened to my talk at the conference and gave me some useful comments on hackers. I'm very grateful that.\n> Reason3. The implementation and test have been not finished yet.\n> Regarding Reason 2, I provided my conclusion in the previous message.\n> \n> My plan for advancing the patch involves the following steps:\n> Step1. Decide the approach on transmitting state value.\n> Step2. Implement code (including comments) and tests to support a subset of aggregate functions.\n> Specifically, I plan to support avg, sum, and other aggregate functions like min and max which don't need export/import functions.\n> Step3. 
Add documentations.\n> \n> To clarify my approach, should I proceed with Step 3 before Step2?\n> I would appreciate your feedback on this.\n\nThanks, I now understand why the docs were removed, and I agree. I will\npost about the options now in a new email.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 15 Aug 2024 14:48:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Aug 8, 2024 at 01:48:49PM +0200, Jelte Fennema-Nio wrote:\n> SUMMARY OF THREAD\n> \n> The design of patch 0001 is agreed upon by everyone on the thread (so\n> far). This adds the PARTIAL_AGGREGATE label for aggregates, which will\n> cause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\n> for pushdown of aggregates in postgres_fdw. In 0001 PARTIAL_AGGREGATE\n> is only supported for aggregates with a non-internal/pseudo type as\n> the stype.\n> \n> The design for patch 0002 is still under debate. This would expand on\n> the functionality added by adding support for PARTIAL_AGGREGATE for\n> aggregates with an internal stype. This is done by returning a byte\n> array containing the bytes that the serialfunc of the aggregate\n> returns.\n> \n> A competing proposal for 0002 is to instead change aggregates to not\n> use an internal stype anymore, and create dedicated types. The main\n> downside here is that infunc and outfunc would need to be added for\n> text serialization, in addition to the binary serialization. 
An open\n> question is: Can we change the requirements for CREATE TYPE, so that\n> types can be created without infunc and outfunc.\n> \n> WHAT IS NEEDED?\n> \n> The things needed for this patch are that docs need to be added, and\n> detailed codereview needs to be done.\n> \n> Feedback from more people on the two competing proposals for 0002\n> would be very helpful in making a decision.\n\nFirst, I am sorry to be replying so late --- I have been traveling for\nthe past four weeks. Second, I consider this feature a big part of\nsharding, and I think sharding is Postgres's biggest missing feature. I\ntalk about this patch often when asked about what Postgres is working\non.\n\nThird, I would like to show a more specific example to clarify what is\nbeing considered above. If we look at MAX(), we can have FDWs return\nthe max for each FDW, and the coordinator can choose the highest value. \nThis is the patch 1 listed above. These can return the\npg_aggregate.aggtranstype data type using the pg_type.typoutput text\noutput.\n\nThe second case is for something like AVG(), which must return the SUM()\nand COUNT(), and we currently have no way to return multiple text values\non the wire. For patch 0002, we have the option of creating functions\nthat can do this and record them in new pg_attribute columns, or we can\ncreate a data type with these functions, and assign the data type to\npg_aggregate.aggtranstype.\n\nIs that accurate?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 15 Aug 2024 17:12:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, 15 Aug 2024 at 23:12, Bruce Momjian <bruce@momjian.us> wrote:\n> Third, I would like to show a more specific example to clarify what is\n> being considered above. 
If we look at MAX(), we can have FDWs return\n> the max for each FDW, and the coordinator can chose the highest value.\n>\n> This is the patch 1 listed above. These can return the\n> pg_aggregate.aggtranstype data type using the pg_type.typoutput text\n> output.\n>\n> The second case is for something like AVG(), which must return the SUM()\n> and COUNT(), and we currently have no way to return multiple text values\n> on the wire. For patch 0002, we have the option of creating functions\n> that can do this and record them in new pg_attribute columns, or we can\n> create a data type with these functions, and assign the data type to\n> pg_aggregate.aggtranstype.\n>\n> Is that accurate?\n\nIt's close to accurate, but not entirely. Patch 1 would actually\nsolve some AVG cases too, because some AVG implementations use an SQL\narray type to store the transtype instead of an internal type. And by\nusing an SQL array type we *can* send multiple text values on the\nwire. See below for a list of those aggregates:\n\n> select p.oid::regprocedure\nfrom pg_aggregate a join pg_proc p on a.aggfnoid = p.oid\nwhere aggfinalfn != 0 and aggtranstype::regtype not in ('internal',\n'anyenum', 'anyelement', 'anyrange', 'anyarray', 'anymultirange');\n                        oid\n───────────────────────────────────────────────────\n avg(integer)\n avg(smallint)\n avg(real)\n avg(double precision)\n avg(interval)\n var_pop(real)\n var_pop(double precision)\n var_samp(real)\n var_samp(double precision)\n variance(real)\n variance(double precision)\n stddev_pop(real)\n stddev_pop(double precision)\n stddev_samp(real)\n stddev_samp(double precision)\n stddev(real)\n stddev(double precision)\n regr_sxx(double precision,double precision)\n regr_syy(double precision,double precision)\n regr_sxy(double precision,double precision)\n regr_avgx(double precision,double precision)\n regr_avgy(double precision,double precision)\n regr_r2(double precision,double precision)\n regr_slope(double precision,double precision)\n 
regr_intercept(double precision,double precision)\n covar_pop(double precision,double precision)\n covar_samp(double precision,double precision)\n corr(double precision,double precision)\n(28 rows)\n\nAnd to be clear, these are in addition to the MAX type of aggregates\nyou were describing:\n> select p.oid::regprocedure\nfrom pg_aggregate a join pg_proc p on a.aggfnoid = p.oid\nwhere aggfinalfn = 0 and aggtranstype::regtype not in ('internal',\n'anyenum', 'anyelement', 'anyrange', 'anyarray', 'anymultirange');\n oid\n───────────────────────────────────────────────\n sum(integer)\n sum(smallint)\n sum(real)\n sum(double precision)\n sum(money)\n sum(interval)\n max(bigint)\n max(integer)\n max(smallint)\n max(oid)\n max(real)\n max(double precision)\n max(date)\n max(time without time zone)\n max(time with time zone)\n max(money)\n max(timestamp without time zone)\n max(timestamp with time zone)\n max(interval)\n max(text)\n max(numeric)\n max(character)\n max(tid)\n max(inet)\n max(pg_lsn)\n max(xid8)\n min(bigint)\n min(integer)\n min(smallint)\n min(oid)\n min(real)\n min(double precision)\n min(date)\n min(time without time zone)\n min(time with time zone)\n min(money)\n min(timestamp without time zone)\n min(timestamp with time zone)\n min(interval)\n min(text)\n min(numeric)\n min(character)\n min(tid)\n min(inet)\n min(pg_lsn)\n min(xid8)\n count(\"any\")\n count()\n regr_count(double precision,double precision)\n bool_and(boolean)\n bool_or(boolean)\n every(boolean)\n bit_and(smallint)\n bit_or(smallint)\n bit_xor(smallint)\n bit_and(integer)\n bit_or(integer)\n bit_xor(integer)\n bit_and(bigint)\n bit_or(bigint)\n bit_xor(bigint)\n bit_and(bit)\n bit_or(bit)\n bit_xor(bit)\n xmlagg(xml)\n(65 rows)\n\n\n", "msg_date": "Tue, 20 Aug 2024 10:07:32 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Aug 20, 2024 at 10:07:32AM +0200, Jelte Fennema-Nio 
wrote:\n> On Thu, 15 Aug 2024 at 23:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > Third, I would like to show a more specific example to clarify what is\n> > being considered above. If we look at MAX(), we can have FDWs return\n> > the max for each FDW, and the coordinator can chose the highest value.\n> > This is the patch 1 listed above. These can return the\n> > pg_aggregate.aggtranstype data type using the pg_type.typoutput text\n> > output.\n> >\n> > The second case is for something like AVG(), which must return the SUM()\n> > and COUNT(), and we currently have no way to return multiple text values\n> > on the wire. For patch 0002, we have the option of creating functions\n> > that can do this and record them in new pg_attribute columns, or we can\n> > create a data type with these functions, and assign the data type to\n> > pg_aggregate.aggtranstype.\n> >\n> > Is that accurate?\n> \n> It's close to accurate, but not entirely. Patch 1 would actually\n> solves some AVG cases too, because some AVG implementations use an SQL\n> array type to store the transtype instead of an internal type. And by\n> using an SQL array type we *can* send multiple text values on the\n> wire. See below for a list of those aggregates:\n\nOkay, so we can do MAX easily, and AVG if the count can be represented\nas the same data type as the sum? Is that correct? 
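To make the combine step concrete, here is a toy sketch of what a coordinator would do with those partial states (plain Python for illustration, not the backend's C code; the (sum, count) pairs mirror the bigint[] transition state of avg(integer)):

```python
# Toy sketch of the coordinator-side "combine" step, not PostgreSQL's
# C implementation.  Each shard ships a partial state: for MAX a single
# value, for AVG a (sum, count) pair mirroring the bigint[] transition
# state of avg(integer).

def combine_max(partials):
    # MAX: each shard returns its local maximum; combining is just
    # another max over those maxima.
    return max(partials)

def combine_avg(partials):
    # AVG: each shard returns (sum, count); combining adds both fields,
    # and only the final step divides.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

shard_max = [42, 17, 99]                # per-shard MAX results
shard_avg = [(10, 4), (6, 2), (20, 4)]  # per-shard (sum, count) states

print(combine_max(shard_max))  # 99
print(combine_avg(shard_avg))  # 3.6
```

The same combine pattern works for any array-typed transition state, since all fields share one element type.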
Our only problem is\nthat something like AVG(interval) can't use an array because arrays have\nto have the same data type for all array elements, and an interval can't\nrepresent a count?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:50:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, 20 Aug 2024 at 18:50, Bruce Momjian <bruce@momjian.us> wrote:\n> Okay, so we can do MAX easily, and AVG if the count can be represented\n> as the same data type as the sum? Is that correct? Our only problem is\n> that something like AVG(interval) can't use an array because arrays have\n> to have the same data type for all array elements, and an interval can't\n> represent a count?\n\nClose, but still not completely correct. AVG(bigint) can also not be\nsupported by patch 1, because the sum and the count for that both\nstored using an int128. So we'd need an array of int128, and there's\ncurrently no int128 SQL type.\n\n\n", "msg_date": "Tue, 20 Aug 2024 19:03:56 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Tue, Aug 20, 2024 at 07:03:56PM +0200, Jelte Fennema-Nio wrote:\n> On Tue, 20 Aug 2024 at 18:50, Bruce Momjian <bruce@momjian.us> wrote:\n> > Okay, so we can do MAX easily, and AVG if the count can be represented\n> > as the same data type as the sum? Is that correct? Our only problem is\n> > that something like AVG(interval) can't use an array because arrays have\n> > to have the same data type for all array elements, and an interval can't\n> > represent a count?\n> \n> Close, but still not completely correct. 
AVG(bigint) can also not be\n> supported by patch 1, because the sum and the count for that both\n> stored using an int128. So we'd need an array of int128, and there's\n> currently no int128 SQL type.\n\nOkay. Have we considered having the FDW return a record:\n\n\tSELECT (oid, relname) FROM pg_class LIMIT 1;\n\t row\n\t---------------------\n\t (2619,pg_statistic)\n\n\tSELECT pg_typeof((oid, relname)) FROM pg_class LIMIT 1;\n\t pg_typeof\n\t-----------\n\t record\n\n\tSELECT pg_typeof(oid) FROM pg_class LIMIT 1;\n\t pg_typeof\n\t-----------\n\t oid\n\t\n\tSELECT pg_typeof(relname) FROM pg_class LIMIT 1;\n\t pg_typeof\n\t-----------\n\t name\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 20 Aug 2024 14:41:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\n\nOn 8/20/24 20:41, Bruce Momjian wrote:\n> On Tue, Aug 20, 2024 at 07:03:56PM +0200, Jelte Fennema-Nio wrote:\n>> On Tue, 20 Aug 2024 at 18:50, Bruce Momjian <bruce@momjian.us> wrote:\n>>> Okay, so we can do MAX easily, and AVG if the count can be represented\n>>> as the same data type as the sum? Is that correct? Our only problem is\n>>> that something like AVG(interval) can't use an array because arrays have\n>>> to have the same data type for all array elements, and an interval can't\n>>> represent a count?\n>>\n>> Close, but still not completely correct. AVG(bigint) can also not be\n>> supported by patch 1, because the sum and the count for that both\n>> stored using an int128. So we'd need an array of int128, and there's\n>> currently no int128 SQL type.\n> \n> Okay. 
Have we considered having the FDW return a record:\n> \n> \tSELECT (oid, relname) FROM pg_class LIMIT 1;\n> \t row\n> \t---------------------\n> \t (2619,pg_statistic)\n> \n> \tSELECT pg_typeof((oid, relname)) FROM pg_class LIMIT 1;\n> \t pg_typeof\n> \t-----------\n> \t record\n> \n> \tSELECT pg_typeof(oid) FROM pg_class LIMIT 1;\n> \t pg_typeof\n> \t-----------\n> \t oid\n> \t\n> \tSELECT pg_typeof(relname) FROM pg_class LIMIT 1;\n> \t pg_typeof\n> \t-----------\n> \t name\n> \n\nHow would this help with the AVG(bigint) case? We don't have int128 as\nSQL type, so what would be part of the record? Also, which part of the\ncode would produce the record? If the internal state is \"internal\", that\nwould probably need to be something aggregate specific, and that's kinda\nwhat this patch series is adding, no?\n\nOr am I missing some cases where the record would make it work?\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:59:12 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\n\nOn 8/8/24 13:48, Jelte Fennema-Nio wrote:\n> SUMMARY OF THREAD\n> \n> The design of patch 0001 is agreed upon by everyone on the thread (so\n> far). This adds the PARTIAL_AGGREGATE label for aggregates, which will\n> cause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\n> for pushdown of aggregates in postgres_fdw. In 0001 PARTIAL_AGGREGATE\n> is only supported for aggregates with a non-internal/pseudo type as\n> the stype.\n> \n\nI don't have a strong opinion on this, but I wonder if someone might\nobject this essentially extends the syntax with something that is not\n(and never will be) in the SQL standard. 
I wonder if there's some\nprecedent for encoding such explicit execution instructions into the\nquery itself?\n\nThat reminds me - the PARTIAL_AGGREGATE label is per aggregate, but I\ndon't think it makes much sense to do this only for some aggregates,\nright? Do we have a way to make sure the query is \"consistent\"? I'm not\nsure if doing this on the source (before pushdown) is enough. Could\nthere be a difference in what the remote instance supports?\n\nThe only alternative that I can think of (and that I believe was already\nmentioned in this thread) is to set some GUC that forces the top-most\nquery level to do this (all aggregates at that level). That'd have the\nbenefit of always affecting all aggregates.\n\n> The design for patch 0002 is still under debate. This would expand on\n> the functionality added by adding support for PARTIAL_AGGREGATE for\n> aggregates with an internal stype. This is done by returning a byte\n> array containing the bytes that the serialfunc of the aggregate\n> returns.\n> \n> A competing proposal for 0002 is to instead change aggregates to not\n> use an internal stype anymore, and create dedicated types. The main\n> downside here is that infunc and outfunc would need to be added for\n> text serialization, in addition to the binary serialization. 
But this is\nmeant to support the remote node having a wildly different version.\n\nI guess we might introduce another pair of serial/deserial functions,\nwith this guarantee. I know we (me and Jelte) discussed that in person\nat some point, and there were arguments for doing the data types. But I\nmanaged to forget the details :-(\n\n> WHAT IS NEEDED?\n> \n> The things needed for this patch are that docs need to be added, and\n> detailed codereview needs to be done.\n\nYeah, I think the docs are must-have for a proper review.\n\n> Feedback from more people on the two competing proposals for 0002\n> would be very helpful in making a decision.\n> \n\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Wed, 21 Aug 2024 17:41:02 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Aug 21, 2024 at 9:11 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n>\n>\n> On 8/8/24 13:48, Jelte Fennema-Nio wrote:\n> > SUMMARY OF THREAD\n> >\n> > The design of patch 0001 is agreed upon by everyone on the thread (so\n> > far). This adds the PARTIAL_AGGREGATE label for aggregates, which will\n> > cause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\n> > for pushdown of aggregates in postgres_fdw. In 0001 PARTIAL_AGGREGATE\n> > is only supported for aggregates with a non-internal/pseudo type as\n> > the stype.\n> >\n>\n> I don't have a strong opinion on this, but I wonder if someone might\n> object this essentially extends the syntax with something that is not\n> (and never will be) in the SQL standard. I wonder if there's some\n> precedent for encoding such explicit execution instructions into the\n> query itself?\n\nThis feature might be a useful feature to run aggregation in a\nfederated database across many source databases. So people in the\ncommunity who participate in SQL standard may add it there. 
While\nimplementing the feature, we might think of it as influencing the\nexecution but I don't see it that way. It's a feature allowing users\nto access a pre-finalization state of an aggregate which they can use\nto combine with such states from other data sources. There may be\nother uses as well. But adding it as a SQL feature means some\nstandardization of what is partial aggregate for each aggregate\nfunction - the notions of which are intuitive but they need to be\nstandardized. Of course, going this way means that it will take longer\nfor the feature to be available but it won't look like a kludge at\nleast.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:35:51 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Aug 21, 2024 at 04:59:12PM +0200, Tomas Vondra wrote:\n> On 8/20/24 20:41, Bruce Momjian wrote:\n> > \tSELECT (oid, relname) FROM pg_class LIMIT 1;\n> > \t row\n> > \t---------------------\n> > \t (2619,pg_statistic)\n> > \n> > \tSELECT pg_typeof((oid, relname)) FROM pg_class LIMIT 1;\n> > \t pg_typeof\n> > \t-----------\n> > \t record\n> > \n> > \tSELECT pg_typeof(oid) FROM pg_class LIMIT 1;\n> > \t pg_typeof\n> > \t-----------\n> > \t oid\n> > \t\n> > \tSELECT pg_typeof(relname) FROM pg_class LIMIT 1;\n> > \t pg_typeof\n> > \t-----------\n> > \t name\n> > \n> \n> How would this help with the AVG(bigint) case? We don't have int128 as\n> SQL type, so what would be part of the record? Also, which part of the\n> code would produce the record? If the internal state is \"internal\", that\n> would probably need to be something aggregate specific, and that's kinda\n> what this patch series is adding, no?\n> \n> Or am I missing some cases where the record would make it work?\n\nRight now, my goal in this thread is to try to concretely explain what\nis being proposed. 
Therefore, I think I need to go back and make four\ncategories instead of two:\n\n1. cases like MAX(int), where we return only one value, and the FDW\nreturn value is an existing data type, e.g., int\n\n2. cases like AVG(int) where we return multiple FDW values of the same\ntype and can use an array, e.g., bigint array\n\n3. cases like AVG(bigint) where we return multiple FDW values of the\nsame type (or can), but one of the FDW return values is not an\nexisting data type, e.g. int128\n\n4. cases like AVG(interval) where we return multiple FDW values of\ndifferent types, e.g. interval and an integral count\n\nFor #1, all MAX cases have aggregate input parameters the same as the\nFDW return types (aggtranstype):\n\n\tSELECT proargtypes[0]::regtype, aggtranstype::regtype\n\tFROM pg_aggregate a JOIN pg_proc p ON a.aggfnoid = p.oid\n\tWHERE proname = 'max' AND proargtypes[0] != aggtranstype;\n\n\t proargtypes | aggtranstype\n\t-------------+--------------\n\nFor #2-4, we have for AVG:\n\n\tSELECT proargtypes[0]::regtype, aggtranstype::regtype\n\tFROM pg_aggregate a JOIN pg_proc p ON a.aggfnoid = p.oid\n\tWHERE proname = 'avg';\n\t\n\t proargtypes | aggtranstype\n\t------------------+--------------------\n3->\t bigint | internal\n2->\t integer | bigint[]\n2->\t smallint | bigint[]\n3->\t numeric | internal\n2->\t real | double precision[]\n2->\t double precision | double precision[]\n4->\t interval | internal\n\nYou can see which AVG items fall into which categories. It seems we\nhave #1 and #2 handled cleanly in the patch.\n\nMy question is related to #3 and #4. 
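As an aside on why category #4 cannot reuse the array trick of #2: its state mixes value types. A Python sketch of the conceptual avg(interval) state (the structure is invented for illustration, not the backend's actual transition struct):

```python
# Sketch of why avg(interval)'s state cannot ride in a uniform SQL
# array: it conceptually pairs an interval total with an integer row
# count, i.e. two different types.  The structure below is invented
# for illustration, not the backend's actual transition state.
from datetime import timedelta

def combine_interval_avg(partials):
    # partials: per-shard (interval_total, row_count) pairs
    total = sum((t for t, _ in partials), timedelta())
    count = sum(c for _, c in partials)
    return total / count  # timedelta / int -> timedelta

shards = [(timedelta(hours=3), 3), (timedelta(hours=1), 1)]
print(combine_interval_avg(shards))  # 1:00:00
```

An array can carry the two bigints of #2 because they share one element type; nothing comparable exists for an interval plus an integer.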
For #3, if we are going to be\nbuilding infrastructure to handle passing int128 for AVG, wouldn't it be\nwiser to create an int128 type and an int128 array type, and then use\nmethod #2 to handle those, rather than creating custom code just to\nread/write int128 values for FDWs aggregate passing alone.\n\nFor #4, can we use or improve the RECORD data type to handle #4 --- that\nseems preferable to creating custom FDWs aggregate passing code.\n\nI know the open question was whether we should create custom FDWs\naggregate passing functions or custom data types for FDWs aggregate\npassing, but I am asking if we can improve existing facilities, like\nint128 or record passing, to reduce the need for some of these.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:22:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 8/22/24 19:22, Bruce Momjian wrote:\n> On Wed, Aug 21, 2024 at 04:59:12PM +0200, Tomas Vondra wrote:\n>> On 8/20/24 20:41, Bruce Momjian wrote:\n>>> \tSELECT (oid, relname) FROM pg_class LIMIT 1;\n>>> \t row\n>>> \t---------------------\n>>> \t (2619,pg_statistic)\n>>>\n>>> \tSELECT pg_typeof((oid, relname)) FROM pg_class LIMIT 1;\n>>> \t pg_typeof\n>>> \t-----------\n>>> \t record\n>>>\n>>> \tSELECT pg_typeof(oid) FROM pg_class LIMIT 1;\n>>> \t pg_typeof\n>>> \t-----------\n>>> \t oid\n>>> \t\n>>> \tSELECT pg_typeof(relname) FROM pg_class LIMIT 1;\n>>> \t pg_typeof\n>>> \t-----------\n>>> \t name\n>>>\n>>\n>> How would this help with the AVG(bigint) case? We don't have int128 as\n>> SQL type, so what would be part of the record? Also, which part of the\n>> code would produce the record? 
If the internal state is \"internal\", that\n>> would probably need to be something aggregate specific, and that's kinda\n>> what this patch series is adding, no?\n>>\n>> Or am I missing some cases where the record would make it work?\n> \n> Right now, my goal in this thread is to try to concretely explain what\n> is being proposed. Therefore, I think I need to go back and make four\n> categories instead of two:\n> \n> 1. cases like MAX(int), where we return only one value, and the FDW\n> return value is an existing data type, e.g., int\n> \n> 2. cases like AVG(int) where we return multiple FDW values of the same\n> type and can use an array, e.g., bigint array\n> \n> 3. cases like AVG(bigint) where we return multiple FDW values of the\n> same type (or can), but the one of the FDW return values is not an\n> existing data type, e.g. int128\n> \n> 4. cases like AVG(interval) where we return multiple FDW values of\n> different types, e.g. interval and an integral count\n> \n> For #1, all MAX cases have aggregate input parameters the same as the\n> FDW return types (aggtranstype):\n> \n> \tSELECT proargtypes[0]::regtype, aggtranstype::regtype\n> \tFROM pg_aggregate a JOIN pg_proc p ON a.aggfnoid = p.oid\n> \tWHERE proname = 'max' AND proargtypes[0] != aggtranstype;\n> \n> \t proargtypes | aggtranstype\n> \t-------------+--------------\n> \n> For #2-4, we have for AVG:\n> \n> \tSELECT proargtypes[0]::regtype, aggtranstype::regtype\n> \tFROM pg_aggregate a JOIN pg_proc p ON a.aggfnoid = p.oid\n> \tWHERE proname = 'avg';\n> \t\n> \t proargtypes | aggtranstype\n> \t------------------+--------------------\n> 3->\t bigint | internal\n> 2->\t integer | bigint[]\n> 2->\t smallint | bigint[]\n> 3->\t numeric | internal\n> 2->\t real | double precision[]\n> 2->\t double precision | double precision[]\n> 4->\t interval | internal\n> \n> You can see which AVG items fall into which categories. 
It seems we\n> have #1 and #2 handled cleanly in the patch.\n> \n\nAgreed.\n\n> My question is related to #3 and #4. For #3, if we are going to be\n> building infrastructure to handle passing int128 for AVG, wouldn't it be\n> wiser to create an int128 type and an int128 array type, and then use\n> method #2 to handle those, rather than creating custom code just to\n> read/write int128 values for FDWs aggregate passing alone.\n> \n\nYep, adding int128 as a data type would extend this to aggregates that\nstore state as int128 (or array of int128).\n\n> For #4, can we use or improve the RECORD data type to handle #4 --- that\n> seems preferable to creating custom FDWs aggregate passing code.\n> \n> I know the open question was whether we should create custom FDWs\n> aggregate passing functions or custom data types for FDWs aggregate\n> passing, but I am asking if we can improve existing facilities, like\n> int128 or record passing, to reduce the need for some of these.\n> \n\nBut which code would produce the record? AFAIK it can't happen in some\ngeneric executor code, because that only sees \"internal\" for each\naggregate. The exact structure of the aggstate is private within the\ncode of each aggregate - the record would have to be built there, no?\n\nI imagine we'd add this for each aggregate as a new pair of functions to\nbuild/parse the record, but that's kinda the serial/deserial way we\ndiscussed earlier.\n\nOr are you suggesting we'd actually say:\n\n CREATE AGGREGATE xyz(...) (\n STYPE = record,\n ...\n )\n\nor something like that? I have no idea if that would work, maybe it\nwould. 
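Whatever the container ends up being, the key property is a stable text representation of the state. A toy round-trip for an AVG(bigint)-style sum/count state (the "(sum,count)" format and function names are invented here; Python's arbitrary-precision integers stand in for int128):

```python
# Toy round-trip for a version-stable *text* serialization of an
# AVG(bigint)-style state: a 128-bit sum plus a count.  The format and
# names are invented for illustration only.

def serialize_avg_state(sum128, count):
    # Decimal text survives across versions and CPU architectures,
    # unlike a raw dump of the in-memory C struct.
    return "({},{})".format(sum128, count)

def deserialize_avg_state(text):
    s, c = text.strip("()").split(",")
    return int(s), int(c)

state = (2**100 + 5, 7)  # a sum too large for int64, plus a count
wire = serialize_avg_state(*state)
assert deserialize_avg_state(wire) == state
print(wire)  # (1267650600228229401496703205381,7)
```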
In a way it'd not be that different from the \"custom data type\"\nexcept that it doesn't actually need a custom data type (assuming we\nknow how to serialize such \"generic\" records - I'm not familiar with\nthis code enough to answer that).\n\nThe reason why I found the \"custom data type\" approach interesting is\nthat it sets clear guarantees / expectations about the stability of the\noutput between versions. For serial/deserial we make no guarantees, it\ncan change even in a minor release. But here we need something stable\neven for major releases, to allow pushdown when querying an older server.\nWe have that strong expectation for data types, and everyone knows it.\n\nBut I guess if we are OK with just sending the array aggregates, or\neven just plain scalar types, we kinda shift this responsibility to\naggregates. Because while the data type in/out functions will stay the\nsame, we require the aggregate to not switch to some other state.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 22 Aug 2024 20:31:11 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Wed, Aug 21, 2024 at 05:41:02PM +0200, Tomas Vondra wrote:\n> On 8/8/24 13:48, Jelte Fennema-Nio wrote:\n> > SUMMARY OF THREAD\n> > \n> > The design of patch 0001 is agreed upon by everyone on the thread (so\n> > far). This adds the PARTIAL_AGGREGATE label for aggregates, which will\n> > cause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\n> > for pushdown of aggregates in postgres_fdw. In 0001 PARTIAL_AGGREGATE\n> > is only supported for aggregates with a non-internal/pseudo type as\n> > the stype.\n> \n> I don't have a strong opinion on this, but I wonder if someone might\n> object this essentially extends the syntax with something that is not\n> (and never will be) in the SQL standard. 
I wonder if there's some\n> precedent for encoding such explicit execution instructions into the\n> query itself?\n> \n> That reminds me - the PARTIAL_AGGREGATE label is per aggregate, but I\n> don't think it makes much sense to do this only for some aggregates,\n> right? Do we have a way to make sure the query is \"consistent\"? I'm not\n> sure if doing this on the source (before pushdown) is enough. Could\n> there be a difference in what the remote instance supports?\n> \n> The only alternative that I can think of (and that I believe was already\n> mentioned in this thread) is to set some GUC that forces the top-most\n> query level to do this (all aggregates at that level). That's have the\n> benefit of always affecting all aggregates.\n\nYou make a very good point above. Would there ever be cases where a\ntargetlist would have multiple aggregates, and some can be pushed down,\nand some have to return all matching rows so the sender can compute the\naggregate? If so, how would we handle that? How does parallelism\nhandle that now?\n\n> > The design for patch 0002 is still under debate. This would expand on\n> > the functionality added by adding support for PARTIAL_AGGREGATE for\n> > aggregates with an internal stype. This is done by returning a byte\n> > array containing the bytes that the serialfunc of the aggregate\n> > returns.\n> > \n> > A competing proposal for 0002 is to instead change aggregates to not\n> > use an internal stype anymore, and create dedicated types. The main\n> > downside here is that infunc and outfunc would need to be added for\n> > text serialization, in addition to the binary serialization. 
An open\n> > question is: Can we change the requirements for CREATE TYPE, so that\n> > types can be created without infunc and outfunc.\n> > \n> \n> I think it's +0.5 for the new dedicated data types from me.\n> \n> I admit I'm too lazy to read the whole thread from scratch, but I\n> believe we did discuss the possibility to reuse the serial/deserial\n> functions we already have, but the reason against that was the missing\n> cross-version stability. Parallel queries always run within a single\n> instance, hence there are no concerns about other versions. But this is\n> meant to support the remote node having a wildly different version.\n\nIt is more than different versions. Different CPU architectures can\nstore data types differently in binary. It would be kind of interesting\nto have a serial/deserial that was in text format. I guess that is\nwhere my RECORD idea came from that I just emailed to the list.\n\nWhat I would really like is something similar to pg_proc.proargtypes\n(which is data type oidvector) for pg_aggregate where we can supply the\noids of the pg_aggregate.aggtranstype and construct a record on the fly\nto return to the FDW caller, rather than having to create specialized\nfunctions for every aggregate that needs to return several different\ndata types, e.g., AVG(interval), #4 in my previous email. I realize\nthis would require the creation of data types int128 and int128 array.\n\n> I guess we might introduce another pair of serial/deserial functions,\n> with this guarantee. I know we (me and Jelte) discussed that in person\n> at some point, and there were arguments for doing the data types. 
But I\n> managed to forget the details :-(\n> \n> > WHAT IS NEEDED?\n> > \n> > The things needed for this patch are that docs need to be added, and\n> > detailed codereview needs to be done.\n> \n> Yeah, I think the docs are must-have for a proper review.\n\nI think the docs are on hold until we decide on a transfer method.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 22 Aug 2024 14:56:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Aug 22, 2024 at 08:31:11PM +0200, Tomas Vondra wrote:\n> > My question is related to #3 and #4. For #3, if we are going to be\n> > building infrastructure to handle passing int128 for AVG, wouldn't it be\n> > wiser to create an int128 type and an int128 array type, and then use\n> > method #2 to handle those, rather than creating custom code just to\n> > read/write int128 values for FDWs aggregate passing alone.\n> > \n> \n> Yep, adding int128 as a data type would extend this to aggregates that\n> store state as int128 (or array of int128).\n\nGreat, I am not too far off then.\n\n> > For #4, can we use or improve the RECORD data type to handle #4 --- that\n> > seems preferable to creating custom FDWs aggregate passing code.\n> > \n> > I know the open question was whether we should create custom FDWs\n> > aggregate passing functions or custom data types for FDWs aggregate\n> > passing, but I am asking if we can improve existing facilities, like\n> > int128 or record passing, to reduce the need for some of these.\n> > \n> \n> But which code would produce the record? AFAIK it can't happen in some\n> generic executor code, because that only sees \"internal\" for each\n> aggregate. 
The exact structure of the aggstate is private within the\n> code of each aggregate - the record would have to be built there, no?\n> \n> I imagine we'd add this for each aggregate as a new pair of functions to\n> build/parse the record, but that's kinda the serial/deserial way we\n> discussed earlier.\n> \n> Or are you suggesting we'd actually say:\n> \n> CREATE AGGREGATE xyz(...) (\n> STYPE = record,\n> ...\n> )\n\nSo my idea from the email I just sent is to create a\npg_proc.proargtypes-like column (data type oidvector) for pg_aggregate\nwhich stores the oids of the values we want to return, so AVG(interval)\nwould have an array of the oids for interval and int8, e.g.:\n\n\tSELECT oid FROM pg_type WHERE typname = 'interval';\n\t oid\n\t------\n\t 1186\n\n\tSELECT oid FROM pg_type WHERE typname = 'int8';\n\t oid\n\t-----\n\t 20\n\n\tSELECT '1186 20'::oidvector;\n\t oidvector\n\t-----------\n\t 1186 20\n\nIt seems all four methods could use this, again assuming we create\nint128/int16 and whatever other types we need.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 22 Aug 2024 15:30:53 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On 8/22/24 20:56, Bruce Momjian wrote:\n> On Wed, Aug 21, 2024 at 05:41:02PM +0200, Tomas Vondra wrote:\n>> On 8/8/24 13:48, Jelte Fennema-Nio wrote:\n>>> SUMMARY OF THREAD\n>>>\n>>> The design of patch 0001 is agreed upon by everyone on the thread (so\n>>> far). This adds the PARTIAL_AGGREGATE label for aggregates, which will\n>>> cause the finalfunc not to run. It also starts using PARTIAL_AGGREGATE\n>>> for pushdown of aggregates in postgres_fdw. 
In 0001 PARTIAL_AGGREGATE\n>>> is only supported for aggregates with a non-internal/pseudo type as\n>>> the stype.\n>>\n>> I don't have a strong opinion on this, but I wonder if someone might\n>> object this essentially extends the syntax with something that is not\n>> (and never will be) in the SQL standard. I wonder if there's some\n>> precedent for encoding such explicit execution instructions into the\n>> query itself?\n>>\n>> That reminds me - the PARTIAL_AGGREGATE label is per aggregate, but I\n>> don't think it makes much sense to do this only for some aggregates,\n>> right? Do we have a way to make sure the query is \"consistent\"? I'm not\n>> sure if doing this on the source (before pushdown) is enough. Could\n>> there be a difference in what the remote instance supports?\n>>\n>> The only alternative that I can think of (and that I believe was already\n>> mentioned in this thread) is to set some GUC that forces the top-most\n>> query level to do this (all aggregates at that level). That's have the\n>> benefit of always affecting all aggregates.\n> \n> You make a very good point above. Would there ever be cases where a\n> targetlist would have multiple aggregates, and some can be pushed down,\n> and some have to return all matching rows so the sender can compute the\n> aggregate? If so, how would we handle that? How does parallelism\n> handle that now?\n\nI think the only sane way to handle this is to disable partial pushdown\nfor that query, fetch all rows and do the aggregate locally.\n\n>>> The design for patch 0002 is still under debate. This would expand on\n>>> the functionality added by adding support for PARTIAL_AGGREGATE for\n>>> aggregates with an internal stype. This is done by returning a byte\n>>> array containing the bytes that the serialfunc of the aggregate\n>>> returns.\n>>>\n>>> A competing proposal for 0002 is to instead change aggregates to not\n>>> use an internal stype anymore, and create dedicated types. 
The main\n>>> downside here is that infunc and outfunc would need to be added for\n>>> text serialization, in addition to the binary serialization. An open\n>>> question is: Can we change the requirements for CREATE TYPE, so that\n>>> types can be created without infunc and outfunc.\n>>>\n>>\n>> I think it's +0.5 for the new dedicated data types from me.\n>>\n>> I admit I'm too lazy to read the whole thread from scratch, but I\n>> believe we did discuss the possibility to reuse the serial/deserial\n>> functions we already have, but the reason against that was the missing\n>> cross-version stability. Parallel queries always run within a single\n>> instance, hence there are no concerns about other versions. But this is\n>> meant to support the remote node having a wildly different version.\n> \n> It is more than different versions. Different CPU architectures can\n> store data types differently in binary. It would be kind of interesting\n> to have a serial/deserial that was in text format. I guess that is\n> where my RECORD idea came from that I just emailed to the list.\n>\n\nI believe everything in this thread assumes \"binary\" in the same sense\nas the protocol, with the same rules for sending data in text/binary\nformats, etc. With binary for integers meaning \"network order\" and that\nsort of stuff. Which is mostly independent of PG version.\n\nWhen I say version dependency, I mean the structure of the aggregate\nstate itself. That is, in one version the state may store (A,B), while\nin the other version it may be (B,C). 
For example, we could store\ndifferent values in the array, or something like that.\n\n\n> What I would really like is something similar to pg_proc.proargtypes\n> (which is data type oidvector) for pg_aggregate where we can supply the\n> oids of the pg_aggregate.aggtranstype and construct a record on the fly\n> to return to the FDW caller, rather than having to create specialized\n> functions for every aggregate that needs to return several different\n> data types, e.g., AVG(interval), #4 in my previous email. I realize\n> this would require the creation of data types int128 and int128 array.\n> \n\nIsn't that really just a definition of a composite type?\n\nI still don't understand which part of the code constructs the record.\nIf the aggregate has aggstate internal, I don't see how could it be\nanything else than the aggregate itself.\n\n>> I guess we might introduce another pair of serial/deserial functions,\n>> with this guarantee. I know we (me and Jelte) discussed that in person\n>> at some point, and there were arguments for doing the data types. But I\n>> managed to forget the details :-(\n>>\n>>> WHAT IS NEEDED?\n>>>\n>>> The things needed for this patch are that docs need to be added, and\n>>> detailed codereview needs to be done.\n>>\n>> Yeah, I think the docs are must-have for a proper review.\n> \n> I think the docs are on hold until we decide on a transfer method.\n> \n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 22 Aug 2024 21:54:02 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "On Thu, Aug 22, 2024 at 09:54:02PM +0200, Tomas Vondra wrote:\n> On 8/22/24 20:56, Bruce Momjian wrote:\n> > You make a very good point above. Would there ever be cases where a\n> > targetlist would have multiple aggregates, and some can be pushed down,\n> > and some have to return all matching rows so the sender can compute the\n> > aggregate? If so, how would we handle that? 
How does parallelism\n> > handle that now?\n> \n> I think the only sane way to handle this is to disable partial pushdown\n> for that query, fetch all rows and do the aggregate locally.\n\nOkay.\n\n> > It is more than different versions. Different CPU architectures can\n> > store data types differently in binary. It would be kind of interesting\n> > to have a serial/deserial that was in text format. I guess that is\n> > where my RECORD idea came from that I just emailed to the list.\n> >\n> \n> I believe everything in this thread assumes \"binary\" in the same sense\n> as the protocol, with the same rules for sending data in text/binary\n> formats, etc. With binary for integers meaning \"network order\" and that\n> sort of stuff. Which is mostly independent of PG version.\n\nIf the binary is tranferable between servers of the same PG version but\nperhaps different CPU architectures, you are saying we only are worrying\nabout not using the exact same aggregate passing method we use for\nparallelism, except for PG version differences. Seems we should just\nassume the same version for this optimization and call it done. Except\nwe don't always check for the same PG version and that could lead to\ncrashes or security problems?\n\n> > What I would really like is something similar to pg_proc.proargtypes\n> > (which is data type oidvector) for pg_aggregate where we can supply the\n> > oids of the pg_aggregate.aggtranstype and construct a record on the fly\n> > to return to the FDW caller, rather than having to create specialized\n> > functions for every aggregate that needs to return several different\n> > data types, e.g., AVG(interval), #4 in my previous email. I realize\n> > this would require the creation of data types int128 and int128 array.\n> > \n> \n> Isn't that really just a definition of a composite type?\n\nProbably. 
I am trying to leverage what we already have.\n\n> I still don't understand which part of the code constructs the record.\n> If the aggregate has aggstate internal, I don't see how could it be\n> anything else than the aggregate itself.\n\nYes, but I was unclear if we needed a special function for every\nconstruction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:07:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "\nOn 8/22/24 22:07, Bruce Momjian wrote:\n> On Thu, Aug 22, 2024 at 09:54:02PM +0200, Tomas Vondra wrote:\n>> On 8/22/24 20:56, Bruce Momjian wrote:\n>>> You make a very good point above. Would there ever be cases where a\n>>> targetlist would have multiple aggregates, and some can be pushed down,\n>>> and some have to return all matching rows so the sender can compute the\n>>> aggregate? If so, how would we handle that? How does parallelism\n>>> handle that now?\n>>\n>> I think the only sane way to handle this is to disable partial pushdown\n>> for that query, fetch all rows and do the aggregate locally.\n> \n> Okay.\n> \n>>> It is more than different versions. Different CPU architectures can\n>>> store data types differently in binary. It would be kind of interesting\n>>> to have a serial/deserial that was in text format. I guess that is\n>>> where my RECORD idea came from that I just emailed to the list.\n>>>\n>>\n>> I believe everything in this thread assumes \"binary\" in the same sense\n>> as the protocol, with the same rules for sending data in text/binary\n>> formats, etc. With binary for integers meaning \"network order\" and that\n>> sort of stuff. 
Which is mostly independent of PG version.\n> \n> If the binary is tranferable between servers of the same PG version but\n> perhaps different CPU architectures, you are saying we only are worrying\n> about not using the exact same aggregate passing method we use for\n> parallelism, except for PG version differences. Seems we should just\n> assume the same version for this optimization and call it done.\n\nIMHO restricting this to the exact same version (it'd really have to be\nthe same minor version, not just major) would be pretty annoying and I'm\nafraid it would seriously restrict how often we'd be able to enable this\noptimization.\n\nConsider a big sharded cluster using postgres_fdw to run queries. AFAIK\nit's not uncommon to do minor version upgrades node by node, possibly\neven with failover to make it as smooth as possible. For the duration of\nthis upgrade the pushdown would be impossible, because at least one node\nhas a different minor version. That doesn't seem great.\n\n> Except\n> we don't always check for the same PG version and that could lead to\n> crashes or security problems?\n> \n\nYes, we'd need to check this comprehensively. I'm not sure if this might\ncause crashes - presumably the input functions should be careful about\nthis (because otherwise it'd be a problem even without this feature).\nAlso not sure about security issues.\n\n>>> What I would really like is something similar to pg_proc.proargtypes\n>>> (which is data type oidvector) for pg_aggregate where we can supply the\n>>> oids of the pg_aggregate.aggtranstype and construct a record on the fly\n>>> to return to the FDW caller, rather than having to create specialized\n>>> functions for every aggregate that needs to return several different\n>>> data types, e.g., AVG(interval), #4 in my previous email. I realize\n>>> this would require the creation of data types int128 and int128 array.\n>>>\n>>\n>> Isn't that really just a definition of a composite type?\n> \n> Probably. 
I am trying to leverage what we already have.\n> \n>> I still don't understand which part of the code constructs the record.\n>> If the aggregate has aggstate internal, I don't see how could it be\n>> anything else than the aggregate itself.\n> \n> Yes, but I was unclear if we needed a special function for every\n> construction.\n> \n\nOK\n\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 23 Aug 2024 02:07:18 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" } ]
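The division of labor debated in this thread — each remote node returning a transition state such as (sum, count) rather than a finished value, a combine step merging states, and a single final step on the coordinator — can be sketched outside the server. The following Python sketch is purely illustrative (the function names are invented here; PostgreSQL's actual avg(int8) state lives in an `internal` aggstate holding an int128 sum, which is why the thread discusses adding an int128 SQL type):

```python
from fractions import Fraction

def partial_avg(rows):
    """What a shard would return for avg(...) under PARTIAL_AGGREGATE:
    the transition state (sum, count), not the finished average."""
    return (sum(rows), len(rows))

def combine(a, b):
    """The combine step: merge two partial states."""
    return (a[0] + b[0], a[1] + b[1])

def final(state):
    """The final step, run exactly once on the coordinator."""
    total, count = state
    return Fraction(total, count) if count else None

# Three "shards", each aggregating its own rows locally.
shards = [[1, 2, 3], [4, 5], [6]]
states = [partial_avg(s) for s in shards]

merged = states[0]
for st in states[1:]:
    merged = combine(merged, st)

# Same answer as averaging all rows in one place.
assert final(merged) == Fraction(21, 6)
```

The (sum, count) state has to cross the wire intact, which is the whole serialization question above: either as a composite/record value with well-known member types, or as a version-stable binary blob.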
[ { "msg_contents": "In pgsql-docs, this patch has been recommended to you.\n\nLacking consensus and so not included is the the deletion of\ncomments pointing between the ref/MOVE and FETCH files. These\nwere of the form:\n\n <!-- Note the \"direction\" bit is also in ref/fetch.sgml -->\n\n\nThanks for the software,\nRob", "msg_date": "Fri, 15 Oct 2021 12:52:48 -0400", "msg_from": "rir <rirans@comcast.net>", "msg_from_op": true, "msg_subject": "Doc patch" }, { "msg_contents": "On Fri, 2021-10-15 at 12:52 -0400, rir wrote:\n> \n> In pgsql-docs, this patch has been recommended to you.\n> \n> Lacking consensus and so not included is the the deletion of\n> comments pointing between the ref/MOVE and FETCH files.  These\n> were of the form:\n> \n>     <!-- Note the \"direction\" bit is also in ref/fetch.sgml -->\n\nJust for context: the -docs thread that belongs to this is\nhttps://www.postgr.es/m/20211001163938.ifg4ayrsjwd7r6zr%40localhost\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:14:13 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Doc patch" }, { "msg_contents": "On Fri, Oct 15, 2021 at 12:52:48PM -0400, rir wrote:\n> \n> In pgsql-docs, this patch has been recommended to you.\n> \n> Lacking consensus and so not included is the the deletion of\n> comments pointing between the ref/MOVE and FETCH files. 
These\n> were of the form:\n> \n> <!-- Note the \"direction\" bit is also in ref/fetch.sgml -->\n\nSorry, I am just looking at this patch.\n\n> diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml\n> index 4cd4bcba80..1cadec3dc4 100644\n> --- a/doc/src/sgml/plpgsql.sgml\n> +++ b/doc/src/sgml/plpgsql.sgml\n> @@ -3342,7 +3342,7 @@ BEGIN\n> <title><literal>FETCH</literal></title>\n> \n> <synopsis>\n> -FETCH <optional> <replaceable>direction</replaceable> { FROM | IN } </optional> <replaceable>cursor</replaceable> INTO <replaceable>target</replaceable>;\n> +FETCH <optional> <replaceable>direction</replaceable> </optional> <optional> FROM | IN </optional> <replaceable>cursor</replaceable> INTO <replaceable>target</replaceable>;\n> </synopsis>\n\nIf you look at pl/plpgsql/src/pl_gram.y, you can see in\nread_fetch_direction() and complete_direction() that a lot of work is\ndone to make sure FROM/IN appears (see check_FROM). Therefore, I think\nthe above change is incorrect. Yes, this doesn't match the backend\nsyntax, probably because of syntax requirements of the PL/pgSQL\nlanguage.\n\n> <synopsis>\n> -MOVE <optional> <replaceable>direction</replaceable> { FROM | IN } </optional> <replaceable>cursor</replaceable>;\n> +MOVE <optional> <replaceable>direction</replaceable> </optional> <optional> FROM | IN </optional> <replaceable>cursor</replaceable>;\n> </synopsis>\n\nSame above.\n\n> diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml\n> index ec843f5684..5e19531742 100644\n> --- a/doc/src/sgml/ref/fetch.sgml\n> +++ b/doc/src/sgml/ref/fetch.sgml\n> @@ -27,9 +27,9 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <!-- Note the \"direction\" bit is also in ref/move.sgml -->\n> <synopsis>\n> -FETCH [ <replaceable class=\"parameter\">direction</replaceable> [ FROM | IN ] ] <replaceable class=\"parameter\">cursor_name</replaceable>\n> +FETCH [ <replaceable class=\"parameter\">direction</replaceable> ] [ FROM | IN ] <replaceable 
class=\"parameter\">cursor_name</replaceable>\n> \n> -<phrase>where <replaceable class=\"parameter\">direction</replaceable> can be empty or one of:</phrase>\n> +<phrase>where <replaceable class=\"parameter\">direction</replaceable> can one of:</phrase>\n> \n> NEXT\n> PRIOR\n> diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml\n> index 4c7d1dca39..c4d859d7b0 100644\n> --- a/doc/src/sgml/ref/move.sgml\n> +++ b/doc/src/sgml/ref/move.sgml\n> @@ -27,9 +27,9 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <!-- Note the \"direction\" bit is also in ref/fetch.sgml -->\n> <synopsis>\n> -MOVE [ <replaceable class=\"parameter\">direction</replaceable> [ FROM | IN ] ] <replaceable class=\"parameter\">cursor_name</replaceable>\n> +MOVE [ <replaceable class=\"parameter\">direction</replaceable> ] [ FROM | IN ] <replaceable class=\"parameter\">cursor_name</replaceable>\n> \n> -<phrase>where <replaceable class=\"parameter\">direction</replaceable> can be empty or one of:</phrase>\n> +<phrase>where <replaceable class=\"parameter\">direction</replaceable> can one of:</phrase>\n\nYou are right about the above to changes. The existing syntax shows\nFROM/IN is only possible if a direction is specified, while\nsrc/parser/gram.y says that FROM/IN with no direction is supported.\n\nI plan to apply this second part of the patch soon.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Fri, 19 Aug 2022 12:04:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Doc patch" }, { "msg_contents": "On Fri, Aug 19, 2022 at 12:04:54PM -0400, Bruce Momjian wrote:\n> You are right about the above to changes. 
The existing syntax shows\n> FROM/IN is only possible if a direction is specified, while\n> src/parser/gram.y says that FROM/IN with no direction is supported.\n> \n> I plan to apply this second part of the patch soon.\n\nPatch applied back to PG 10. Thanks for the research on this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 31 Aug 2022 19:29:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Doc patch" } ]
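The grammar point corrected in this thread can be checked mechanically. Below is a minimal Python recognizer for the fixed synopsis — deliberately simplified and hypothetical (it ignores row counts such as `FETCH 5` and the `ABSOLUTE`/`RELATIVE` forms) — showing that `FETCH FROM c` is accepted, which the old synopsis's bracketing (`[ direction [ FROM | IN ] ]`) wrongly excluded:

```python
DIRECTIONS = {"NEXT", "PRIOR", "FIRST", "LAST", "ALL", "FORWARD", "BACKWARD"}

def matches_synopsis(stmt):
    """Recognize FETCH [ direction ] [ FROM | IN ] cursor_name,
    i.e. the corrected synopsis in which FROM/IN no longer requires
    a direction to precede it."""
    words = stmt.rstrip(";").split()
    if not words or words[0].upper() != "FETCH":
        return False
    words = words[1:]
    if words and words[0].upper() in DIRECTIONS:
        words = words[1:]                      # optional direction
    if words and words[0].upper() in ("FROM", "IN"):
        words = words[1:]                      # optional FROM | IN
    return len(words) == 1                     # exactly the cursor name

assert matches_synopsis("FETCH NEXT FROM c")   # allowed by both synopses
assert matches_synopsis("FETCH FROM c")        # old synopsis said no; gram.y says yes
assert matches_synopsis("FETCH c")
assert not matches_synopsis("FETCH FROM")
```

The same argument applies verbatim to MOVE, whose synopsis was changed identically.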
[ { "msg_contents": "This removes the outer square brackets in the create_database.sgml\nfile's synopsis. In the command sysopses, this is the only case\nwhere an optional group contains only optional groups.\n\nRob", "msg_date": "Fri, 15 Oct 2021 13:13:14 -0400", "msg_from": "rir <rirans@comcast.net>", "msg_from_op": true, "msg_subject": "Trivial doc patch" }, { "msg_contents": "On Fri, Oct 15, 2021 at 01:13:14PM -0400, rir wrote:\n> This removes the outer square brackets in the create_database.sgml\n> file's synopsis. In the command sysopses, this is the only case\n> where an optional group contains only optional groups.\n>\n> CREATE DATABASE <replaceable class=\"parameter\">name</replaceable>\n> - [ [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> + [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> [...]\n> - [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ] ]\n> + [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ]\n> </synopsis>\n> </refsynopsisdiv>\n\nYou are not wrong, and the existing doc is not wrong either. I tend\nto prefer the existing style, though, as it insists on the options\nas being a single group, with or without the keyword WITH.\n--\nMichael", "msg_date": "Sat, 16 Oct 2021 11:14:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Trivial doc patch" }, { "msg_contents": "On Sat, Oct 16, 2021 at 11:14:46AM +0900, Michael Paquier wrote:\n> On Fri, Oct 15, 2021 at 01:13:14PM -0400, rir wrote:\n> > This removes the outer square brackets in the create_database.sgml\n> > file's synopsis. 
In the command sysopses, this is the only case\n> > where an optional group contains only optional groups.\n> >\n> > CREATE DATABASE <replaceable class=\"parameter\">name</replaceable>\n> > - [ [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> > + [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> > [...]\n> > - [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ] ]\n> > + [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ]\n> > </synopsis>\n> > </refsynopsisdiv>\n> \n> You are not wrong, and the existing doc is not wrong either. I tend\n> to prefer the existing style, though, as it insists on the options\n> as being a single group, with or without the keyword WITH.\n\nMichael, perhaps I mistake you; it seems you would like it better with\nthe extra '[' before OWNER. That would more accurately point up\n\n CREATE DATABASE name WITH;\n\nEither way, my argument would have the basis.\n\nIn what sense are the options a single group? That they all might\nfollow the 'WITH' is expressed without the duplicated brackets.\nThat the extra braces promote readability relies on an assumption or\nknowledge of the command.\n\nGiven that 'optional, optional' has no independent meaning from\n'optional'; it requires one to scan the entire set looking for\nthe non-optional embedded in the option. So no gain.\n\nRob\n\n\n\n\n\n", "msg_date": "Sat, 16 Oct 2021 13:11:49 -0400", "msg_from": "rir <rirans@comcast.net>", "msg_from_op": true, "msg_subject": "Re: Trivial doc patch" }, { "msg_contents": "On Sat, Oct 16, 2021 at 01:11:49PM -0400, rir wrote:\n> On Sat, Oct 16, 2021 at 11:14:46AM +0900, Michael Paquier wrote:\n> > On Fri, Oct 15, 2021 at 01:13:14PM -0400, rir wrote:\n> > > This removes the outer square brackets in the create_database.sgml\n> > > file's synopsis. 
In the command sysopses, this is the only case\n> > > where an optional group contains only optional groups.\n> > >\n> > > CREATE DATABASE <replaceable class=\"parameter\">name</replaceable>\n> > > - [ [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> > > + [ WITH ] [ OWNER [=] <replaceable class=\"parameter\">user_name</replaceable> ]\n> > > [...]\n> > > - [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ] ]\n> > > + [ IS_TEMPLATE [=] <replaceable class=\"parameter\">istemplate</replaceable> ]\n> > > </synopsis>\n> > > </refsynopsisdiv>\n> > \n> > You are not wrong, and the existing doc is not wrong either. I tend\n> > to prefer the existing style, though, as it insists on the options\n> > as being a single group, with or without the keyword WITH.\n> \n> Michael, perhaps I mistake you; it seems you would like it better with\n> the extra '[' before OWNER. That would more accurately point up\n> \n> CREATE DATABASE name WITH;\n> \n> Either way, my argument would have the basis.\n> \n> In what sense are the options a single group? That they all might\n> follow the 'WITH' is expressed without the duplicated brackets.\n> That the extra braces promote readability relies on an assumption or\n> knowledge of the command.\n> \n> Given that 'optional, optional' has no independent meaning from\n> 'optional'; it requires one to scan the entire set looking for\n> the non-optional embedded in the option. So no gain.\n\nI originally had the same reaction Michael Paquier did, that having one\nbig optional block is nice, but seeing that 'CREATE DATABASE name WITH'\nactually works, I can see the point in having our syntax be accurate,\nand removing the outer optional brackets now does seem like an\nimprovement to me.\n\nAttached is the proposed change.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson", "msg_date": "Fri, 19 Aug 2022 10:42:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Trivial doc patch" }, { "msg_contents": "On Fri, Aug 19, 2022 at 10:42:45AM -0400, Bruce Momjian wrote:\n> > Given that 'optional, optional' has no independent meaning from\n> > 'optional'; it requires one to scan the entire set looking for\n> > the non-optional embedded in the option. So no gain.\n> \n> I originally had the same reaction Michael Paquier did, that having one\n> big optional block is nice, but seeing that 'CREATE DATABASE name WITH'\n> actually works, I can see the point in having our syntax be accurate,\n> and removing the outer optional brackets now does seem like an\n> improvement to me.\n\nBackpatched to PG 10. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 31 Aug 2022 17:09:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Trivial doc patch" } ]
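The equivalence rir argued for can be demonstrated by brute force: because every element inside the outer brackets is itself optional, the two bracketings generate exactly the same set of statements, so the outer pair carries no information. A small Python sketch (using a trimmed, illustrative option list rather than the full CREATE DATABASE option set):

```python
from itertools import product

def expand(optional_parts):
    """All statements generated by a sequence of independently
    optional tokens following the fixed prefix."""
    results = set()
    for picks in product([False, True], repeat=len(optional_parts)):
        words = ["CREATE", "DATABASE", "name"]
        words += [p for p, keep in zip(optional_parts, picks) if keep]
        results.add(" ".join(words))
    return results

parts = ["WITH", "OWNER = user_name", "IS_TEMPLATE = istemplate"]

# Old synopsis: [ [WITH] [OWNER ...] [IS_TEMPLATE ...] ]
# Outer brackets: either the whole group is absent, or expand the inside.
old = {"CREATE DATABASE name"} | expand(parts)

# New synopsis: [WITH] [OWNER ...] [IS_TEMPLATE ...]
new = expand(parts)

# The outer brackets add nothing: omitting every inner option already
# yields the bare statement.
assert old == new
```

This is also why `CREATE DATABASE name WITH;` parsing successfully settled the argument: the new synopsis states that directly, where the old one only implied it through nesting.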
[ { "msg_contents": "While fooling with something else, I happened to notice $SUBJECT.\nThe reason turns out to be that it's checking the wrong element\nof the tblinfo[] array; see one-liner fix attached.\n\nI had a feeling of deja vu about this bug, and indeed a dig in\nthe git history shows that we fixed it in passing in 403a3d91c.\nBut that later got reverted, and we forgot to keep the bug fix.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 15 Oct 2021 21:05:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump fails to lock partitioned tables" } ]
[ { "msg_contents": "Hi\n\nI played with one dynamic access to record's fields. I was surprised so I\ncannot to access to record field from dynamic SQL. Is there some reason why\nit is not possible? Today all composite types in PL/pgSQL are records:\n\ndo $$\ndeclare r record; _relname varchar;\nbegin\n for r in select * from pg_class limit 3\n loop\n execute 'select ($1).relname' using r into _relname;\n raise notice '%', _relname;\n end loop;\nend;\n$$;\nERROR: could not identify column \"relname\" in record data type\nLINE 1: select ($1).relname\n ^\nQUERY: select ($1).relname\nCONTEXT: PL/pgSQL function inline_code_block line 6 at EXECUTE\n\nbut:\ndo $$\ndeclare r record; _relname varchar;\nbegin\n for r in select * from pg_class limit 3\n loop\n --execute 'select ($1).relname' using r into _relname;\n raise notice '%', r.relname;\n end loop;\nend;\n$$;\nNOTICE: pg_statistic\nNOTICE: pg_type\nNOTICE: pg_toast_1255\n\nand\n\npostgres=# do $$\ndeclare r pg_class; _relname varchar;\nbegin\n for r in select * from pg_class limit 3\n loop\n execute 'select ($1).relname' using r into _relname;\n raise notice '%', _relname;\n end loop;\nend;\n$$;\nNOTICE: pg_statistic\nNOTICE: pg_type\nNOTICE: pg_toast_1255\n\nit is working too.\n\nWhy there is difference between typed composite and record type although\ninternally should be same?\n\nRegards\n\nPavel", "msg_date": "Sat, 16 Oct 2021 19:39:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "access to record's field in dynamic SQL doesn't work" } ]
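A toy model — not PostgreSQL internals — of why the two cases above behave differently: the dynamic query is planned against the declared type of the `$1` parameter, and a variable of the generic `record` type declares no column list, while `pg_class` is a named composite type whose columns are known at plan time. All names below are invented for illustration:

```python
class CompositeType:
    def __init__(self, name, fields):
        self.name, self.fields = name, fields

# A named composite type carries its column list; the anonymous
# "record" pseudo-type does not.
PG_CLASS = CompositeType("pg_class", {"relname": "name", "relkind": "char"})
RECORD   = CompositeType("record", None)

def resolve_field(param_type, field):
    """Planner-style lookup for ($1).field: only a named composite
    type exposes its columns when the query is planned."""
    if param_type.fields is None:
        raise LookupError(
            f'could not identify column "{field}" in record data type')
    return param_type.fields[field]

assert resolve_field(PG_CLASS, "relname") == "name"
try:
    resolve_field(RECORD, "relname")
except LookupError as e:
    assert "could not identify" in str(e)
```

At `raise notice '%', r.relname` time, by contrast, PL/pgSQL itself holds the row and its descriptor, so the lookup succeeds even for a `record` variable — the information is only lost when the value crosses into a separately-planned dynamic statement as an anonymous record parameter.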
[ { "msg_contents": "Attached is a proposed patch that refactors getTables() along the\nsame lines as some previous work (eg 047329624, ed2c7f65b, daa9fe8a5)\nto avoid having multiple partially-redundant copies of the SQL query.\nThis gets rid of nearly 300 lines of duplicative spaghetti code,\ncreates a uniform style for dealing with cross-version changes\n(replacing at least three different methods currently being used\nfor that in this same stretch of code), and allows moving some\ncomments to be closer to the code they describe.\n\nThere's a lot I still want to change here, but this part seems like it\nshould be fairly uncontroversial. I've tested it against servers back\nto 8.0 (which is what led me to trip over the bug fixed in 40dfac4fc).\nAny objections to just pushing it?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 16 Oct 2021 17:17:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Refactoring pg_dump's getTables()" }, { "msg_contents": "On 2021-Oct-16, Tom Lane wrote:\n\n> Attached is a proposed patch that refactors getTables() along the\n> same lines as some previous work (eg 047329624, ed2c7f65b, daa9fe8a5)\n> to avoid having multiple partially-redundant copies of the SQL query.\n> This gets rid of nearly 300 lines of duplicative spaghetti code,\n> creates a uniform style for dealing with cross-version changes\n> (replacing at least three different methods currently being used\n> for that in this same stretch of code), and allows moving some\n> comments to be closer to the code they describe.\n\nYeah, this seems a lot better than the original coding. Maybe I would\ngroup together the changes that all require the same version test,\nrather than keeping the output columns in the same order. This reduces\nthe number of branches. Because the follow-on code uses column names\nrather than numbers, there is no reason to keep related columns\ntogether. 
But it's a clear improvement even without that.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Sun, 17 Oct 2021 17:05:25 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring pg_dump's getTables()" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Yeah, this seems a lot better than the original coding. Maybe I would\n> group together the changes that all require the same version test,\n> rather than keeping the output columns in the same order. This reduces\n> the number of branches. Because the follow-on code uses column names\n> rather than numbers, there is no reason to keep related columns\n> together. But it's a clear improvement even without that.\n\nYeah, I thought about rearranging the code order some more, but\ndesisted since it'd make the patch footprint a bit bigger (I'd\nwant to make all the related stanzas list things in a uniform\norder). 
But maybe we should just do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 18:38:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring pg_dump's getTables()" }, { "msg_contents": "> On 17 Oct 2021, at 22:05, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Oct-16, Tom Lane wrote:\n> \n>> Attached is a proposed patch that refactors getTables() along the\n>> same lines as some previous work (eg 047329624, ed2c7f65b, daa9fe8a5)\n>> to avoid having multiple partially-redundant copies of the SQL query.\n>> This gets rid of nearly 300 lines of duplicative spaghetti code,\n>> creates a uniform style for dealing with cross-version changes\n>> (replacing at least three different methods currently being used\n>> for that in this same stretch of code), and allows moving some\n>> comments to be closer to the code they describe.\n> \n> Yeah, this seems a lot better than the original coding.\n\n+1\n\n> Maybe I would group together the changes that all require the same version\n> test, rather than keeping the output columns in the same order.\n\n\nI agree with that, if we're doing all this we might as well go all the way for\nreadability.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:12:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Refactoring pg_dump's getTables()" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 17 Oct 2021, at 22:05, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Maybe I would group together the changes that all require the same version\n>> test, rather than keeping the output columns in the same order.\n\n> I agree with that, if we're doing all this we might as well go all the way for\n> readability.\n\nOK, I'll make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 09:23:35 -0400", "msg_from": "Tom Lane 
<tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring pg_dump's getTables()" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 17 Oct 2021, at 22:05, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Maybe I would group together the changes that all require the same version\n>> test, rather than keeping the output columns in the same order.\n\n> I agree with that, if we're doing all this we might as well go all the way for\n> readability.\n\nI had a go at doing that, but soon decided that it wasn't as great an\nidea as it first seemed. There are two problems:\n\n1. It's not clear what to do with fields where there are three or more\nvariants, such as reloptions and checkoptions.\n\n2. Any time we modify the behavior for a particular field, we'd\nhave to merge or un-merge it from the stanza for the\npreviously-most-recently-relevant version. This seems like it'd\nbe a maintenance headache and make patch footprints bigger than they\nneeded to be.\n\nSo I ended up not doing very much merging. I did make an effort\nto group the fields in perhaps a slightly more logical order\nthan before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 17:27:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring pg_dump's getTables()" } ]
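The refactoring style applied here — one query assembled from version-gated fragments, with older servers padding missing columns so follow-on code can always fetch every column by name — can be sketched as follows in Python (column names and version cutoffs are illustrative, not a transcription of the committed patch):

```python
def build_gettables_query(remote_version):
    """Sketch of the refactored style: a single query built from
    version-gated fragments instead of N nearly-identical copies."""
    cols = ["c.oid", "c.relname", "c.relkind"]
    if remote_version >= 80200:
        cols.append("c.reloptions")
    else:
        cols.append("NULL AS reloptions")
    if remote_version >= 100000:
        cols.append("c.relispartition")
    else:
        cols.append("false AS relispartition")
    return "SELECT " + ", ".join(cols) + " FROM pg_class c"

q_old = build_gettables_query(80000)
q_new = build_gettables_query(140000)
assert "NULL AS reloptions" in q_old
assert "c.relispartition" in q_new
# Every version yields the same column list, so the result-processing
# code can always look columns up by name, unconditionally.
assert q_old.count(",") == q_new.count(",")
```

This shape also explains Tom's point about merging stanzas: each fragment is gated by exactly one version test, so changing behavior for one field touches one branch without un-merging others.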
[ { "msg_contents": "From: Mikhail <mp39590@gmail.com>\n\nWe might be in situation when we have \"just enough\" semaphores in the\nsystem limit to start but previously crashed unexpectedly, in that case\nwe won't be able to start again - semget() will return ENOSPC, despite\nthe semaphores are ours, and we can recycle them, so check this\nsituation and try to remove the semaphore, if we are unable - give up\nand abort.\n---\n src/backend/port/sysv_sema.c | 31 +++++++++++++++++++++++++------\n 1 file changed, 25 insertions(+), 6 deletions(-)\n\ndiff --git a/src/backend/port/sysv_sema.c b/src/backend/port/sysv_sema.c\nindex 21c883ba9a..a889591dba 100644\n--- a/src/backend/port/sysv_sema.c\n+++ b/src/backend/port/sysv_sema.c\n@@ -88,10 +88,6 @@ static void ReleaseSemaphores(int status, Datum arg);\n *\n * Attempt to create a new semaphore set with the specified key.\n * Will fail (return -1) if such a set already exists.\n- *\n- * If we fail with a failure code other than collision-with-existing-set,\n- * print out an error and abort. Other types of errors suggest nonrecoverable\n- * problems.\n */\n static IpcSemaphoreId\n InternalIpcSemaphoreCreate(IpcSemaphoreKey semKey, int numSems)\n@@ -118,10 +114,33 @@ InternalIpcSemaphoreCreate(IpcSemaphoreKey semKey, int numSems)\n \t\t\treturn -1;\n \n \t\t/*\n-\t\t * Else complain and abort\n+\t\t * We might be in situation when we have \"just enough\" semaphores in the system\n+\t\t * limit to start but previously crashed unexpectedly, in that case we won't be\n+\t\t * able to start again - semget() will return ENOSPC, despite the semaphores\n+\t\t * are ours, and we can recycle them, so check this situation and try to remove\n+\t\t * the semaphore, if we are unable - give up and abort.\n+\t\t *\n+\t\t * We use same semkey for every start - it's gotten from inode number of the\n+\t\t * data folder. 
So on repeated starts we will use the same key.\n \t\t */\n+\t\tif (saved_errno == ENOSPC)\n+\t\t{\n+\t\t\tunion semun\t\tsemun;\n+\n+\t\t\tsemId = semget(semKey, 0, 0);\n+\n+\t\t\tsemun.val = 0;\t\t\t/* unused, but keep compiler quiet */\n+\t\t\tif (semctl(semId, 0, IPC_RMID, semun) == 0)\n+\t\t\t{\n+\t\t\t\t/* Recycled - get the same semaphore again */\n+\t\t\t\tsemId = semget(semKey, numSems, IPC_CREAT | IPC_EXCL | IPCProtection);\n+\n+\t\t\t\treturn semId;\n+\t\t\t}\n+\t\t}\n+\n \t\tereport(FATAL,\n-\t\t\t\t(errmsg(\"could not create semaphores: %m\"),\n+\t\t\t\t(errmsg(\"could not create semaphores: %s\", strerror(saved_errno)),\n \t\t\t\t errdetail(\"Failed system call was semget(%lu, %d, 0%o).\",\n \t\t\t\t\t\t (unsigned long) semKey, numSems,\n \t\t\t\t\t\t IPC_CREAT | IPC_EXCL | IPCProtection),\n-- \n2.33.0\n\n\n", "msg_date": "Sun, 17 Oct 2021 17:11:28 +0300", "msg_from": "mp39590@gmail.com", "msg_from_op": true, "msg_subject": "[PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "mp39590@gmail.com writes:\n> We might be in situation when we have \"just enough\" semaphores in the\n> system limit to start but previously crashed unexpectedly, in that case\n> we won't be able to start again - semget() will return ENOSPC, despite\n> the semaphores are ours, and we can recycle them, so check this\n> situation and try to remove the semaphore, if we are unable - give up\n> and abort.\n\nAFAICS, this patch could be disastrous. 
What if the semaphore in\nquestion belongs to some other postmaster?\n\nAlso, you haven't explained why the existing (and much safer) recycling\nlogic in IpcSemaphoreCreate doesn't solve your problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 10:29:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sun, Oct 17, 2021 at 10:29:24AM -0400, Tom Lane wrote:\n> mp39590@gmail.com writes:\n> > We might be in situation when we have \"just enough\" semaphores in the\n> > system limit to start but previously crashed unexpectedly, in that case\n> > we won't be able to start again - semget() will return ENOSPC, despite\n> > the semaphores are ours, and we can recycle them, so check this\n> > situation and try to remove the semaphore, if we are unable - give up\n> > and abort.\n> \n> AFAICS, this patch could be disastrous. What if the semaphore in\n> question belongs to some other postmaster?\n\nDoes running more than one postmaster on the same PGDATA is supported at\nall? 
Currently, the seed for the semaphore key is the inode number of PGDATA.\n\n> Also, you haven't explained why the existing (and much safer) recycling\n> logic in IpcSemaphoreCreate doesn't solve your problem.\n\nThe logic of creating semas:\n\n218 /* Loop till we find a free IPC key */\n219 for (nextSemaKey++;; nextSemaKey++)\n220 {\n221 pid_t creatorPID;\n222 \n223 /* Try to create new semaphore set */\n224 semId = InternalIpcSemaphoreCreate(nextSemaKey, numSems + 1);\n225 if (semId >= 0)\n226 break; /* successful create */\n\nInternalIpcSemaphoreCreate:\n\n101 semId = semget(semKey, numSems, IPC_CREAT | IPC_EXCL | IPCProtection);\n102 \n103 if (semId < 0)\n104 {\n105 int saved_errno = errno;\n106 \n[...]\n113 if (saved_errno == EEXIST || saved_errno == EACCES\n114 #ifdef EIDRM\n115 || saved_errno == EIDRM\n116 #endif\n117 )\n118 return -1;\n119 \n120 /*\n121 * Else complain and abort\n122 */\n123 ereport(FATAL, [...]\n\nsemget() returns ENOSPC, so InternalIpcSemaphoreCreate doesn't return -1;\nit reports FATAL instead, so the recycling logic in IpcSemaphoreCreate is\nnever reached.\n\n\n", "msg_date": "Sun, 17 Oct 2021 17:41:39 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "Mikhail <mp39590@gmail.com> writes:\n> On Sun, Oct 17, 2021 at 10:29:24AM -0400, Tom Lane wrote:\n>> AFAICS, this patch could be disastrous. What if the semaphore in\n>> question belongs to some other postmaster?\n\n> Does running more than one postmaster on the same PGDATA is supported at\n> all? Currently seed for the semaphore key is inode number of PGDATA.\n\nThat hardly guarantees no collisions. 
If it did, we'd never have bothered\nwith the PGSemaMagic business or the IpcSemaphoreGetLastPID check.\n\n>> Also, you haven't explained why the existing (and much safer) recycling\n>> logic in IpcSemaphoreCreate doesn't solve your problem.\n\n> semget() returns ENOSPC, so InternalIpcSemaphoreCreate doesn't return -1\n> so the whole logic of IpcSemaphoreCreate is not checked.\n\nHmm. Maybe you could improve this by removing the first\nInternalIpcSemaphoreCreate call in IpcSemaphoreCreate, and\nrearranging the logic so that the first step consists of seeing\nwhether a sema set is already there (and can safely be zapped),\nand only then proceed with creation.\n\nI am, however, concerned that this'll just trade off one hazard for\nanother. Instead of a risk of failing with ENOSPC (which the DBA\ncan fix), we'll have a risk of kneecapping some other process at\nrandom (which the DBA can do nothing to prevent).\n\nI'm also fairly unclear on when the logic you propose would trigger\nat all. If the sema set is already there, I'd expect EEXIST or\nequivalent, not ENOSPC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 10:52:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sun, Oct 17, 2021 at 10:52:38AM -0400, Tom Lane wrote:\n> Mikhail <mp39590@gmail.com> writes:\n> > On Sun, Oct 17, 2021 at 10:29:24AM -0400, Tom Lane wrote:\n> >> AFAICS, this patch could be disastrous. What if the semaphore in\n> >> question belongs to some other postmaster?\n> \n> > Does running more than one postmaster on the same PGDATA is supported at\n> > all? Currently seed for the semaphore key is inode number of PGDATA.\n> \n> That hardly guarantees no collisions. If it did, we'd never have bothered\n> with the PGSemaMagic business or the IpcSemaphoreGetLastPID check.\n\nGot it, makes sense. 
Also, I was shown examples where an inode\nnumber can be reused across mount points for different clusters.\n\n> >> Also, you haven't explained why the existing (and much safer) recycling\n> >> logic in IpcSemaphoreCreate doesn't solve your problem.\n> \n> > semget() returns ENOSPC, so InternalIpcSemaphoreCreate doesn't return -1\n> > so the whole logic of IpcSemaphoreCreate is not checked.\n> \n> Hmm. Maybe you could improve this by removing the first\n> InternalIpcSemaphoreCreate call in IpcSemaphoreCreate, and\n> rearranging the logic so that the first step consists of seeing\n> whether a sema set is already there (and can safely be zapped),\n> and only then proceed with creation.\n\nI think I can look into this next weekend. At first glance the\nsolution works for me.\n\n> I am, however, concerned that this'll just trade off one hazard for\n> another. Instead of a risk of failing with ENOSPC (which the DBA\n> can fix), we'll have a risk of kneecapping some other process at\n> random (which the DBA can do nothing to prevent).\n\nGood argument, but I'll try to make a second version of the patch with the\nproposed logic change to see what we will get. I think it's the \"right\"\nbehavior to recycle our own used semaphores, so the whole approach is\ncorrect.\n\n> I'm also fairly unclear on when the logic you propose would trigger\n> at all. If the sema set is already there, I'd expect EEXIST or\n> equivalent, not ENOSPC.\n\nThe logic works - the initial call to semget() in\nInternalIpcSemaphoreCreate returns -1 and errno is set to ENOSPC - I\ntested the patch on OpenBSD 7.0, and it successfully recycles the semas after\na previous \"pkill -6 postgres\". 
Verified it with 'ipcs -s'.\n\n\n", "msg_date": "Sun, 17 Oct 2021 18:49:21 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Mon, Oct 18, 2021 at 4:49 AM Mikhail <mp39590@gmail.com> wrote:\n> The logic works - the initial call to semget() in\n> InternalIpcSemaphoreCreate returns -1 and errno is set to ENOSPC - I\n> tested the patch on OpenBSD 7.0, it successfully recycles sem's after\n> previous \"pkill -6 postgres\". Verified it with 'ipcs -s'.\n\nSince you mentioned OpenBSD, what do you think of the idea of making\nnamed POSIX semas the default on that platform? You can't run out of\nthose practically speaking, but then you get lots of little memory\nmappings (from memory, at least it does close the fd for each one,\nunlike some other OSes where we wouldn't want to use this technique).\nTrivial patch:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGJVSjiDjbJpHwUrvA1TikFnJRfyJanrHofAWhnqcDJayQ%40mail.gmail.com\n\nNo strong opinion on the tradeoffs here, as I'm not an OpenBSD user,\nbut it's something I think about whenever testing portability stuff\nthere and having to adjust the relevant sysctls.\n\nNote: The best kind would be *unnamed* POSIX semas, where we get to\ncontrol their placement in existing memory; that's what we do on Linux\nand FreeBSD. They weren't supported on OpenBSD last time we checked:\nit rejects requests for shared ones. 
I wonder if someone could\nimplement them with just a few lines of user space code, using atomic\ncounters and futex() for waiting.\n\n\n", "msg_date": "Mon, 18 Oct 2021 10:07:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Mon, Oct 18, 2021 at 10:07:40AM +1300, Thomas Munro wrote:\n> On Mon, Oct 18, 2021 at 4:49 AM Mikhail <mp39590@gmail.com> wrote:\n> > The logic works - the initial call to semget() in\n> > InternalIpcSemaphoreCreate returns -1 and errno is set to ENOSPC - I\n> > tested the patch on OpenBSD 7.0, it successfully recycles sem's after\n> > previous \"pkill -6 postgres\". Verified it with 'ipcs -s'.\n> \n> Since you mentioned OpenBSD, what do you think of the idea of making\n> named POSIX semas the default on that platform? You can't run out of\n> those practically speaking, but then you get lots of little memory\n> mappings (from memory, at least it does close the fd for each one,\n> unlike some other OSes where we wouldn't want to use this technique).\n> Trivial patch:\n> \n> https://www.postgresql.org/message-id/CA%2BhUKGJVSjiDjbJpHwUrvA1TikFnJRfyJanrHofAWhnqcDJayQ%40mail.gmail.com\n> \n> No strong opinion on the tradeoffs here, as I'm not an OpenBSD user,\n> but it's something I think about whenever testing portability stuff\n> there and having to adjust the relevant sysctls.\n> \n> Note: The best kind would be *unnamed* POSIX semas, where we get to\n> control their placement in existing memory; that's what we do on Linux\n> and FreeBSD. They weren't supported on OpenBSD last time we checked:\n> it rejects requests for shared ones. 
I wonder if someone could\n> implement them with just a few lines of user space code, using atomic\n> counters and futex() for waiting.\n\nHello, sorry for not replying earlier - I was able to think about and\ntest the patch only on the weekend.\n\nI totally agree with your approach, in conversation with one of the\nOpenBSD developers he supported using of sem_open(), because most ports\nuse it and consistency is desirable across our ports tree. It looks like\nPostgreSQL was the only port to use semget().\n\nSwitching to sem_open() also looks much safer than patching sysv_sema.c\nfor corner ENOSPC case as Tom already mentioned.\n\nIn your patch I've removed testing for 5.x versions, because official\nreleases are supported only for one year, no need to worry about them.\nThe patch is tested with 'make installcheck', also I can confirm that\n'ipcs' shows that no semaphores are used, and server starts normally\nafter 'pkill -6 postgres' with the default semmns sysctl, what was the\noriginal motivation for the work.\n\n\ndiff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml\nindex d74d1ed7af..2dfea0662b 100644\n--- a/doc/src/sgml/runtime.sgml\n+++ b/doc/src/sgml/runtime.sgml\n@@ -998,21 +998,7 @@ psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: No such\n <para>\n The default shared memory settings are usually good enough, unless\n you have set <literal>shared_memory_type</literal> to <literal>sysv</literal>.\n- You will usually want to\n- increase <literal>kern.seminfo.semmni</literal>\n- and <literal>kern.seminfo.semmns</literal>,\n- as <systemitem class=\"osname\">OpenBSD</systemitem>'s default settings\n- for these are uncomfortably small.\n- </para>\n-\n- <para>\n- IPC parameters can be adjusted using <command>sysctl</command>,\n- for example:\n-<screen>\n-<prompt>#</prompt> <userinput>sysctl kern.seminfo.semmni=100</userinput>\n-</screen>\n- To make these settings persist over reboots, modify\n- 
<filename>/etc/sysctl.conf</filename>.\n+ System V semaphores are not used on this platform.\n </para>\n \n </listitem>\ndiff --git a/src/template/openbsd b/src/template/openbsd\nindex 365268c489..41221af382 100644\n--- a/src/template/openbsd\n+++ b/src/template/openbsd\n@@ -2,3 +2,7 @@\n \n # Extra CFLAGS for code that will go into a shared library\n CFLAGS_SL=\"-fPIC -DPIC\"\n+\n+# OpenBSD 5.5 (2014) gained named POSIX semaphores. They work out of the box\n+# without changing any sysctl settings, unlike System V semaphores.\n+USE_NAMED_POSIX_SEMAPHORES=1\n\n\n", "msg_date": "Fri, 22 Oct 2021 22:07:14 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "Mikhail <mp39590@gmail.com> writes:\n> In your patch I've removed testing for 5.x versions, because official\n> releases are supported only for one year, no need to worry about them.\n\nOfficial support or no, we have OpenBSD 5.9 in our buildfarm, so\nignoring the case isn't going to fly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 15:43:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sat, Oct 23, 2021 at 8:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Mikhail <mp39590@gmail.com> writes:\n> > In your patch I've removed testing for 5.x versions, because official\n> > releases are supported only for one year, no need to worry about them.\n>\n> Official support or no, we have OpenBSD 5.9 in our buildfarm, so\n> ignoring the case isn't going to fly.\n\nIt was a test for < 5.5, so that aspect's OK.\n\n\n", "msg_date": "Sat, 23 Oct 2021 08:53:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Fri, Oct 22, 2021 at 03:43:00PM -0400, 
Tom Lane wrote:\n> Mikhail <mp39590@gmail.com> writes:\n> > In your patch I've removed testing for 5.x versions, because official\n> > releases are supported only for one year, no need to worry about them.\n> \n> Official support or no, we have OpenBSD 5.9 in our buildfarm, so\n> ignoring the case isn't going to fly.\n\n5.9 has support for unnamed POSIX semas. Do you think new machine with\nOpenBSD <5.5 (when unnamed POSIX semas were introduced) can appear in\nbuildfarm or be used by real customer?\n\nI have no objections on testing \"openbsd5.[01234]\" and using SysV semas\nthere and can redo and test the patch, but isn't it over caution?\n\n\n", "msg_date": "Fri, 22 Oct 2021 22:55:08 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "Mikhail <mp39590@gmail.com> writes:\n> On Fri, Oct 22, 2021 at 03:43:00PM -0400, Tom Lane wrote:\n>> Official support or no, we have OpenBSD 5.9 in our buildfarm, so\n>> ignoring the case isn't going to fly.\n\n> 5.9 has support for unnamed POSIX semas. Do you think new machine with\n> OpenBSD <5.5 (when unnamed POSIX semas were introduced) can appear in\n> buildfarm or be used by real customer?\n\nNah, I misunderstood you to say that 5.9 would also be affected.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 16:04:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "Mikhail <mp39590@gmail.com> writes:\n> +# OpenBSD 5.5 (2014) gained named POSIX semaphores. They work out of the box\n> +# without changing any sysctl settings, unlike System V semaphores.\n> +USE_NAMED_POSIX_SEMAPHORES=1\n\nI tried this on an OpenBSD 6.0 image I had handy. 
The good news is\nthat it works, and I can successfully start the postmaster with a lot\nof semaphores (I tried with max_connections=10000) without any special\nsystem configuration. The bad news is it's *slow*. It takes the\npostmaster over a minute to start up at 10000 max_connections, and\nalso about 15 seconds to shut down. The regression tests also appear\nnoticeably slower, even at the default max_connections=100. I'm\nafraid that those \"lots of tiny mappings\" that Thomas noted have\na nasty impact on our process launch times, since the kernel\npresumably has to do work to clone them into the child process.\n\nNow this lashup that I'm testing on is by no means well suited for\nperformance tests, so maybe my numbers are bogus. Also, maybe it's\nbetter in more recent OpenBSD releases. But I think we need to take a\nharder look at performance before we decide that it's okay to change\nthe default semaphore type for this platform.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 21:00:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Fri, Oct 22, 2021 at 09:00:31PM -0400, Tom Lane wrote:\n> I tried this on an OpenBSD 6.0 image I had handy. The good news is\n> that it works, and I can successfully start the postmaster with a lot\n> of semaphores (I tried with max_connections=10000) without any special\n> system configuration. The bad news is it's *slow*. It takes the\n> postmaster over a minute to start up at 10000 max_connections, and\n> also about 15 seconds to shut down. The regression tests also appear\n> noticeably slower, even at the default max_connections=100. 
I'm\n> afraid that those \"lots of tiny mappings\" that Thomas noted have\n> a nasty impact on our process launch times, since the kernel\n> presumably has to do work to clone them into the child process.\n> \n> Now this lashup that I'm testing on is by no means well suited for\n> performance tests, so maybe my numbers are bogus. Also, maybe it's\n> better in more recent OpenBSD releases. But I think we need to take a\n> harder look at performance before we decide that it's okay to change\n> the default semaphore type for this platform.\n\nI got the following results for \"time make installcheck\" on a laptop with\nOpenBSD 7.0 (amd64):\n\nPOSIX (max_connections=100) (default):\t1m32.39s real 0m03.82s user 0m05.75s system\nPOSIX (max_connections=10000):\t\t2m13.11s real 0m03.56s user 0m07.06s system\n\nSysV (max_connections=100) (default):\t1m24.39s real 0m03.30s user 0m04.94s system\nSysV (max_connections=10000):\t\tfailed to start\nafter sysctl tuning:\nSysV (max_connections=10000):\t\t1m47.51s real 0m03.78s user 0m05.61s system\n\nI can confirm that start and stop of the server were slower in the POSIX\ncase, but not terribly different (seconds, not a minute, as in your\ncase).\n\nAs the OpenBSD developers said - those who use OpenBSD are never after\ntop performance, and the system has a lot of bottlenecks besides IPC.\n\nI see the following reasons to switch from SysV to POSIX:\n\n- consistency in the ports tree, all major ports use POSIX, which means\n better testing of the API\n- as already pointed out - OpenBSD isn't about performance, and the\n results for default max_connections are pretty close\n- crash recovery with the OS defaults is automatic and doesn't require DBA\n intervention and knowledge of ipcs and ipcrm\n- higher density is available without system tuning\n\nThe disadvantage is worse performance in extreme cases, but I'm\nnot sure OpenBSD is used for them in production.\n\n\n", "msg_date": "Sat, 23 Oct 2021 18:40:44 +0300", "msg_from": "Mikhail 
<mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sun, Oct 17, 2021 at 10:52:38AM -0400, Tom Lane wrote:\n> I am, however, concerned that this'll just trade off one hazard for\n> another. Instead of a risk of failing with ENOSPC (which the DBA\n> can fix), we'll have a risk of kneecapping some other process at\n> random (which the DBA can do nothing to prevent).\n\nI tend to agree, and along with semas patch would like to suggest error\nmessage improvement, it would have saved me about half a day of digging.\nTested on OpenBSD 7.0.\n\nI'm not a native speaker though, so grammar need to be checked.\n\ndiff --git a/src/backend/port/sysv_sema.c b/src/backend/port/sysv_sema.c\nindex 21c883ba9a..b84f70b5e2 100644\n--- a/src/backend/port/sysv_sema.c\n+++ b/src/backend/port/sysv_sema.c\n@@ -133,7 +133,10 @@ InternalIpcSemaphoreCreate(IpcSemaphoreKey semKey, int numSems)\n \t\t\t\t\t\t \"respective kernel parameter. 
Alternatively, reduce PostgreSQL's \"\n \t\t\t\t\t\t \"consumption of semaphores by reducing its max_connections parameter.\\n\"\n \t\t\t\t\t\t \"The PostgreSQL documentation contains more information about \"\n-\t\t\t\t\t\t \"configuring your system for PostgreSQL.\") : 0));\n+\t\t\t\t\t\t \"configuring your system for PostgreSQL.\\n\"\n+\t\t\t\t\t\t \"If server has crashed previously there may be resources left \"\n+\t\t\t\t\t\t \"after it - take a look at ipcs(1) and ipcrm(1) man pages to see \"\n+\t\t\t\t\t\t \"how to remove them.\") : 0));\n \t}\n \n \treturn semId;\n\n\n", "msg_date": "Sat, 23 Oct 2021 22:02:16 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Mon, Oct 18, 2021 at 10:07 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Note: The best kind would be *unnamed* POSIX semas, where we get to\n> control their placement in existing memory; that's what we do on Linux\n> and FreeBSD. They weren't supported on OpenBSD last time we checked:\n> it rejects requests for shared ones. I wonder if someone could\n> implement them with just a few lines of user space code, using atomic\n> counters and futex() for waiting.\n\nI meant that it'd be cool if OpenBSD implemented shared memory unnamed\nsemas that way (as other OSes do), but just for fun I tried\nimplementing that in PostgreSQL. I already had a patch to provide a\nwrapper API for futexes on a bunch of OSes including OpenBSD (because\nI've been looking into ways to rewrite lwlock.c to use futexes\ndirectly and skip all the per-backend semaphore stuff). 
That made it\neasy to write a quick-and-dirty clone of sem_{init,wait,post}() using\natomics and futexes.\n\nSadly, although the attached proof-of-concept patch allows a\nPREFERRED_SEMAPHORES=FUTEX build to pass tests on macOS (which also\nlacks native unnamed semas), FreeBSD and Linux (which don't need this\nbut are interesting to test), and it also works on OpenBSD with\nshared_memory_type=sysv, it doesn't work on OpenBSD with\nshared_memory_type=mmap (the default). I suspect OpenBSD's futex(2)\nhas a bug: inherited anonymous shared mmap memory seems to confuse it\nso that wakeups are lost. Arrrgh!", "msg_date": "Sun, 24 Oct 2021 22:50:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sun, Oct 17, 2021 at 10:29:24AM -0400, Tom Lane wrote:\n> Also, you haven't explained why the existing (and much safer) recycling\n> logic in IpcSemaphoreCreate doesn't solve your problem.\n\nI think I'll drop the diffs; you're right that the current proven logic need\nnot be changed for such a rare corner case, which the DBA can fix.\n\nI've added references to ipcs(1) and ipcrm(1) in OpenBSD's semget(2) man\npage, so a newcomer won't need to spend hours digging into SysV sema\nmanagement if they encounter the same situation as I did.\n\nThanks for the reviews.\n\n\n", "msg_date": "Mon, 25 Oct 2021 18:37:47 +0300", "msg_from": "Mikhail <mp39590@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sun, Oct 24, 2021 at 10:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Sadly, although the attached proof-of-concept patch allows a\n> PREFERRED_SEMAPHORES=FUTEX build to pass tests on macOS (which also\n> lacks native unnamed semas), FreeBSD and Linux (which don't need this\n> but are interesting to test), and it also works on OpenBSD with\n> shared_memory_type=sysv, it doesn't 
work on OpenBSD with\n> shared_memory_type=mmap (the default). I suspect OpenBSD's futex(2)\n> has a bug: inherited anonymous shared mmap memory seems to confuse it\n> so that wakeups are lost. Arrrgh!\n\nFWIW I'm trying to follow up with the OpenBSD list over here, because\nit'd be nice to get that working:\n\nhttps://marc.info/?l=openbsd-misc&m=163524454303022&w=2\n\n\n", "msg_date": "Fri, 29 Oct 2021 16:54:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Fri, Oct 29, 2021 at 4:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Oct 24, 2021 at 10:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Sadly, although the attached proof-of-concept patch allows a\n> > PREFERRED_SEMAPHORES=FUTEX build to pass tests on macOS (which also\n> > lacks native unnamed semas), FreeBSD and Linux (which don't need this\n> > but are interesting to test), and it also works on OpenBSD with\n> > shared_memory_type=sysv, it doesn't work on OpenBSD with\n> > shared_memory_type=mmap (the default). I suspect OpenBSD's futex(2)\n> > has a bug: inherited anonymous shared mmap memory seems to confuse it\n> > so that wakeups are lost. Arrrgh!\n>\n> FWIW I'm trying to follow up with the OpenBSD list over here, because\n> it'd be nice to get that working:\n>\n> https://marc.info/?l=openbsd-misc&m=163524454303022&w=2\n\nThis has been fixed. So now there are working basic futexes on Linux,\nmacOS, {Free,Open,Net,Dragonfly}BSD (though capabilities beyond basic\nwait/wake vary, as do APIs). So the question is whether it would be\nworth trying to do our own futex-based semaphores, as sketched above,\njust for the benefit of the OSes where the available built-in\nsemaphores are of the awkward SysV kind, namely macOS, NetBSD and\nOpenBSD. 
Perhaps we shouldn't waste our time with that, and should\ninstead plan to use futexes for a more ambitious lwlock rewrite.\n\n\n", "msg_date": "Sat, 20 Nov 2021 09:18:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> This has been fixed. So now there are working basic futexes on Linux,\n> macOS, {Free,Open,Net,Dragonfly}BSD (though capabilities beyond basic\n> wait/wake vary, as do APIs). So the question is whether it would be\n> worth trying to do our own futex-based semaphores, as sketched above,\n> just for the benefit of the OSes where the available built-in\n> semaphores are of the awkward SysV kind, namely macOS, NetBSD and\n> OpenBSD. Perhaps we shouldn't waste our time with that, and should\n> instead plan to use futexes for a more ambitious lwlock rewrite.\n\nI kind of like the latter idea, but I wonder how we make it coexist\nwith (admittedly legacy) code for OSes that don't have usable futexes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:34:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" }, { "msg_contents": "On Sat, Nov 20, 2021 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > This has been fixed. So now there are working basic futexes on Linux,\n> > macOS, {Free,Open,Net,Dragonfly}BSD (though capabilities beyond basic\n> > wait/wake vary, as do APIs). So the question is whether it would be\n> > worth trying to do our own futex-based semaphores, as sketched above,\n> > just for the benefit of the OSes where the available built-in\n> > semaphores are of the awkward SysV kind, namely macOS, NetBSD and\n> > OpenBSD. 
Perhaps we shouldn't waste our time with that, and should\n> > instead plan to use futexes for a more ambitious lwlock rewrite.\n>\n> I kind of like the latter idea, but I wonder how we make it coexist\n> with (admittedly legacy) code for OSes that don't have usable futexes.\n\nOne very rough idea, not yet tried, is that they could keep using\nsemaphores, but use them to implement fake futexes. We'd put them in\nwait lists that live in a shared memory hash table (the futex address\nis the key, with some extra work needed for DSM-resident futexes),\nwith per-bucket spinlocks so that you can perform the value check\natomically with the decision to start waiting.\n\n\n", "msg_date": "Sat, 20 Nov 2021 10:21:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ENOSPC not fatal in semaphore creation" } ]
[ { "msg_contents": "While poking at pg_dump for some work I'll show later, I grew quite\nunhappy with the extent to which people have ignored this advice\ngiven near the head of getTables():\n\n * Note: in this phase we should collect only a minimal amount of\n * information about each table, basically just enough to decide if it is\n * interesting. We must fetch all tables in this phase because otherwise\n * we cannot correctly identify inherited columns, owned sequences, etc.\n\nFar from collecting \"a minimal amount of information\", we have\nrepeatedly stuffed extra joins and expensive sub-selects into this\nquery. That's fairly inefficient if we aren't interested in dumping\nevery table, but it's much worse than that: we are collecting all this\ninfo *before we have lock* on the tables. That means we are at\nserious risk of data skew in any place where we consult server-side\nfunctions, eg pg_get_partkeydef(). For example:\n\nregression=# create table foo(f1 int) partition by range(f1);\nCREATE TABLE\nregression=# begin; drop table foo;\nBEGIN\nDROP TABLE\n\nNow start a pg_dump in another shell session, wait a second or two,\nand\n\nregression=*# commit;\nCOMMIT\n\nand the pg_dump blows up:\n\n$ pg_dump -s regression\npg_dump: error: query failed: ERROR: could not open relation with OID 37796\n\n(Note: it's not so easy to provoke this failure manually before\ne2ff7d9a8 guaranteed that pg_dump would wait around for you,\nbut it's certainly possible.)\n\nTo add insult to injury, all that work being done in this query\nmakes the time window for trouble wider.\n\nTesting on the regression database, which isn't all that big,\nI observe getTable's query taking 140 ms, substantially longer\nthan the next slowest thing done by \"pg_dump -s regression\".\nIt seems that the largest chunk of this can be blamed on the\nsub-selects added by the pg_init_privs patch: EXPLAIN ANALYZE\nputs their total runtime at ~100ms.\n\nI believe that the pg_init_privs sub-selects can 
probably be\nnuked, or at least their cost can be paid later. The other work\nI mentioned has proven that we do not actually need to know\nthe ACL situation to determine whether a table is \"interesting\",\nso we don't have to do that work before acquiring lock.\n\nHowever, that still leaves us doing several inessential joins,\nnot to mention those unsafe partition-examination functions,\nbefore acquiring lock.\n\nI am thinking that the best fix is to make getTables() perform\ntwo queries (or, possibly, split it into two functions). The\nfirst pass would acquire only the absolutely minimal amount of\ndata needed to decide whether a table is \"interesting\", and then\nlock such tables. Then we'd go back to fill in the remaining\ndata. While it's fairly annoying to read pg_class twice, it\nlooks like the extra query might only take a handful of msec.\n\nAlso, if we don't do it like that, it seems that we'd have to\nadd entirely new queries to call pg_get_partkeydef() and\npg_get_expr(relpartbound) in. So I'm not really seeing a\nbetter-performing alternative.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Oct 2021 14:45:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump does way too much before acquiring table locks" }, { "msg_contents": "On Wed, Oct 20, 2021 at 05:14:45PM -0400, Tom Lane wrote:\n> Lastly, patch 0003 addresses the concern I raised at [3] that it's\n> unsafe to call pg_get_partkeydef() and pg_get_expr(relpartbound)\n> in getTables(). Looking closer I realized that we can't cast\n> pg_class.reloftype to regtype at that point either, since regtypeout\n> is going to notice if the type has been concurrently dropped.\n> \n> In [3] I'd imagined that we could just delay those calls to a second\n> query in getTables(), but that doesn't work at all: if we apply\n> these functions to every row of pg_class, we still risk failure\n> against any relation that we didn't lock. 
So there seems little\n> alternative but to push these functions out to secondary queries\n> executed later.\n> \n> Arguably, 0003 is a bug fix that we should consider back-patching.\n> However, I've not heard field reports of the problems it fixes,\n> so maybe there's no need to bother.\n\n> [3] https://www.postgresql.org/message-id/1462940.1634496313%40sss.pgh.pa.us\n\nFYI, I see this issue happen in production environment.\n\nGrepping logfiles on ~40 servers, I see it happened at least 3 times since\nOctober 1.\n\nOur backup script is probably particularly sensitive to this: it separately\ndumps each \"old\" partition once and for all. We're more likely to hit this\nsince pg_dump is called in a loop.\n\nI never reported it, since I think it's a documented issue, and it's no great\nproblem, so long as it runs the next day. But it'd be a pain to find that the\nbackup was incomplete when we needed it. Also, I use the backups to migrate to\nnew servers, and it would be a pain to start the job at a calculated time\nexpecting it to finish at the beginning of a coordinated maintenance window,\nbut then discover that it had failed and needs to be rerun with fingers\ncrossed.\n\nOn Sun, Oct 17, 2021 at 02:45:13PM -0400, Tom Lane wrote:\n> While poking at pg_dump for some work I'll show later, I grew quite\n> unhappy with the extent to which people have ignored this advice\n> given near the head of getTables():\n> \n> * Note: in this phase we should collect only a minimal amount of\n> * information about each table, basically just enough to decide if it is\n> * interesting. We must fetch all tables in this phase because otherwise\n> * we cannot correctly identify inherited columns, owned sequences, etc.\n> \n> Far from collecting \"a minimal amount of information\", we have\n> repeatedly stuffed extra joins and expensive sub-selects into this\n> query. 
That's fairly inefficient if we aren't interested in dumping\n> every table, but it's much worse than that: we are collecting all this\n> info *before we have lock* on the tables. That means we are at\n> serious risk of data skew in any place where we consult server-side\n> functions, eg pg_get_partkeydef(). For example:\n> \n> regression=# create table foo(f1 int) partition by range(f1);\n> CREATE TABLE\n> regression=# begin; drop table foo;\n> BEGIN\n> DROP TABLE\n> \n> Now start a pg_dump in another shell session, wait a second or two,\n> and\n> \n> regression=*# commit;\n> COMMIT\n> \n> and the pg_dump blows up:\n> \n> $ pg_dump -s regression\n> pg_dump: error: query failed: ERROR: could not open relation with OID 37796\n...\n\n\n", "msg_date": "Thu, 21 Oct 2021 09:28:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Oct 20, 2021 at 05:14:45PM -0400, Tom Lane wrote:\n>> Arguably, 0003 is a bug fix that we should consider back-patching.\n>> However, I've not heard field reports of the problems it fixes,\n>> so maybe there's no need to bother.\n\n> FYI, I see this issue happen in production environment.\n> I never reported it, since I think it's a documented issue, and it's no great\n> problem, so long as it runs the next day. But it'd be a pain to find that the\n> backup was incomplete when we needed it. Also, I use the backups to migrate to\n> new servers, and it would be a pain to start the job at a calculated time\n> expecting it to finish at the beginning of a coordinated maintenance window,\n> but then discover that it had failed and needs to be rerun with fingers\n> crossed.\n\nYeah, if you're dropping tables all the time, pg_dump is going to have\na problem with that. 
The fix I'm suggesting here would only ensure\nthat you get a \"clean\" failure at the LOCK TABLE command --- but from\nan operational standpoint, that's little improvement.\n\nThe natural response is to consider retrying the whole dump after a lock\nfailure. I'm not sure if it'd be practical to do that within pg_dump\nitself, as opposed to putting a loop in whatever script calls it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 10:53:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Assorted improvements in pg_dump" } ]
[ { "msg_contents": "Hi hackers,\n\nI'm writing an extension that employs `object_access_hook`. I want to\nmonitor the table creation event and record the mapping between `reloid`\nand `relfilenode` during a transaction. Here's my code snippet,\n\n```\nstatic void\nmy_object_access_hook(ObjectAccessType access,\n Oid classId,\n Oid objectId,\n int subId, void *arg)\n{\n do_some_checks(access, classId, ...);\n // open the relation using relation_open\n rel = relation_open(objectId, AccessShareLock);\n\n // record the reloid and relfilenode.\n record(objectId, rel->rd_node);\n relation_close(rel, AccessShareLock);\n}\n```\n\nHowever, when I replace the relation_open with try_relation_open, the\nrelation cannot be opened. I've checked the source code, it looks that\ntry_relation_open has an additional checker which causes the relation_open\nand try_relation_open behavior different:\n\n```\nRelation\ntry_relation_open(Oid relationId, LOCKMODE lockmode)\n{\n ...\n /*\n * Now that we have the lock, probe to see if the relation really exists\n * or not.\n */\n if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relationId)))\n {\n /* Release useless lock */\n if (lockmode != NoLock)\n UnlockRelationOid(relationId, lockmode);\n\n return NULL;\n }\n ...\n}\n```\n\nSee:\nhttps://github.com/postgres/postgres/blob/c30f54ad732ca5c8762bb68bbe0f51de9137dd72/src/backend/access/common/relation.c#L47\n\nMy question is, is it a deliberate design that makes try_relation_open and\nrelation_open different? Shall we mention it in the comment of\ntry_relation_open OR adding the checker to relation_open?\n\nBest Regards,\nXing\n\nHi hackers,I'm writing an extension that employs `object_access_hook`. I want to monitor the table creation event and record the mapping between `reloid` and `relfilenode` during a transaction. 
Here's my code snippet,```static voidmy_object_access_hook(ObjectAccessType access,                      Oid classId,                      Oid objectId,                      int subId, void *arg){    do_some_checks(access, classId, ...);    // open the relation using relation_open    rel = relation_open(objectId, AccessShareLock);    // record the reloid and relfilenode.    record(objectId, rel->rd_node);    relation_close(rel, AccessShareLock);}```However, when I replace the relation_open with try_relation_open, the relation cannot be opened. I've checked the source code, it looks that try_relation_open has an additional checker which causes the relation_open and try_relation_open behavior different:```Relationtry_relation_open(Oid relationId, LOCKMODE lockmode){    ...    /*     * Now that we have the lock, probe to see if the relation really exists     * or not.     */    if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relationId)))    {        /* Release useless lock */        if (lockmode != NoLock)           UnlockRelationOid(relationId, lockmode);        return NULL;    }    ...}```See: https://github.com/postgres/postgres/blob/c30f54ad732ca5c8762bb68bbe0f51de9137dd72/src/backend/access/common/relation.c#L47My question is, is it a deliberate design that makes try_relation_open and relation_open different? Shall we mention it in the comment of try_relation_open OR adding the checker to relation_open?Best Regards,Xing", "msg_date": "Mon, 18 Oct 2021 13:56:07 +0800", "msg_from": "Xing GUO <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "try_relation_open and relation_open behave different." }, { "msg_contents": "On Mon, Oct 18, 2021 at 01:56:07PM +0800, Xing GUO wrote:\n> My question is, is it a deliberate design that makes try_relation_open and\n> relation_open different? 
Shall we mention it in the comment of\n> try_relation_open OR adding the checker to relation_open?\n\nI am not sure what you mean here, both functions include comments\nto explain their differences, so..\n--\nMichael", "msg_date": "Mon, 18 Oct 2021 15:45:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: try_relation_open and relation_open behave different." }, { "msg_contents": "On Mon, Oct 18, 2021 at 2:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Oct 18, 2021 at 01:56:07PM +0800, Xing GUO wrote:\n> > My question is, is it a deliberate design that makes try_relation_open\n> and\n> > relation_open different? 
Shall we mention it in the comment of\n> try_relation_open OR adding the checker to relation_open?\n\nI am not sure what you mean here, both functions are include comments\nto explain their differences, so..The comments in try_relation_open says:```/* ---------------- *\t\ttry_relation_open - open any relation by relation OID * *\t\tSame as relation_open, except return NULL instead of failing *\t\tif the relation does not exist. * ---------------- */```However, I can open an \"uncommitted\" relation using relation_open() and cannot open it using try_relation_open().Since Postgres doesn't write the \"uncommitted\" relation descriptor to SysCache and try_relation_open() checks if therelation exists in SysCache while relation_open() doesn't check it. \n--\nMichael", "msg_date": "Mon, 18 Oct 2021 15:38:13 +0800", "msg_from": "Xing GUO <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "Re: try_relation_open and relation_open behave different." }, { "msg_contents": "On 2021-Oct-18, Xing GUO wrote:\n\n> However, I can open an \"uncommitted\" relation using relation_open() and\n> cannot open it using try_relation_open().\n> Since Postgres doesn't write the \"uncommitted\" relation descriptor to\n> SysCache and try_relation_open() checks if the\n> relation exists in SysCache while relation_open() doesn't check it.\n\nHmm, is it sufficient to do CommandCounterIncrement() after your\n\"uncommitted\" relation change and the place where you do\ntry_relation_open()?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n", "msg_date": "Mon, 18 Oct 2021 10:44:24 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: try_relation_open and relation_open behave different." } ]
[ { "msg_contents": "Hi, all\nI noticed that the \"else\" is missing during the error report after FileWrite() of mdwrite()/mdextend(), short write error is supposed to be reported when written bytes is not less than 0.\nI modified it in the attached patch:\ndiff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\nindex b4bca7eed6..dd60479b65 100644\n--- a/src/backend/storage/smgr/md.c\n+++ b/src/backend/storage/smgr/md.c\n@@ -450,13 +450,14 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n errmsg(\"could not extend file \\\"%s\\\": %m\",\n FilePathName(v->mdfd_vfd)),\n errhint(\"Check free disk space.\")));\n- /* short write: complain appropriately */\n- ereport(ERROR,\n- (errcode(ERRCODE_DISK_FULL),\n- errmsg(\"could not extend file \\\"%s\\\": wrote only %d of %d bytes at block %u\",\n- FilePathName(v->mdfd_vfd),\n- nbytes, BLCKSZ, blocknum),\n- errhint(\"Check free disk space.\")));\n+ else\n+ /* short write: complain appropriately */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DISK_FULL),\n+ errmsg(\"could not extend file \\\"%s\\\": wrote only %d of %d bytes at block %u\",\n+ FilePathName(v->mdfd_vfd),\n+ nbytes, BLCKSZ, blocknum),\n+ errhint(\"Check free disk space.\")));\n }\n\nDoes this match your previous expectations? 
Hope to get your reply.\nThanks & Best Regard", "msg_date": "Mon, 18 Oct 2021 16:14:39 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?bW9kaWZ5IGVycm9yIHJlcG9ydCBpbiBtZHdyaXRlL21kZXh0ZW5k?=" }, { "msg_contents": "On Mon, Oct 18, 2021 at 1:45 PM 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com> wrote:\n>\n>\n> Hi, all\n> I noticed that the \"else\" is missing during the error report after FileWrite() of mdwrite()/mdextend(), short write error is supposed to be reported when written bytes is not less than 0.\n> I modified it in the attached patch:\n> diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> index b4bca7eed6..dd60479b65 100644\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -450,13 +450,14 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> errmsg(\"could not extend file \\\"%s\\\": %m\",\n> FilePathName(v->mdfd_vfd)),\n> errhint(\"Check free disk space.\")));\n> - /* short write: complain appropriately */\n> - ereport(ERROR,\n> - (errcode(ERRCODE_DISK_FULL),\n> - errmsg(\"could not extend file \\\"%s\\\": wrote only %d of %d bytes at block %u\",\n> - FilePathName(v->mdfd_vfd),\n> - nbytes, BLCKSZ, blocknum),\n> - errhint(\"Check free disk space.\")));\n> + else\n> + /* short write: complain appropriately */\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DISK_FULL),\n> + errmsg(\"could not extend file \\\"%s\\\": wrote only %d of %d bytes at block %u\",\n> + FilePathName(v->mdfd_vfd),\n> + nbytes, BLCKSZ, blocknum),\n> + errhint(\"Check free disk space.\")));\n> }\n>\n> Does this match your previous expectations? Hope to get your reply.\n\nThe control from the below ereport(ERROR, doesn't reach the short\nwrite error part. 
IMO, the existing way does no harm, it is a mere\nprogramming choice.\n\n if (nbytes < 0)\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not write block %u in file \\\"%s\\\": %m\",\n blocknum, FilePathName(v->mdfd_vfd))));\n /* short write: complain appropriately */\n ereport(ERROR,\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 18 Oct 2021 13:59:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: modify error report in mdwrite/mdextend" }, { "msg_contents": "On 2021-Oct-18, Bharath Rupireddy wrote:\n\n> On Mon, Oct 18, 2021 at 1:45 PM 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com> wrote:\n\n> > Does this match your previous expectations? Hope to get your reply.\n> \n> The control from the below ereport(ERROR, doesn't reach the short\n> write error part. IMO, the existing way does no harm, it is a mere\n> programming choice.\n\nYeah, this style is used extensively in many places of the backend code.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Mon, 18 Oct 2021 11:45:28 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: modify error report in mdwrite/mdextend" } ]
[ { "msg_contents": "Hi!\n\nI am still hoping to finish my work on reloptions I've started some years ago.\n\nI've renewed my patch and I think I need help from core team to finish it.\n\nGeneral idea of the patch: Now we have three ways to define options for \ndifferent objects, with more or less different code used for it. It wold be \nbetter to have unified context independent API for processing options, instead.\n\nLong story short:\n\nThere is Option Specification object, that has all information about single \noption, how it should be parsed and validated.\n\nThere is Option Specification Set object, an array of Option Specs, that defines \nall options available for certain object (am of some index for example).\n\nWhen some object (relation, opclass, etc) wants to have an options, it \ncreates an Option Spec Set for there options, and uses it for converting \noptions between different representations (to get is from SQL, to store it in \npg_class, to pass it to the core code as bytea etc)\n\nFor indexes Option Spec Set is available via Access Method API. \n\nFor non-index relations all Option Spec Sets are left in reloption.c file, and \nshould be moved to heap AM later. (They are not in AM now so will not change \nit now)\n\nMain problem:\n\nThere are LockModes. LockModes for options is also stored in Option Spec Set. \nFor indexes Option Spec Sec is accessable via AM. So to get LockMode for \noption of an index you need to have access for it's relation object (so you \ncan call proper AM method to fetch spec set). 
So you need \"Relation rel\" in \nAlterTableGetRelOptionsLockLevel where Lock Level is determinated (src/\nbackend/access/common/reloptions.c)\nAlterTableGetRelOptionsLockLevel is called from AlterTableGetLockLevel (src/\nbackend/commands/tablecmds.c) so we need \"Relation rel\" there too.\nAlterTableGetLockLevel is called from AlterTableInternal (/src/backend/\ncommands/tablecmds.c) There we have \"Oid relid\" so we can try to open relation \nlike this\n\n Relation rel = relation_open(relid, NoLock);\n cmd_lockmode = AlterTableGetRelOptionsLockLevel(rel,\n castNode(List, cmd->def));\n relation_close(rel,NoLock);\n break;\n\nbut this will trigger the assertion \n\n Assert(lockmode != NoLock ||\n IsBootstrapProcessingMode() ||\n CheckRelationLockedByMe(r, c, true));\n\nin relation_open (b/src/backend/access/common/relation.c)\n\nFor now I've commented this assertion out. I've tried to open relation with \nAccessShareLock but this caused one test to fail, and I am not sure this \nsolution is better.\n\nWhat I have done here I consider a hack, so I need a help of core-team here to \ndo it in right way.\n\nGeneral problems:\n\nI guess I need a coauthor, or supervisor from core team, to finish this patch. \nThe amount of code is big, and I guess there are parts that can be made more \nin postgres way, then I did them. And I would need an advice there, and I \nguess it would be better to do if before sending it to commitfest.\n\n\nCurrent patch status:\n\n1. It is Beta. Some minor issues and FIXMEs are not solved. Some code comments \nneeds revising, but in general it do what it is intended to do.\n\n2. 
This patch does not intend to change postgres behavior at all, all should \nwork as before, all changes are internal only.\n\nThe only exception is the error message for a non-existing option name in toast \nnamespace \n\n CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option = 42);\n-ERROR: unrecognized parameter \"not_existing_option\"\n+ERROR: unrecognized parameter \"toast.not_existing_option\"\n\nNew message is better I guess, though I can change it back if needed.\n\n3. I am doing my development in this branch https://gitlab.com/dhyannataraj/\npostgres/-/tree/new_options_take_two I am making changes every day, so the latest \nversion will be available there\n\nWould be glad to hear from the core team before I finish with this patch and make it \nready for commit-fest.", "msg_date": "Mon, 18 Oct 2021 16:24:23 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Suggestion: Unified options API. Need help from core team" }, { "msg_contents": "\nUh, the core team does not get involved in development issues, unless\nthere is an issue that clearly cannot be resolved by discussion on the\nhackers list.\n\n---------------------------------------------------------------------------\n\nOn Mon, Oct 18, 2021 at 04:24:23PM +0300, Nikolay Shaplov wrote:\n> Hi!\n> \n> I am still hoping to finish my work on reloptions I've started some years ago.\n> \n> I've renewed my patch and I think I need help from core team to finish it.\n> \n> General idea of the patch: Now we have three ways to define options for \n> different objects, with more or less different code used for it. 
It wold be \n> better to have unified context independent API for processing options, instead.\n> \n> Long story short:\n> \n> There is Option Specification object, that has all information about single \n> option, how it should be parsed and validated.\n> \n> There is Option Specification Set object, an array of Option Specs, that defines \n> all options available for certain object (am of some index for example).\n> \n> When some object (relation, opclass, etc) wants to have an options, it \n> creates an Option Spec Set for there options, and uses it for converting \n> options between different representations (to get is from SQL, to store it in \n> pg_class, to pass it to the core code as bytea etc)\n> \n> For indexes Option Spec Set is available via Access Method API. \n> \n> For non-index relations all Option Spec Sets are left in reloption.c file, and \n> should be moved to heap AM later. (They are not in AM now so will not change \n> it now)\n> \n> Main problem:\n> \n> There are LockModes. LockModes for options is also stored in Option Spec Set. \n> For indexes Option Spec Sec is accessable via AM. So to get LockMode for \n> option of an index you need to have access for it's relation object (so you \n> can call proper AM method to fetch spec set). 
So you need \"Relation rel\" in \n> AlterTableGetRelOptionsLockLevel where Lock Level is determinated (src/\n> backend/access/common/reloptions.c)\n> AlterTableGetRelOptionsLockLevel is called from AlterTableGetLockLevel (src/\n> backend/commands/tablecmds.c) so we need \"Relation rel\" there too.\n> AlterTableGetLockLevel is called from AlterTableInternal (/src/backend/\n> commands/tablecmds.c) There we have \"Oid relid\" so we can try to open relation \n> like this\n> \n> Relation rel = relation_open(relid, NoLock);\n> cmd_lockmode = AlterTableGetRelOptionsLockLevel(rel,\n> castNode(List, cmd->def));\n> relation_close(rel,NoLock);\n> break;\n> \n> but this will trigger the assertion \n> \n> Assert(lockmode != NoLock ||\n> IsBootstrapProcessingMode() ||\n> CheckRelationLockedByMe(r, c, true));\n> \n> in relation_open (b/src/backend/access/common/relation.c)\n> \n> For now I've commented this assertion out. I've tried to open relation with \n> AccessShareLock but this caused one test to fail, and I am not sure this \n> solution is better.\n> \n> What I have done here I consider a hack, so I need a help of core-team here to \n> do it in right way.\n> \n> General problems:\n> \n> I guess I need a coauthor, or supervisor from core team, to finish this patch. \n> The amount of code is big, and I guess there are parts that can be made more \n> in postgres way, then I did them. And I would need an advice there, and I \n> guess it would be better to do if before sending it to commitfest.\n> \n> \n> Current patch status:\n> \n> 1. It is Beta. Some minor issues and FIXMEs are not solved. Some code comments \n> needs revising, but in general it do what it is intended to do.\n> \n> 2. 
This patch does not intend to change postgres behavior at all, all should \n> work as before, all changes are internal only.\n> \n> The only exception is error message for unexciting option name in toast \n> namespace \n> \n> CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option = 42);\n> -ERROR: unrecognized parameter \"not_existing_option\"\n> +ERROR: unrecognized parameter \"toast.not_existing_option\"\n> \n> New message is better I guess, though I can change it back if needed.\n> \n> 3. I am doing my development in this blanch https://gitlab.com/dhyannataraj/\n> postgres/-/tree/new_options_take_two I am making changes every day, so last \n> version will be available there\n> \n> Would be glad to hear from coreteam before I finish with this patch and made it \n> ready for commit-fest.\n> \n> \n\n> diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h\n> index a22a6df..8f2d5e7 100644\n> --- a/contrib/bloom/bloom.h\n> +++ b/contrib/bloom/bloom.h\n> @@ -17,6 +17,7 @@\n> #include \"access/generic_xlog.h\"\n> #include \"access/itup.h\"\n> #include \"access/xlog.h\"\n> +#include \"access/options.h\"\n> #include \"fmgr.h\"\n> #include \"nodes/pathnodes.h\"\n> \n> @@ -207,7 +208,8 @@ extern IndexBulkDeleteResult *blbulkdelete(IndexVacuumInfo *info,\n> \t\t\t\t\t\t\t\t\t\t void *callback_state);\n> extern IndexBulkDeleteResult *blvacuumcleanup(IndexVacuumInfo *info,\n> \t\t\t\t\t\t\t\t\t\t\t IndexBulkDeleteResult *stats);\n> -extern bytea *bloptions(Datum reloptions, bool validate);\n> +extern void *blrelopt_specset(void);\n> +extern void blReloptionPostprocess(void *, bool validate);\n> extern void blcostestimate(PlannerInfo *root, IndexPath *path,\n> \t\t\t\t\t\t double loop_count, Cost *indexStartupCost,\n> \t\t\t\t\t\t Cost *indexTotalCost, Selectivity *indexSelectivity,\n> diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\n> index 754de00..54dad16 100644\n> --- a/contrib/bloom/blutils.c\n> +++ b/contrib/bloom/blutils.c\n> @@ -15,7 
+15,7 @@\n> \n> #include \"access/amapi.h\"\n> #include \"access/generic_xlog.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"bloom.h\"\n> #include \"catalog/index.h\"\n> #include \"commands/vacuum.h\"\n> @@ -34,53 +34,13 @@\n> \n> PG_FUNCTION_INFO_V1(blhandler);\n> \n> -/* Kind of relation options for bloom index */\n> -static relopt_kind bl_relopt_kind;\n> -\n> -/* parse table for fillRelOptions */\n> -static relopt_parse_elt bl_relopt_tab[INDEX_MAX_KEYS + 1];\n> +/* Catalog of relation options for bloom index */\n> +static options_spec_set *bl_relopt_specset;\n> \n> static int32 myRand(void);\n> static void mySrand(uint32 seed);\n> \n> /*\n> - * Module initialize function: initialize info about Bloom relation options.\n> - *\n> - * Note: keep this in sync with makeDefaultBloomOptions().\n> - */\n> -void\n> -_PG_init(void)\n> -{\n> -\tint\t\t\ti;\n> -\tchar\t\tbuf[16];\n> -\n> -\tbl_relopt_kind = add_reloption_kind();\n> -\n> -\t/* Option for length of signature */\n> -\tadd_int_reloption(bl_relopt_kind, \"length\",\n> -\t\t\t\t\t \"Length of signature in bits\",\n> -\t\t\t\t\t DEFAULT_BLOOM_LENGTH, 1, MAX_BLOOM_LENGTH,\n> -\t\t\t\t\t AccessExclusiveLock);\n> -\tbl_relopt_tab[0].optname = \"length\";\n> -\tbl_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> -\tbl_relopt_tab[0].offset = offsetof(BloomOptions, bloomLength);\n> -\n> -\t/* Number of bits for each possible index column: col1, col2, ... 
*/\n> -\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> -\t{\n> -\t\tsnprintf(buf, sizeof(buf), \"col%d\", i + 1);\n> -\t\tadd_int_reloption(bl_relopt_kind, buf,\n> -\t\t\t\t\t\t \"Number of bits generated for each index column\",\n> -\t\t\t\t\t\t DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS,\n> -\t\t\t\t\t\t AccessExclusiveLock);\n> -\t\tbl_relopt_tab[i + 1].optname = MemoryContextStrdup(TopMemoryContext,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t buf);\n> -\t\tbl_relopt_tab[i + 1].opttype = RELOPT_TYPE_INT;\n> -\t\tbl_relopt_tab[i + 1].offset = offsetof(BloomOptions, bitSize[0]) + sizeof(int) * i;\n> -\t}\n> -}\n> -\n> -/*\n> * Construct a default set of Bloom options.\n> */\n> static BloomOptions *\n> @@ -135,7 +95,7 @@ blhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = blvacuumcleanup;\n> \tamroutine->amcanreturn = NULL;\n> \tamroutine->amcostestimate = blcostestimate;\n> -\tamroutine->amoptions = bloptions;\n> +\tamroutine->amreloptspecset = blrelopt_specset;\n> \tamroutine->amproperty = NULL;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = blvalidate;\n> @@ -154,6 +114,28 @@ blhandler(PG_FUNCTION_ARGS)\n> \tPG_RETURN_POINTER(amroutine);\n> }\n> \n> +void\n> +blReloptionPostprocess(void *data, bool validate)\n> +{\n> +\tBloomOptions *opts = (BloomOptions *) data;\n> +\tint\t\t\ti;\n> +\n> +\tif (validate)\n> +\t\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> +\t\t{\n> +\t\t\tif (opts->bitSize[i] >= opts->bloomLength)\n> +\t\t\t{\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t errmsg(\"col%i should not be grater than length\", i)));\n> +\t\t\t}\n> +\t\t}\n> +\n> +\t/* Convert signature length from # of bits to # to words, rounding up */\n> +\topts->bloomLength = (opts->bloomLength + SIGNWORDBITS - 1) / SIGNWORDBITS;\n> +}\n> +\n> +\n> /*\n> * Fill BloomState structure for particular index.\n> */\n> @@ -474,24 +456,39 @@ BloomInitMetapage(Relation index)\n> \tUnlockReleaseBuffer(metaBuffer);\n> }\n> \n> 
-/*\n> - * Parse reloptions for bloom index, producing a BloomOptions struct.\n> - */\n> -bytea *\n> -bloptions(Datum reloptions, bool validate)\n> +void *\n> +blrelopt_specset(void)\n> {\n> -\tBloomOptions *rdopts;\n> +\tint\t\t\ti;\n> +\tchar\t\tbuf[16];\n> \n> -\t/* Parse the user-given reloptions */\n> -\trdopts = (BloomOptions *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t\t\t bl_relopt_kind,\n> -\t\t\t\t\t\t\t\t\t\t\t sizeof(BloomOptions),\n> -\t\t\t\t\t\t\t\t\t\t\t bl_relopt_tab,\n> -\t\t\t\t\t\t\t\t\t\t\t lengthof(bl_relopt_tab));\n> +\tif (bl_relopt_specset)\n> +\t\treturn bl_relopt_specset;\n> \n> -\t/* Convert signature length from # of bits to # to words, rounding up */\n> -\tif (rdopts)\n> -\t\trdopts->bloomLength = (rdopts->bloomLength + SIGNWORDBITS - 1) / SIGNWORDBITS;\n> \n> -\treturn (bytea *) rdopts;\n> +\tbl_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t sizeof(BloomOptions), INDEX_MAX_KEYS + 1);\n> +\tbl_relopt_specset->postprocess_fun = blReloptionPostprocess;\n> +\n> +\toptionsSpecSetAddInt(bl_relopt_specset, \"length\",\n> +\t\t\t\t\t\t\t \"Length of signature in bits\",\n> +\t\t\t\t\t\t\t NoLock,\t\t/* No lock as far as ALTER is\n> +\t\t\t\t\t\t\t\t\t\t\t * forbidden */\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t offsetof(BloomOptions, bloomLength),\n> +\t\t\t\t\t\t\t DEFAULT_BLOOM_LENGTH, 1, MAX_BLOOM_LENGTH);\n> +\n> +\t/* Number of bits for each possible index column: col1, col2, ... 
*/\n> +	for (i = 0; i < INDEX_MAX_KEYS; i++)\n> +	{\n> +		snprintf(buf, sizeof(buf), \"col%d\", i + 1);\n> +		optionsSpecSetAddInt(bl_relopt_specset, buf,\n> +							 \"Number of bits for corresponding column\",\n> +								 NoLock,	/* No lock as far as ALTER is\n> +											 * forbidden */\n> +								 0,\n> +								 offsetof(BloomOptions, bitSize[i]),\n> +								 DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n> +	}\n> +	return bl_relopt_specset;\n> }\n> diff --git a/contrib/bloom/expected/bloom.out b/contrib/bloom/expected/bloom.out\n> index dae12a7..e79456d 100644\n> --- a/contrib/bloom/expected/bloom.out\n> +++ b/contrib/bloom/expected/bloom.out\n> @@ -228,3 +228,6 @@ CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0);\n> ERROR: value 0 out of bounds for option \"length\"\n> CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> ERROR: value 0 out of bounds for option \"col1\"\n> +-- check post_validate for colN<length\n> +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=10,col1=11);\n> +ERROR: col1 should not be greater than length\n> diff --git a/contrib/bloom/sql/bloom.sql b/contrib/bloom/sql/bloom.sql\n> index 4733e1e..0bfc767 100644\n> --- a/contrib/bloom/sql/bloom.sql\n> +++ b/contrib/bloom/sql/bloom.sql\n> @@ -93,3 +93,6 @@ SELECT reloptions FROM pg_class WHERE oid = 'bloomidx'::regclass;\n> \\set VERBOSITY terse\n> CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0);\n> CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> +\n> +-- check post_validate for colN<length\n> +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=10,col1=11);\n> diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c\n> index 3a0beaa..a15a10b 100644\n> --- a/contrib/dblink/dblink.c\n> +++ b/contrib/dblink/dblink.c\n> @@ -2005,7 +2005,7 @@ PG_FUNCTION_INFO_V1(dblink_fdw_validator);\n> Datum\n> dblink_fdw_validator(PG_FUNCTION_ARGS)\n> {\n> -	List	
*options_list = untransformRelOptions(PG_GETARG_DATUM(0));\n> +\tList\t *options_list = optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> \tOid\t\t\tcontext = PG_GETARG_OID(1);\n> \tListCell *cell;\n> \n> diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c\n> index 2c2f149..1194747 100644\n> --- a/contrib/file_fdw/file_fdw.c\n> +++ b/contrib/file_fdw/file_fdw.c\n> @@ -195,7 +195,7 @@ file_fdw_handler(PG_FUNCTION_ARGS)\n> Datum\n> file_fdw_validator(PG_FUNCTION_ARGS)\n> {\n> -\tList\t *options_list = untransformRelOptions(PG_GETARG_DATUM(0));\n> +\tList\t *options_list = optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> \tchar\t *filename = NULL;\n> \tDefElem *force_not_null = NULL;\n> diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c\n> index 5bb1af4..bbd4167 100644\n> --- a/contrib/postgres_fdw/option.c\n> +++ b/contrib/postgres_fdw/option.c\n> @@ -72,7 +72,7 @@ PG_FUNCTION_INFO_V1(postgres_fdw_validator);\n> Datum\n> postgres_fdw_validator(PG_FUNCTION_ARGS)\n> {\n> -\tList\t *options_list = untransformRelOptions(PG_GETARG_DATUM(0));\n> +\tList\t *options_list = optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> \tListCell *cell;\n> \n> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> index ccc9fa0..5dd52a4 100644\n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -20,7 +20,6 @@\n> #include \"access/brin_pageops.h\"\n> #include \"access/brin_xlog.h\"\n> #include \"access/relation.h\"\n> -#include \"access/reloptions.h\"\n> #include \"access/relscan.h\"\n> #include \"access/table.h\"\n> #include \"access/tableam.h\"\n> @@ -40,7 +39,6 @@\n> #include \"utils/memutils.h\"\n> #include \"utils/rel.h\"\n> \n> -\n> /*\n> * We use a BrinBuildState during initial construction of a BRIN index.\n> * The running state is kept in a BrinMemTuple.\n> @@ -119,7 +117,6 @@ 
brinhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = brinvacuumcleanup;\n> \tamroutine->amcanreturn = NULL;\n> \tamroutine->amcostestimate = brincostestimate;\n> -\tamroutine->amoptions = brinoptions;\n> \tamroutine->amproperty = NULL;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = brinvalidate;\n> @@ -134,6 +131,7 @@ brinhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> +\tamroutine->amreloptspecset = bringetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> @@ -963,23 +961,6 @@ brinvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)\n> }\n> \n> /*\n> - * reloptions processor for BRIN indexes\n> - */\n> -bytea *\n> -brinoptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"pages_per_range\", RELOPT_TYPE_INT, offsetof(BrinOptions, pagesPerRange)},\n> -\t\t{\"autosummarize\", RELOPT_TYPE_BOOL, offsetof(BrinOptions, autosummarize)}\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_BRIN,\n> -\t\t\t\t\t\t\t\t\t sizeof(BrinOptions),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -}\n> -\n> -/*\n> * SQL-callable function to scan through an index and summarize all ranges\n> * that are not currently summarized.\n> */\n> @@ -1765,3 +1746,32 @@ check_null_keys(BrinValues *bval, ScanKey *nullkeys, int nnullkeys)\n> \n> \treturn true;\n> }\n> +\n> +static options_spec_set *brin_relopt_specset = NULL;\n> +\n> +void *\n> +bringetreloptspecset(void)\n> +{\n> +\tif (brin_relopt_specset)\n> +\t\treturn brin_relopt_specset;\n> +\tbrin_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\t sizeof(BrinOptions), 2);\n> +\n> +\toptionsSpecSetAddInt(brin_relopt_specset, \"pages_per_range\",\n> +\t\t \"Number of pages that each page range covers in a BRIN index\",\n> +\t\t\t\t\t\t\t 
NoLock,		/* since ALTER is not allowed\n> +											 * no lock needed */\n> +							 0,\n> +							 offsetof(BrinOptions, pagesPerRange),\n> +							 BRIN_DEFAULT_PAGES_PER_RANGE,\n> +							 BRIN_MIN_PAGES_PER_RANGE,\n> +							 BRIN_MAX_PAGES_PER_RANGE);\n> +		optionsSpecSetAddBool(brin_relopt_specset, \"autosummarize\",\n> +					\"Enables automatic summarization on this BRIN index\",\n> +							 AccessExclusiveLock,\n> +							 0,\n> +							 offsetof(BrinOptions, autosummarize),\n> +							 false);\n> +	return brin_relopt_specset;\n> +}\n> +\n> diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c\n> index df9ffc2..1940b3d 100644\n> --- a/src/backend/access/brin/brin_pageops.c\n> +++ b/src/backend/access/brin/brin_pageops.c\n> @@ -420,6 +420,9 @@ brin_doinsert(Relation idxrel, BlockNumber pagesPerRange,\n> 		freespace = br_page_get_freespace(page);\n> \n> 	ItemPointerSet(&tid, blk, off);\n> +\n> +//elog(WARNING, \"pages_per_range = %i\", pagesPerRange);\n> +\n> 	brinSetHeapBlockItemptr(revmapbuf, pagesPerRange, heapBlk, tid);\n> 	MarkBufferDirty(revmapbuf);\n> \n> diff --git a/src/backend/access/common/Makefile b/src/backend/access/common/Makefile\n> index b9aff0c..78c9c5a 100644\n> --- a/src/backend/access/common/Makefile\n> +++ b/src/backend/access/common/Makefile\n> @@ -18,6 +18,7 @@ OBJS = \\\n> 	detoast.o \\\n> 	heaptuple.o \\\n> 	indextuple.o \\\n> +	options.o \\\n> 	printsimple.o \\\n> 	printtup.o \\\n> 	relation.o \\\n> diff --git a/src/backend/access/common/options.c b/src/backend/access/common/options.c\n> new file mode 100644\n> index 0000000..752cddc\n> --- /dev/null\n> +++ b/src/backend/access/common/options.c\n> @@ -0,0 +1,1468 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * options.c\n> +	 A uniform, context-free API for processing name=value options. 
Used\n> + *	 to process relation options (reloptions), attribute options, opclass\n> + *	 options, etc.\n> + *\n> + * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + *\n> + * IDENTIFICATION\n> + *	 src/backend/access/common/options.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +\n> +#include \"postgres.h\"\n> +\n> +#include \"access/options.h\"\n> +#include \"catalog/pg_type.h\"\n> +#include \"commands/defrem.h\"\n> +#include \"nodes/makefuncs.h\"\n> +#include \"utils/builtins.h\"\n> +#include \"utils/guc.h\"\n> +#include \"utils/memutils.h\"\n> +#include \"mb/pg_wchar.h\"\n> +\n> +\n> +/*\n> + * OPTIONS SPECIFICATION and OPTION SPECIFICATION SET\n> + *\n> + * Each option is defined via an Option Specification object (Option Spec).\n> + * An Option Spec should have all information that is needed for processing\n> + * (parsing, validating, converting) of a single option. Implemented via the\n> + * set of option_spec_* structures.\n> + *\n> + * A set of Option Specs (Options Spec Set) defines all options available for\n> + * a certain object (a certain relation kind, for example). It is a list of\n> + * Option Specs, plus validation functions that can be used to validate the\n> + * whole option set, if needed. Implemented via the options_spec_set structure\n> + * and the set of optionsSpecSetAdd* functions that are used for adding Option\n> + * Spec items to a Set.\n> + *\n> + * NOTE: we choose the term \"specification\" instead of \"definition\" because the\n> + * term \"definition\" is used for objects that come from the lexer. So to avoid confusion\n> 
So to avoud confusion\n> + * here we have Option Specifications, and all \"definitions\" are from lexer.\n> + */\n> +\n> +/*\n> + * OPTION VALUES REPRESENTATIONS\n> + *\n> + * Option values usually came from lexer in form of defList obect, stored in\n> + * pg_catalog as text array, and used when they are stored in memory as\n> + * C-structure. These are different option values representations. Here goes\n> + * brief description of all representations used in the code.\n> + *\n> + * Values\n> + *\n> + * Values are an internal representation that is used while converting\n> + * Values between other representation. Value is called \"parsed\",\n> + * when Value's value is converted to a proper type and validated, or is called\n> + * \"unparsed\", when Value's value is stored as raw string that was obtained\n> + * from the source without any cheks. In convertation funcion names first case\n> + * is refered as Values, second case is refered as RawValues. Values is\n> + * implemented as List of option_value C-structures.\n> + *\n> + * defList\n> + *\n> + * Options in form of definition List that comes from lexer. (For reloptions it\n> + * is a part of SQL query that goes after WITH, SET or RESET keywords). Can be\n> + * converted to and from Values using optionsDefListToRawValues and\n> + * optionsTextArrayToRawValues functions.\n> + *\n> + * TEXT[]\n> + *\n> + * Options in form suitable for storig in TEXT[] field in DB. (E.g. reloptions\n> + * are stores in pg_catalog.pg_class table in reloptions field). Can be converted\n> + * to and from Values using optionsValuesToTextArray and optionsTextArrayToRawValues\n> + * functions.\n> + *\n> + * Bytea\n> + *\n> + * Option data stored in C-structure with varlena header in the beginning of the\n> + * structure. This representation is used to pass option values to the core\n> + * postgres. It is fast to read, it can be cached and so on. 
Bytea representation\n> + * can be obtained from Values using the optionsValuesToBytea function, and\n> + * can't be converted back.\n> + */\n> +\n> +static option_spec_basic *allocateOptionSpec(int type, const char *name,\n> +						 const char *desc, LOCKMODE lockmode,\n> +						 option_spec_flags flags, int struct_offset);\n> +\n> +static void parse_one_option(option_value * option, const char *text_str,\n> +				 int text_len, bool validate);\n> +static void *optionsAllocateBytea(options_spec_set * spec_set, List *options);\n> +\n> +\n> +static List *\n> +optionsDefListToRawValues(List *defList, options_parse_mode\n> +						 parse_mode);\n> +static Datum optionsValuesToTextArray(List *options_values);\n> +static List *optionsMergeOptionValues(List *old_options, List *new_options);\n> +static bytea *optionsValuesToBytea(List *options, options_spec_set * spec_set);\n> +List *optionsTextArrayToRawValues(Datum array_datum);\n> +List *optionsParseRawValues(List *raw_values, options_spec_set * spec_set,\n> +					 options_parse_mode mode);\n> +\n> +\n> +/*\n> + * Options spec_set functions\n> + */\n> +\n> +/*\n> + * The options catalog describes the options available for a certain object.\n> + * The catalog has all the necessary information for parsing, transforming and\n> + * validating options for an object. The parsing/validation/transformation\n> + * functions should not know any details of the option implementation for a\n> + * certain object; all this information should be stored in the catalog\n> + * instead and interpreted by those functions blindly.\n> + *\n> + * The heart of the option catalog is an array of option definitions. An\n> + * option definition specifies the name of the option, its type, the range of\n> + * acceptable values, and the default value.\n> + *\n> + * Option values can be one of the following types: bool, int, real, enum,\n> + * string. 
For more info see \"option_type\" and the \"optionsSpecSetAdd*\"\n> + * functions.\n> + *\n> + * Option definition flags allow defining parser behavior for special (or not\n> + * so special) cases. See option_spec_flags for more info.\n> + *\n> + * Options and Lock levels:\n> + *\n> + * The default choice for any new option should be AccessExclusiveLock.\n> + * In some cases the lock level can be reduced from there, but the lock\n> + * level chosen should always conflict with itself to ensure that multiple\n> + * changes aren't lost when we attempt concurrent changes.\n> + * The choice of lock level depends completely upon how that parameter\n> + * is used within the server, not upon how and when you'd like to change it.\n> + * Safety first. Existing choices are documented here, and elsewhere in\n> + * backend code where the parameters are used.\n> + *\n> + * In general, anything that affects the results obtained from a SELECT must be\n> + * protected by AccessExclusiveLock.\n> + *\n> + * Autovacuum related parameters can be set at ShareUpdateExclusiveLock\n> + * since they are only used by the AV procs and don't change anything\n> + * currently executing.\n> + *\n> + * Fillfactor can be set because it applies only to subsequent changes made to\n> + * data blocks, as documented in heapio.c\n> + *\n> + * n_distinct options can be set at ShareUpdateExclusiveLock because they\n> + * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n> + * so the ANALYZE will not be affected by in-flight changes. Changing those\n> + * values has no effect until the next ANALYZE, so no need for stronger lock.\n> + *\n> + * Planner-related parameters can be set with ShareUpdateExclusiveLock because\n> + * they only affect planning and not the correctness of the execution. Plans\n> + * cannot be changed in mid-flight, so changes here could not easily result in\n> + * new improved plans in any case. 
So we allow existing queries to continue\n> + * and existing plans to survive, a small price to pay for allowing better\n> + * plans to be introduced concurrently without interfering with users.\n> + *\n> + * Setting parallel_workers is safe, since it acts the same as\n> + * max_parallel_workers_per_gather which is a USERSET parameter that doesn't\n> + * affect existing plans or queries.\n> +*/\n> +\n> +/*\n> + * allocateOptionsSpecSet\n> + *		Creates a new Option Spec Set object: allocates memory and initializes\n> + *		structure members.\n> + *\n> + * Spec Set items can be added via the allocateOptionSpec and\n> + * optionSpecSetAddItem functions, or by directly calling any of the\n> + * optionsSpecSetAdd* functions (the preferable way).\n> + *\n> + * namespace - a Spec Set can be bound to a certain namespace (e.g.\n> + * namespace.option=value). Options from other namespaces will be ignored while\n> + * processing. If set to NULL, no namespace will be used at all.\n> + *\n> + * size_of_bytea - size of the target structure of the Bytea options\n> + * representation\n> + *\n> + * num_items_expected - if you know the expected number of Spec Set items, set\n> + * it here. Set to -1 in other cases. 
num_items_expected will be used for preallocating memory\n> + * and will trigger an error if you try to add more items than you expected.\n> + */\n> +\n> +options_spec_set *\n> +allocateOptionsSpecSet(const char *namespace, int size_of_bytea, int num_items_expected)\n> +{\n> +	MemoryContext oldcxt;\n> +	options_spec_set *spec_set;\n> +\n> +	oldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> +	spec_set = palloc(sizeof(options_spec_set));\n> +	if (namespace)\n> +	{\n> +		spec_set->namespace = palloc(strlen(namespace) + 1);\n> +		strcpy(spec_set->namespace, namespace);\n> +	}\n> +	else\n> +		spec_set->namespace = NULL;\n> +	if (num_items_expected > 0)\n> +	{\n> +		spec_set->num_allocated = num_items_expected;\n> +		spec_set->forbid_realloc = true;\n> +		spec_set->definitions = palloc(\n> +				 spec_set->num_allocated * sizeof(option_spec_basic *));\n> +	}\n> +	else\n> +	{\n> +		spec_set->num_allocated = 0;\n> +		spec_set->forbid_realloc = false;\n> +		spec_set->definitions = NULL;\n> +	}\n> +	spec_set->num = 0;\n> +	spec_set->struct_size = size_of_bytea;\n> +	spec_set->postprocess_fun = NULL;\n> +	MemoryContextSwitchTo(oldcxt);\n> +	return spec_set;\n> +}\n> +\n> +/*\n> + * allocateOptionSpec\n> + *		Allocates a new Option Specification object of the desired type and\n> + *		initializes the type-independent fields\n> + */\n> +static option_spec_basic *\n> +allocateOptionSpec(int type, const char *name, const char *desc, LOCKMODE lockmode,\n> +						 option_spec_flags flags, int struct_offset)\n> +{\n> +	MemoryContext oldcxt;\n> +	size_t		size;\n> +	option_spec_basic *newoption;\n> +\n> +	oldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> +\n> +	switch (type)\n> +	{\n> +		case OPTION_TYPE_BOOL:\n> +			size = sizeof(option_spec_bool);\n> +			break;\n> +		case OPTION_TYPE_INT:\n> +			size = sizeof(option_spec_int);\n> +			break;\n> +		case OPTION_TYPE_REAL:\n> +			size = 
sizeof(option_spec_real);\n> +			break;\n> +		case OPTION_TYPE_ENUM:\n> +			size = sizeof(option_spec_enum);\n> +			break;\n> +		case OPTION_TYPE_STRING:\n> +			size = sizeof(option_spec_string);\n> +			break;\n> +		default:\n> +			elog(ERROR, \"unsupported option type %d\", type);\n> +			return NULL;		/* keep compiler quiet */\n> +	}\n> +\n> +	newoption = palloc(size);\n> +\n> +	newoption->name = pstrdup(name);\n> +	if (desc)\n> +		newoption->desc = pstrdup(desc);\n> +	else\n> +		newoption->desc = NULL;\n> +	newoption->type = type;\n> +	newoption->lockmode = lockmode;\n> +	newoption->flags = flags;\n> +	newoption->struct_offset = struct_offset;\n> +\n> +	MemoryContextSwitchTo(oldcxt);\n> +\n> +	return newoption;\n> +}\n> +\n> +/*\n> + * optionSpecSetAddItem\n> + *		Adds pre-created Option Specification object to the Spec Set\n> + */\n> +static void\n> +optionSpecSetAddItem(option_spec_basic * newoption,\n> +					 options_spec_set * spec_set)\n> +{\n> +	if (spec_set->num >= spec_set->num_allocated)\n> +	{\n> +		MemoryContext oldcxt;\n> +\n> +		Assert(!spec_set->forbid_realloc);\n> +		oldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> +\n> +		if (spec_set->num_allocated == 0)\n> +		{\n> +			spec_set->num_allocated = 8;\n> +			spec_set->definitions = palloc(\n> +				 spec_set->num_allocated * sizeof(option_spec_basic *));\n> +		}\n> +		else\n> +		{\n> +			spec_set->num_allocated *= 2;\n> +			spec_set->definitions = repalloc(spec_set->definitions,\n> +				 spec_set->num_allocated * sizeof(option_spec_basic *));\n> +		}\n> +		MemoryContextSwitchTo(oldcxt);\n> +	}\n> +	spec_set->definitions[spec_set->num] = newoption;\n> +	spec_set->num++;\n> +}\n> +\n> +\n> +/*\n> + * optionsSpecSetAddBool\n> + *		Adds boolean Option Specification entry to the Spec Set\n> + */\n> +void\n> +optionsSpecSetAddBool(options_spec_set * spec_set, const char *name, const char *desc,\n> 
+\t\t\t\t\t\t LOCKMODE lockmode, option_spec_flags flags,\n> +\t\t\t\t\t\t int struct_offset, bool default_val)\n> +{\n> +\toption_spec_bool *spec_set_item;\n> +\n> +\tspec_set_item = (option_spec_bool *)\n> +\t\tallocateOptionSpec(OPTION_TYPE_BOOL, name, desc, lockmode,\n> +\t\t\t\t\t\t\t\t flags, struct_offset);\n> +\n> +\tspec_set_item->default_val = default_val;\n> +\n> +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> +}\n> +\n> +/*\n> + * optionsSpecSetAddInt\n> + *\t\tAdds integer Option Specification entry to the Spec Set\n> + */\n> +void\n> +optionsSpecSetAddInt(options_spec_set * spec_set, const char *name,\n> +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +\t\t\t\tint struct_offset, int default_val, int min_val, int max_val)\n> +{\n> +\toption_spec_int *spec_set_item;\n> +\n> +\tspec_set_item = (option_spec_int *)\n> +\t\tallocateOptionSpec(OPTION_TYPE_INT, name, desc, lockmode,\n> +\t\t\t\t\t\t\t\t flags, struct_offset);\n> +\n> +\tspec_set_item->default_val = default_val;\n> +\tspec_set_item->min = min_val;\n> +\tspec_set_item->max = max_val;\n> +\n> +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> +}\n> +\n> +/*\n> + * optionsSpecSetAddReal\n> + *\t\tAdds float Option Specification entry to the Spec Set\n> + */\n> +void\n> +optionsSpecSetAddReal(options_spec_set * spec_set, const char *name, const char *desc,\n> +\t\t LOCKMODE lockmode, option_spec_flags flags, int struct_offset,\n> +\t\t\t\t\t\t double default_val, double min_val, double max_val)\n> +{\n> +\toption_spec_real *spec_set_item;\n> +\n> +\tspec_set_item = (option_spec_real *)\n> +\t\tallocateOptionSpec(OPTION_TYPE_REAL, name, desc, lockmode,\n> +\t\t\t\t\t\t\t\t flags, struct_offset);\n> +\n> +\tspec_set_item->default_val = default_val;\n> +\tspec_set_item->min = min_val;\n> +\tspec_set_item->max = max_val;\n> +\n> +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> +}\n> +\n> +/*\n> + * 
optionsSpecSetAddEnum\n> + *		Adds enum Option Specification entry to the Spec Set\n> + *\n> + * The members array must have a terminating NULL entry.\n> + *\n> + * The detailmsg is shown when unsupported values are passed, and has this\n> + * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"baz\\\".\"\n> + *\n> + * The members array and detailmsg are not copied -- caller must ensure that\n> + * they are valid throughout the life of the process.\n> + */\n> +\n> +void\n> +optionsSpecSetAddEnum(options_spec_set * spec_set, const char *name, const char *desc,\n> +		LOCKMODE lockmode, option_spec_flags flags, int struct_offset,\n> +		opt_enum_elt_def * members, int default_val, const char *detailmsg)\n> +{\n> +	option_spec_enum *spec_set_item;\n> +\n> +	spec_set_item = (option_spec_enum *)\n> +		allocateOptionSpec(OPTION_TYPE_ENUM, name, desc, lockmode,\n> +								 flags, struct_offset);\n> +\n> +	spec_set_item->default_val = default_val;\n> +	spec_set_item->members = members;\n> +	spec_set_item->detailmsg = detailmsg;\n> +\n> +	optionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> +}\n> +\n> +/*\n> + * optionsSpecSetAddString\n> + *		Adds string Option Specification entry to the Spec Set\n> + *\n> + * \"validator\" is an optional function pointer that can be used to test the\n> + * validity of the values. It must elog(ERROR) when the argument string is\n> + * not acceptable for the variable. Note that the default value must pass\n> 
Note that the default value must pass\n> + * the validation.\n> + */\n> +void\n> +optionsSpecSetAddString(options_spec_set * spec_set, const char *name, const char *desc,\n> +\t\t LOCKMODE lockmode, option_spec_flags flags, int struct_offset,\n> +\t\t\t\t const char *default_val, validate_string_option validator)\n> +{\n> +\toption_spec_string *spec_set_item;\n> +\n> +\t/* make sure the validator/default combination is sane */\n> +\tif (validator)\n> +\t\t(validator) (default_val);\n> +\n> +\tspec_set_item = (option_spec_string *)\n> +\t\tallocateOptionSpec(OPTION_TYPE_STRING, name, desc, lockmode,\n> +\t\t\t\t\t\t\t\t flags, struct_offset);\n> +\tspec_set_item->validate_cb = validator;\n> +\n> +\tif (default_val)\n> +\t\tspec_set_item->default_val = MemoryContextStrdup(TopMemoryContext,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\tdefault_val);\n> +\telse\n> +\t\tspec_set_item->default_val = NULL;\n> +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> +}\n> +\n> +\n> +/*\n> + * Options transform functions\n> + */\n> +\n> +/* FIXME this comment should be updated\n> + * Option values exists in five representations: DefList, TextArray, Values and\n> + * Bytea:\n> + *\n> + * DefList: Is a List of DefElem structures, that comes from syntax analyzer.\n> + * It can be transformed to Values representation for further parsing and\n> + * validating\n> + *\n> + * Values: A List of option_value structures. Is divided into two subclasses:\n> + * RawValues, when values are already transformed from DefList or TextArray,\n> + * but not parsed yet. (In this case you should use raw_name and raw_value\n> + * structure members to see option content). ParsedValues (or just simple\n> + * Values) is crated after finding a definition for this option in a spec_set\n> + * and after parsing of the raw value. For ParsedValues content is stored in\n> + * values structure member, and name can be taken from option definition in gen\n> + * structure member. 
Actually a Values list can have both Raw and Parsed values,\n> + * as we do not validate options that came from the database, and a db option that\n> + * does not exist in the spec_set is just ignored and kept as a RawValue\n> + *\n> + * TextArray: The representation in which options for an existing object come\n> + * and go from/to the database; for example from pg_class.reloptions. It is a\n> + * plain TEXT[] db object with name=value text inside. This representation can\n> + * be transformed into Values for further processing, using the options spec_set.\n> + *\n> + * Bytea: Is a binary representation of options. Each object that has code that\n> + * uses options should create a C-structure for these options, with a varlena\n> + * 4-byte header in front of the data; all items of the options spec_set should have\n> + * the offset of the corresponding binary data in this structure, so the transform\n> + * function can put this data in the correct place. One can transform options\n> + * data from the Values representation into Bytea, using the spec_set data, and then use\n> + * it as a usual Datum object, when needed. 
This Datum should be cached\n> + * somewhere (for example in rel->rd_options for relations) when the object that\n> + * has options is loaded from the db.\n> + */\n> +\n> +\n> +/* optionsDefListToRawValues\n> + *		Converts option values that came from the syntax analyzer (DefList) into\n> + *		a Values List.\n> + *\n> + * No parsing is done here except for checking that the RESET syntax is correct\n> + * (the syntax analyzer does not see the difference between the SET and RESET\n> + * cases, so we should handle it here manually)\n> + */\n> +static List *\n> +optionsDefListToRawValues(List *defList, options_parse_mode parse_mode)\n> +{\n> +	ListCell *cell;\n> +	List	 *result = NIL;\n> +\n> +	foreach(cell, defList)\n> +	{\n> +		option_value *option_dst;\n> +		DefElem *def = (DefElem *) lfirst(cell);\n> +		char	 *value;\n> +\n> +		option_dst = palloc(sizeof(option_value));\n> +\n> +		if (def->defnamespace)\n> +		{\n> +			option_dst->namespace = palloc(strlen(def->defnamespace) + 1);\n> +			strcpy(option_dst->namespace, def->defnamespace);\n> +		}\n> +		else\n> +		{\n> +			option_dst->namespace = NULL;\n> +		}\n> +		option_dst->raw_name = palloc(strlen(def->defname) + 1);\n> +		strcpy(option_dst->raw_name, def->defname);\n> +\n> +		if (parse_mode & OPTIONS_PARSE_MODE_FOR_RESET)\n> +		{\n> +			/*\n> +			 * If this option came from a RESET statement we should throw an error\n> +			 * if it brings us name=value data, as the syntax analyzer does not\n> +			 * prevent it\n> +			 */\n> +			if (def->arg != NULL)\n> +				ereport(ERROR,\n> +						(errcode(ERRCODE_SYNTAX_ERROR),\n> +					errmsg(\"RESET must not include values for parameters\")));\n> +\n> +			option_dst->status = OPTION_VALUE_STATUS_FOR_RESET;\n> +		}\n> +		else\n> +		{\n> +			/*\n> +			 * For a SET statement we should treat a (name) expression as if it were\n> +			 * actually (name=true), so do it here manually. 
In other cases\n> +			 * just use the value as is\n> +			 */\n> +			option_dst->status = OPTION_VALUE_STATUS_RAW;\n> +			if (def->arg != NULL)\n> +				value = defGetString(def);\n> +			else\n> +				value = \"true\";\n> +			option_dst->raw_value = palloc(strlen(value) + 1);\n> +			strcpy(option_dst->raw_value, value);\n> +		}\n> +\n> +		result = lappend(result, option_dst);\n> +	}\n> +	return result;\n> +}\n> +\n> +/*\n> + * optionsValuesToTextArray\n> + *		Converts a List of option_values into a TextArray\n> + *\n> + *	The conversion is done to put options into the database (e.g. in\n> + *	pg_class.reloptions for all relation options)\n> + */\n> +\n> +Datum\n> +optionsValuesToTextArray(List *options_values)\n> +{\n> +	ArrayBuildState *astate = NULL;\n> +	ListCell *cell;\n> +	Datum		result;\n> +\n> +	foreach(cell, options_values)\n> +	{\n> +		option_value *option = (option_value *) lfirst(cell);\n> +		const char *name;\n> +		char	 *value;\n> +		text	 *t;\n> +		int			len;\n> +\n> +		/*\n> +		 * The raw value was not cleared while parsing, so instead of converting\n> +		 * it back, just use it to store the value as text\n> +		 */\n> +		value = option->raw_value;\n> +\n> +		Assert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> +\n> +		/*\n> +		 * The name will be taken from the option definition if the option was\n> +		 * parsed, or from raw_name if the option was not parsed for some reason\n> +		 */\n> +		if (option->status == OPTION_VALUE_STATUS_PARSED)\n> +			name = option->gen->name;\n> +		else\n> +			name = option->raw_name;\n> +\n> +		/*\n> +		 * Now build the \"name=value\" string and append it to the array\n> +		 */\n> +		len = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> +		t = (text *) palloc(len + 1);\n> +		SET_VARSIZE(t, len);\n> +		sprintf(VARDATA(t), \"%s=%s\", name, value);\n> +		astate = accumArrayResult(astate, PointerGetDatum(t), false,\n> +								 TEXTOID, 
CurrentMemoryContext);\n> +	}\n> +	if (astate)\n> +		result = makeArrayResult(astate, CurrentMemoryContext);\n> +	else\n> +		result = (Datum) 0;\n> +\n> +	return result;\n> +}\n> +\n> +/*\n> + * optionsTextArrayToRawValues\n> + *		Converts options from TextArray format into a RawValues list.\n> + *\n> + *	This function is used to convert options data that comes from the database\n> + *	into a List of option_values, for further parsing, and, in the case of an\n> + *	ALTER command, for merging with new option values.\n> + */\n> +List *\n> +optionsTextArrayToRawValues(Datum array_datum)\n> +{\n> +	List	 *result = NIL;\n> +\n> +	if (PointerIsValid(DatumGetPointer(array_datum)))\n> +	{\n> +		ArrayType *array = DatumGetArrayTypeP(array_datum);\n> +		Datum	 *options;\n> +		int			noptions;\n> +		int			i;\n> +\n> +		deconstruct_array(array, TEXTOID, -1, false, 'i',\n> +						 &options, NULL, &noptions);\n> +\n> +		for (i = 0; i < noptions; i++)\n> +		{\n> +			option_value *option_dst;\n> +			char	 *text_str = VARDATA(options[i]);\n> +			int			text_len = VARSIZE(options[i]) - VARHDRSZ;\n> +			int			i;\n> +			int			name_len = -1;\n> +			char	 *name;\n> +			int			raw_value_len;\n> +			char	 *raw_value;\n> +\n> +			/*\n> +			 * Find the position of the '=' sign and treat it as a separator between\n> +			 * name and value in a \"name=value\" item\n> +			 */\n> +			for (i = 0; i < text_len; i += pg_mblen(text_str + i))\n> +			{\n> +				if (text_str[i] == '=')\n> +				{\n> +					name_len = i;\n> +					break;\n> +				}\n> +			}\n> +			Assert(name_len >= 1);		/* Just in case */\n> +\n> +			raw_value_len = text_len - name_len - 1;\n> +\n> +			/*\n> +			 * Copy name from src\n> +			 */\n> +			name = palloc(name_len + 1);\n> +			memcpy(name, text_str, name_len);\n> +			name[name_len] = '\\0';\n> +\n> +			/*\n> +			 * Copy value from src\n> +			 */\n> 
+\t\t\traw_value = palloc(raw_value_len + 1);\n> +\t\t\tmemcpy(raw_value, text_str + name_len + 1, raw_value_len);\n> +\t\t\traw_value[raw_value_len] = '\\0';\n> +\n> +\t\t\t/*\n> +\t\t\t * Create new option_value item\n> +\t\t\t */\n> +\t\t\toption_dst = palloc(sizeof(option_value));\n> +\t\t\toption_dst->status = OPTION_VALUE_STATUS_RAW;\n> +\t\t\toption_dst->raw_name = name;\n> +\t\t\toption_dst->raw_value = raw_value;\n> +\t\t\toption_dst->namespace = NULL;\n> +\n> +\t\t\tresult = lappend(result, option_dst);\n> +\t\t}\n> +\t}\n> +\treturn result;\n> +}\n> +\n> +/*\n> + * optionsMergeOptionValues\n> + *\t\tMerges two lists of option_values into one list\n> + *\n> + * This function is used to merge two option_value lists into one. It is used\n> + * for all kinds of ALTER commands when existing options are merged|replaced\n> + * with a new options list. This function also processes the RESET variant of\n> + * the ALTER command. It merges two lists as usual, and then removes all items\n> + * with the RESET flag on.\n> + *\n> + * Both incoming lists will be destroyed while merging\n> + */\n> +static List *\n> +optionsMergeOptionValues(List *old_options, List *new_options)\n> +{\n> +\tList\t *result = NIL;\n> +\tListCell *old_cell;\n> +\tListCell *new_cell;\n> +\n> +\t/*\n> +\t * First add to result all old options that are not mentioned in new list\n> +\t */\n> +\tforeach(old_cell, old_options)\n> +\t{\n> +\t\tbool\t\tfound;\n> +\t\tconst char *old_name;\n> +\t\toption_value *old_option;\n> +\n> +\t\told_option = (option_value *) lfirst(old_cell);\n> +\t\tif (old_option->status == OPTION_VALUE_STATUS_PARSED)\n> +\t\t\told_name = old_option->gen->name;\n> +\t\telse\n> +\t\t\told_name = old_option->raw_name;\n> +\n> +\t\t/*\n> +\t\t * Looking for a new option with same name\n> +\t\t */\n> +\t\tfound = false;\n> +\t\tforeach(new_cell, new_options)\n> +\t\t{\n> +\t\t\toption_value *new_option;\n> +\t\t\tconst char *new_name;\n> +\n> +\t\t\tnew_option = (option_value *) lfirst(new_cell);\n> 
+\t\t\tif (new_option->status == OPTION_VALUE_STATUS_PARSED)\n> +\t\t\t\tnew_name = new_option->gen->name;\n> +\t\t\telse\n> +\t\t\t\tnew_name = new_option->raw_name;\n> +\n> +\t\t\tif (strcmp(new_name, old_name) == 0)\n> +\t\t\t{\n> +\t\t\t\tfound = true;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t\tif (!found)\n> +\t\t\tresult = lappend(result, old_option);\n> +\t}\n> +\t/*\n> +\t * Now add to result all new options that are not designated for reset\n> +\t */\n> +\tforeach(new_cell, new_options)\n> +\t{\n> +\t\toption_value *new_option;\n> +\t\tnew_option = (option_value *) lfirst(new_cell);\n> +\n> +\t\tif (new_option->status != OPTION_VALUE_STATUS_FOR_RESET)\n> +\t\t\tresult = lappend(result, new_option);\n> +\t}\n> +\treturn result;\n> +}\n> +\n> +/*\n> + * optionsDefListValdateNamespaces\n> + *\t\tChecks that all options represented as a DefList have no\n> + *\t\tnamespaces, or have namespaces only from the allowed list\n> + *\n> + * Function accepts options as a DefList and a NULL-terminated list of allowed\n> + * namespaces. It throws an error if an improper namespace is found.\n> + *\n> + * This function is actually used only for tables with their toast. namespace\n> + */\n> +void\n> +optionsDefListValdateNamespaces(List *defList, char **allowed_namespaces)\n> +{\n> +\tListCell *cell;\n> +\n> +\tforeach(cell, defList)\n> +\t{\n> +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> +\n> +\t\t/*\n> +\t\t * Checking namespace only for options that have namespaces. 
Options\n> +\t\t * with no namespaces are always accepted\n> +\t\t */\n> +\t\tif (def->defnamespace)\n> +\t\t{\n> +\t\t\tbool\t\tfound = false;\n> +\t\t\tint\t\t\ti = 0;\n> +\n> +\t\t\twhile (allowed_namespaces[i])\n> +\t\t\t{\n> +\t\t\t\tif (strcmp(def->defnamespace,\n> +\t\t\t\t\t\t\t\t allowed_namespaces[i]) == 0)\n> +\t\t\t\t{\n> +\t\t\t\t\tfound = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\t\ti++;\n> +\t\t\t}\n> +\t\t\tif (!found)\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"unrecognized parameter namespace \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\tdef->defnamespace)));\n> +\t\t}\n> +\t}\n> +}\n> +\n> +/*\n> + * optionsDefListFilterNamespaces\n> + *\t\tIterates over a DefList, chooses items with the specified namespace and\n> + *\t\tadds them to a result List\n> + *\n> + * This function does not destroy the source DefList, but neither does it\n> + * create copies of List nodes.\n> + * It is actually used only for tables, in order to split toast and heap\n> + * reloptions, so each one can be stored in its own pg_class record\n> + */\n> +List *\n> +optionsDefListFilterNamespaces(List *defList, const char *namespace)\n> +{\n> +\tListCell *cell;\n> +\tList\t *result = NIL;\n> +\n> +\tforeach(cell, defList)\n> +\t{\n> +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> +\n> +\t\tif ((!namespace && !def->defnamespace) ||\n> +\t\t\t(namespace && def->defnamespace &&\n> +\t\t\t strcmp(namespace, def->defnamespace) == 0))\n> +\t\t{\n> +\t\t\tresult = lappend(result, def);\n> +\t\t}\n> +\t}\n> +\treturn result;\n> +}\n> +\n> +/*\n> + * optionsTextArrayToDefList\n> + *\t\tConvert the text-array format of reloptions into a List of DefElem.\n> + */\n> +List *\n> +optionsTextArrayToDefList(Datum options)\n> +{\n> +\tList\t *result = NIL;\n> +\tArrayType *array;\n> +\tDatum\t *optiondatums;\n> +\tint\t\t\tnoptions;\n> +\tint\t\t\ti;\n> +\n> +\t/* Nothing to do if no options */\n> +\tif 
(!PointerIsValid(DatumGetPointer(options)))\n> +\t\treturn result;\n> +\n> +\tarray = DatumGetArrayTypeP(options);\n> +\n> +\tdeconstruct_array(array, TEXTOID, -1, false, 'i',\n> +\t\t\t\t\t &optiondatums, NULL, &noptions);\n> +\n> +\tfor (i = 0; i < noptions; i++)\n> +\t{\n> +\t\tchar\t *s;\n> +\t\tchar\t *p;\n> +\t\tNode\t *val = NULL;\n> +\n> +\t\ts = TextDatumGetCString(optiondatums[i]);\n> +\t\tp = strchr(s, '=');\n> +\t\tif (p)\n> +\t\t{\n> +\t\t\t*p++ = '\\0';\n> +\t\t\tval = (Node *) makeString(pstrdup(p));\n> +\t\t}\n> +\t\tresult = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> +\t}\n> +\n> +\treturn result;\n> +}\n> +\n> +/*\n> + * optionsDefListToTextArray\n> + *\t\tConverts options from a DefList into the text-array format\n> + */\n> +Datum\n> +optionsDefListToTextArray(List *defList)\n> +{\n> +\tListCell *cell;\n> +\tDatum\t\tresult;\n> +\tArrayBuildState *astate = NULL;\n> +\n> +\tforeach(cell, defList)\n> +\t{\n> +\t\tDefElem\t *def = (DefElem *) lfirst(cell);\n> +\t\tconst char *name = def->defname;\n> +\t\tconst char *value;\n> +\t\ttext\t *t;\n> +\t\tint\t\t\tlen;\n> +\n> +\t\tif (def->arg != NULL)\n> +\t\t\tvalue = defGetString(def);\n> +\t\telse\n> +\t\t\tvalue = \"true\";\n> +\n> +\t\tif (def->defnamespace)\n> +\t\t{\n> +\t\t\tAssert(false); /* Should not get here */\n> +\t\t\t/* This function is used for backward compatibility in places where namespaces are not allowed */\n> +\t\t\treturn (Datum) 0;\n> +\t\t}\n> +\t\tlen = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> +\t\tt = (text *) palloc(len + 1);\n> +\t\tSET_VARSIZE(t, len);\n> +\t\tsprintf(VARDATA(t), \"%s=%s\", name, value);\n> +\t\tastate = accumArrayResult(astate, PointerGetDatum(t), false,\n> +\t\t\t\t\t\t\t\t TEXTOID, CurrentMemoryContext);\n> +\n> +\t}\n> +\tif (astate)\n> +\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> +\telse\n> +\t\tresult = (Datum) 0;\n> +\treturn result;\n> +}\n> +\n> +\n> +/*\n> + * optionsParseRawValues\n> + *\t\tParses and validates (if proper flag is set) option_values. 
As a result\n> + *\t\tthe caller will get a list of parsed (or partly parsed) option_values\n> + *\n> + * This function is used in cases when the caller gets raw values from the DB\n> + * or from syntax and wants to parse them.\n> + * This function uses option_spec_set to get information about how each option\n> + * should be parsed.\n> + * If validate mode is off and the function finds an option that does not have\n> + * a proper option_spec_set entry, this option is kept unparsed (if some\n> + * garbage came from the DB, we should put it back there)\n> + *\n> + * This function destroys the incoming list.\n> + */\n> +List *\n> +optionsParseRawValues(List *raw_values, options_spec_set * spec_set,\n> +\t\t\t\t\t options_parse_mode mode)\n> +{\n> +\tListCell *cell;\n> +\tList\t *result = NIL;\n> +\tbool\t *is_set;\n> +\tint\t\t\ti;\n> +\tbool\t\tvalidate = mode & OPTIONS_PARSE_MODE_VALIDATE;\n> +\tbool\t\tfor_alter = mode & OPTIONS_PARSE_MODE_FOR_ALTER;\n> +\n> +\n> +\tis_set = palloc0(sizeof(bool) * spec_set->num);\n> +\tforeach(cell, raw_values)\n> +\t{\n> +\t\toption_value *option = (option_value *) lfirst(cell);\n> +\t\tbool\t\tfound = false;\n> +\t\tbool\t\tskip = false;\n> +\n> +\n> +\t\tif (option->status == OPTION_VALUE_STATUS_PARSED)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * This can happen during ALTER, when new values were already\n> +\t\t\t * parsed, but old values merged from DB are still raw\n> +\t\t\t */\n> +\t\t\tresult = lappend(result, option);\n> +\t\t\tcontinue;\n> +\t\t}\n> +\t\tif (validate && option->namespace && (!spec_set->namespace ||\n> +\t\t\t\t strcmp(spec_set->namespace, option->namespace) != 0))\n> +\t\t{\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t errmsg(\"unrecognized parameter namespace \\\"%s\\\"\",\n> +\t\t\t\t\t\t\toption->namespace)));\n> +\t\t}\n> +\n> +\t\tfor (i = 0; i < spec_set->num; i++)\n> +\t\t{\n> +\t\t\toption_spec_basic *definition = spec_set->definitions[i];\n> +\n> +\t\t\tif (strcmp(option->raw_name,\n> 
+\t\t\t\t\t\t\t definition->name) == 0)\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * Skip option with \"ignore\" flag, as it is processed\n> +\t\t\t\t * somewhere else. (WITH OIDS special case)\n> +\t\t\t\t */\n> +\t\t\t\tif (definition->flags & OPTION_DEFINITION_FLAG_IGNORE)\n> +\t\t\t\t{\n> +\t\t\t\t\tfound = true;\n> +\t\t\t\t\tskip = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * Reject option as if it was not in spec_set. Needed for cases\n> +\t\t\t\t * when option should have default value, but should not be\n> +\t\t\t\t * changed\n> +\t\t\t\t */\n> +\t\t\t\tif (definition->flags & OPTION_DEFINITION_FLAG_REJECT)\n> +\t\t\t\t{\n> +\t\t\t\t\tfound = false;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\tif (validate && is_set[i])\n> +\t\t\t\t{\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" specified more than once\",\n> +\t\t\t\t\t\t\t\t option->raw_name)));\n> +\t\t\t\t}\n> +\t\t\t\tif ((for_alter) &&\n> +\t\t\t\t\t(definition->flags & OPTION_DEFINITION_FLAG_FORBID_ALTER))\n> +\t\t\t\t{\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"changing parameter \\\"%s\\\" is not allowed\",\n> +\t\t\t\t\t\t\t\t definition->name)));\n> +\t\t\t\t}\n> +\t\t\t\tif (option->status == OPTION_VALUE_STATUS_FOR_RESET)\n> +\t\t\t\t{\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * For RESET options do not need further processing so\n> +\t\t\t\t\t * mark it found and stop searching\n> +\t\t\t\t\t */\n> +\t\t\t\t\tfound = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\t\tpfree(option->raw_name);\n> +\t\t\t\toption->raw_name = NULL;\n> +\t\t\t\toption->gen = definition;\n> +\t\t\t\tparse_one_option(option, NULL, -1, validate);\n> +\t\t\t\tis_set[i] = true;\n> +\t\t\t\tfound = true;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t\tif (!found)\n> +\t\t{\n> +\t\t\tif (validate)\n> +\t\t\t{\n> +\t\t\t\tif 
(option->namespace)\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t\t errmsg(\"unrecognized parameter \\\"%s.%s\\\"\",\n> +\t\t\t\t\t\t\t\t\toption->namespace, option->raw_name)));\n> +\t\t\t\telse\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t\t errmsg(\"unrecognized parameter \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\t\toption->raw_name)));\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * RESET is always in non-validating mode; unknown names\n> +\t\t\t\t * should be ignored. This is the traditional behaviour of\n> +\t\t\t\t * postgres.\n> +\t\t\t\t * FIXME maybe it should be changed someday\n> +\t\t\t\t */\n> +\t\t\t\tif (option->status == OPTION_VALUE_STATUS_FOR_RESET)\n> +\t\t\t\t{\n> +\t\t\t\t\tskip = true;\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t\t/*\n> +\t\t\t * In other cases, if we are parsing not in validate mode, then\n> +\t\t\t * we should keep the unknown node, because non-validate mode is\n> +\t\t\t * for data that is already in the DB and should not be changed\n> +\t\t\t * when altering other entries\n> +\t\t\t */\n> +\t\t}\n> +\t\tif (!skip)\n> +\t\t\tresult = lappend(result, option);\n> +\t}\n> +\treturn result;\n> +}\n> +\n> +/*\n> + * parse_one_option\n> + *\n> + *\t\tSubroutine for optionsParseRawValues, to parse and validate a\n> + *\t\tsingle option's value\n> + */\n> +static void\n> +parse_one_option(option_value * option, const char *text_str, int text_len,\n> +\t\t\t\t bool validate)\n> +{\n> +\tchar\t *value;\n> +\tbool\t\tparsed;\n> +\n> +\tvalue = option->raw_value;\n> +\n> +\tswitch (option->gen->type)\n> +\t{\n> +\t\tcase OPTION_TYPE_BOOL:\n> +\t\t\t{\n> +\t\t\t\tparsed = parse_bool(value, &option->values.bool_val);\n> +\t\t\t\tif (validate && !parsed)\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\terrmsg(\"invalid value for boolean option \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\t option->gen->name, 
value)));\n> +\t\t\t}\n> +\t\t\tbreak;\n> +\t\tcase OPTION_TYPE_INT:\n> +\t\t\t{\n> +\t\t\t\toption_spec_int *optint =\n> +\t\t\t\t(option_spec_int *) option->gen;\n> +\n> +\t\t\t\tparsed = parse_int(value, &option->values.int_val, 0, NULL);\n> +\t\t\t\tif (validate && !parsed)\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\terrmsg(\"invalid value for integer option \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\t option->gen->name, value)));\n> +\t\t\t\tif (validate && (option->values.int_val < optint->min ||\n> +\t\t\t\t\t\t\t\t option->values.int_val > optint->max))\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\t value, option->gen->name),\n> +\t\t\t\t\t errdetail(\"Valid values are between \\\"%d\\\" and \\\"%d\\\".\",\n> +\t\t\t\t\t\t\t optint->min, optint->max)));\n> +\t\t\t}\n> +\t\t\tbreak;\n> +\t\tcase OPTION_TYPE_REAL:\n> +\t\t\t{\n> +\t\t\t\toption_spec_real *optreal =\n> +\t\t\t\t(option_spec_real *) option->gen;\n> +\n> +\t\t\t\tparsed = parse_real(value, &option->values.real_val, 0, NULL);\n> +\t\t\t\tif (validate && !parsed)\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t\t errmsg(\"invalid value for floating point option \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\t\t\toption->gen->name, value)));\n> +\t\t\t\tif (validate && (option->values.real_val < optreal->min ||\n> +\t\t\t\t\t\t\t\t option->values.real_val > optreal->max))\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\t value, option->gen->name),\n> +\t\t\t\t\t errdetail(\"Valid values are between \\\"%f\\\" and \\\"%f\\\".\",\n> +\t\t\t\t\t\t\t optreal->min, optreal->max)));\n> +\t\t\t}\n> +\t\t\tbreak;\n> +\t\tcase 
OPTION_TYPE_ENUM:\n> +\t\t\t{\n> +\t\t\t\toption_spec_enum *optenum =\n> +\t\t\t\t\t\t\t\t\t\t(option_spec_enum *) option->gen;\n> +\t\t\t\topt_enum_elt_def *elt;\n> +\t\t\t\tparsed = false;\n> +\t\t\t\tfor (elt = optenum->members; elt->string_val; elt++)\n> +\t\t\t\t{\n> +\t\t\t\t\tif (strcmp(value, elt->string_val) == 0)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\toption->values.enum_val = elt->symbol_val;\n> +\t\t\t\t\t\tparsed = true;\n> +\t\t\t\t\t\tbreak;\n> +\t\t\t\t\t}\n> +\t\t\t\t}\n> +\t\t\t\tif (!parsed)\n> +\t\t\t\t{\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t\t errmsg(\"invalid value for enum option \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\t\t\toption->gen->name, value),\n> +\t\t\t\t\t\t\t optenum->detailmsg ?\n> +\t\t\t\t\t\t\t errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t\tbreak;\n> +\t\tcase OPTION_TYPE_STRING:\n> +\t\t\t{\n> +\t\t\t\toption_spec_string *optstring =\n> +\t\t\t\t(option_spec_string *) option->gen;\n> +\n> +\t\t\t\toption->values.string_val = value;\n> +\t\t\t\tif (validate && optstring->validate_cb)\n> +\t\t\t\t\t(optstring->validate_cb) (value);\n> +\t\t\t\tparsed = true;\n> +\t\t\t}\n> +\t\t\tbreak;\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"unsupported reloption type %d\", option->gen->type);\n> +\t\t\tparsed = true;\t\t/* quiet compiler */\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\tif (parsed)\n> +\t\toption->status = OPTION_VALUE_STATUS_PARSED;\n> +\n> +}\n> +\n> +/*\n> + * optionsAllocateBytea\n> + *\t\tAllocates memory for bytea options representation\n> + *\n> + * Function allocates memory for the bytea structure of an option, plus adds\n> + * space for values of string options. 
We should keep all data including string\n> + * values in the same memory chunk, because Cache code copies bytea option\n> + * data from one MemoryContext to another without knowing about its internal\n> + * structure, so it would not be able to copy string values if they are outside\n> + * of the bytea memory chunk.\n> + */\n> +static void *\n> +optionsAllocateBytea(options_spec_set * spec_set, List *options)\n> +{\n> +\tSize\t\tsize;\n> +\tint\t\t\ti;\n> +\tListCell *cell;\n> +\tint\t\t\tlength;\n> +\tvoid\t *res;\n> +\n> +\tsize = spec_set->struct_size;\n> +\n> +\t/* Calculate size needed to store all string values for this option */\n> +\tfor (i = 0; i < spec_set->num; i++)\n> +\t{\n> +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> +\t\tbool\t\tfound = false;\n> +\t\toption_value *option;\n> +\n> +\t\t/* Not interested in non-string options, skipping */\n> +\t\tif (definition->type != OPTION_TYPE_STRING)\n> +\t\t\tcontinue;\n> +\n> +\t\t/*\n> +\t\t * Trying to find an option_value that references this spec_set\n> +\t\t * entry\n> +\t\t */\n> +\t\tforeach(cell, options)\n> +\t\t{\n> +\t\t\toption = (option_value *) lfirst(cell);\n> +\t\t\tif (option->status == OPTION_VALUE_STATUS_PARSED &&\n> +\t\t\t\tstrcmp(option->gen->name, definition->name) == 0)\n> +\t\t\t{\n> +\t\t\t\tfound = true;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t\tif (found)\n> +\t\t\t/* If found, its value will be stored */\n> +\t\t\tlength = strlen(option->values.string_val) + 1;\n> +\t\telse\n> +\t\t\t/* If not found, then the default value will be stored there */\n> +\t\tif (((option_spec_string *) definition)->default_val)\n> +\t\t\tlength = strlen(\n> +\t\t\t\t ((option_spec_string *) definition)->default_val) + 1;\n> +\t\telse\n> +\t\t\tlength = 0;\n> +\t\t/* Add total length of all string values to basic size */\n> +\t\tsize += length;\n> +\t}\n> +\n> +\tres = palloc0(size);\n> +\tSET_VARSIZE(res, size);\n> +\treturn res;\n> +}\n> +\n> +/*\n> + * optionsValuesToBytea\n> 
+ *\t\tConverts options from a List of option_values to a binary bytea structure\n> + *\n> + * Conversion goes according to the options_spec_set: each spec_set item\n> + * has an offset value, and the option value in binary form is written to the\n> + * structure at that offset.\n> + *\n> + * A more special case is string values. Memory for the bytea structure is\n> + * allocated by optionsAllocateBytea, which adds some more space for string\n> + * values to the size of the original structure. All string values are copied\n> + * there and inside the bytea structure an offset to that value is kept.\n> + *\n> + */\n> +static bytea *\n> +optionsValuesToBytea(List *options, options_spec_set * spec_set)\n> +{\n> +\tchar\t *data;\n> +\tchar\t *string_values_buffer;\n> +\tint\t\t\ti;\n> +\n> +\tdata = optionsAllocateBytea(spec_set, options);\n> +\n> +\t/* place for string data starts right after original structure */\n> +\tstring_values_buffer = data + spec_set->struct_size;\n> +\n> +\tfor (i = 0; i < spec_set->num; i++)\n> +\t{\n> +\t\toption_value *found = NULL;\n> +\t\tListCell *cell;\n> +\t\tchar\t *item_pos;\n> +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> +\n> +\t\tif (definition->flags & OPTION_DEFINITION_FLAG_IGNORE)\n> +\t\t\tcontinue;\n> +\n> +\t\t/* Calculate the position of the item inside the structure */\n> +\t\titem_pos = data + definition->struct_offset;\n> +\n> +\t\t/* Looking for the corresponding option from options list */\n> +\t\tforeach(cell, options)\n> +\t\t{\n> +\t\t\toption_value *option = (option_value *) lfirst(cell);\n> +\n> +\t\t\tif (option->status == OPTION_VALUE_STATUS_RAW)\n> +\t\t\t\tcontinue;\t\t/* raw can come from db. 
Just ignore them then */\n> +\t\t\tAssert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> +\n> +\t\t\tif (strcmp(definition->name, option->gen->name) == 0)\n> +\t\t\t{\n> +\t\t\t\tfound = option;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t\t/* writing to the proper position either option value or default val */\n> +\t\tswitch (definition->type)\n> +\t\t{\n> +\t\t\tcase OPTION_TYPE_BOOL:\n> +\t\t\t\t*(bool *) item_pos = found ?\n> +\t\t\t\t\tfound->values.bool_val :\n> +\t\t\t\t\t((option_spec_bool *) definition)->default_val;\n> +\t\t\t\tbreak;\n> +\t\t\tcase OPTION_TYPE_INT:\n> +\t\t\t\t*(int *) item_pos = found ?\n> +\t\t\t\t\tfound->values.int_val :\n> +\t\t\t\t\t((option_spec_int *) definition)->default_val;\n> +\t\t\t\tbreak;\n> +\t\t\tcase OPTION_TYPE_REAL:\n> +\t\t\t\t*(double *) item_pos = found ?\n> +\t\t\t\t\tfound->values.real_val :\n> +\t\t\t\t\t((option_spec_real *) definition)->default_val;\n> +\t\t\t\tbreak;\n> +\t\t\tcase OPTION_TYPE_ENUM:\n> +\t\t\t\t*(int *) item_pos = found ?\n> +\t\t\t\t\tfound->values.enum_val :\n> +\t\t\t\t\t((option_spec_enum *) definition)->default_val;\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tcase OPTION_TYPE_STRING:\n> +\t\t\t\t{\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * For string options: writing string value at the string\n> +\t\t\t\t\t * buffer after the structure, and storing and offset to\n> +\t\t\t\t\t * that value\n> +\t\t\t\t\t */\n> +\t\t\t\t\tchar\t *value = NULL;\n> +\n> +\t\t\t\t\tif (found)\n> +\t\t\t\t\t\tvalue = found->values.string_val;\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tvalue = ((option_spec_string *) definition)\n> +\t\t\t\t\t\t\t->default_val;\n> +\t\t\t\t\t*(int *) item_pos = value ?\n> +\t\t\t\t\t\tstring_values_buffer - data :\n> +\t\t\t\t\t\tOPTION_STRING_VALUE_NOT_SET_OFFSET;\n> +\t\t\t\t\tif (value)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tstrcpy(string_values_buffer, value);\n> +\t\t\t\t\t\tstring_values_buffer += strlen(value) + 1;\n> +\t\t\t\t\t}\n> +\t\t\t\t}\n> +\t\t\t\tbreak;\n> +\t\t\tdefault:\n> 
+\t\t\t\telog(ERROR, \"unsupported reloption type %d\",\n> +\t\t\t\t\t definition->type);\n> +\t\t\t\tbreak;\n> +\t\t}\n> +\t}\n> +\treturn (void *) data;\n> +}\n> +\n> +\n> +/*\n> + * transformOptions\n> + *\t\tThis function is used by src/backend/commands/Xxxx in order to process\n> + *\t\tnew option values, merge them with existing values (in the case of\n> + *\t\tALTER command) and prepare to put them [back] into DB\n> + */\n> +\n> +Datum\n> +transformOptions(options_spec_set * spec_set, Datum oldOptions,\n> +\t\t\t\t List *defList, options_parse_mode parse_mode)\n> +{\n> +\tDatum\t\tresult;\n> +\tList\t *new_values;\n> +\tList\t *old_values;\n> +\tList\t *merged_values;\n> +\n> +\t/*\n> +\t * Parse and validate new values\n> +\t */\n> +\tnew_values = optionsDefListToRawValues(defList, parse_mode);\n> +\tif (!(parse_mode & OPTIONS_PARSE_MODE_FOR_RESET))\n> +\t{\n> +\t\t/*\n> +\t\t * FIXME: postgres' usual behaviour was not to validate names that\n> +\t\t * came from a RESET command. Someday this behaviour should be\n> +\t\t * changed, I guess. But for now we keep it as it was.\n> +\t\t */\n> +\t\tparse_mode |= OPTIONS_PARSE_MODE_VALIDATE;\n> +\t}\n> +\tnew_values = optionsParseRawValues(new_values, spec_set, parse_mode);\n> +\n> +\t/*\n> +\t * Old values exist in the case of ALTER commands. 
Transform them to raw\n> +\t * values and merge them with new_values, and parse it.\n> +\t */\n> +\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> +\t{\n> +\t\told_values = optionsTextArrayToRawValues(oldOptions);\n> +\t\tmerged_values = optionsMergeOptionValues(old_values, new_values);\n> +\n> +\t\t/*\n> +\t\t * Parse options only after merging in order not to parse options that\n> +\t\t * would be removed by merging later\n> +\t\t */\n> +\t\tmerged_values = optionsParseRawValues(merged_values, spec_set, 0);\n> +\t}\n> +\telse\n> +\t{\n> +\t\tmerged_values = new_values;\n> +\t}\n> +\n> +\t/*\n> +\t * If we have postprocess_fun function defined in spec_set, then there\n> +\t * might be some custom options checks there, with error throwing. So we\n> +\t * should do it here to throw these errors while CREATing or ALTERing\n> +\t * options\n> +\t */\n> +\tif (spec_set->postprocess_fun)\n> +\t{\n> +\t\tbytea\t *data = optionsValuesToBytea(merged_values, spec_set);\n> +\n> +\t\tspec_set->postprocess_fun(data, true);\n> +\t\tpfree(data);\n> +\t}\n> +\n> +\t/*\n> +\t * Convert options to TextArray format so caller can store them into\n> +\t * database\n> +\t */\n> +\tresult = optionsValuesToTextArray(merged_values);\n> +\treturn result;\n> +}\n> +\n> +\n> +/*\n> + * optionsTextArrayToBytea\n> + *\t\tA meta-function that transforms options stored as TextArray into binary\n> + *\t\t(bytea) representation.\n> + *\n> + *\tThis function runs other transform functions that leads to the desired\n> + *\tresult in no-validation mode. This function is used by cache mechanism,\n> + *\tin order to load and cache options when object itself is loaded and cached\n> + */\n> +bytea *\n> +optionsTextArrayToBytea(options_spec_set * spec_set, Datum data, bool validate)\n> +{\n> +\tList\t *values;\n> +\tbytea\t *options;\n> +\n> +\tvalues = optionsTextArrayToRawValues(data);\n> +\tvalues = optionsParseRawValues(values, spec_set,\n> +\t\t\t\t\t\t\t\tvalidate ? 
OPTIONS_PARSE_MODE_VALIDATE : 0);\n> +\toptions = optionsValuesToBytea(values, spec_set);\n> +\n> +\tif (spec_set->postprocess_fun)\n> +\t{\n> +\t\tspec_set->postprocess_fun(options, false);\n> +\t}\n> +\treturn options;\n> +}\n> diff --git a/src/backend/access/common/relation.c b/src/backend/access/common/relation.c\n> index 632d13c..49ad197 100644\n> --- a/src/backend/access/common/relation.c\n> +++ b/src/backend/access/common/relation.c\n> @@ -65,9 +65,13 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n> \t * If we didn't get the lock ourselves, assert that caller holds one,\n> \t * except in bootstrap mode where no locks are used.\n> \t */\n> -\tAssert(lockmode != NoLock ||\n> -\t\t IsBootstrapProcessingMode() ||\n> -\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> +\n> +// FIXME We need NoLock mode to get AM data when choosing Lock for\n> +// attoptions is changed. See ProcessUtilitySlow problems comes from there\n> +// This is a dirty hack, we need better solution for this case;\n> +//\tAssert(lockmode != NoLock ||\n> +//\t\t IsBootstrapProcessingMode() ||\n> +//\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> \n> \t/* Make note that we've accessed a temporary relation */\n> \tif (RelationUsesLocalBuffers(r))\n> diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\n> index b5602f5..29ab98a 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -1,7 +1,7 @@\n> /*-------------------------------------------------------------------------\n> *\n> * reloptions.c\n> - *\t Core support for relation options (pg_class.reloptions)\n> + *\t Support for relation options (pg_class.reloptions)\n> *\n> * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> * Portions Copyright (c) 1994, Regents of the University of California\n> @@ -17,13 +17,10 @@\n> \n> #include <float.h>\n> \n> -#include \"access/gist_private.h\"\n> -#include 
\"access/hash.h\"\n> #include \"access/heaptoast.h\"\n> #include \"access/htup_details.h\"\n> -#include \"access/nbtree.h\"\n> #include \"access/reloptions.h\"\n> -#include \"access/spgist_private.h\"\n> +#include \"access/options.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"commands/defrem.h\"\n> #include \"commands/tablespace.h\"\n> @@ -36,6 +33,7 @@\n> #include \"utils/guc.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/rel.h\"\n> +#include \"storage/bufmgr.h\"\n> \n> /*\n> * Contents of pg_class.reloptions\n> @@ -93,380 +91,8 @@\n> * value has no effect until the next VACUUM, so no need for stronger lock.\n> */\n> \n> -static relopt_bool boolRelOpts[] =\n> -{\n> -\t{\n> -\t\t{\n> -\t\t\t\"autosummarize\",\n> -\t\t\t\"Enables automatic summarization on this BRIN index\",\n> -\t\t\tRELOPT_KIND_BRIN,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\tfalse\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_enabled\",\n> -\t\t\t\"Enables autovacuum in this relation\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\ttrue\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"user_catalog_table\",\n> -\t\t\t\"Declare a table as an additional catalog table, e.g. 
for the purpose of logical replication\",\n> -\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\tfalse\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"fastupdate\",\n> -\t\t\t\"Enables \\\"fast update\\\" feature for this GIN index\",\n> -\t\t\tRELOPT_KIND_GIN,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\ttrue\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"security_barrier\",\n> -\t\t\t\"View acts as a row security barrier\",\n> -\t\t\tRELOPT_KIND_VIEW,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\tfalse\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"vacuum_truncate\",\n> -\t\t\t\"Enables vacuum to truncate empty pages at the end of this table\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\ttrue\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"deduplicate_items\",\n> -\t\t\t\"Enables \\\"deduplicate items\\\" feature for this btree index\",\n> -\t\t\tRELOPT_KIND_BTREE,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\ttrue\n> -\t},\n> -\t/* list terminator */\n> -\t{{NULL}}\n> -};\n> -\n> -static relopt_int intRelOpts[] =\n> -{\n> -\t{\n> -\t\t{\n> -\t\t\t\"fillfactor\",\n> -\t\t\t\"Packs table pages only to this percentage\",\n> -\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\tHEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"fillfactor\",\n> -\t\t\t\"Packs btree index pages only to this percentage\",\n> -\t\t\tRELOPT_KIND_BTREE,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"fillfactor\",\n> -\t\t\t\"Packs hash index pages only to this percentage\",\n> -\t\t\tRELOPT_KIND_HASH,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it 
applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\tHASH_DEFAULT_FILLFACTOR, HASH_MIN_FILLFACTOR, 100\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"fillfactor\",\n> -\t\t\t\"Packs gist index pages only to this percentage\",\n> -\t\t\tRELOPT_KIND_GIST,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\tGIST_DEFAULT_FILLFACTOR, GIST_MIN_FILLFACTOR, 100\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"fillfactor\",\n> -\t\t\t\"Packs spgist index pages only to this percentage\",\n> -\t\t\tRELOPT_KIND_SPGIST,\n> -\t\t\tShareUpdateExclusiveLock\t/* since it applies only to later\n> -\t\t\t\t\t\t\t\t\t\t * inserts */\n> -\t\t},\n> -\t\tSPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_threshold\",\n> -\t\t\t\"Minimum number of tuple updates or deletes prior to vacuum\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0, INT_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_insert_threshold\",\n> -\t\t\t\"Minimum number of tuple inserts prior to vacuum, or -1 to disable insert vacuums\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-2, -1, INT_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_analyze_threshold\",\n> -\t\t\t\"Minimum number of tuple inserts, updates or deletes prior to analyze\",\n> -\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0, INT_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_cost_limit\",\n> -\t\t\t\"Vacuum cost amount available before napping, for autovacuum\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 1, 10000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_freeze_min_age\",\n> -\t\t\t\"Minimum age at which VACUUM should freeze a table row, for autovacuum\",\n> 
-\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0, 1000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_multixact_freeze_min_age\",\n> -\t\t\t\"Minimum multixact age at which VACUUM should freeze a row multixact's, for autovacuum\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0, 1000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_freeze_max_age\",\n> -\t\t\t\"Age at which to autovacuum a table to prevent transaction ID wraparound\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 100000, 2000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_multixact_freeze_max_age\",\n> -\t\t\t\"Multixact age at which to autovacuum a table to prevent multixact wraparound\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 10000, 2000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_freeze_table_age\",\n> -\t\t\t\"Age at which VACUUM should perform a full table sweep to freeze row versions\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t}, -1, 0, 2000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_multixact_freeze_table_age\",\n> -\t\t\t\"Age of multixact at which VACUUM should perform a full table sweep to freeze row versions\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t}, -1, 0, 2000000000\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"log_autovacuum_min_duration\",\n> -\t\t\t\"Sets the minimum execution time above which autovacuum actions will be logged\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, -1, INT_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"toast_tuple_target\",\n> -\t\t\t\"Sets the target tuple length at which external columns will be toasted\",\n> 
-\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\tTOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"pages_per_range\",\n> -\t\t\t\"Number of pages that each page range covers in a BRIN index\",\n> -\t\t\tRELOPT_KIND_BRIN,\n> -\t\t\tAccessExclusiveLock\n> -\t\t}, 128, 1, 131072\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"gin_pending_list_limit\",\n> -\t\t\t\"Maximum size of the pending list for this GIN index, in kilobytes.\",\n> -\t\t\tRELOPT_KIND_GIN,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\t-1, 64, MAX_KILOBYTES\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"effective_io_concurrency\",\n> -\t\t\t\"Number of simultaneous requests that can be handled efficiently by the disk subsystem.\",\n> -\t\t\tRELOPT_KIND_TABLESPACE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -#ifdef USE_PREFETCH\n> -\t\t-1, 0, MAX_IO_CONCURRENCY\n> -#else\n> -\t\t0, 0, 0\n> -#endif\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"maintenance_io_concurrency\",\n> -\t\t\t\"Number of simultaneous requests that can be handled efficiently by the disk subsystem for maintenance work.\",\n> -\t\t\tRELOPT_KIND_TABLESPACE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -#ifdef USE_PREFETCH\n> -\t\t-1, 0, MAX_IO_CONCURRENCY\n> -#else\n> -\t\t0, 0, 0\n> -#endif\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"parallel_workers\",\n> -\t\t\t\"Number of parallel processes that can be used per executor node for this relation.\",\n> -\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0, 1024\n> -\t},\n> -\n> -\t/* list terminator */\n> -\t{{NULL}}\n> -};\n> -\n> -static relopt_real realRelOpts[] =\n> -{\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_cost_delay\",\n> -\t\t\t\"Vacuum cost delay in milliseconds, for autovacuum\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, 100.0\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_scale_factor\",\n> 
-\t\t\t\"Number of tuple updates or deletes prior to vacuum as a fraction of reltuples\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, 100.0\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_vacuum_insert_scale_factor\",\n> -\t\t\t\"Number of tuple inserts prior to vacuum as a fraction of reltuples\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, 100.0\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"autovacuum_analyze_scale_factor\",\n> -\t\t\t\"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples\",\n> -\t\t\tRELOPT_KIND_HEAP,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, 100.0\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"seq_page_cost\",\n> -\t\t\t\"Sets the planner's estimate of the cost of a sequentially fetched disk page.\",\n> -\t\t\tRELOPT_KIND_TABLESPACE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, DBL_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"random_page_cost\",\n> -\t\t\t\"Sets the planner's estimate of the cost of a nonsequentially fetched disk page.\",\n> -\t\t\tRELOPT_KIND_TABLESPACE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, DBL_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"n_distinct\",\n> -\t\t\t\"Sets the planner's estimate of the number of distinct values appearing in a column (excluding child relations).\",\n> -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t0, -1.0, DBL_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"n_distinct_inherited\",\n> -\t\t\t\"Sets the planner's estimate of the number of distinct values appearing in a column (including child relations).\",\n> -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t0, -1.0, DBL_MAX\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"vacuum_cleanup_index_scale_factor\",\n> -\t\t\t\"Deprecated B-Tree parameter.\",\n> 
-\t\t\tRELOPT_KIND_BTREE,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\t-1, 0.0, 1e10\n> -\t},\n> -\t/* list terminator */\n> -\t{{NULL}}\n> -};\n> -\n> /* values from StdRdOptIndexCleanup */\n> -relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> +opt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> {\n> \t{\"auto\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO},\n> \t{\"on\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},\n> @@ -480,17 +106,8 @@ relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> \t{(const char *) NULL}\t\t/* list terminator */\n> };\n> \n> -/* values from GistOptBufferingMode */\n> -relopt_enum_elt_def gistBufferingOptValues[] =\n> -{\n> -\t{\"auto\", GIST_OPTION_BUFFERING_AUTO},\n> -\t{\"on\", GIST_OPTION_BUFFERING_ON},\n> -\t{\"off\", GIST_OPTION_BUFFERING_OFF},\n> -\t{(const char *) NULL}\t\t/* list terminator */\n> -};\n> -\n> /* values from ViewOptCheckOption */\n> -relopt_enum_elt_def viewCheckOptValues[] =\n> +opt_enum_elt_def viewCheckOptValues[] =\n> {\n> \t/* no value for NOT_SET */\n> \t{\"local\", VIEW_OPTION_CHECK_OPTION_LOCAL},\n> @@ -498,61 +115,8 @@ relopt_enum_elt_def viewCheckOptValues[] =\n> \t{(const char *) NULL}\t\t/* list terminator */\n> };\n> \n> -static relopt_enum enumRelOpts[] =\n> -{\n> -\t{\n> -\t\t{\n> -\t\t\t\"vacuum_index_cleanup\",\n> -\t\t\t\"Controls index vacuuming and index cleanup\",\n> -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> -\t\t\tShareUpdateExclusiveLock\n> -\t\t},\n> -\t\tStdRdOptIndexCleanupValues,\n> -\t\tSTDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,\n> -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \\\"auto\\\".\")\n> -\t},\n> -\t{\n> -\t\t{\n> -\t\t\t\"buffering\",\n> -\t\t\t\"Enables buffering build for this GiST index\",\n> -\t\t\tRELOPT_KIND_GIST,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\tgistBufferingOptValues,\n> -\t\tGIST_OPTION_BUFFERING_AUTO,\n> -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \\\"auto\\\".\")\n> -\t},\n> -\t{\n> -\t\t{\n> 
-\t\t\t\"check_option\",\n> -\t\t\t\"View has WITH CHECK OPTION defined (local or cascaded).\",\n> -\t\t\tRELOPT_KIND_VIEW,\n> -\t\t\tAccessExclusiveLock\n> -\t\t},\n> -\t\tviewCheckOptValues,\n> -\t\tVIEW_OPTION_CHECK_OPTION_NOT_SET,\n> -\t\tgettext_noop(\"Valid values are \\\"local\\\" and \\\"cascaded\\\".\")\n> -\t},\n> -\t/* list terminator */\n> -\t{{NULL}}\n> -};\n> -\n> -static relopt_string stringRelOpts[] =\n> -{\n> -\t/* list terminator */\n> -\t{{NULL}}\n> -};\n> -\n> -static relopt_gen **relOpts = NULL;\n> -static bits32 last_assigned_kind = RELOPT_KIND_LAST_DEFAULT;\n> -\n> -static int\tnum_custom_options = 0;\n> -static relopt_gen **custom_options = NULL;\n> -static bool need_initialization = true;\n> \n> -static void initialize_reloptions(void);\n> -static void parse_one_reloption(relopt_value *option, char *text_str,\n> -\t\t\t\t\t\t\t\tint text_len, bool validate);\n> +options_spec_set *get_stdrd_relopt_spec_set(relopt_kind kind);\n> \n> /*\n> * Get the length of a string reloption (either default or the user-defined\n> @@ -563,160 +127,6 @@ static void parse_one_reloption(relopt_value *option, char *text_str,\n> \t((option).isset ? 
strlen((option).values.string_val) : \\\n> \t ((relopt_string *) (option).gen)->default_len)\n> \n> -/*\n> - * initialize_reloptions\n> - *\t\tinitialization routine, must be called before parsing\n> - *\n> - * Initialize the relOpts array and fill each variable's type and name length.\n> - */\n> -static void\n> -initialize_reloptions(void)\n> -{\n> -\tint\t\t\ti;\n> -\tint\t\t\tj;\n> -\n> -\tj = 0;\n> -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\tAssert(DoLockModesConflict(boolRelOpts[i].gen.lockmode,\n> -\t\t\t\t\t\t\t\t boolRelOpts[i].gen.lockmode));\n> -\t\tj++;\n> -\t}\n> -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\tAssert(DoLockModesConflict(intRelOpts[i].gen.lockmode,\n> -\t\t\t\t\t\t\t\t intRelOpts[i].gen.lockmode));\n> -\t\tj++;\n> -\t}\n> -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\tAssert(DoLockModesConflict(realRelOpts[i].gen.lockmode,\n> -\t\t\t\t\t\t\t\t realRelOpts[i].gen.lockmode));\n> -\t\tj++;\n> -\t}\n> -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\tAssert(DoLockModesConflict(enumRelOpts[i].gen.lockmode,\n> -\t\t\t\t\t\t\t\t enumRelOpts[i].gen.lockmode));\n> -\t\tj++;\n> -\t}\n> -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\tAssert(DoLockModesConflict(stringRelOpts[i].gen.lockmode,\n> -\t\t\t\t\t\t\t\t stringRelOpts[i].gen.lockmode));\n> -\t\tj++;\n> -\t}\n> -\tj += num_custom_options;\n> -\n> -\tif (relOpts)\n> -\t\tpfree(relOpts);\n> -\trelOpts = MemoryContextAlloc(TopMemoryContext,\n> -\t\t\t\t\t\t\t\t (j + 1) * sizeof(relopt_gen *));\n> -\n> -\tj = 0;\n> -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\trelOpts[j] = &boolRelOpts[i].gen;\n> -\t\trelOpts[j]->type = RELOPT_TYPE_BOOL;\n> -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> -\t\tj++;\n> -\t}\n> -\n> -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\trelOpts[j] = &intRelOpts[i].gen;\n> -\t\trelOpts[j]->type = RELOPT_TYPE_INT;\n> -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> 
-\t\tj++;\n> -\t}\n> -\n> -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\trelOpts[j] = &realRelOpts[i].gen;\n> -\t\trelOpts[j]->type = RELOPT_TYPE_REAL;\n> -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> -\t\tj++;\n> -\t}\n> -\n> -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\trelOpts[j] = &enumRelOpts[i].gen;\n> -\t\trelOpts[j]->type = RELOPT_TYPE_ENUM;\n> -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> -\t\tj++;\n> -\t}\n> -\n> -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> -\t{\n> -\t\trelOpts[j] = &stringRelOpts[i].gen;\n> -\t\trelOpts[j]->type = RELOPT_TYPE_STRING;\n> -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> -\t\tj++;\n> -\t}\n> -\n> -\tfor (i = 0; i < num_custom_options; i++)\n> -\t{\n> -\t\trelOpts[j] = custom_options[i];\n> -\t\tj++;\n> -\t}\n> -\n> -\t/* add a list terminator */\n> -\trelOpts[j] = NULL;\n> -\n> -\t/* flag the work is complete */\n> -\tneed_initialization = false;\n> -}\n> -\n> -/*\n> - * add_reloption_kind\n> - *\t\tCreate a new relopt_kind value, to be used in custom reloptions by\n> - *\t\tuser-defined AMs.\n> - */\n> -relopt_kind\n> -add_reloption_kind(void)\n> -{\n> -\t/* don't hand out the last bit so that the enum's behavior is portable */\n> -\tif (last_assigned_kind >= RELOPT_KIND_MAX)\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> -\t\t\t\t errmsg(\"user-defined relation parameter types limit exceeded\")));\n> -\tlast_assigned_kind <<= 1;\n> -\treturn (relopt_kind) last_assigned_kind;\n> -}\n> -\n> -/*\n> - * add_reloption\n> - *\t\tAdd an already-created custom reloption to the list, and recompute the\n> - *\t\tmain parser table.\n> - */\n> -static void\n> -add_reloption(relopt_gen *newoption)\n> -{\n> -\tstatic int\tmax_custom_options = 0;\n> -\n> -\tif (num_custom_options >= max_custom_options)\n> -\t{\n> -\t\tMemoryContext oldcxt;\n> -\n> -\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> -\n> -\t\tif (max_custom_options == 
0)\n> -\t\t{\n> -\t\t\tmax_custom_options = 8;\n> -\t\t\tcustom_options = palloc(max_custom_options * sizeof(relopt_gen *));\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\tmax_custom_options *= 2;\n> -\t\t\tcustom_options = repalloc(custom_options,\n> -\t\t\t\t\t\t\t\t\t max_custom_options * sizeof(relopt_gen *));\n> -\t\t}\n> -\t\tMemoryContextSwitchTo(oldcxt);\n> -\t}\n> -\tcustom_options[num_custom_options++] = newoption;\n> -\n> -\tneed_initialization = true;\n> -}\n> \n> /*\n> * init_local_reloptions\n> @@ -729,6 +139,7 @@ init_local_reloptions(local_relopts *opts, Size relopt_struct_size)\n> \topts->options = NIL;\n> \topts->validators = NIL;\n> \topts->relopt_struct_size = relopt_struct_size;\n> +\topts->spec_set = allocateOptionsSpecSet(NULL, relopt_struct_size, 0);\n> }\n> \n> /*\n> @@ -743,112 +154,6 @@ register_reloptions_validator(local_relopts *opts, relopts_validator validator)\n> }\n> \n> /*\n> - * add_local_reloption\n> - *\t\tAdd an already-created custom reloption to the local list.\n> - */\n> -static void\n> -add_local_reloption(local_relopts *relopts, relopt_gen *newoption, int offset)\n> -{\n> -\tlocal_relopt *opt = palloc(sizeof(*opt));\n> -\n> -\tAssert(offset < relopts->relopt_struct_size);\n> -\n> -\topt->option = newoption;\n> -\topt->offset = offset;\n> -\n> -\trelopts->options = lappend(relopts->options, opt);\n> -}\n> -\n> -/*\n> - * allocate_reloption\n> - *\t\tAllocate a new reloption and initialize the type-agnostic fields\n> - *\t\t(for types other than string)\n> - */\n> -static relopt_gen *\n> -allocate_reloption(bits32 kinds, int type, const char *name, const char *desc,\n> -\t\t\t\t LOCKMODE lockmode)\n> -{\n> -\tMemoryContext oldcxt;\n> -\tsize_t\t\tsize;\n> -\trelopt_gen *newoption;\n> -\n> -\tif (kinds != RELOPT_KIND_LOCAL)\n> -\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> -\telse\n> -\t\toldcxt = NULL;\n> -\n> -\tswitch (type)\n> -\t{\n> -\t\tcase RELOPT_TYPE_BOOL:\n> -\t\t\tsize = sizeof(relopt_bool);\n> 
-\t\t\tbreak;\n> -\t\tcase RELOPT_TYPE_INT:\n> -\t\t\tsize = sizeof(relopt_int);\n> -\t\t\tbreak;\n> -\t\tcase RELOPT_TYPE_REAL:\n> -\t\t\tsize = sizeof(relopt_real);\n> -\t\t\tbreak;\n> -\t\tcase RELOPT_TYPE_ENUM:\n> -\t\t\tsize = sizeof(relopt_enum);\n> -\t\t\tbreak;\n> -\t\tcase RELOPT_TYPE_STRING:\n> -\t\t\tsize = sizeof(relopt_string);\n> -\t\t\tbreak;\n> -\t\tdefault:\n> -\t\t\telog(ERROR, \"unsupported reloption type %d\", type);\n> -\t\t\treturn NULL;\t\t/* keep compiler quiet */\n> -\t}\n> -\n> -\tnewoption = palloc(size);\n> -\n> -\tnewoption->name = pstrdup(name);\n> -\tif (desc)\n> -\t\tnewoption->desc = pstrdup(desc);\n> -\telse\n> -\t\tnewoption->desc = NULL;\n> -\tnewoption->kinds = kinds;\n> -\tnewoption->namelen = strlen(name);\n> -\tnewoption->type = type;\n> -\tnewoption->lockmode = lockmode;\n> -\n> -\tif (oldcxt != NULL)\n> -\t\tMemoryContextSwitchTo(oldcxt);\n> -\n> -\treturn newoption;\n> -}\n> -\n> -/*\n> - * init_bool_reloption\n> - *\t\tAllocate and initialize a new boolean reloption\n> - */\n> -static relopt_bool *\n> -init_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\tbool default_val, LOCKMODE lockmode)\n> -{\n> -\trelopt_bool *newoption;\n> -\n> -\tnewoption = (relopt_bool *) allocate_reloption(kinds, RELOPT_TYPE_BOOL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc, lockmode);\n> -\tnewoption->default_val = default_val;\n> -\n> -\treturn newoption;\n> -}\n> -\n> -/*\n> - * add_bool_reloption\n> - *\t\tAdd a new boolean reloption\n> - */\n> -void\n> -add_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t bool default_val, LOCKMODE lockmode)\n> -{\n> -\trelopt_bool *newoption = init_bool_reloption(kinds, name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t default_val, lockmode);\n> -\n> -\tadd_reloption((relopt_gen *) newoption);\n> -}\n> -\n> -/*\n> * add_local_bool_reloption\n> *\t\tAdd a new boolean local reloption\n> *\n> @@ -858,47 +163,8 @@ void\n> add_local_bool_reloption(local_relopts 
*relopts, const char *name,\n> \t\t\t\t\t\t const char *desc, bool default_val, int offset)\n> {\n> -\trelopt_bool *newoption = init_bool_reloption(RELOPT_KIND_LOCAL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t default_val, 0);\n> -\n> -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> -}\n> -\n> -\n> -/*\n> - * init_real_reloption\n> - *\t\tAllocate and initialize a new integer reloption\n> - */\n> -static relopt_int *\n> -init_int_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t int default_val, int min_val, int max_val,\n> -\t\t\t\t LOCKMODE lockmode)\n> -{\n> -\trelopt_int *newoption;\n> -\n> -\tnewoption = (relopt_int *) allocate_reloption(kinds, RELOPT_TYPE_INT,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc, lockmode);\n> -\tnewoption->default_val = default_val;\n> -\tnewoption->min = min_val;\n> -\tnewoption->max = max_val;\n> -\n> -\treturn newoption;\n> -}\n> -\n> -/*\n> - * add_int_reloption\n> - *\t\tAdd a new integer reloption\n> - */\n> -void\n> -add_int_reloption(bits32 kinds, const char *name, const char *desc, int default_val,\n> -\t\t\t\t int min_val, int max_val, LOCKMODE lockmode)\n> -{\n> -\trelopt_int *newoption = init_int_reloption(kinds, name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t default_val, min_val,\n> -\t\t\t\t\t\t\t\t\t\t\t max_val, lockmode);\n> -\n> -\tadd_reloption((relopt_gen *) newoption);\n> +\toptionsSpecSetAddBool(relopts->spec_set, name, desc, NoLock, 0, offset,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tdefault_val);\n> }\n> \n> /*\n> @@ -912,47 +178,8 @@ add_local_int_reloption(local_relopts *relopts, const char *name,\n> \t\t\t\t\t\tconst char *desc, int default_val, int min_val,\n> \t\t\t\t\t\tint max_val, int offset)\n> {\n> -\trelopt_int *newoption = init_int_reloption(RELOPT_KIND_LOCAL,\n> -\t\t\t\t\t\t\t\t\t\t\t name, desc, default_val,\n> -\t\t\t\t\t\t\t\t\t\t\t min_val, max_val, 0);\n> -\n> -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> -}\n> 
-\n> -/*\n> - * init_real_reloption\n> - *\t\tAllocate and initialize a new real reloption\n> - */\n> -static relopt_real *\n> -init_real_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\tdouble default_val, double min_val, double max_val,\n> -\t\t\t\t\tLOCKMODE lockmode)\n> -{\n> -\trelopt_real *newoption;\n> -\n> -\tnewoption = (relopt_real *) allocate_reloption(kinds, RELOPT_TYPE_REAL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc, lockmode);\n> -\tnewoption->default_val = default_val;\n> -\tnewoption->min = min_val;\n> -\tnewoption->max = max_val;\n> -\n> -\treturn newoption;\n> -}\n> -\n> -/*\n> - * add_real_reloption\n> - *\t\tAdd a new float reloption\n> - */\n> -void\n> -add_real_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t double default_val, double min_val, double max_val,\n> -\t\t\t\t LOCKMODE lockmode)\n> -{\n> -\trelopt_real *newoption = init_real_reloption(kinds, name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t default_val, min_val,\n> -\t\t\t\t\t\t\t\t\t\t\t\t max_val, lockmode);\n> -\n> -\tadd_reloption((relopt_gen *) newoption);\n> +\toptionsSpecSetAddInt(relopts->spec_set, name, desc, NoLock, 0, offset,\n> +\t\t\t\t\t\t\t\t\t\t\t\tdefault_val, min_val, max_val);\n> }\n> \n> /*\n> @@ -966,57 +193,9 @@ add_local_real_reloption(local_relopts *relopts, const char *name,\n> \t\t\t\t\t\t const char *desc, double default_val,\n> \t\t\t\t\t\t double min_val, double max_val, int offset)\n> {\n> -\trelopt_real *newoption = init_real_reloption(RELOPT_KIND_LOCAL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t default_val, min_val,\n> -\t\t\t\t\t\t\t\t\t\t\t\t max_val, 0);\n> -\n> -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> -}\n> -\n> -/*\n> - * init_enum_reloption\n> - *\t\tAllocate and initialize a new enum reloption\n> - */\n> -static relopt_enum *\n> -init_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\trelopt_enum_elt_def *members, int 
default_val,\n> -\t\t\t\t\tconst char *detailmsg, LOCKMODE lockmode)\n> -{\n> -\trelopt_enum *newoption;\n> -\n> -\tnewoption = (relopt_enum *) allocate_reloption(kinds, RELOPT_TYPE_ENUM,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc, lockmode);\n> -\tnewoption->members = members;\n> -\tnewoption->default_val = default_val;\n> -\tnewoption->detailmsg = detailmsg;\n> -\n> -\treturn newoption;\n> -}\n> -\n> -\n> -/*\n> - * add_enum_reloption\n> - *\t\tAdd a new enum reloption\n> - *\n> - * The members array must have a terminating NULL entry.\n> - *\n> - * The detailmsg is shown when unsupported values are passed, and has this\n> - * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"bar\\\".\"\n> - *\n> - * The members array and detailmsg are not copied -- caller must ensure that\n> - * they are valid throughout the life of the process.\n> - */\n> -void\n> -add_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t relopt_enum_elt_def *members, int default_val,\n> -\t\t\t\t const char *detailmsg, LOCKMODE lockmode)\n> -{\n> -\trelopt_enum *newoption = init_enum_reloption(kinds, name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t members, default_val,\n> -\t\t\t\t\t\t\t\t\t\t\t\t detailmsg, lockmode);\n> +\toptionsSpecSetAddReal(relopts->spec_set, name, desc, NoLock, 0, offset,\n> +\t\t\t\t\t\t\t\t\t\t\t\tdefault_val, min_val, max_val);\n> \n> -\tadd_reloption((relopt_gen *) newoption);\n> }\n> \n> /*\n> @@ -1027,77 +206,11 @@ add_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> */\n> void\n> add_local_enum_reloption(local_relopts *relopts, const char *name,\n> -\t\t\t\t\t\t const char *desc, relopt_enum_elt_def *members,\n> +\t\t\t\t\t\t const char *desc, opt_enum_elt_def *members,\n> \t\t\t\t\t\t int default_val, const char *detailmsg, int offset)\n> {\n> -\trelopt_enum *newoption = init_enum_reloption(RELOPT_KIND_LOCAL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t members, default_val,\n> 
-\t\t\t\t\t\t\t\t\t\t\t\t detailmsg, 0);\n> -\n> -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> -}\n> -\n> -/*\n> - * init_string_reloption\n> - *\t\tAllocate and initialize a new string reloption\n> - */\n> -static relopt_string *\n> -init_string_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t const char *default_val,\n> -\t\t\t\t\t validate_string_relopt validator,\n> -\t\t\t\t\t fill_string_relopt filler,\n> -\t\t\t\t\t LOCKMODE lockmode)\n> -{\n> -\trelopt_string *newoption;\n> -\n> -\t/* make sure the validator/default combination is sane */\n> -\tif (validator)\n> -\t\t(validator) (default_val);\n> -\n> -\tnewoption = (relopt_string *) allocate_reloption(kinds, RELOPT_TYPE_STRING,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t name, desc, lockmode);\n> -\tnewoption->validate_cb = validator;\n> -\tnewoption->fill_cb = filler;\n> -\tif (default_val)\n> -\t{\n> -\t\tif (kinds == RELOPT_KIND_LOCAL)\n> -\t\t\tnewoption->default_val = strdup(default_val);\n> -\t\telse\n> -\t\t\tnewoption->default_val = MemoryContextStrdup(TopMemoryContext, default_val);\n> -\t\tnewoption->default_len = strlen(default_val);\n> -\t\tnewoption->default_isnull = false;\n> -\t}\n> -\telse\n> -\t{\n> -\t\tnewoption->default_val = \"\";\n> -\t\tnewoption->default_len = 0;\n> -\t\tnewoption->default_isnull = true;\n> -\t}\n> -\n> -\treturn newoption;\n> -}\n> -\n> -/*\n> - * add_string_reloption\n> - *\t\tAdd a new string reloption\n> - *\n> - * \"validator\" is an optional function pointer that can be used to test the\n> - * validity of the values. It must elog(ERROR) when the argument string is\n> - * not acceptable for the variable. 
Note that the default value must pass\n> - * the validation.\n> - */\n> -void\n> -add_string_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t const char *default_val, validate_string_relopt validator,\n> -\t\t\t\t\t LOCKMODE lockmode)\n> -{\n> -\trelopt_string *newoption = init_string_reloption(kinds, name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t default_val,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t validator, NULL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t lockmode);\n> -\n> -\tadd_reloption((relopt_gen *) newoption);\n> +\toptionsSpecSetAddEnum(relopts->spec_set, name, desc, NoLock, 0, offset,\n> +\t\t\t\t\t\t\t\t\t\t\tmembers, default_val, detailmsg);\n> }\n> \n> /*\n> @@ -1113,249 +226,9 @@ add_local_string_reloption(local_relopts *relopts, const char *name,\n> \t\t\t\t\t\t validate_string_relopt validator,\n> \t\t\t\t\t\t fill_string_relopt filler, int offset)\n> {\n> -\trelopt_string *newoption = init_string_reloption(RELOPT_KIND_LOCAL,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t name, desc,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t default_val,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t validator, filler,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t 0);\n> -\n> -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> -}\n> -\n> -/*\n> - * Transform a relation options list (list of DefElem) into the text array\n> - * format that is kept in pg_class.reloptions, including only those options\n> - * that are in the passed namespace. The output values do not include the\n> - * namespace.\n> - *\n> - * This is used for three cases: CREATE TABLE/INDEX, ALTER TABLE SET, and\n> - * ALTER TABLE RESET. In the ALTER cases, oldOptions is the existing\n> - * reloptions value (possibly NULL), and we replace or remove entries\n> - * as needed.\n> - *\n> - * If acceptOidsOff is true, then we allow oids = false, but throw error when\n> - * on. 
This is solely needed for backwards compatibility.\n> - *\n> - * Note that this is not responsible for determining whether the options\n> - * are valid, but it does check that namespaces for all the options given are\n> - * listed in validnsps. The NULL namespace is always valid and need not be\n> - * explicitly listed. Passing a NULL pointer means that only the NULL\n> - * namespace is valid.\n> - *\n> - * Both oldOptions and the result are text arrays (or NULL for \"default\"),\n> - * but we declare them as Datums to avoid including array.h in reloptions.h.\n> - */\n> -Datum\n> -transformRelOptions(Datum oldOptions, List *defList, const char *namspace,\n> -\t\t\t\t\tchar *validnsps[], bool acceptOidsOff, bool isReset)\n> -{\n> -\tDatum\t\tresult;\n> -\tArrayBuildState *astate;\n> -\tListCell *cell;\n> -\n> -\t/* no change if empty list */\n> -\tif (defList == NIL)\n> -\t\treturn oldOptions;\n> -\n> -\t/* We build new array using accumArrayResult */\n> -\tastate = NULL;\n> -\n> -\t/* Copy any oldOptions that aren't to be replaced */\n> -\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> -\t{\n> -\t\tArrayType *array = DatumGetArrayTypeP(oldOptions);\n> -\t\tDatum\t *oldoptions;\n> -\t\tint\t\t\tnoldoptions;\n> -\t\tint\t\t\ti;\n> -\n> -\t\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> -\t\t\t\t\t\t &oldoptions, NULL, &noldoptions);\n> -\n> -\t\tfor (i = 0; i < noldoptions; i++)\n> -\t\t{\n> -\t\t\tchar\t *text_str = VARDATA(oldoptions[i]);\n> -\t\t\tint\t\t\ttext_len = VARSIZE(oldoptions[i]) - VARHDRSZ;\n> -\n> -\t\t\t/* Search for a match in defList */\n> -\t\t\tforeach(cell, defList)\n> -\t\t\t{\n> -\t\t\t\tDefElem *def = (DefElem *) lfirst(cell);\n> -\t\t\t\tint\t\t\tkw_len;\n> -\n> -\t\t\t\t/* ignore if not in the same namespace */\n> -\t\t\t\tif (namspace == NULL)\n> -\t\t\t\t{\n> -\t\t\t\t\tif (def->defnamespace != NULL)\n> -\t\t\t\t\t\tcontinue;\n> -\t\t\t\t}\n> -\t\t\t\telse if (def->defnamespace == NULL)\n> -\t\t\t\t\tcontinue;\n> 
-\t\t\t\telse if (strcmp(def->defnamespace, namspace) != 0)\n> -\t\t\t\t\tcontinue;\n> -\n> -\t\t\t\tkw_len = strlen(def->defname);\n> -\t\t\t\tif (text_len > kw_len && text_str[kw_len] == '=' &&\n> -\t\t\t\t\tstrncmp(text_str, def->defname, kw_len) == 0)\n> -\t\t\t\t\tbreak;\n> -\t\t\t}\n> -\t\t\tif (!cell)\n> -\t\t\t{\n> -\t\t\t\t/* No match, so keep old option */\n> -\t\t\t\tastate = accumArrayResult(astate, oldoptions[i],\n> -\t\t\t\t\t\t\t\t\t\t false, TEXTOID,\n> -\t\t\t\t\t\t\t\t\t\t CurrentMemoryContext);\n> -\t\t\t}\n> -\t\t}\n> -\t}\n> -\n> -\t/*\n> -\t * If CREATE/SET, add new options to array; if RESET, just check that the\n> -\t * user didn't say RESET (option=val). (Must do this because the grammar\n> -\t * doesn't enforce it.)\n> -\t */\n> -\tforeach(cell, defList)\n> -\t{\n> -\t\tDefElem *def = (DefElem *) lfirst(cell);\n> -\n> -\t\tif (isReset)\n> -\t\t{\n> -\t\t\tif (def->arg != NULL)\n> -\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> -\t\t\t\t\t\t errmsg(\"RESET must not include values for parameters\")));\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\ttext\t *t;\n> -\t\t\tconst char *value;\n> -\t\t\tSize\t\tlen;\n> -\n> -\t\t\t/*\n> -\t\t\t * Error out if the namespace is not valid. 
A NULL namespace is\n> -\t\t\t * always valid.\n> -\t\t\t */\n> -\t\t\tif (def->defnamespace != NULL)\n> -\t\t\t{\n> -\t\t\t\tbool\t\tvalid = false;\n> -\t\t\t\tint\t\t\ti;\n> -\n> -\t\t\t\tif (validnsps)\n> -\t\t\t\t{\n> -\t\t\t\t\tfor (i = 0; validnsps[i]; i++)\n> -\t\t\t\t\t{\n> -\t\t\t\t\t\tif (strcmp(def->defnamespace, validnsps[i]) == 0)\n> -\t\t\t\t\t\t{\n> -\t\t\t\t\t\t\tvalid = true;\n> -\t\t\t\t\t\t\tbreak;\n> -\t\t\t\t\t\t}\n> -\t\t\t\t\t}\n> -\t\t\t\t}\n> -\n> -\t\t\t\tif (!valid)\n> -\t\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> -\t\t\t\t\t\t\t errmsg(\"unrecognized parameter namespace \\\"%s\\\"\",\n> -\t\t\t\t\t\t\t\t\tdef->defnamespace)));\n> -\t\t\t}\n> -\n> -\t\t\t/* ignore if not in the same namespace */\n> -\t\t\tif (namspace == NULL)\n> -\t\t\t{\n> -\t\t\t\tif (def->defnamespace != NULL)\n> -\t\t\t\t\tcontinue;\n> -\t\t\t}\n> -\t\t\telse if (def->defnamespace == NULL)\n> -\t\t\t\tcontinue;\n> -\t\t\telse if (strcmp(def->defnamespace, namspace) != 0)\n> -\t\t\t\tcontinue;\n> -\n> -\t\t\t/*\n> -\t\t\t * Flatten the DefElem into a text string like \"name=arg\". If we\n> -\t\t\t * have just \"name\", assume \"name=true\" is meant. Note: the\n> -\t\t\t * namespace is not output.\n> -\t\t\t */\n> -\t\t\tif (def->arg != NULL)\n> -\t\t\t\tvalue = defGetString(def);\n> -\t\t\telse\n> -\t\t\t\tvalue = \"true\";\n> -\n> -\t\t\t/*\n> -\t\t\t * This is not a great place for this test, but there's no other\n> -\t\t\t * convenient place to filter the option out. 
As WITH (oids =
> -			 * false) will be removed someday, this seems like an acceptable
> -			 * amount of ugly.
> -			 */
> -			if (acceptOidsOff && def->defnamespace == NULL &&
> -				strcmp(def->defname, "oids") == 0)
> -			{
> -				if (defGetBoolean(def))
> -					ereport(ERROR,
> -							(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
> -							 errmsg("tables declared WITH OIDS are not supported")));
> -				/* skip over option, reloptions machinery doesn't know it */
> -				continue;
> -			}
> -
> -			len = VARHDRSZ + strlen(def->defname) + 1 + strlen(value);
> -			/* +1 leaves room for sprintf's trailing null */
> -			t = (text *) palloc(len + 1);
> -			SET_VARSIZE(t, len);
> -			sprintf(VARDATA(t), "%s=%s", def->defname, value);
> -
> -			astate = accumArrayResult(astate, PointerGetDatum(t),
> -									  false, TEXTOID,
> -									  CurrentMemoryContext);
> -		}
> -	}
> -
> -	if (astate)
> -		result = makeArrayResult(astate, CurrentMemoryContext);
> -	else
> -		result = (Datum) 0;
> -
> -	return result;
> -}
> -
> -
> -/*
> - * Convert the text-array format of reloptions into a List of DefElem.
> - * This is the inverse of transformRelOptions().
> - */
> -List *
> -untransformRelOptions(Datum options)
> -{
> -	List	   *result = NIL;
> -	ArrayType *array;
> -	Datum	   *optiondatums;
> -	int			noptions;
> -	int			i;
> -
> -	/* Nothing to do if no options */
> -	if (!PointerIsValid(DatumGetPointer(options)))
> -		return result;
> -
> -	array = DatumGetArrayTypeP(options);
> -
> -	deconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,
> -					  &optiondatums, NULL, &noptions);
> -
> -	for (i = 0; i < noptions; i++)
> -	{
> -		char	   *s;
> -		char	   *p;
> -		Node	   *val = NULL;
> -
> -		s = TextDatumGetCString(optiondatums[i]);
> -		p = strchr(s, '=');
> -		if (p)
> -		{
> -			*p++ = '\0';
> -			val = (Node *) makeString(pstrdup(p));
> -		}
> -		result = lappend(result, makeDefElem(pstrdup(s), val, -1));
> -	}
> -
> -	return result;
> +	optionsSpecSetAddString(relopts->spec_set, name, desc, NoLock, 0, offset,
> +											default_val, validator);
> +/* FIXME solve mistery with filler option! */
> }
> 
> /*
> @@ -1372,12 +245,13 @@ untransformRelOptions(Datum options)
>  */
> bytea *
> extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
> -				  amoptions_function amoptions)
> +				  amreloptspecset_function amoptionsspecsetfn)
> {
> 	bytea	   *options;
> 	bool		isnull;
> 	Datum		datum;
> 	Form_pg_class classForm;
> +	options_spec_set *spec_set;
> 
> 	datum = fastgetattr(tuple,
> 						Anum_pg_class_reloptions,
> @@ -1394,702 +268,341 @@ extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
> 		case RELKIND_RELATION:
> 		case RELKIND_TOASTVALUE:
> 		case RELKIND_MATVIEW:
> -			options = heap_reloptions(classForm->relkind, datum, false);
> +			spec_set = get_heap_relopt_spec_set();
> 			break;
> 		case RELKIND_PARTITIONED_TABLE:
> -			options = partitioned_table_reloptions(datum, false);
> +			spec_set = get_partitioned_relopt_spec_set();
> 			break;
> 		case RELKIND_VIEW:
> -			options = view_reloptions(datum, false);
> +			spec_set = get_view_relopt_spec_set();
> 			break;
> 		case RELKIND_INDEX:
> 		case RELKIND_PARTITIONED_INDEX:
> -			options = index_reloptions(amoptions, datum, false);
> +			if (amoptionsspecsetfn)
> +				spec_set = amoptionsspecsetfn();
> +			else
> +				spec_set = NULL;
> 			break;
> 		case RELKIND_FOREIGN_TABLE:
> -			options = NULL;
> +			spec_set = NULL;
> 			break;
> 		default:
> 			Assert(false);		/* can't get here */
> -			options = NULL;		/* keep compiler quiet */
> +			spec_set = NULL;		/* keep compiler quiet */
> 			break;
> 	}
> +	if (spec_set)
> +		options = optionsTextArrayToBytea(spec_set, datum, 0);
> +	else
> +		options = NULL;
> 
> 	return options;
> }
> 
> -static void
> -parseRelOptionsInternal(Datum options, bool validate,
> -						relopt_value *reloptions, int numoptions)
> -{
> -	ArrayType *array = DatumGetArrayTypeP(options);
> -	Datum	   *optiondatums;
> -	int			noptions;
> -	int			i;
> -
> -	deconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,
> -					  &optiondatums, NULL, &noptions);
> +options_spec_set *
> +get_stdrd_relopt_spec_set(relopt_kind kind)
> +{
> +	bool is_for_toast = (kind == RELOPT_KIND_TOAST);
> +
> +	options_spec_set * stdrd_relopt_spec_set = allocateOptionsSpecSet(
> +					is_for_toast ? "toast" : NULL, sizeof(StdRdOptions), 0); //FIXME change 0 to actual value (may be)
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "fillfactor",
> +								 "Packs table pages only to this percentag",
> +								 ShareUpdateExclusiveLock,		/* since it applies only
> +														 * to later inserts */
> +								is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +								offsetof(StdRdOptions, fillfactor),
> +						 HEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100);
> +	optionsSpecSetAddBool(stdrd_relopt_spec_set, "autovacuum_enabled",
> +							  "Enables autovacuum in this relation",
> +							  ShareUpdateExclusiveLock, 0,
> +			offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled),
> +							  true);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_vacuum_threshold",
> +				"Minimum number of tuple updates or deletes prior to vacuum",
> +							 ShareUpdateExclusiveLock,
> +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold),
> +							 -1, 0, INT_MAX);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_analyze_threshold",
> +				"Minimum number of tuple updates or deletes prior to vacuum",
> +							 ShareUpdateExclusiveLock,
> +							 is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +					 offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_threshold),
> +							 -1, 0, INT_MAX);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_vacuum_cost_limit",
> +			 "Vacuum cost amount available before napping, for autovacuum",
> +							 ShareUpdateExclusiveLock,
> +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_limit),
> +							 -1, 0, 10000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_min_age",
> +	 "Minimum age at which VACUUM should freeze a table row, for autovacuum",
> +							 ShareUpdateExclusiveLock,
> +					 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_min_age),
> +							 -1, 0, 1000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_max_age",
> +	"Age at which to autovacuum a table to prevent transaction ID wraparound",
> +							 ShareUpdateExclusiveLock,
> +					 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_max_age),
> +							 -1, 100000, 2000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_table_age",
> +							 "Age at which VACUUM should perform a full table sweep to freeze row versions",
> +							 ShareUpdateExclusiveLock,
> +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_table_age),
> +							 -1, 0, 2000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_min_age",
> +							 "Minimum multixact age at which VACUUM should freeze a row multixact's, for autovacuum",
> +							 ShareUpdateExclusiveLock,
> +			0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_min_age),
> +							 -1, 0, 1000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_max_age",
> +							 "Multixact age at which to autovacuum a table to prevent multixact wraparound",
> +							 ShareUpdateExclusiveLock,
> +			0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_max_age),
> +							 -1, 10000, 2000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_table_age",
> +							 "Age of multixact at which VACUUM should perform a full table sweep to freeze row versions",
> +							 ShareUpdateExclusiveLock,
> +		 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_table_age),
> +							 -1, 0, 2000000000);
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set,"log_autovacuum_min_duration",
> +							 "Sets the minimum execution time above which autovacuum actions will be logged",
> +							 ShareUpdateExclusiveLock,
> +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_min_duration),
> +							 -1, -1, INT_MAX);
> +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_vacuum_cost_delay",
> +						  "Vacuum cost delay in milliseconds, for autovacuum",
> +							 ShareUpdateExclusiveLock,
> +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_delay),
> +							 -1, 0.0, 100.0);
> +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_vacuum_scale_factor",
> +							 "Number of tuple updates or deletes prior to vacuum as a fraction of reltuples",
> +							 ShareUpdateExclusiveLock,
> +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_scale_factor),
> +							 -1, 0.0, 100.0);
> +
> +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_vacuum_insert_scale_factor",
> +							 "Number of tuple inserts prior to vacuum as a fraction of reltuples",
> +							 ShareUpdateExclusiveLock,
> +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_scale_factor),
> +							 -1, 0.0, 100.0);
> +
> +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_analyze_scale_factor",
> +							 "Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
> +							 ShareUpdateExclusiveLock,
> +							 is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +				 offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_scale_factor),
> +							 -1, 0.0, 100.0);
> +
> +
> +
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "toast_tuple_target",
> +								 "Sets the target tuple length at which external columns will be toasted",
> +								ShareUpdateExclusiveLock,
> +								is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +								offsetof(StdRdOptions, toast_tuple_target),
> +						 TOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN);
> +
> +	optionsSpecSetAddBool(stdrd_relopt_spec_set, "user_catalog_table",
> +								 "Declare a table as an additional catalog table, e.g. for the purpose of logical replication",
> +								 AccessExclusiveLock,
> +								is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +								 offsetof(StdRdOptions, user_catalog_table),
> +								 false);
> +
> +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "parallel_workers",
> +								"Number of parallel processes that can be used per executor node for this relation.",
> +								ShareUpdateExclusiveLock,
> +								is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> +								offsetof(StdRdOptions, parallel_workers),
> +								-1, 0, 1024);
> +
> +	optionsSpecSetAddEnum(stdrd_relopt_spec_set, "vacuum_index_cleanup",
> +								"Controls index vacuuming and index cleanup",
> +								ShareUpdateExclusiveLock, 0,
> +								offsetof(StdRdOptions, vacuum_index_cleanup),
> +								StdRdOptIndexCleanupValues,
> +								STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,
> +								gettext_noop("Valid values are \"on\", \"off\", and \"auto\"."));
> +
> +	optionsSpecSetAddBool(stdrd_relopt_spec_set, "vacuum_truncate",
> +								"Enables vacuum to truncate empty pages at the end of this table",
> +								ShareUpdateExclusiveLock, 0,
> +								offsetof(StdRdOptions, vacuum_truncate),
> +								true);
> +
> +// FIXME Do something with OIDS
> +
> +	return stdrd_relopt_spec_set;
> +}
> +
> +
> +static options_spec_set *heap_relopt_spec_set = NULL;
> +
> +options_spec_set *
> +get_heap_relopt_spec_set(void)
> +{
> +	if (heap_relopt_spec_set)
> +		return heap_relopt_spec_set;
> +	heap_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_HEAP);
> +	return heap_relopt_spec_set;
> +}
> +
> +static options_spec_set *toast_relopt_spec_set = NULL;
> +
> +options_spec_set *
> +get_toast_relopt_spec_set(void)
> +{
> +	if (toast_relopt_spec_set)
> +		return toast_relopt_spec_set;
> +	toast_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_TOAST);
> +	return toast_relopt_spec_set;
> +}
> +
> +static options_spec_set *partitioned_relopt_spec_set = NULL;
> 
> -	for (i = 0; i < noptions; i++)
> -	{
> -		char	   *text_str = VARDATA(optiondatums[i]);
> -		int			text_len = VARSIZE(optiondatums[i]) - VARHDRSZ;
> -		int			j;
> -
> -		/* Search for a match in reloptions */
> -		for (j = 0; j < numoptions; j++)
> -		{
> -			int			kw_len = reloptions[j].gen->namelen;
> -
> -			if (text_len > kw_len && text_str[kw_len] == '=' &&
> -				strncmp(text_str, reloptions[j].gen->name, kw_len) == 0)
> -			{
> -				parse_one_reloption(&reloptions[j], text_str, text_len,
> -									validate);
> -				break;
> -			}
> -		}
> -
> -		if (j >= numoptions && validate)
> -		{
> -			char	   *s;
> -			char	   *p;
> -
> -			s = TextDatumGetCString(optiondatums[i]);
> -			p = strchr(s, '=');
> -			if (p)
> -				*p = '\0';
> -			ereport(ERROR,
> -					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -					 errmsg("unrecognized parameter \"%s\"", s)));
> -		}
> -	}
> -
> -	/* It's worth avoiding memory leaks in this function */
> -	pfree(optiondatums);
> +options_spec_set *
> +get_partitioned_relopt_spec_set(void)
> +{
> +	if (partitioned_relopt_spec_set)
> +		return partitioned_relopt_spec_set;
> +	partitioned_relopt_spec_set = allocateOptionsSpecSet(
> +					NULL, sizeof(StdRdOptions), 0);
> +	/* No options for now, so spec set is empty */
> 
> -	if (((void *) array) != DatumGetPointer(options))
> -		pfree(array);
> +	return partitioned_relopt_spec_set;
> }
> 
> /*
> - * Interpret reloptions that are given in text-array format.
> - *
> - * options is a reloption text array as constructed by transformRelOptions.
> - * kind specifies the family of options to be processed.
> - *
> - * The return value is a relopt_value * array on which the options actually
> - * set in the options array are marked with isset=true. The length of this
> - * array is returned in *numrelopts. Options not set are also present in the
> - * array; this is so that the caller can easily locate the default values.
> - *
> - * If there are no options of the given kind, numrelopts is set to 0 and NULL
> - * is returned (unless options are illegally supplied despite none being
> - * defined, in which case an error occurs).
> - *
> - * Note: values of type int, bool and real are allocated as part of the
> - * returned array.  Values of type string are allocated separately and must
> - * be freed by the caller.
> + * Parse local options, allocate a bytea struct that's of the specified
> + * 'base_size' plus any extra space that's needed for string variables,
> + * fill its option's fields located at the given offsets and return it.
>  */
> -static relopt_value *
> -parseRelOptions(Datum options, bool validate, relopt_kind kind,
> -				int *numrelopts)
> -{
> -	relopt_value *reloptions = NULL;
> -	int			numoptions = 0;
> -	int			i;
> -	int			j;
> -
> -	if (need_initialization)
> -		initialize_reloptions();
> -
> -	/* Build a list of expected options, based on kind */
> -
> -	for (i = 0; relOpts[i]; i++)
> -		if (relOpts[i]->kinds & kind)
> -			numoptions++;
> -
> -	if (numoptions > 0)
> -	{
> -		reloptions = palloc(numoptions * sizeof(relopt_value));
> -
> -		for (i = 0, j = 0; relOpts[i]; i++)
> -		{
> -			if (relOpts[i]->kinds & kind)
> -			{
> -				reloptions[j].gen = relOpts[i];
> -				reloptions[j].isset = false;
> -				j++;
> -			}
> -		}
> -	}
> -
> -	/* Done if no options */
> -	if (PointerIsValid(DatumGetPointer(options)))
> -		parseRelOptionsInternal(options, validate, reloptions, numoptions);
> -
> -	*numrelopts = numoptions;
> -	return reloptions;
> -}
> -
> -/* Parse local unregistered options. */
> -static relopt_value *
> -parseLocalRelOptions(local_relopts *relopts, Datum options, bool validate)
> +void *
> +build_local_reloptions(local_relopts *relopts, Datum options, bool validate)
> {
> -	int			nopts = list_length(relopts->options);
> -	relopt_value *values = palloc(sizeof(*values) * nopts);
> +	void	   *opts;
> 	ListCell   *lc;
> -	int			i = 0;
> -
> -	foreach(lc, relopts->options)
> -	{
> -		local_relopt *opt = lfirst(lc);
> -
> -		values[i].gen = opt->option;
> -		values[i].isset = false;
> -
> -		i++;
> -	}
> -
> -	if (options != (Datum) 0)
> -		parseRelOptionsInternal(options, validate, values, nopts);
> +	opts = (void *) optionsTextArrayToBytea(relopts->spec_set, options, validate);
> 
> -	return values;
> -}
> -
> -/*
> - * Subroutine for parseRelOptions, to parse and validate a single option's
> - * value
> - */
> -static void
> -parse_one_reloption(relopt_value *option, char *text_str, int text_len,
> -					bool validate)
> -{
> -	char	   *value;
> -	int			value_len;
> -	bool		parsed;
> -	bool		nofree = false;
> -
> -	if (option->isset && validate)
> -		ereport(ERROR,
> -				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -				 errmsg("parameter \"%s\" specified more than once",
> -						option->gen->name)));
> -
> -	value_len = text_len - option->gen->namelen - 1;
> -	value = (char *) palloc(value_len + 1);
> -	memcpy(value, text_str + option->gen->namelen + 1, value_len);
> -	value[value_len] = '\0';
> -
> -	switch (option->gen->type)
> -	{
> -		case RELOPT_TYPE_BOOL:
> -			{
> -				parsed = parse_bool(value, &option->values.bool_val);
> -				if (validate && !parsed)
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("invalid value for boolean option \"%s\": %s",
> -									option->gen->name, value)));
> -			}
> -			break;
> -		case RELOPT_TYPE_INT:
> -			{
> -				relopt_int *optint = (relopt_int *) option->gen;
> -
> -				parsed = parse_int(value, &option->values.int_val, 0, NULL);
> -				if (validate && !parsed)
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("invalid value for integer option \"%s\": %s",
> -									option->gen->name, value)));
> -				if (validate && (option->values.int_val < optint->min ||
> -								 option->values.int_val > optint->max))
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("value %s out of bounds for option \"%s\"",
> -									value, option->gen->name),
> -							 errdetail("Valid values are between \"%d\" and \"%d\".",
> -									   optint->min, optint->max)));
> -			}
> -			break;
> -		case RELOPT_TYPE_REAL:
> -			{
> -				relopt_real *optreal = (relopt_real *) option->gen;
> -
> -				parsed = parse_real(value, &option->values.real_val, 0, NULL);
> -				if (validate && !parsed)
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("invalid value for floating point option \"%s\": %s",
> -									option->gen->name, value)));
> -				if (validate && (option->values.real_val < optreal->min ||
> -								 option->values.real_val > optreal->max))
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("value %s out of bounds for option \"%s\"",
> -									value, option->gen->name),
> -							 errdetail("Valid values are between \"%f\" and \"%f\".",
> -									   optreal->min, optreal->max)));
> -			}
> -			break;
> -		case RELOPT_TYPE_ENUM:
> -			{
> -				relopt_enum *optenum = (relopt_enum *) option->gen;
> -				relopt_enum_elt_def *elt;
> -
> -				parsed = false;
> -				for (elt = optenum->members; elt->string_val; elt++)
> -				{
> -					if (pg_strcasecmp(value, elt->string_val) == 0)
> -					{
> -						option->values.enum_val = elt->symbol_val;
> -						parsed = true;
> -						break;
> -					}
> -				}
> -				if (validate && !parsed)
> -					ereport(ERROR,
> -							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> -							 errmsg("invalid value for enum option \"%s\": %s",
> -									option->gen->name, value),
> -							 optenum->detailmsg ?
> -							 errdetail_internal("%s", _(optenum->detailmsg)) : 0));
> -
> -				/*
> -				 * If value is not among the allowed string values, but we are
> -				 * not asked to validate, just use the default numeric value.
> -				 */
> -				if (!parsed)
> -					option->values.enum_val = optenum->default_val;
> -			}
> -			break;
> -		case RELOPT_TYPE_STRING:
> -			{
> -				relopt_string *optstring = (relopt_string *) option->gen;
> -
> -				option->values.string_val = value;
> -				nofree = true;
> -				if (validate && optstring->validate_cb)
> -					(optstring->validate_cb) (value);
> -				parsed = true;
> -			}
> -			break;
> -		default:
> -			elog(ERROR, "unsupported reloption type %d", option->gen->type);
> -			parsed = true;		/* quiet compiler */
> -			break;
> -	}
> +	foreach(lc, relopts->validators)
> +		((relopts_validator) lfirst(lc)) (opts, NULL, 0);
> +//		((relopts_validator) lfirst(lc)) (opts, vals, noptions);
> +// FIXME solve problem with validation of separate option values;
> +	return opts;
> 
> -	if (parsed)
> -		option->isset = true;
> -	if (!nofree)
> -		pfree(value);
> }
> 
> /*
> - * Given the result from parseRelOptions, allocate a struct that's of the
> - * specified base size plus any extra space that's needed for string variables.
> - *
> - * "base" should be sizeof(struct) of the reloptions struct (StdRdOptions or
> - * equivalent).
> + * get_view_relopt_spec_set
> + *		Returns an options catalog for view relation.
>  */
> -static void *
> -allocateReloptStruct(Size base, relopt_value *options, int numoptions)
> -{
> -	Size		size = base;
> -	int			i;
> -
> -	for (i = 0; i < numoptions; i++)
> -	{
> -		relopt_value *optval = &options[i];
> -
> -		if (optval->gen->type == RELOPT_TYPE_STRING)
> -		{
> -			relopt_string *optstr = (relopt_string *) optval->gen;
> -
> -			if (optstr->fill_cb)
> -			{
> -				const char *val = optval->isset ? optval->values.string_val :
> -				optstr->default_isnull ? NULL : optstr->default_val;
> -
> -				size += optstr->fill_cb(val, NULL);
> -			}
> -			else
> -				size += GET_STRING_RELOPTION_LEN(*optval) + 1;
> -		}
> -	}
> -
> -	return palloc0(size);
> -}
> +static options_spec_set *view_relopt_spec_set = NULL;
> 
> -/*
> - * Given the result of parseRelOptions and a parsing table, fill in the
> - * struct (previously allocated with allocateReloptStruct) with the parsed
> - * values.
> - *
> - * rdopts is the pointer to the allocated struct to be filled.
> - * basesize is the sizeof(struct) that was passed to allocateReloptStruct.
> - * options, of length numoptions, is parseRelOptions' output.
> - * elems, of length numelems, is the table describing the allowed options.
> - * When validate is true, it is expected that all options appear in elems.
> - */
> -static void
> -fillRelOptions(void *rdopts, Size basesize,
> -			   relopt_value *options, int numoptions,
> -			   bool validate,
> -			   const relopt_parse_elt *elems, int numelems)
> +options_spec_set *
> +get_view_relopt_spec_set(void)
> {
> -	int			i;
> -	int			offset = basesize;
> +	if (view_relopt_spec_set)
> +		return view_relopt_spec_set;
> 
> -	for (i = 0; i < numoptions; i++)
> -	{
> -		int			j;
> -		bool		found = false;
> +	view_relopt_spec_set = allocateOptionsSpecSet(NULL,
> +												  sizeof(ViewOptions), 2);
> 
> -		for (j = 0; j < numelems; j++)
> -		{
> -			if (strcmp(options[i].gen->name, elems[j].optname) == 0)
> -			{
> -				relopt_string *optstring;
> -				char	   *itempos = ((char *) rdopts) + elems[j].offset;
> -				char	   *string_val;
> -
> -				switch (options[i].gen->type)
> -				{
> -					case RELOPT_TYPE_BOOL:
> -						*(bool *) itempos = options[i].isset ?
> -							options[i].values.bool_val :
> -							((relopt_bool *) options[i].gen)->default_val;
> -						break;
> -					case RELOPT_TYPE_INT:
> -						*(int *) itempos = options[i].isset ?
> -							options[i].values.int_val :
> -							((relopt_int *) options[i].gen)->default_val;
> -						break;
> -					case RELOPT_TYPE_REAL:
> -						*(double *) itempos = options[i].isset ?
> -							options[i].values.real_val :
> -							((relopt_real *) options[i].gen)->default_val;
> -						break;
> -					case RELOPT_TYPE_ENUM:
> -						*(int *) itempos = options[i].isset ?
> -							options[i].values.enum_val :
> -							((relopt_enum *) options[i].gen)->default_val;
> -						break;
> -					case RELOPT_TYPE_STRING:
> -						optstring = (relopt_string *) options[i].gen;
> -						if (options[i].isset)
> -							string_val = options[i].values.string_val;
> -						else if (!optstring->default_isnull)
> -							string_val = optstring->default_val;
> -						else
> -							string_val = NULL;
> -
> -						if (optstring->fill_cb)
> -						{
> -							Size		size =
> -							optstring->fill_cb(string_val,
> -											   (char *) rdopts + offset);
> -
> -							if (size)
> -							{
> -								*(int *) itempos = offset;
> -								offset += size;
> -							}
> -							else
> -								*(int *) itempos = 0;
> -						}
> -						else if (string_val == NULL)
> -							*(int *) itempos = 0;
> -						else
> -						{
> -							strcpy((char *) rdopts + offset, string_val);
> -							*(int *) itempos = offset;
> -							offset += strlen(string_val) + 1;
> -						}
> -						break;
> -					default:
> -						elog(ERROR, "unsupported reloption type %d",
> -							 options[i].gen->type);
> -						break;
> -				}
> -				found = true;
> -				break;
> -			}
> -		}
> -		if (validate && !found)
> -			elog(ERROR, "reloption \"%s\" not found in parse table",
> -				 options[i].gen->name);
> -	}
> -	SET_VARSIZE(rdopts, offset);
> -}
> +	optionsSpecSetAddBool(view_relopt_spec_set, "security_barrier",
> +						  "View acts as a row security barrier",
> +						  AccessExclusiveLock,
> +					 0, offsetof(ViewOptions, security_barrier), false);
> 
> +	optionsSpecSetAddEnum(view_relopt_spec_set, "check_option",
> +						  "View has WITH CHECK OPTION defined (local or cascaded)",
> +						  AccessExclusiveLock, 0,
> +						  offsetof(ViewOptions, check_option),
> +						  viewCheckOptValues,
> +						  VIEW_OPTION_CHECK_OPTION_NOT_SET,
> +						  gettext_noop("Valid values are \"local\" and \"cascaded\"."));
> 
> -/*
> - * Option parser for anything that uses StdRdOptions.
> - */
> -bytea *
> -default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
> -{
> -	static const relopt_parse_elt tab[] = {
> -		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
> -		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
> -		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
> -		{"autovacuum_vacuum_insert_threshold", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_threshold)},
> -		{"autovacuum_analyze_threshold", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_threshold)},
> -		{"autovacuum_vacuum_cost_limit", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_limit)},
> -		{"autovacuum_freeze_min_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_min_age)},
> -		{"autovacuum_freeze_max_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_max_age)},
> -		{"autovacuum_freeze_table_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_table_age)},
> -		{"autovacuum_multixact_freeze_min_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_min_age)},
> -		{"autovacuum_multixact_freeze_max_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_max_age)},
> -		{"autovacuum_multixact_freeze_table_age", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_table_age)},
> -		{"log_autovacuum_min_duration", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_min_duration)},
> -		{"toast_tuple_target", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, toast_tuple_target)},
> -		{"autovacuum_vacuum_cost_delay", RELOPT_TYPE_REAL,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_delay)},
> -		{"autovacuum_vacuum_scale_factor", RELOPT_TYPE_REAL,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_scale_factor)},
> -		{"autovacuum_vacuum_insert_scale_factor", RELOPT_TYPE_REAL,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_scale_factor)},
> -		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
> -		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_scale_factor)},
> -		{"user_catalog_table", RELOPT_TYPE_BOOL,
> -		offsetof(StdRdOptions, user_catalog_table)},
> -		{"parallel_workers", RELOPT_TYPE_INT,
> -		offsetof(StdRdOptions, parallel_workers)},
> -		{"vacuum_index_cleanup", RELOPT_TYPE_ENUM,
> -		offsetof(StdRdOptions, vacuum_index_cleanup)},
> -		{"vacuum_truncate", RELOPT_TYPE_BOOL,
> -		offsetof(StdRdOptions, vacuum_truncate)}
> -	};
> -
> -	return (bytea *) build_reloptions(reloptions, validate, kind,
> -									  sizeof(StdRdOptions),
> -									  tab, lengthof(tab));
> +	return view_relopt_spec_set;
> }
> 
> /*
> - * build_reloptions
> - *
> - * Parses "reloptions" provided by the caller, returning them in a
> - * structure containing the parsed options.  The parsing is done with
> - * the help of a parsing table describing the allowed options, defined
> - * by "relopt_elems" of length "num_relopt_elems".
> - *
> - * "validate" must be true if reloptions value is freshly built by
> - * transformRelOptions(), as opposed to being read from the catalog, in which
> - * case the values contained in it must already be valid.
> - *
> - * NULL is returned if the passed-in options did not match any of the options
> - * in the parsing table, unless validate is true in which case an error would
> - * be reported.
> + * get_attribute_options_spec_set
> + *		Returns an options spec det for heap attributes
>  */
> -void *
> -build_reloptions(Datum reloptions, bool validate,
> -				 relopt_kind kind,
> -				 Size relopt_struct_size,
> -				 const relopt_parse_elt *relopt_elems,
> -				 int num_relopt_elems)
> -{
> -	int			numoptions;
> -	relopt_value *options;
> -	void	   *rdopts;
> -
> -	/* parse options specific to given relation option kind */
> -	options = parseRelOptions(reloptions, validate, kind, &numoptions);
> -	Assert(numoptions <= num_relopt_elems);
> -
> -	/* if none set, we're done */
> -	if (numoptions == 0)
> -	{
> -		Assert(options == NULL);
> -		return NULL;
> -	}
> -
> -	/* allocate and fill the structure */
> -	rdopts = allocateReloptStruct(relopt_struct_size, options, numoptions);
> -	fillRelOptions(rdopts, relopt_struct_size, options, numoptions,
> -				   validate, relopt_elems, num_relopt_elems);
> +static options_spec_set *attribute_options_spec_set = NULL;
> 
> -	pfree(options);
> -
> -	return rdopts;
> -}
> -
> -/*
> - * Parse local options, allocate a bytea struct that's of the specified
> - * 'base_size' plus any extra space that's needed for string variables,
> - * fill its option's fields located at the given offsets and return it.
> - */
> -void *
> -build_local_reloptions(local_relopts *relopts, Datum options, bool validate)
> +options_spec_set *
> +get_attribute_options_spec_set(void)
> {
> -	int			noptions = list_length(relopts->options);
> -	relopt_parse_elt *elems = palloc(sizeof(*elems) * noptions);
> -	relopt_value *vals;
> -	void	   *opts;
> -	int			i = 0;
> -	ListCell   *lc;
> +	if (attribute_options_spec_set)
> +			return attribute_options_spec_set;
> 
> -	foreach(lc, relopts->options)
> -	{
> -		local_relopt *opt = lfirst(lc);
> -
> -		elems[i].optname = opt->option->name;
> -		elems[i].opttype = opt->option->type;
> -		elems[i].offset = opt->offset;
> -
> -		i++;
> -	}
> +	attribute_options_spec_set = allocateOptionsSpecSet(NULL,
> +											   sizeof(AttributeOpts), 2);
> 
> -	vals = parseLocalRelOptions(relopts, options, validate);
> -	opts = allocateReloptStruct(relopts->relopt_struct_size, vals, noptions);
> -	fillRelOptions(opts, relopts->relopt_struct_size, vals, noptions, validate,
> -				   elems, noptions);
> +	optionsSpecSetAddReal(attribute_options_spec_set, "n_distinct",
> +						  "Sets the planner's estimate of the number of distinct values appearing in a column (excluding child relations).",
> +						  ShareUpdateExclusiveLock,
> +			   0, offsetof(AttributeOpts, n_distinct), 0, -1.0, DBL_MAX);
> 
> -	foreach(lc, relopts->validators)
> -		((relopts_validator) lfirst(lc)) (opts, vals, noptions);
> -
> -	if (elems)
> -		pfree(elems);
> +	optionsSpecSetAddReal(attribute_options_spec_set,
> +						  "n_distinct_inherited",
> +						  "Sets the planner's estimate of the number of distinct values appearing in a column (including child relations).",
> +						  ShareUpdateExclusiveLock,
> +	   0, offsetof(AttributeOpts, n_distinct_inherited), 0, -1.0, DBL_MAX);
> 
> -	return opts;
> +	return attribute_options_spec_set;
> }
> 
> -/*
> - * Option parser for partitioned tables
> - */
> -bytea *
> -partitioned_table_reloptions(Datum reloptions, bool validate)
> -{
> -	/*
> -	 * There are no options for partitioned tables yet, but this is able to do
> -	 * some validation.
> -	 */
> -	return (bytea *) build_reloptions(reloptions, validate,
> -									  RELOPT_KIND_PARTITIONED,
> -									  0, NULL, 0);
> -}
> 
> /*
> - * Option parser for views
> - */
> -bytea *
> -view_reloptions(Datum reloptions, bool validate)
> -{
> -	static const relopt_parse_elt tab[] = {
> -		{"security_barrier", RELOPT_TYPE_BOOL,
> -		offsetof(ViewOptions, security_barrier)},
> -		{"check_option", RELOPT_TYPE_ENUM,
> -		offsetof(ViewOptions, check_option)}
> -	};
> -
> -	return (bytea *) build_reloptions(reloptions, validate,
> -									  RELOPT_KIND_VIEW,
> -									  sizeof(ViewOptions),
> -									  tab, lengthof(tab));
> -}
> + * get_tablespace_options_spec_set
> + *		Returns an options spec set for tablespaces
> +*/
> +static options_spec_set *tablespace_options_spec_set = NULL;
> 
> -/*
> - * Parse options for heaps, views and toast tables.
> - */
> -bytea *
> -heap_reloptions(char relkind, Datum reloptions, bool validate)
> +options_spec_set *
> +get_tablespace_options_spec_set(void)
> {
> -	StdRdOptions *rdopts;
> -
> -	switch (relkind)
> +	if (!tablespace_options_spec_set)
> 	{
> -		case RELKIND_TOASTVALUE:
> -			rdopts = (StdRdOptions *)
> -				default_reloptions(reloptions, validate, RELOPT_KIND_TOAST);
> -			if (rdopts != NULL)
> -			{
> -				/* adjust default-only parameters for TOAST relations */
> -				rdopts->fillfactor = 100;
> -				rdopts->autovacuum.analyze_threshold = -1;
> -				rdopts->autovacuum.analyze_scale_factor = -1;
> -			}
> -			return (bytea *) rdopts;
> -		case RELKIND_RELATION:
> -		case RELKIND_MATVIEW:
> -			return default_reloptions(reloptions, validate, RELOPT_KIND_HEAP);
> 
-\t\tdefault:\n> -\t\t\t/* other relkinds are not supported */\n> -\t\t\treturn NULL;\n> -\t}\n> -}\n> -\n> -\n> -/*\n> - * Parse options for indexes.\n> - *\n> - *\tamoptions\tindex AM's option parser function\n> - *\treloptions\toptions as text[] datum\n> - *\tvalidate\terror flag\n> - */\n> -bytea *\n> -index_reloptions(amoptions_function amoptions, Datum reloptions, bool validate)\n> -{\n> -\tAssert(amoptions != NULL);\n> +\t\ttablespace_options_spec_set = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\t sizeof(TableSpaceOpts), 4);\n> \n> -\t/* Assume function is strict */\n> -\tif (!PointerIsValid(DatumGetPointer(reloptions)))\n> -\t\treturn NULL;\n> +\t\toptionsSpecSetAddReal(tablespace_options_spec_set,\n> +\t\t\t\t\t\t\t\t \"random_page_cost\",\n> +\t\t\t\t\t\t\t\t \"Sets the planner's estimate of the cost of a nonsequentially fetched disk page\",\n> +\t\t\t\t\t\t\t\t ShareUpdateExclusiveLock,\n> +\t\t\t0, offsetof(TableSpaceOpts, random_page_cost), -1, 0.0, DBL_MAX);\n> \n> -\treturn amoptions(reloptions, validate);\n> -}\n> +\t\toptionsSpecSetAddReal(tablespace_options_spec_set, \"seq_page_cost\",\n> +\t\t\t\t\t\t\t\t \"Sets the planner's estimate of the cost of a sequentially fetched disk page\",\n> +\t\t\t\t\t\t\t\t ShareUpdateExclusiveLock,\n> +\t\t\t 0, offsetof(TableSpaceOpts, seq_page_cost), -1, 0.0, DBL_MAX);\n> \n> -/*\n> - * Option parser for attribute reloptions\n> - */\n> -bytea *\n> -attribute_reloptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"n_distinct\", RELOPT_TYPE_REAL, offsetof(AttributeOpts, n_distinct)},\n> -\t\t{\"n_distinct_inherited\", RELOPT_TYPE_REAL, offsetof(AttributeOpts, n_distinct_inherited)}\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_ATTRIBUTE,\n> -\t\t\t\t\t\t\t\t\t sizeof(AttributeOpts),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -}\n> 
+\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> +\t\t\t\t\t\t\t\t \"effective_io_concurrency\",\n> +\t\t\t\t\t\t\t\t \"Number of simultaneous requests that can be handled efficiently by the disk subsystem\",\n> +\t\t\t\t\t\t\t\t ShareUpdateExclusiveLock,\n> +\t\t\t\t\t 0, offsetof(TableSpaceOpts, effective_io_concurrency),\n> +#ifdef USE_PREFETCH\n> +\t\t\t\t\t\t\t\t -1, 0, MAX_IO_CONCURRENCY\n> +#else\n> +\t\t\t\t\t\t\t\t 0, 0, 0\n> +#endif\n> +\t\t\t);\n> \n> -/*\n> - * Option parser for tablespace reloptions\n> - */\n> -bytea *\n> -tablespace_reloptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"random_page_cost\", RELOPT_TYPE_REAL, offsetof(TableSpaceOpts, random_page_cost)},\n> -\t\t{\"seq_page_cost\", RELOPT_TYPE_REAL, offsetof(TableSpaceOpts, seq_page_cost)},\n> -\t\t{\"effective_io_concurrency\", RELOPT_TYPE_INT, offsetof(TableSpaceOpts, effective_io_concurrency)},\n> -\t\t{\"maintenance_io_concurrency\", RELOPT_TYPE_INT, offsetof(TableSpaceOpts, maintenance_io_concurrency)}\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_TABLESPACE,\n> -\t\t\t\t\t\t\t\t\t sizeof(TableSpaceOpts),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> +\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> +\t\t\t\t\t\t\t\t \"maintenance_io_concurrency\",\n> +\t\t\t\t\t\t\t\t \"Number of simultaneous requests that can be handled efficiently by the disk subsystem for maintenance work.\",\n> +\t\t\t\t\t\t\t\t ShareUpdateExclusiveLock,\n> +\t\t\t\t\t 0, offsetof(TableSpaceOpts, maintenance_io_concurrency),\n> +#ifdef USE_PREFETCH\n> +\t\t\t\t\t\t\t\t -1, 0, MAX_IO_CONCURRENCY\n> +#else\n> +\t\t\t\t\t\t\t\t 0, 0, 0\n> +#endif\n> +\t\t\t);\n> +\t}\n> +\treturn tablespace_options_spec_set;\n> }\n> \n> /*\n> @@ -2099,33 +612,55 @@ tablespace_reloptions(Datum reloptions, bool validate)\n> * for a longer explanation of how this works.\n> */\n> LOCKMODE\n> 
-AlterTableGetRelOptionsLockLevel(List *defList)\n> +AlterTableGetRelOptionsLockLevel(Relation rel, List *defList)\n> {\n> \tLOCKMODE\tlockmode = NoLock;\n> \tListCell *cell;\n> +\toptions_spec_set *spec_set = NULL;\n> \n> \tif (defList == NIL)\n> \t\treturn AccessExclusiveLock;\n> \n> -\tif (need_initialization)\n> -\t\tinitialize_reloptions();\n> +\tswitch (rel->rd_rel->relkind)\n> +\t{\n> +\t\tcase RELKIND_TOASTVALUE:\n> +\t\t\tspec_set = get_toast_relopt_spec_set();\n> +\t\t\tbreak;\n> +\t\tcase RELKIND_RELATION:\n> +\t\tcase RELKIND_MATVIEW:\n> +\t\t\tspec_set = get_heap_relopt_spec_set();\n> +\t\t\tbreak;\n> +\t\tcase RELKIND_INDEX:\n> +\t\t\tspec_set = rel->rd_indam->amreloptspecset();\n> +\t\t\tbreak;\n> +\t\tcase RELKIND_VIEW:\n> +\t\t\tspec_set = get_view_relopt_spec_set();\n> +\t\t\tbreak;\n> +\t\tcase RELKIND_PARTITIONED_TABLE:\n> +\t\t\tspec_set = get_partitioned_relopt_spec_set();\n> +\t\t\tbreak;\n> +\t\tdefault:\n> +\t\t\tAssert(false);\t\t/* can't get here */\n> +\t\t\tbreak;\n> +\t}\n> +\tAssert(spec_set);\t\t\t/* No spec set - no reloption change. 
Should\n> +\t\t\t\t\t\t\t\t * never get here */\n> \n> \tforeach(cell, defList)\n> \t{\n> \t\tDefElem *def = (DefElem *) lfirst(cell);\n> +\n> \t\tint\t\t\ti;\n> \n> -\t\tfor (i = 0; relOpts[i]; i++)\n> +\t\tfor (i = 0; i < spec_set->num; i++)\n> \t\t{\n> -\t\t\tif (strncmp(relOpts[i]->name,\n> -\t\t\t\t\t\tdef->defname,\n> -\t\t\t\t\t\trelOpts[i]->namelen + 1) == 0)\n> -\t\t\t{\n> -\t\t\t\tif (lockmode < relOpts[i]->lockmode)\n> -\t\t\t\t\tlockmode = relOpts[i]->lockmode;\n> -\t\t\t}\n> +\t\t\toption_spec_basic *gen = spec_set->definitions[i];\n> +\n> +\t\t\tif (pg_strcasecmp(gen->name,\n> +\t\t\t\t\t\t\t def->defname) == 0)\n> +\t\t\t\tif (lockmode < gen->lockmode)\n> +\t\t\t\t\tlockmode = gen->lockmode;\n> \t\t}\n> \t}\n> -\n> \treturn lockmode;\n> -}\n> +}\n> \\ No newline at end of file\n> diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c\n> index 0e8672c..0cbffad 100644\n> --- a/src/backend/access/gin/gininsert.c\n> +++ b/src/backend/access/gin/gininsert.c\n> @@ -512,6 +512,8 @@ gininsert(Relation index, Datum *values, bool *isnull,\n> \n> \toldCtx = MemoryContextSwitchTo(insertCtx);\n> \n> +// elog(WARNING, \"GinGetUseFastUpdate = %i\", GinGetUseFastUpdate(index));\n> +\n> \tif (GinGetUseFastUpdate(index))\n> \t{\n> \t\tGinTupleCollector collector;\n> diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c\n> index 6d2d71b..d1fa3a0 100644\n> --- a/src/backend/access/gin/ginutil.c\n> +++ b/src/backend/access/gin/ginutil.c\n> @@ -16,7 +16,7 @@\n> \n> #include \"access/gin_private.h\"\n> #include \"access/ginxlog.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"access/xloginsert.h\"\n> #include \"catalog/pg_collation.h\"\n> #include \"catalog/pg_type.h\"\n> @@ -28,6 +28,7 @@\n> #include \"utils/builtins.h\"\n> #include \"utils/index_selfuncs.h\"\n> #include \"utils/typcache.h\"\n> +#include \"utils/guc.h\"\n> \n> \n> /*\n> @@ -67,7 +68,6 @@ 
ginhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = ginvacuumcleanup;\n> \tamroutine->amcanreturn = NULL;\n> \tamroutine->amcostestimate = gincostestimate;\n> -\tamroutine->amoptions = ginoptions;\n> \tamroutine->amproperty = NULL;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = ginvalidate;\n> @@ -82,6 +82,7 @@ ginhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> +\tamroutine->amreloptspecset = gingetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> @@ -604,6 +605,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,\n> \treturn entries;\n> }\n> \n> +/*\n> bytea *\n> ginoptions(Datum reloptions, bool validate)\n> {\n> @@ -618,6 +620,7 @@ ginoptions(Datum reloptions, bool validate)\n> \t\t\t\t\t\t\t\t\t sizeof(GinOptions),\n> \t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> }\n> +*/\n> \n> /*\n> * Fetch index's statistical data into *stats\n> @@ -705,3 +708,31 @@ ginUpdateStats(Relation index, const GinStatsData *stats, bool is_build)\n> \n> \tEND_CRIT_SECTION();\n> }\n> +\n> +static options_spec_set *gin_relopt_specset = NULL;\n> +\n> +void *\n> +gingetreloptspecset(void)\n> +{\n> +\tif (gin_relopt_specset)\n> +\t\treturn gin_relopt_specset;\n> +\n> +\tgin_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\tsizeof(GinOptions), 2);\n> +\n> +\toptionsSpecSetAddBool(gin_relopt_specset, \"fastupdate\",\n> +\t\t\t\t\t\t\"Enables \\\"fast update\\\" feature for this GIN index\",\n> +\t\t\t\t\t\t\t AccessExclusiveLock,\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t offsetof(GinOptions, useFastUpdate),\n> +\t\t\t\t\t\t\t GIN_DEFAULT_USE_FASTUPDATE);\n> +\n> +\toptionsSpecSetAddInt(gin_relopt_specset, \"gin_pending_list_limit\",\n> +\t\t \"Maximum size of the pending list for this GIN index, in kilobytes\",\n> +\t\t\t\t\t\t\t AccessExclusiveLock,\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t 
offsetof(GinOptions, pendingListCleanupSize),\n> +\t\t\t\t\t\t\t -1, 64, MAX_KILOBYTES);\n> +\n> +\treturn gin_relopt_specset;\n> +}\n> diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c\n> index 0683f42..cbbc6a5 100644\n> --- a/src/backend/access/gist/gist.c\n> +++ b/src/backend/access/gist/gist.c\n> @@ -88,7 +88,6 @@ gisthandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = gistvacuumcleanup;\n> \tamroutine->amcanreturn = gistcanreturn;\n> \tamroutine->amcostestimate = gistcostestimate;\n> -\tamroutine->amoptions = gistoptions;\n> \tamroutine->amproperty = gistproperty;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = gistvalidate;\n> @@ -103,6 +102,7 @@ gisthandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> +\tamroutine->amreloptspecset = gistgetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c\n> index baad28c..931d249 100644\n> --- a/src/backend/access/gist/gistbuild.c\n> +++ b/src/backend/access/gist/gistbuild.c\n> @@ -215,6 +215,7 @@ gistbuild(Relation heap, Relation index, IndexInfo *indexInfo)\n> \t\t\tbuildstate.buildMode = GIST_BUFFERING_DISABLED;\n> \t\telse\t\t\t\t\t/* must be \"auto\" */\n> \t\t\tbuildstate.buildMode = GIST_BUFFERING_AUTO;\n> +//elog(WARNING, \"biffering_mode = %i\", options->buffering_mode);\n> \t}\n> \telse\n> \t{\n> diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c\n> index 43ba03b..0391915 100644\n> --- a/src/backend/access/gist/gistutil.c\n> +++ b/src/backend/access/gist/gistutil.c\n> @@ -17,7 +17,7 @@\n> \n> #include \"access/gist_private.h\"\n> #include \"access/htup_details.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"catalog/pg_opclass.h\"\n> #include \"storage/indexfsm.h\"\n> #include 
\"storage/lmgr.h\"\n> @@ -916,20 +916,6 @@ gistPageRecyclable(Page page)\n> \treturn false;\n> }\n> \n> -bytea *\n> -gistoptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(GiSTOptions, fillfactor)},\n> -\t\t{\"buffering\", RELOPT_TYPE_ENUM, offsetof(GiSTOptions, buffering_mode)}\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_GIST,\n> -\t\t\t\t\t\t\t\t\t sizeof(GiSTOptions),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -}\n> -\n> /*\n> *\tgistproperty() -- Check boolean properties of indexes.\n> *\n> @@ -1064,3 +1050,42 @@ gistGetFakeLSN(Relation rel)\n> \t\treturn GetFakeLSNForUnloggedRel();\n> \t}\n> }\n> +\n> +/* values from GistOptBufferingMode */\n> +opt_enum_elt_def gistBufferingOptValues[] =\n> +{\n> +\t{\"auto\", GIST_OPTION_BUFFERING_AUTO},\n> +\t{\"on\", GIST_OPTION_BUFFERING_ON},\n> +\t{\"off\", GIST_OPTION_BUFFERING_OFF},\n> +\t{(const char *) NULL}\t\t/* list terminator */\n> +};\n> +\n> +static options_spec_set *gist_relopt_specset = NULL;\n> +\n> +void *\n> +gistgetreloptspecset(void)\n> +{\n> +\tif (gist_relopt_specset)\n> +\t\treturn gist_relopt_specset;\n> +\n> +\tgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\t sizeof(GiSTOptions), 2);\n> +\n> +\toptionsSpecSetAddInt(gist_relopt_specset, \"fillfactor\",\n> +\t\t\t\t\t\t\"Packs gist index pages only to this percentage\",\n> +\t\t\t\t\t\t\t NoLock,\t\t/* No ALTER, no lock */\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t offsetof(GiSTOptions, fillfactor),\n> +\t\t\t\t\t\t\t GIST_DEFAULT_FILLFACTOR,\n> +\t\t\t\t\t\t\t GIST_MIN_FILLFACTOR, 100);\n> +\n> +\toptionsSpecSetAddEnum(gist_relopt_specset, \"buffering\",\n> +\t\t\t\t\t\t \"Enables buffering build for this GiST index\",\n> +\t\t\t\t\t\t\t NoLock,\t\t/* No ALTER, no lock */\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t offsetof(GiSTOptions, buffering_mode),\n> 
+\t\t\t\t\t\t\t gistBufferingOptValues,\n> +\t\t\t\t\t\t\t GIST_OPTION_BUFFERING_AUTO,\n> +\t\t\t\t\t\t\t gettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \\\"auto\\\".\"));\n> +\treturn gist_relopt_specset;\n> +}\n> diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c\n> index eb38104..8dc4ca7 100644\n> --- a/src/backend/access/hash/hash.c\n> +++ b/src/backend/access/hash/hash.c\n> @@ -85,7 +85,6 @@ hashhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = hashvacuumcleanup;\n> \tamroutine->amcanreturn = NULL;\n> \tamroutine->amcostestimate = hashcostestimate;\n> -\tamroutine->amoptions = hashoptions;\n> \tamroutine->amproperty = NULL;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = hashvalidate;\n> @@ -100,6 +99,7 @@ hashhandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> +\tamroutine->amreloptspecset = hashgetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c\n> index 159646c..38f64ef 100644\n> --- a/src/backend/access/hash/hashpage.c\n> +++ b/src/backend/access/hash/hashpage.c\n> @@ -359,6 +359,8 @@ _hash_init(Relation rel, double num_tuples, ForkNumber forkNum)\n> \tdata_width = sizeof(uint32);\n> \titem_width = MAXALIGN(sizeof(IndexTupleData)) + MAXALIGN(data_width) +\n> \t\tsizeof(ItemIdData);\t\t/* include the line pointer */\n> +//elog(WARNING, \"fillfactor = %i\", HashGetFillFactor(rel));\n> +\n> \tffactor = HashGetTargetPageUsage(rel) / item_width;\n> \t/* keep to a sane range */\n> \tif (ffactor < 10)\n> diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c\n> index 5198728..826beab 100644\n> --- a/src/backend/access/hash/hashutil.c\n> +++ b/src/backend/access/hash/hashutil.c\n> @@ -15,7 +15,7 @@\n> #include \"postgres.h\"\n> \n> #include \"access/hash.h\"\n> 
-#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"access/relscan.h\"\n> #include \"port/pg_bitutils.h\"\n> #include \"storage/buf_internals.h\"\n> @@ -272,19 +272,6 @@ _hash_checkpage(Relation rel, Buffer buf, int flags)\n> \t}\n> }\n> \n> -bytea *\n> -hashoptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(HashOptions, fillfactor)},\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_HASH,\n> -\t\t\t\t\t\t\t\t\t sizeof(HashOptions),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -}\n> -\n> /*\n> * _hash_get_indextuple_hashkey - get the hash index tuple's hash key value\n> */\n> @@ -620,3 +607,24 @@ _hash_kill_items(IndexScanDesc scan)\n> \telse\n> \t\t_hash_relbuf(rel, buf);\n> }\n> +\n> +static options_spec_set *hash_relopt_specset = NULL;\n> +\n> +void *\n> +hashgetreloptspecset(void)\n> +{\n> +\tif (hash_relopt_specset)\n> +\t\treturn hash_relopt_specset;\n> +\n> +\thash_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t sizeof(HashOptions), 1);\n> +\toptionsSpecSetAddInt(hash_relopt_specset, \"fillfactor\",\n> +\t\t\t\t\t\t\"Packs hash index pages only to this percentage\",\n> +\t\t\t\t\t\t\t NoLock,\t\t/* No ALTER -- no lock */\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t offsetof(HashOptions, fillfactor),\n> +\t\t\t\t\t\t\t HASH_DEFAULT_FILLFACTOR,\n> +\t\t\t\t\t\t\t HASH_MIN_FILLFACTOR, 100);\n> +\n> +\treturn hash_relopt_specset;\n> +}\n> diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c\n> index 7355e1d..f7b117e 100644\n> --- a/src/backend/access/nbtree/nbtinsert.c\n> +++ b/src/backend/access/nbtree/nbtinsert.c\n> @@ -2745,6 +2745,8 @@ _bt_delete_or_dedup_one_page(Relation rel, Relation heapRel,\n> \t\t_bt_bottomupdel_pass(rel, buffer, heapRel, insertstate->itemsz))\n> \t\treturn;\n> \n> +// elog(WARNING, 
\"Deduplicate_items = %i\", BTGetDeduplicateItems(rel));\n> +\n> \t/* Perform deduplication pass (when enabled and index-is-allequalimage) */\n> \tif (BTGetDeduplicateItems(rel) && itup_key->allequalimage)\n> \t\t_bt_dedup_pass(rel, buffer, heapRel, insertstate->itup,\n> diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c\n> index 40ad095..f171c54 100644\n> --- a/src/backend/access/nbtree/nbtree.c\n> +++ b/src/backend/access/nbtree/nbtree.c\n> @@ -22,6 +22,7 @@\n> #include \"access/nbtxlog.h\"\n> #include \"access/relscan.h\"\n> #include \"access/xlog.h\"\n> +#include \"access/options.h\"\n> #include \"commands/progress.h\"\n> #include \"commands/vacuum.h\"\n> #include \"miscadmin.h\"\n> @@ -124,7 +125,6 @@ bthandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = btvacuumcleanup;\n> \tamroutine->amcanreturn = btcanreturn;\n> \tamroutine->amcostestimate = btcostestimate;\n> -\tamroutine->amoptions = btoptions;\n> \tamroutine->amproperty = btproperty;\n> \tamroutine->ambuildphasename = btbuildphasename;\n> \tamroutine->amvalidate = btvalidate;\n> @@ -139,6 +139,7 @@ bthandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = btestimateparallelscan;\n> \tamroutine->aminitparallelscan = btinitparallelscan;\n> \tamroutine->amparallelrescan = btparallelrescan;\n> +\tamroutine->amreloptspecset = btgetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> @@ -1418,3 +1419,37 @@ btcanreturn(Relation index, int attno)\n> {\n> \treturn true;\n> }\n> +\n> +static options_spec_set *bt_relopt_specset = NULL;\n> +\n> +void *\n> +btgetreloptspecset(void)\n> +{\n> +\tif (bt_relopt_specset)\n> +\t\treturn bt_relopt_specset;\n> +\n> +\tbt_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t sizeof(BTOptions), 3);\n> +\n> +\toptionsSpecSetAddInt(\n> +\t\tbt_relopt_specset, \"fillfactor\",\n> +\t\t\"Packs btree index pages only to this percentage\",\n> +\t\tShareUpdateExclusiveLock, /* since it applies only to 
later inserts */\n> +\t\t0, offsetof(BTOptions, fillfactor),\n> +\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> +\t);\n> +\toptionsSpecSetAddReal(\n> +\t\tbt_relopt_specset, \"vacuum_cleanup_index_scale_factor\",\n> +\t\t\"Number of tuple inserts prior to index cleanup as a fraction of reltuples\",\n> +\t\tShareUpdateExclusiveLock,\n> +\t\t0, offsetof(BTOptions,vacuum_cleanup_index_scale_factor),\n> +\t\t-1, 0.0, 1e10\n> +\t);\n> +\toptionsSpecSetAddBool(\n> +\t\tbt_relopt_specset, \"deduplicate_items\",\n> +\t\t\"Enables \\\"deduplicate items\\\" feature for this btree index\",\n> +\t\tShareUpdateExclusiveLock, /* since it applies only to later inserts */\n> +\t\t0, offsetof(BTOptions,deduplicate_items), true\n> +\t);\n> +\treturn bt_relopt_specset;\n> +}\n> diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c\n> index c72b456..2588a30 100644\n> --- a/src/backend/access/nbtree/nbtutils.c\n> +++ b/src/backend/access/nbtree/nbtutils.c\n> @@ -18,7 +18,7 @@\n> #include <time.h>\n> \n> #include \"access/nbtree.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"storage/lock.h\"\n> #include \"access/relscan.h\"\n> #include \"catalog/catalog.h\"\n> #include \"commands/progress.h\"\n> @@ -2100,25 +2100,6 @@ BTreeShmemInit(void)\n> \t\tAssert(found);\n> }\n> \n> -bytea *\n> -btoptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(BTOptions, fillfactor)},\n> -\t\t{\"vacuum_cleanup_index_scale_factor\", RELOPT_TYPE_REAL,\n> -\t\toffsetof(BTOptions, vacuum_cleanup_index_scale_factor)},\n> -\t\t{\"deduplicate_items\", RELOPT_TYPE_BOOL,\n> -\t\toffsetof(BTOptions, deduplicate_items)}\n> -\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_BTREE,\n> -\t\t\t\t\t\t\t\t\t sizeof(BTOptions),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -\n> -}\n> -\n> /*\n> *\tbtproperty() -- Check 
boolean properties of indexes.\n> *\n> diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c\n> index 03a9cd3..14429ad 100644\n> --- a/src/backend/access/spgist/spgutils.c\n> +++ b/src/backend/access/spgist/spgutils.c\n> @@ -17,7 +17,7 @@\n> \n> #include \"access/amvalidate.h\"\n> #include \"access/htup_details.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"access/spgist_private.h\"\n> #include \"access/toast_compression.h\"\n> #include \"access/transam.h\"\n> @@ -72,7 +72,6 @@ spghandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = spgvacuumcleanup;\n> \tamroutine->amcanreturn = spgcanreturn;\n> \tamroutine->amcostestimate = spgcostestimate;\n> -\tamroutine->amoptions = spgoptions;\n> \tamroutine->amproperty = spgproperty;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = spgvalidate;\n> @@ -87,6 +86,7 @@ spghandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> +\tamroutine->amreloptspecset = spggetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> @@ -550,6 +550,7 @@ SpGistGetBuffer(Relation index, int flags, int needSpace, bool *isNew)\n> \t * related to the ones already on it. 
But fillfactor mustn't cause an\n> \t * error for requests that would otherwise be legal.\n> \t */\n> +//elog(WARNING, \"fillfactor = %i\", SpGistGetFillFactor(index));\n> \tneedSpace += SpGistGetTargetPageFreeSpace(index);\n> \tneedSpace = Min(needSpace, SPGIST_PAGE_CAPACITY);\n> \n> @@ -721,23 +722,6 @@ SpGistInitMetapage(Page page)\n> }\n> \n> /*\n> - * reloptions processing for SPGiST\n> - */\n> -bytea *\n> -spgoptions(Datum reloptions, bool validate)\n> -{\n> -\tstatic const relopt_parse_elt tab[] = {\n> -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(SpGistOptions, fillfactor)},\n> -\t};\n> -\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_SPGIST,\n> -\t\t\t\t\t\t\t\t\t sizeof(SpGistOptions),\n> -\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n> -\n> -}\n> -\n> -/*\n> * Get the space needed to store a non-null datum of the indicated type\n> * in an inner tuple (that is, as a prefix or node label).\n> * Note the result is already rounded up to a MAXALIGN boundary.\n> @@ -1336,3 +1320,25 @@ spgproperty(Oid index_oid, int attno,\n> \n> \treturn true;\n> }\n> +\n> +static options_spec_set *spgist_relopt_specset = NULL;\n> +\n> +void *\n> +spggetreloptspecset(void)\n> +{\n> +\tif (!spgist_relopt_specset)\n> +\t{\n> +\t\tspgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\tsizeof(SpGistOptions), 1);\n> +\n> +\t\toptionsSpecSetAddInt(spgist_relopt_specset, \"fillfactor\",\n> +\t\t\t\t\t\t \"Packs spgist index pages only to this percentage\",\n> +\t\t\t\t\t\t\t\t ShareUpdateExclusiveLock,\t\t/* since it applies only\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t * to later inserts */\n> +\t\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t\t offsetof(SpGistOptions, fillfactor),\n> +\t\t\t\t\t\t\t\t SPGIST_DEFAULT_FILLFACTOR,\n> +\t\t\t\t\t\t\t\t SPGIST_MIN_FILLFACTOR, 100);\n> +\t}\n> +\treturn spgist_relopt_specset;\n> +}\n> diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c\n> index 
0982851..4f3dbb8 100644\n> --- a/src/backend/commands/createas.c\n> +++ b/src/backend/commands/createas.c\n> @@ -90,6 +90,7 @@ create_ctas_internal(List *attrList, IntoClause *into)\n> \tDatum\t\ttoast_options;\n> \tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> \tObjectAddress intoRelationAddr;\n> +\tList\t *toastDefList;\n> \n> \t/* This code supports both CREATE TABLE AS and CREATE MATERIALIZED VIEW */\n> \tis_matview = (into->viewQuery != NULL);\n> @@ -124,14 +125,12 @@ create_ctas_internal(List *attrList, IntoClause *into)\n> \tCommandCounterIncrement();\n> \n> \t/* parse and validate reloptions for the toast table */\n> -\ttoast_options = transformRelOptions((Datum) 0,\n> -\t\t\t\t\t\t\t\t\t\tcreate->options,\n> -\t\t\t\t\t\t\t\t\t\t\"toast\",\n> -\t\t\t\t\t\t\t\t\t\tvalidnsps,\n> -\t\t\t\t\t\t\t\t\t\ttrue, false);\n> \n> -\t(void) heap_reloptions(RELKIND_TOASTVALUE, toast_options, true);\n> +\toptionsDefListValdateNamespaces(create->options, validnsps);\n> +\ttoastDefList = optionsDefListFilterNamespaces(create->options, \"toast\");\n> \n> +\ttoast_options = transformOptions(get_toast_relopt_spec_set(), (Datum) 0,\n> +\t\t\t\t\t\t\t\t\t toastDefList, 0);\n> \tNewRelationCreateToastTable(intoRelationAddr.objectId, toast_options);\n> \n> \t/* Create the \"view\" part of a materialized view. 
*/\n> diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c\n> index 146fa57..758ca34 100644\n> --- a/src/backend/commands/foreigncmds.c\n> +++ b/src/backend/commands/foreigncmds.c\n> @@ -112,7 +112,7 @@ transformGenericOptions(Oid catalogId,\n> \t\t\t\t\t\tList *options,\n> \t\t\t\t\t\tOid fdwvalidator)\n> {\n> -\tList\t *resultOptions = untransformRelOptions(oldOptions);\n> +\tList\t *resultOptions = optionsTextArrayToDefList(oldOptions);\n> \tListCell *optcell;\n> \tDatum\t\tresult;\n> \n> diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\n> index c14ca27..96d465a 100644\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -19,6 +19,7 @@\n> #include \"access/heapam.h\"\n> #include \"access/htup_details.h\"\n> #include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"access/sysattr.h\"\n> #include \"access/tableam.h\"\n> #include \"access/xact.h\"\n> @@ -531,7 +532,7 @@ DefineIndex(Oid relationId,\n> \tForm_pg_am\taccessMethodForm;\n> \tIndexAmRoutine *amRoutine;\n> \tbool\t\tamcanorder;\n> -\tamoptions_function amoptions;\n> +\tamreloptspecset_function amreloptspecsetfn;\n> \tbool\t\tpartitioned;\n> \tbool\t\tsafe_index;\n> \tDatum\t\treloptions;\n> @@ -837,7 +838,7 @@ DefineIndex(Oid relationId,\n> \t\t\t\t\t\taccessMethodName)));\n> \n> \tamcanorder = amRoutine->amcanorder;\n> -\tamoptions = amRoutine->amoptions;\n> +\tamreloptspecsetfn = amRoutine->amreloptspecset;\n> \n> \tpfree(amRoutine);\n> \tReleaseSysCache(tuple);\n> @@ -851,10 +852,19 @@ DefineIndex(Oid relationId,\n> \t/*\n> \t * Parse AM-specific options, convert to text array form, validate.\n> \t */\n> -\treloptions = transformRelOptions((Datum) 0, stmt->options,\n> -\t\t\t\t\t\t\t\t\t NULL, NULL, false, false);\n> \n> -\t(void) index_reloptions(amoptions, reloptions, true);\n> +\tif (amreloptspecsetfn)\n> +\t{\n> +\t\treloptions = transformOptions(amreloptspecsetfn(),\n> 
+\t\t\t\t\t\t\t\t\t (Datum) 0, stmt->options, 0);\n> +\t}\n> +\telse\n> +\t{\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"access method %s does not support options\",\n> +\t\t\t\t\t\taccessMethodName)));\n> +\t}\n> \n> \t/*\n> \t * Prepare arguments for index_create, primarily an IndexInfo structure.\n> @@ -1986,8 +1996,7 @@ ComputeIndexAttrs(IndexInfo *indexInfo,\n> \t\t\t\t\tpalloc0(sizeof(Datum) * indexInfo->ii_NumIndexAttrs);\n> \n> \t\t\tindexInfo->ii_OpclassOptions[attn] =\n> -\t\t\t\ttransformRelOptions((Datum) 0, attribute->opclassopts,\n> -\t\t\t\t\t\t\t\t\tNULL, NULL, false, false);\n> +\t\t\t\toptionsDefListToTextArray(attribute->opclassopts);\n> \t\t}\n> \n> \t\tattn++;\n> diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\n> index 1c2ebe1..7f3004f 100644\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -20,6 +20,7 @@\n> #include \"access/heapam_xlog.h\"\n> #include \"access/multixact.h\"\n> #include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"access/relscan.h\"\n> #include \"access/sysattr.h\"\n> #include \"access/tableam.h\"\n> @@ -641,7 +642,6 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,\n> \tListCell *listptr;\n> \tAttrNumber\tattnum;\n> \tbool\t\tpartitioned;\n> -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> \tOid\t\t\tofTypeId;\n> \tObjectAddress address;\n> \tLOCKMODE\tparentLockmode;\n> @@ -789,19 +789,37 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,\n> \t/*\n> \t * Parse and validate reloptions, if any.\n> \t */\n> -\treloptions = transformRelOptions((Datum) 0, stmt->options, NULL, validnsps,\n> -\t\t\t\t\t\t\t\t\t true, false);\n> \n> \tswitch (relkind)\n> \t{\n> \t\tcase RELKIND_VIEW:\n> -\t\t\t(void) view_reloptions(reloptions, true);\n> +\t\t\treloptions = transformOptions(\n> +\t\t\t\t\t\t\t\t\t get_view_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t 
(Datum) 0, stmt->options, 0);\n> \t\t\tbreak;\n> \t\tcase RELKIND_PARTITIONED_TABLE:\n> -\t\t\t(void) partitioned_table_reloptions(reloptions, true);\n> +\t\t{\n> +\t\t\t/* If it is not listed above, then it is heap */\n> +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\tList\t *heapDefList;\n> +\n> +\t\t\toptionsDefListValdateNamespaces(stmt->options, namespaces);\n> +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt->options, NULL);\n> +\t\t\treloptions = transformOptions(get_partitioned_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t (Datum) 0, heapDefList, 0);\n> \t\t\tbreak;\n> +\t\t}\n> \t\tdefault:\n> -\t\t\t(void) heap_reloptions(relkind, reloptions, true);\n> +\t\t{\n> +\t\t\t/* If it is not listed above, then it is heap */\n> +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\tList\t *heapDefList;\n> +\n> +\t\t\toptionsDefListValdateNamespaces(stmt->options, namespaces);\n> +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt->options, NULL);\n> +\t\t\treloptions = transformOptions(get_heap_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t (Datum) 0, heapDefList, 0);\n> +\t\t}\n> \t}\n> \n> \tif (stmt->ofTypename)\n> @@ -4022,7 +4040,7 @@ void\n> AlterTableInternal(Oid relid, List *cmds, bool recurse)\n> {\n> \tRelation\trel;\n> -\tLOCKMODE\tlockmode = AlterTableGetLockLevel(cmds);\n> +\tLOCKMODE\tlockmode = AlterTableGetLockLevel(relid, cmds);\n> \n> \trel = relation_open(relid, lockmode);\n> \n> @@ -4064,7 +4082,7 @@ AlterTableInternal(Oid relid, List *cmds, bool recurse)\n> * otherwise we might end up with an inconsistent dump that can't restore.\n> */\n> LOCKMODE\n> -AlterTableGetLockLevel(List *cmds)\n> +AlterTableGetLockLevel(Oid relid, List *cmds)\n> {\n> \t/*\n> \t * This only works if we read catalog tables using MVCC snapshots.\n> @@ -4285,9 +4303,13 @@ AlterTableGetLockLevel(List *cmds)\n> \t\t\t\t\t\t\t\t\t * getTables() */\n> \t\t\tcase AT_ResetRelOptions:\t/* Uses MVCC in getIndexes() and\n> \t\t\t\t\t\t\t\t\t\t 
* getTables() */\n> -\t\t\t\tcmd_lockmode = AlterTableGetRelOptionsLockLevel((List *) cmd->def);\n> -\t\t\t\tbreak;\n> -\n> +\t\t\t\t{\n> +\t\t\t\t\tRelation rel = relation_open(relid, NoLock); // FIXME I am not sure how wise it is\n> +\t\t\t\t\tcmd_lockmode = AlterTableGetRelOptionsLockLevel(rel,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\tcastNode(List, cmd->def));\n> +\t\t\t\t\trelation_close(rel,NoLock);\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> \t\t\tcase AT_AttachPartition:\n> \t\t\t\tcmd_lockmode = ShareUpdateExclusiveLock;\n> \t\t\t\tbreak;\n> @@ -8062,11 +8084,11 @@ ATExecSetOptions(Relation rel, const char *colName, Node *options,\n> \t/* Generate new proposed attoptions (text array) */\n> \tdatum = SysCacheGetAttr(ATTNAME, tuple, Anum_pg_attribute_attoptions,\n> \t\t\t\t\t\t\t&isnull);\n> -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> -\t\t\t\t\t\t\t\t\t castNode(List, options), NULL, NULL,\n> -\t\t\t\t\t\t\t\t\t false, isReset);\n> -\t/* Validate new options */\n> -\t(void) attribute_reloptions(newOptions, true);\n> +\n> +\tnewOptions = transformOptions(get_attribute_options_spec_set(),\n> +\t\t\t\t\t\t\t\t isnull ? (Datum) 0 : datum,\n> +\t\t\t\t\t castNode(List, options), OPTIONS_PARSE_MODE_FOR_ALTER |\n> +\t\t\t\t\t\t\t (isReset ? OPTIONS_PARSE_MODE_FOR_RESET : 0));\n> \n> \t/* Build new tuple. 
*/\n> \tmemset(repl_null, false, sizeof(repl_null));\n> @@ -13704,7 +13726,8 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,\n> \tDatum\t\trepl_val[Natts_pg_class];\n> \tbool\t\trepl_null[Natts_pg_class];\n> \tbool\t\trepl_repl[Natts_pg_class];\n> -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> +\tList\t *toastDefList;\n> +\toptions_parse_mode parse_mode;\n> \n> \tif (defList == NIL && operation != AT_ReplaceRelOptions)\n> \t\treturn;\t\t\t\t\t/* nothing to do */\n> @@ -13734,27 +13757,68 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,\n> \t}\n> \n> \t/* Generate new proposed reloptions (text array) */\n> -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> -\t\t\t\t\t\t\t\t\t defList, NULL, validnsps, false,\n> -\t\t\t\t\t\t\t\t\t operation == AT_ResetRelOptions);\n> \n> \t/* Validate */\n> +\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> +\tif (operation == AT_ResetRelOptions)\n> +\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> +\n> \tswitch (rel->rd_rel->relkind)\n> \t{\n> \t\tcase RELKIND_RELATION:\n> -\t\tcase RELKIND_TOASTVALUE:\n> +\t\tcase RELKIND_TOASTVALUE: // FIXME why it is here???\n> \t\tcase RELKIND_MATVIEW:\n> -\t\t\t(void) heap_reloptions(rel->rd_rel->relkind, newOptions, true);\n> +\t\t\t{\n> +\t\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\t\tList\t *heapDefList;\n> +\n> +\t\t\t\toptionsDefListValdateNamespaces(defList, namespaces);\n> +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t defList, NULL);\n> +\t\t\t\tnewOptions = transformOptions(get_heap_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t\t\t isnull ? 
(Datum) 0 : datum,\n> +\t\t\t\t\t\t\t\t\t\t\t heapDefList, parse_mode);\n> +\t\t\t}\n> \t\t\tbreak;\n> +\n> \t\tcase RELKIND_PARTITIONED_TABLE:\n> -\t\t\t(void) partitioned_table_reloptions(newOptions, true);\n> -\t\t\tbreak;\n> +\t\t\t{\n> +\t\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\t\tList\t *heapDefList;\n> +\n> +\t\t\t\toptionsDefListValdateNamespaces(defList, namespaces);\n> +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t defList, NULL);\n> +\t\t\t\tnewOptions = transformOptions(get_partitioned_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t\t\t isnull ? (Datum) 0 : datum,\n> +\t\t\t\t\t\t\t\t\t\t\t heapDefList, parse_mode);\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> \t\tcase RELKIND_VIEW:\n> -\t\t\t(void) view_reloptions(newOptions, true);\n> -\t\t\tbreak;\n> +\t\t\t{\n> +\n> +\t\t\t\tnewOptions = transformOptions(\n> +\t\t\t\t\t\t\t\t\t get_view_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t datum, defList, parse_mode);\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_PARTITIONED_INDEX:\n> -\t\t\t(void) index_reloptions(rel->rd_indam->amoptions, newOptions, true);\n> +\t\t\tif (! rel->rd_indam->amreloptspecset)\n> +\t\t\t{\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t\t errmsg(\"index %s does not support options\",\n> +\t\t\t\t\t\t\t\tRelationGetRelationName(rel))));\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> +\t\t\tif (operation == AT_ResetRelOptions)\n> +\t\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> +\t\t\tnewOptions = transformOptions(\n> +\t\t\t\t\t\t\t\t\trel->rd_indam->amreloptspecset(),\n> +\t\t\t\t\t\t\t\t\t\t\tisnull ? 
(Datum) 0 : datum,\n> +\t\t\t\t\t\t\t\t\t\t\tdefList, parse_mode);\n> \t\t\tbreak;\n> \t\tdefault:\n> \t\t\tereport(ERROR,\n> @@ -13769,7 +13833,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,\n> \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t{\n> \t\tQuery\t *view_query = get_view_query(rel);\n> -\t\tList\t *view_options = untransformRelOptions(newOptions);\n> +\t\tList\t *view_options = optionsTextArrayToDefList(newOptions);\n> \t\tListCell *cell;\n> \t\tbool\t\tcheck_option = false;\n> \n> @@ -13853,11 +13917,15 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,\n> \t\t\t\t\t\t\t\t\t&isnull);\n> \t\t}\n> \n> -\t\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> -\t\t\t\t\t\t\t\t\t\t defList, \"toast\", validnsps, false,\n> -\t\t\t\t\t\t\t\t\t\t operation == AT_ResetRelOptions);\n> +\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> +\t\tif (operation == AT_ResetRelOptions)\n> +\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> +\n> +\t\ttoastDefList = optionsDefListFilterNamespaces(defList, \"toast\");\n> \n> -\t\t(void) heap_reloptions(RELKIND_TOASTVALUE, newOptions, true);\n> +\t\tnewOptions = transformOptions(get_toast_relopt_spec_set(),\n> +\t\t\t\t\t\t\t\t\t isnull ? 
(Datum) 0 : datum,\n> +\t\t\t\t\t\t\t\t\t toastDefList, parse_mode);\n> \n> \t\tmemset(repl_val, 0, sizeof(repl_val));\n> \t\tmemset(repl_null, false, sizeof(repl_null));\n> diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c\n> index 4b96eec..912699b 100644\n> --- a/src/backend/commands/tablespace.c\n> +++ b/src/backend/commands/tablespace.c\n> @@ -345,10 +345,9 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)\n> \tnulls[Anum_pg_tablespace_spcacl - 1] = true;\n> \n> \t/* Generate new proposed spcoptions (text array) */\n> -\tnewOptions = transformRelOptions((Datum) 0,\n> -\t\t\t\t\t\t\t\t\t stmt->options,\n> -\t\t\t\t\t\t\t\t\t NULL, NULL, false, false);\n> -\t(void) tablespace_reloptions(newOptions, true);\n> +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> +\t\t\t\t\t\t\t\t\t\t\t\t(Datum) 0, stmt->options, 0);\n> +\n> \tif (newOptions != (Datum) 0)\n> \t\tvalues[Anum_pg_tablespace_spcoptions - 1] = newOptions;\n> \telse\n> @@ -1053,10 +1052,11 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt)\n> \t/* Generate new proposed spcoptions (text array) */\n> \tdatum = heap_getattr(tup, Anum_pg_tablespace_spcoptions,\n> \t\t\t\t\t\t RelationGetDescr(rel), &isnull);\n> -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> -\t\t\t\t\t\t\t\t\t stmt->options, NULL, NULL, false,\n> -\t\t\t\t\t\t\t\t\t stmt->isReset);\n> -\t(void) tablespace_reloptions(newOptions, true);\n> +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> +\t\t\t\t\t\t\t\t isnull ? (Datum) 0 : datum,\n> +\t\t\t\t\t\t\t\t stmt->options,\n> +\t\t\t\t\t\t\t\t OPTIONS_PARSE_MODE_FOR_ALTER |\n> +\t\t\t\t\t\t (stmt->isReset ? OPTIONS_PARSE_MODE_FOR_RESET : 0));\n> \n> \t/* Build new tuple. 
*/\n> \tmemset(repl_null, false, sizeof(repl_null));\n> diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c\n> index 5564dc3..0370be7 100644\n> --- a/src/backend/foreign/foreign.c\n> +++ b/src/backend/foreign/foreign.c\n> @@ -78,7 +78,7 @@ GetForeignDataWrapperExtended(Oid fdwid, bits16 flags)\n> \tif (isnull)\n> \t\tfdw->options = NIL;\n> \telse\n> -\t\tfdw->options = untransformRelOptions(datum);\n> +\t\tfdw->options = optionsTextArrayToDefList(datum);\n> \n> \tReleaseSysCache(tp);\n> \n> @@ -165,7 +165,7 @@ GetForeignServerExtended(Oid serverid, bits16 flags)\n> \tif (isnull)\n> \t\tserver->options = NIL;\n> \telse\n> -\t\tserver->options = untransformRelOptions(datum);\n> +\t\tserver->options = optionsTextArrayToDefList(datum);\n> \n> \tReleaseSysCache(tp);\n> \n> @@ -233,7 +233,7 @@ GetUserMapping(Oid userid, Oid serverid)\n> \tif (isnull)\n> \t\tum->options = NIL;\n> \telse\n> -\t\tum->options = untransformRelOptions(datum);\n> +\t\tum->options = optionsTextArrayToDefList(datum);\n> \n> \tReleaseSysCache(tp);\n> \n> @@ -270,7 +270,7 @@ GetForeignTable(Oid relid)\n> \tif (isnull)\n> \t\tft->options = NIL;\n> \telse\n> -\t\tft->options = untransformRelOptions(datum);\n> +\t\tft->options = optionsTextArrayToDefList(datum);\n> \n> \tReleaseSysCache(tp);\n> \n> @@ -303,7 +303,7 @@ GetForeignColumnOptions(Oid relid, AttrNumber attnum)\n> \tif (isnull)\n> \t\toptions = NIL;\n> \telse\n> -\t\toptions = untransformRelOptions(datum);\n> +\t\toptions = optionsTextArrayToDefList(datum);\n> \n> \tReleaseSysCache(tp);\n> \n> @@ -572,7 +572,7 @@ pg_options_to_table(PG_FUNCTION_ARGS)\n> \tDatum\t\tarray = PG_GETARG_DATUM(0);\n> \n> \tdeflist_to_tuplestore((ReturnSetInfo *) fcinfo->resultinfo,\n> -\t\t\t\t\t\t untransformRelOptions(array));\n> +\t\t\t\t\t\t optionsTextArrayToDefList(array));\n> \n> \treturn (Datum) 0;\n> }\n> @@ -643,7 +643,7 @@ is_conninfo_option(const char *option, Oid context)\n> Datum\n> postgresql_fdw_validator(PG_FUNCTION_ARGS)\n> 
{\n> -\tList\t *options_list = untransformRelOptions(PG_GETARG_DATUM(0));\n> +\tList\t *options_list = optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> \n> \tListCell *cell;\n> diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c\n> index 313d7b6..1fe41b4 100644\n> --- a/src/backend/parser/parse_utilcmd.c\n> +++ b/src/backend/parser/parse_utilcmd.c\n> @@ -1757,7 +1757,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation source_idx,\n> \t\t/* Add the operator class name, if non-default */\n> \t\tiparam->opclass = get_opclass(indclass->values[keyno], keycoltype);\n> \t\tiparam->opclassopts =\n> -\t\t\tuntransformRelOptions(get_attoptions(source_relid, keyno + 1));\n> +\t\t\toptionsTextArrayToDefList(get_attoptions(source_relid, keyno + 1));\n> \n> \t\tiparam->ordering = SORTBY_DEFAULT;\n> \t\tiparam->nulls_ordering = SORTBY_NULLS_DEFAULT;\n> @@ -1821,7 +1821,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation source_idx,\n> \tdatum = SysCacheGetAttr(RELOID, ht_idxrel,\n> \t\t\t\t\t\t\tAnum_pg_class_reloptions, &isnull);\n> \tif (!isnull)\n> -\t\tindex->options = untransformRelOptions(datum);\n> +\t\tindex->options = optionsTextArrayToDefList(datum);\n> \n> \t/* If it's a partial index, decompile and append the predicate */\n> \tdatum = SysCacheGetAttr(INDEXRELID, ht_idx,\n> diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n> index bf085aa..d12ab1a 100644\n> --- a/src/backend/tcop/utility.c\n> +++ b/src/backend/tcop/utility.c\n> @@ -1155,6 +1155,7 @@ ProcessUtilitySlow(ParseState *pstate,\n> \t\t\t\t\t\t\tCreateStmt *cstmt = (CreateStmt *) stmt;\n> \t\t\t\t\t\t\tDatum\t\ttoast_options;\n> \t\t\t\t\t\t\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\t\t\t\t\tList\t *toastDefList;\n> \n> \t\t\t\t\t\t\t/* Remember transformed RangeVar for LIKE */\n> \t\t\t\t\t\t\ttable_rv = cstmt->relation;\n> @@ -1178,15 +1179,17 @@ ProcessUtilitySlow(ParseState 
*pstate,\n> \t\t\t\t\t\t\t * parse and validate reloptions for the toast\n> \t\t\t\t\t\t\t * table\n> \t\t\t\t\t\t\t */\n> -\t\t\t\t\t\t\ttoast_options = transformRelOptions((Datum) 0,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tcstmt->options,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"toast\",\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tvalidnsps,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\ttrue,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tfalse);\n> -\t\t\t\t\t\t\t(void) heap_reloptions(RELKIND_TOASTVALUE,\n> -\t\t\t\t\t\t\t\t\t\t\t\t toast_options,\n> -\t\t\t\t\t\t\t\t\t\t\t\t true);\n> +\n> +\t\t\t\t\t\t\toptionsDefListValdateNamespaces(\n> +\t\t\t\t\t\t\t\t\t\t\t ((CreateStmt *) stmt)->options,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tvalidnsps);\n> +\n> +\t\t\t\t\t\t\ttoastDefList = optionsDefListFilterNamespaces(\n> +\t\t\t\t\t\t\t\t\t((CreateStmt *) stmt)->options, \"toast\");\n> +\n> +\t\t\t\t\t\t\ttoast_options = transformOptions(\n> +\t\t\t\t\t\t\t\t\t get_toast_relopt_spec_set(), (Datum) 0,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t toastDefList, 0);\n> \n> \t\t\t\t\t\t\tNewRelationCreateToastTable(address.objectId,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\ttoast_options);\n> @@ -1295,9 +1298,12 @@ ProcessUtilitySlow(ParseState *pstate,\n> \t\t\t\t\t * lock on (for example) a relation on which we have no\n> \t\t\t\t\t * permissions.\n> \t\t\t\t\t */\n> -\t\t\t\t\tlockmode = AlterTableGetLockLevel(atstmt->cmds);\n> -\t\t\t\t\trelid = AlterTableLookupRelation(atstmt, lockmode);\n> -\n> +\t\t\t\t\trelid = AlterTableLookupRelation(atstmt, NoLock); // FIXME!\n> +\t\t\t\t\tif (OidIsValid(relid))\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tlockmode = AlterTableGetLockLevel(relid, atstmt->cmds);\n> +\t\t\t\t\t\trelid = AlterTableLookupRelation(atstmt, lockmode);\n> +\t\t\t\t\t}\n> \t\t\t\t\tif (OidIsValid(relid))\n> \t\t\t\t\t{\n> \t\t\t\t\t\tAlterTableUtilityContext atcontext;\n> diff --git a/src/backend/utils/cache/attoptcache.c b/src/backend/utils/cache/attoptcache.c\n> index 72d89cb..f651129 100644\n> --- 
a/src/backend/utils/cache/attoptcache.c\n> +++ b/src/backend/utils/cache/attoptcache.c\n> @@ -16,6 +16,7 @@\n> */\n> #include \"postgres.h\"\n> \n> +#include \"access/options.h\"\n> #include \"access/reloptions.h\"\n> #include \"utils/attoptcache.h\"\n> #include \"utils/catcache.h\"\n> @@ -148,7 +149,8 @@ get_attribute_options(Oid attrelid, int attnum)\n> \t\t\t\topts = NULL;\n> \t\t\telse\n> \t\t\t{\n> -\t\t\t\tbytea\t *bytea_opts = attribute_reloptions(datum, false);\n> +\t\t\t\tbytea *bytea_opts = optionsTextArrayToBytea(\n> +\t\t\t\t\t\t\t\t\tget_attribute_options_spec_set(), datum, 0);\n> \n> \t\t\t\topts = MemoryContextAlloc(CacheMemoryContext,\n> \t\t\t\t\t\t\t\t\t\t VARSIZE(bytea_opts));\n> diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\n> index 13d9994..f22c2d9 100644\n> --- a/src/backend/utils/cache/relcache.c\n> +++ b/src/backend/utils/cache/relcache.c\n> @@ -441,7 +441,7 @@ static void\n> RelationParseRelOptions(Relation relation, HeapTuple tuple)\n> {\n> \tbytea\t *options;\n> -\tamoptions_function amoptsfn;\n> +\tamreloptspecset_function amoptspecsetfn;\n> \n> \trelation->rd_options = NULL;\n> \n> @@ -456,11 +456,11 @@ RelationParseRelOptions(Relation relation, HeapTuple tuple)\n> \t\tcase RELKIND_VIEW:\n> \t\tcase RELKIND_MATVIEW:\n> \t\tcase RELKIND_PARTITIONED_TABLE:\n> -\t\t\tamoptsfn = NULL;\n> +\t\t\tamoptspecsetfn = NULL;\n> \t\t\tbreak;\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_PARTITIONED_INDEX:\n> -\t\t\tamoptsfn = relation->rd_indam->amoptions;\n> +\t\t\tamoptspecsetfn = relation->rd_indam->amreloptspecset;\n> \t\t\tbreak;\n> \t\tdefault:\n> \t\t\treturn;\n> @@ -471,7 +471,7 @@ RelationParseRelOptions(Relation relation, HeapTuple tuple)\n> \t * we might not have any other for pg_class yet (consider executing this\n> \t * code for pg_class itself)\n> \t */\n> -\toptions = extractRelOptions(tuple, GetPgClassDescriptor(), amoptsfn);\n> +\toptions = extractRelOptions(tuple, GetPgClassDescriptor(), 
amoptspecsetfn);\n> \n> \t/*\n> \t * Copy parsed data into CacheMemoryContext. To guard against the\n> diff --git a/src/backend/utils/cache/spccache.c b/src/backend/utils/cache/spccache.c\n> index 5870f43..87f2fa5 100644\n> --- a/src/backend/utils/cache/spccache.c\n> +++ b/src/backend/utils/cache/spccache.c\n> @@ -148,7 +148,8 @@ get_tablespace(Oid spcid)\n> \t\t\topts = NULL;\n> \t\telse\n> \t\t{\n> -\t\t\tbytea\t *bytea_opts = tablespace_reloptions(datum, false);\n> +\t\t\tbytea *bytea_opts = optionsTextArrayToBytea(\n> +\t\t\t\t\t\t\t\tget_tablespace_options_spec_set(), datum, 0);\n> \n> \t\t\topts = MemoryContextAlloc(CacheMemoryContext, VARSIZE(bytea_opts));\n> \t\t\tmemcpy(opts, bytea_opts, VARSIZE(bytea_opts));\n> diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h\n> index d357ebb..b8fb6b9 100644\n> --- a/src/include/access/amapi.h\n> +++ b/src/include/access/amapi.h\n> @@ -136,10 +136,6 @@ typedef void (*amcostestimate_function) (struct PlannerInfo *root,\n> \t\t\t\t\t\t\t\t\t\t double *indexCorrelation,\n> \t\t\t\t\t\t\t\t\t\t double *indexPages);\n> \n> -/* parse index reloptions */\n> -typedef bytea *(*amoptions_function) (Datum reloptions,\n> -\t\t\t\t\t\t\t\t\t bool validate);\n> -\n> /* report AM, index, or index column property */\n> typedef bool (*amproperty_function) (Oid index_oid, int attno,\n> \t\t\t\t\t\t\t\t\t IndexAMProperty prop, const char *propname,\n> @@ -186,6 +182,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);\n> /* restore marked scan position */\n> typedef void (*amrestrpos_function) (IndexScanDesc scan);\n> \n> +/* get catalog of reloptions definitions */\n> +typedef void *(*amreloptspecset_function) ();\n> +\n> /*\n> * Callback function signatures - for parallel index scans.\n> */\n> @@ -263,7 +262,6 @@ typedef struct IndexAmRoutine\n> \tamvacuumcleanup_function amvacuumcleanup;\n> \tamcanreturn_function amcanreturn;\t/* can be NULL */\n> \tamcostestimate_function amcostestimate;\n> 
-\tamoptions_function amoptions;\n> \tamproperty_function amproperty; /* can be NULL */\n> \tambuildphasename_function ambuildphasename; /* can be NULL */\n> \tamvalidate_function amvalidate;\n> @@ -275,6 +273,7 @@ typedef struct IndexAmRoutine\n> \tamendscan_function amendscan;\n> \tammarkpos_function ammarkpos;\t/* can be NULL */\n> \tamrestrpos_function amrestrpos; /* can be NULL */\n> +\tamreloptspecset_function amreloptspecset; /* can be NULL */\n> \n> \t/* interface functions to support parallel index scans */\n> \tamestimateparallelscan_function amestimateparallelscan; /* can be NULL */\n> diff --git a/src/include/access/brin.h b/src/include/access/brin.h\n> index 4e2be13..25b3456 100644\n> --- a/src/include/access/brin.h\n> +++ b/src/include/access/brin.h\n> @@ -36,6 +36,8 @@ typedef struct BrinStatsData\n> \n> \n> #define BRIN_DEFAULT_PAGES_PER_RANGE\t128\n> +#define BRIN_MIN_PAGES_PER_RANGE\t\t1\n> +#define BRIN_MAX_PAGES_PER_RANGE\t\t131072\n> #define BrinGetPagesPerRange(relation) \\\n> \t(AssertMacro(relation->rd_rel->relkind == RELKIND_INDEX && \\\n> \t\t\t\t relation->rd_rel->relam == BRIN_AM_OID), \\\n> diff --git a/src/include/access/brin_internal.h b/src/include/access/brin_internal.h\n> index 79440eb..a798a96 100644\n> --- a/src/include/access/brin_internal.h\n> +++ b/src/include/access/brin_internal.h\n> @@ -14,6 +14,7 @@\n> #include \"access/amapi.h\"\n> #include \"storage/bufpage.h\"\n> #include \"utils/typcache.h\"\n> +#include \"access/options.h\"\n> \n> \n> /*\n> @@ -108,6 +109,7 @@ extern IndexBulkDeleteResult *brinbulkdelete(IndexVacuumInfo *info,\n> extern IndexBulkDeleteResult *brinvacuumcleanup(IndexVacuumInfo *info,\n> \t\t\t\t\t\t\t\t\t\t\t\tIndexBulkDeleteResult *stats);\n> extern bytea *brinoptions(Datum reloptions, bool validate);\n> +extern void * bringetreloptspecset (void);\n> \n> /* brin_validate.c */\n> extern bool brinvalidate(Oid opclassoid);\n> diff --git a/src/include/access/gin_private.h 
b/src/include/access/gin_private.h\n> index 670a40b..2b7c25c 100644\n> --- a/src/include/access/gin_private.h\n> +++ b/src/include/access/gin_private.h\n> @@ -108,6 +108,7 @@ extern Datum *ginExtractEntries(GinState *ginstate, OffsetNumber attnum,\n> extern OffsetNumber gintuple_get_attrnum(GinState *ginstate, IndexTuple tuple);\n> extern Datum gintuple_get_key(GinState *ginstate, IndexTuple tuple,\n> \t\t\t\t\t\t\t GinNullCategory *category);\n> +extern void *gingetreloptspecset(void);\n> \n> /* gininsert.c */\n> extern IndexBuildResult *ginbuild(Relation heap, Relation index,\n> diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h\n> index 553d364..015b75a 100644\n> --- a/src/include/access/gist_private.h\n> +++ b/src/include/access/gist_private.h\n> @@ -22,6 +22,7 @@\n> #include \"storage/buffile.h\"\n> #include \"utils/hsearch.h\"\n> #include \"access/genam.h\"\n> +#include \"access/reloptions.h\" //FIXME! should be replaced with options.h finally\n> \n> /*\n> * Maximum number of \"halves\" a page can be split into in one operation.\n> @@ -388,6 +389,7 @@ typedef enum GistOptBufferingMode\n> \tGIST_OPTION_BUFFERING_OFF\n> } GistOptBufferingMode;\n> \n> +\n> /*\n> * Storage type for GiST's reloptions\n> */\n> @@ -478,7 +480,7 @@ extern void gistadjustmembers(Oid opfamilyoid,\n> #define GIST_MIN_FILLFACTOR\t\t\t10\n> #define GIST_DEFAULT_FILLFACTOR\t\t90\n> \n> -extern bytea *gistoptions(Datum reloptions, bool validate);\n> +extern void *gistgetreloptspecset(void);\n> extern bool gistproperty(Oid index_oid, int attno,\n> \t\t\t\t\t\t IndexAMProperty prop, const char *propname,\n> \t\t\t\t\t\t bool *res, bool *isnull);\n> diff --git a/src/include/access/hash.h b/src/include/access/hash.h\n> index 1cce865..91922ef 100644\n> --- a/src/include/access/hash.h\n> +++ b/src/include/access/hash.h\n> @@ -378,7 +378,6 @@ extern IndexBulkDeleteResult *hashbulkdelete(IndexVacuumInfo *info,\n> \t\t\t\t\t\t\t\t\t\t\t void *callback_state);\n> 
extern IndexBulkDeleteResult *hashvacuumcleanup(IndexVacuumInfo *info,\n> \t\t\t\t\t\t\t\t\t\t\t\tIndexBulkDeleteResult *stats);\n> -extern bytea *hashoptions(Datum reloptions, bool validate);\n> extern bool hashvalidate(Oid opclassoid);\n> extern void hashadjustmembers(Oid opfamilyoid,\n> \t\t\t\t\t\t\t Oid opclassoid,\n> @@ -470,6 +469,7 @@ extern BlockNumber _hash_get_newblock_from_oldbucket(Relation rel, Bucket old_bu\n> extern Bucket _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,\n> \t\t\t\t\t\t\t\t\t\t\t\t uint32 lowmask, uint32 maxbucket);\n> extern void _hash_kill_items(IndexScanDesc scan);\n> +extern void *hashgetreloptspecset(void);\n> \n> /* hash.c */\n> extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,\n> diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h\n> index 30a216e..1fcb5f5 100644\n> --- a/src/include/access/nbtree.h\n> +++ b/src/include/access/nbtree.h\n> @@ -1252,7 +1252,7 @@ extern void _bt_end_vacuum(Relation rel);\n> extern void _bt_end_vacuum_callback(int code, Datum arg);\n> extern Size BTreeShmemSize(void);\n> extern void BTreeShmemInit(void);\n> -extern bytea *btoptions(Datum reloptions, bool validate);\n> +extern void * btgetreloptspecset (void);\n> extern bool btproperty(Oid index_oid, int attno,\n> \t\t\t\t\t IndexAMProperty prop, const char *propname,\n> \t\t\t\t\t bool *res, bool *isnull);\n> diff --git a/src/include/access/options.h b/src/include/access/options.h\n> new file mode 100644\n> index 0000000..34e2917\n> --- /dev/null\n> +++ b/src/include/access/options.h\n> @@ -0,0 +1,245 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * options.h\n> + *\t Core support for relation and tablespace options (pg_class.reloptions\n> + *\t and pg_tablespace.spcoptions)\n> + *\n> + * Note: the functions dealing with text-array options values declare\n> + * them as Datum, not ArrayType *, to avoid needing to include array.h\n> + * into a lot of 
low-level code.\n> + *\n> + *\n> + * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + * src/include/access/options.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#ifndef OPTIONS_H\n> +#define OPTIONS_H\n> +\n> +#include "storage/lock.h"\n> +#include "nodes/pg_list.h"\n> +\n> +\n> +/* supported option types */\n> +typedef enum option_type\n> +{\n> +\tOPTION_TYPE_BOOL,\n> +\tOPTION_TYPE_INT,\n> +\tOPTION_TYPE_REAL,\n> +\tOPTION_TYPE_ENUM,\n> +\tOPTION_TYPE_STRING\n> +}\toption_type;\n> +\n> +\n> +typedef enum option_value_status\n> +{\n> +\tOPTION_VALUE_STATUS_EMPTY,\t/* Option was just initialized */\n> +\tOPTION_VALUE_STATUS_RAW,\t/* Option just came from syntax analyzer and\n> +\t\t\t\t\t\t\t\t * has name and raw (unparsed) value */\n> +\tOPTION_VALUE_STATUS_PARSED, /* Option was parsed and has link to catalog\n> +\t\t\t\t\t\t\t\t * entry and proper value */\n> +\tOPTION_VALUE_STATUS_FOR_RESET\t\t/* This option came from ALTER xxx\n> +\t\t\t\t\t\t\t\t\t\t * RESET */\n> +}\toption_value_status;\n> +\n> +/* flags for reloption definition */\n> +typedef enum option_spec_flags\n> +{\n> +\tOPTION_DEFINITION_FLAG_FORBID_ALTER = (1 << 0),\t\t/* Altering this option\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * is forbidden */\n> +\tOPTION_DEFINITION_FLAG_IGNORE = (1 << 1),\t/* Skip this option while\n> +\t\t\t\t\t\t\t\t\t\t\t\t * parsing. Used for WITH OIDS\n> +\t\t\t\t\t\t\t\t\t\t\t\t * 
Used for WITH OIDS\n> +\t\t\t\t\t\t\t\t\t\t\t\t * special case */\n> +\tOPTION_DEFINITION_FLAG_REJECT = (1 << 2)\t/* Option will be rejected\n> +\t\t\t\t\t\t\t\t\t\t\t\t * when comes from syntax\n> +\t\t\t\t\t\t\t\t\t\t\t\t * analyzer, but still have\n> +\t\t\t\t\t\t\t\t\t\t\t\t * default value and offset */\n> +} option_spec_flags;\n> +\n> +/* flags that tells reloption parser how to parse*/\n> +typedef enum options_parse_mode\n> +{\n> +\tOPTIONS_PARSE_MODE_VALIDATE = (1 << 0),\n> +\tOPTIONS_PARSE_MODE_FOR_ALTER = (1 << 1),\n> +\tOPTIONS_PARSE_MODE_FOR_RESET = (1 << 2)\n> +} options_parse_mode;\n> +\n> +\n> +\n> +/*\n> + * opt_enum_elt_def -- One member of the array of acceptable values\n> + * of an enum reloption.\n> + */\n> +typedef struct opt_enum_elt_def\n> +{\n> +\tconst char *string_val;\n> +\tint\t\t\tsymbol_val;\n> +} opt_enum_elt_def;\n> +\n> +\n> +/* generic structure to store Option Spec information */\n> +typedef struct option_spec_basic\n> +{\n> +\tconst char *name;\t\t\t/* must be first (used as list termination\n> +\t\t\t\t\t\t\t\t * marker) */\n> +\tconst char *desc;\n> +\tLOCKMODE\tlockmode;\n> +\toption_spec_flags flags;\n> +\toption_type type;\n> +\tint\t\t\tstruct_offset;\t/* offset of the value in Bytea representation */\n> +}\toption_spec_basic;\n> +\n> +\n> +/* reloptions records for specific variable types */\n> +typedef struct option_spec_bool\n> +{\n> +\toption_spec_basic base;\n> +\tbool\t\tdefault_val;\n> +}\toption_spec_bool;\n> +\n> +typedef struct option_spec_int\n> +{\n> +\toption_spec_basic base;\n> +\tint\t\t\tdefault_val;\n> +\tint\t\t\tmin;\n> +\tint\t\t\tmax;\n> +}\toption_spec_int;\n> +\n> +typedef struct option_spec_real\n> +{\n> +\toption_spec_basic base;\n> +\tdouble\t\tdefault_val;\n> +\tdouble\t\tmin;\n> +\tdouble\t\tmax;\n> +}\toption_spec_real;\n> +\n> +typedef struct option_spec_enum\n> +{\n> +\toption_spec_basic base;\n> +\topt_enum_elt_def *members;/* FIXME rewrite. 
Null-terminated array of allowed values for\n> +\t\t\t\t\t\t\t\t * the option */\n> +\tint\t\t\tdefault_val;\t/* Number of the item in the allowed_values array */\n> +\tconst char *detailmsg;\n> +}\toption_spec_enum;\n> +\n> +/* validation routines for strings */\n> +typedef void (*validate_string_option) (const char *value);\n> +\n> +/*\n> + * When storing string reloptions, we should deal with the special case when\n> + * the option value is not set. For fixed length options, we just copy the\n> + * default option value into the binary structure. For a varlen value, there\n> + * can be a "not set" special case, with no default value offered.\n> + * In this case we will set the offset value to -1, so code that uses\n> + * reloptions can deal with this case. For better readability it was defined\n> + * as a constant.\n> + */\n> +#define OPTION_STRING_VALUE_NOT_SET_OFFSET -1\n> +\n> +typedef struct option_spec_string\n> +{\n> +\toption_spec_basic base;\n> +\tvalidate_string_option validate_cb;\n> +\tchar\t *default_val;\n> +}\toption_spec_string;\n> +\n> +typedef void (*postprocess_bytea_options_function) (void *data, bool validate);\n> +\n> +typedef struct options_spec_set\n> +{\n> +\toption_spec_basic **definitions;\n> +\tint\t\t\tnum;\t\t\t/* Number of spec_set items in use */\n> +\tint\t\t\tnum_allocated;\t/* Number of spec_set items allocated */\n> +\tbool\t\tforbid_realloc; /* If the number of items of the spec_set was\n> +\t\t\t\t\t\t\t\t * strictly set to a certain value, do not\n> +\t\t\t\t\t\t\t\t * allow adding more items */\n> +\tSize\t\tstruct_size;\t/* Size of a structure for options in binary\n> +\t\t\t\t\t\t\t\t * representation */\n> +\tpostprocess_bytea_options_function postprocess_fun; /* This function is\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * called after options\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * were converted into\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * Bytea representation.\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * Can be used for extra\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t * validation and so on */\n> +\tchar\t 
*namespace;\t\t/* spec_set is used for options from this\n> +\t\t\t\t\t\t\t\t * namespace */\n> +}\toptions_spec_set;\n> +\n> +\n> +/* holds an option value parsed or unparsed */\n> +typedef struct option_value\n> +{\n> +\toption_spec_basic *gen;\n> +\tchar\t *namespace;\n> +\toption_value_status status;\n> +\tchar\t *raw_value;\t\t/* allocated separately */\n> +\tchar\t *raw_name;\n> +\tunion\n> +\t{\n> +\t\tbool\t\tbool_val;\n> +\t\tint\t\t\tint_val;\n> +\t\tdouble\t\treal_val;\n> +\t\tint\t\t\tenum_val;\n> +\t\tchar\t *string_val; /* allocated separately */\n> +\t}\t\t\tvalues;\n> +}\toption_value;\n> +\n> +\n> +\n> +\n> +/*\n> + * Options spec_set related functions\n> + */\n> +extern options_spec_set *allocateOptionsSpecSet(const char *namespace,\n> +\t\t\t\t\t\t\t\t int size_of_bytea, int num_items_expected);\n> +extern void optionsSpecSetAddBool(options_spec_set * spec_set, const char *name,\n> +\t\t\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +\t\t\t\t\t\t\t\t\tint struct_offset, bool default_val);\n> +extern void optionsSpecSetAddInt(options_spec_set * spec_set, const char *name,\n> +\t\t\t\t\tconst char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +\t\t\t\t\tint struct_offset, int default_val, int min_val, int max_val);\n> +extern void optionsSpecSetAddReal(options_spec_set * spec_set, const char *name,\n> +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +\t int struct_offset, double default_val, double min_val, double max_val);\n> +extern void optionsSpecSetAddEnum(options_spec_set * spec_set,\n> +\t\t\t\t\t\t const char *name, const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +\t\t\tint struct_offset, opt_enum_elt_def* members, int default_val, const char *detailmsg);\n> +extern void optionsSpecSetAddString(options_spec_set * spec_set, const char *name,\n> +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> +int struct_offset, const char *default_val, 
validate_string_option validator);\n> +\n> +\n> +/*\n> + * This macro allows to get string option value from bytea representation.\n> + * \"optstruct\" - is a structure that is stored in bytea options representation\n> + * \"member\" - member of this structure that has string option value\n> + * (actually string values are stored in bytea after the structure, and\n> + * and \"member\" will contain an offset to this value. This macro do all\n> + * the math\n> + */\n> +#define GET_STRING_OPTION(optstruct, member) \\\n> +\t((optstruct)->member == OPTION_STRING_VALUE_NOT_SET_OFFSET ? NULL : \\\n> +\t (char *)(optstruct) + (optstruct)->member)\n> +\n> +/*\n> + * Functions related to option convertation, parsing, manipulation\n> + * and validation\n> + */\n> +extern void optionsDefListValdateNamespaces(List *defList,\n> +\t\t\t\t\t\t\t\tchar **allowed_namespaces);\n> +extern List *optionsDefListFilterNamespaces(List *defList, const char *namespace);\n> +extern List *optionsTextArrayToDefList(Datum options);\n> +extern Datum optionsDefListToTextArray(List *defList);\n> +/*\n> + * Meta functions that uses functions above to get options for relations,\n> + * tablespaces, views and so on\n> + */\n> +\n> +extern bytea *optionsTextArrayToBytea(options_spec_set * spec_set, Datum data,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tbool validate);\n> +extern Datum transformOptions(options_spec_set * spec_set, Datum oldOptions,\n> +\t\t\t\t List *defList, options_parse_mode parse_mode);\n> +\n> +#endif /* OPTIONS_H */\n> diff --git a/src/include/access/reloptions.h b/src/include/access/reloptions.h\n> index 7c5fbeb..21b91df 100644\n> --- a/src/include/access/reloptions.h\n> +++ b/src/include/access/reloptions.h\n> @@ -22,6 +22,7 @@\n> #include \"access/amapi.h\"\n> #include \"access/htup.h\"\n> #include \"access/tupdesc.h\"\n> +#include \"access/options.h\"\n> #include \"nodes/pg_list.h\"\n> #include \"storage/lock.h\"\n> \n> @@ -110,20 +111,10 @@ typedef struct relopt_real\n> 
\tdouble\t\tmax;\n> } relopt_real;\n> \n> -/*\n> - * relopt_enum_elt_def -- One member of the array of acceptable values\n> - * of an enum reloption.\n> - */\n> -typedef struct relopt_enum_elt_def\n> -{\n> -\tconst char *string_val;\n> -\tint\t\t\tsymbol_val;\n> -} relopt_enum_elt_def;\n> -\n> typedef struct relopt_enum\n> {\n> \trelopt_gen\tgen;\n> -\trelopt_enum_elt_def *members;\n> +\topt_enum_elt_def *members;\n> \tint\t\t\tdefault_val;\n> \tconst char *detailmsg;\n> \t/* null-terminated array of members */\n> @@ -167,6 +158,7 @@ typedef struct local_relopts\n> \tList\t *options;\t\t/* list of local_relopt definitions */\n> \tList\t *validators;\t\t/* list of relopts_validator callbacks */\n> \tSize\t\trelopt_struct_size; /* size of parsed bytea structure */\n> +\toptions_spec_set * spec_set; /* FIXME */\n> } local_relopts;\n> \n> /*\n> @@ -179,21 +171,6 @@ typedef struct local_relopts\n> \t((optstruct)->member == 0 ? NULL : \\\n> \t (char *)(optstruct) + (optstruct)->member)\n> \n> -extern relopt_kind add_reloption_kind(void);\n> -extern void add_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t\t\t bool default_val, LOCKMODE lockmode);\n> -extern void add_int_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t\t\t int default_val, int min_val, int max_val,\n> -\t\t\t\t\t\t\t LOCKMODE lockmode);\n> -extern void add_real_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t\t\t double default_val, double min_val, double max_val,\n> -\t\t\t\t\t\t\t LOCKMODE lockmode);\n> -extern void add_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t\t\t relopt_enum_elt_def *members, int default_val,\n> -\t\t\t\t\t\t\t const char *detailmsg, LOCKMODE lockmode);\n> -extern void add_string_reloption(bits32 kinds, const char *name, const char *desc,\n> -\t\t\t\t\t\t\t\t const char *default_val, validate_string_relopt validator,\n> -\t\t\t\t\t\t\t\t LOCKMODE lockmode);\n> \n> 
extern void init_local_reloptions(local_relopts *opts, Size relopt_struct_size);\n> extern void register_reloptions_validator(local_relopts *opts,\n> @@ -210,7 +187,7 @@ extern void add_local_real_reloption(local_relopts *opts, const char *name,\n> \t\t\t\t\t\t\t\t\t int offset);\n> extern void add_local_enum_reloption(local_relopts *relopts,\n> \t\t\t\t\t\t\t\t\t const char *name, const char *desc,\n> -\t\t\t\t\t\t\t\t\t relopt_enum_elt_def *members,\n> +\t\t\t\t\t\t\t\t\t opt_enum_elt_def *members,\n> \t\t\t\t\t\t\t\t\t int default_val, const char *detailmsg,\n> \t\t\t\t\t\t\t\t\t int offset);\n> extern void add_local_string_reloption(local_relopts *opts, const char *name,\n> @@ -219,29 +196,17 @@ extern void add_local_string_reloption(local_relopts *opts, const char *name,\n> \t\t\t\t\t\t\t\t\t validate_string_relopt validator,\n> \t\t\t\t\t\t\t\t\t fill_string_relopt filler, int offset);\n> \n> -extern Datum transformRelOptions(Datum oldOptions, List *defList,\n> -\t\t\t\t\t\t\t\t const char *namspace, char *validnsps[],\n> -\t\t\t\t\t\t\t\t bool acceptOidsOff, bool isReset);\n> -extern List *untransformRelOptions(Datum options);\n> extern bytea *extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> -\t\t\t\t\t\t\t\tamoptions_function amoptions);\n> -extern void *build_reloptions(Datum reloptions, bool validate,\n> -\t\t\t\t\t\t\t relopt_kind kind,\n> -\t\t\t\t\t\t\t Size relopt_struct_size,\n> -\t\t\t\t\t\t\t const relopt_parse_elt *relopt_elems,\n> -\t\t\t\t\t\t\t int num_relopt_elems);\n> +\t\t\t\t\t\t\t\tamreloptspecset_function amoptions_def_set);\n> extern void *build_local_reloptions(local_relopts *relopts, Datum options,\n> \t\t\t\t\t\t\t\t\tbool validate);\n> \n> -extern bytea *default_reloptions(Datum reloptions, bool validate,\n> -\t\t\t\t\t\t\t\t relopt_kind kind);\n> -extern bytea *heap_reloptions(char relkind, Datum reloptions, bool validate);\n> -extern bytea *view_reloptions(Datum reloptions, bool validate);\n> -extern bytea 
*partitioned_table_reloptions(Datum reloptions, bool validate);\n> -extern bytea *index_reloptions(amoptions_function amoptions, Datum reloptions,\n> -\t\t\t\t\t\t\t bool validate);\n> -extern bytea *attribute_reloptions(Datum reloptions, bool validate);\n> -extern bytea *tablespace_reloptions(Datum reloptions, bool validate);\n> -extern LOCKMODE AlterTableGetRelOptionsLockLevel(List *defList);\n> +options_spec_set *get_heap_relopt_spec_set(void);\n> +options_spec_set *get_toast_relopt_spec_set(void);\n> +options_spec_set *get_partitioned_relopt_spec_set(void);\n> +options_spec_set *get_view_relopt_spec_set(void);\n> +options_spec_set *get_attribute_options_spec_set(void);\n> +options_spec_set *get_tablespace_options_spec_set(void);\n> +extern LOCKMODE AlterTableGetRelOptionsLockLevel(Relation rel, List *defList);\n> \n> #endif\t\t\t\t\t\t\t/* RELOPTIONS_H */\n> diff --git a/src/include/access/spgist.h b/src/include/access/spgist.h\n> index 2eb2f42..d9a9b2d 100644\n> --- a/src/include/access/spgist.h\n> +++ b/src/include/access/spgist.h\n> @@ -189,9 +189,6 @@ typedef struct spgLeafConsistentOut\n> } spgLeafConsistentOut;\n> \n> \n> -/* spgutils.c */\n> -extern bytea *spgoptions(Datum reloptions, bool validate);\n> -\n> /* spginsert.c */\n> extern IndexBuildResult *spgbuild(Relation heap, Relation index,\n> \t\t\t\t\t\t\t\t struct IndexInfo *indexInfo);\n> diff --git a/src/include/access/spgist_private.h b/src/include/access/spgist_private.h\n> index 40d3b71..dd9a05a 100644\n> --- a/src/include/access/spgist_private.h\n> +++ b/src/include/access/spgist_private.h\n> @@ -529,6 +529,7 @@ extern OffsetNumber SpGistPageAddNewItem(SpGistState *state, Page page,\n> extern bool spgproperty(Oid index_oid, int attno,\n> \t\t\t\t\t\tIndexAMProperty prop, const char *propname,\n> \t\t\t\t\t\tbool *res, bool *isnull);\n> +extern void *spggetreloptspecset(void);\n> \n> /* spgdoinsert.c */\n> extern void spgUpdateNodeLink(SpGistInnerTuple tup, int nodeN,\n> diff --git 
a/src/include/commands/tablecmds.h b/src/include/commands/tablecmds.h\n> index 336549c..3f87f98 100644\n> --- a/src/include/commands/tablecmds.h\n> +++ b/src/include/commands/tablecmds.h\n> @@ -34,7 +34,7 @@ extern Oid\tAlterTableLookupRelation(AlterTableStmt *stmt, LOCKMODE lockmode);\n> extern void AlterTable(AlterTableStmt *stmt, LOCKMODE lockmode,\n> \t\t\t\t\t struct AlterTableUtilityContext *context);\n> \n> -extern LOCKMODE AlterTableGetLockLevel(List *cmds);\n> +extern LOCKMODE AlterTableGetLockLevel(Oid relid, List *cmds);\n> \n> extern void ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lockmode);\n> \n> diff --git a/src/test/modules/dummy_index_am/dummy_index_am.c b/src/test/modules/dummy_index_am/dummy_index_am.c\n> index 5365b063..80b39e8 100644\n> --- a/src/test/modules/dummy_index_am/dummy_index_am.c\n> +++ b/src/test/modules/dummy_index_am/dummy_index_am.c\n> @@ -14,7 +14,7 @@\n> #include \"postgres.h\"\n> \n> #include \"access/amapi.h\"\n> -#include \"access/reloptions.h\"\n> +#include \"access/options.h\"\n> #include \"catalog/index.h\"\n> #include \"commands/vacuum.h\"\n> #include \"nodes/pathnodes.h\"\n> @@ -25,12 +25,6 @@ PG_MODULE_MAGIC;\n> \n> void\t\t_PG_init(void);\n> \n> -/* parse table for fillRelOptions */\n> -relopt_parse_elt di_relopt_tab[6];\n> -\n> -/* Kind of relation options for dummy index */\n> -relopt_kind di_relopt_kind;\n> -\n> typedef enum DummyAmEnum\n> {\n> \tDUMMY_AM_ENUM_ONE,\n> @@ -49,7 +43,7 @@ typedef struct DummyIndexOptions\n> \tint\t\t\toption_string_null_offset;\n> }\t\t\tDummyIndexOptions;\n> \n> -relopt_enum_elt_def dummyAmEnumValues[] =\n> +opt_enum_elt_def dummyAmEnumValues[] =\n> {\n> \t{\"one\", DUMMY_AM_ENUM_ONE},\n> \t{\"two\", DUMMY_AM_ENUM_TWO},\n> @@ -63,77 +57,85 @@ PG_FUNCTION_INFO_V1(dihandler);\n> * Validation function for string relation options.\n> */\n> static void\n> -validate_string_option(const char *value)\n> +divalidate_string_option(const char *value)\n> {\n> 
\tereport(NOTICE,\n> \t\t\t(errmsg(\"new option value for string parameter %s\",\n> \t\t\t\t\tvalue ? value : \"NULL\")));\n> }\n> \n> -/*\n> - * This function creates a full set of relation option types,\n> - * with various patterns.\n> - */\n> -static void\n> -create_reloptions_table(void)\n> +static options_spec_set *di_relopt_specset = NULL;\n> +void * digetreloptspecset(void);\n> +\n> +void *\n> +digetreloptspecset(void)\n> {\n> -\tdi_relopt_kind = add_reloption_kind();\n> -\n> -\tadd_int_reloption(di_relopt_kind, \"option_int\",\n> -\t\t\t\t\t \"Integer option for dummy_index_am\",\n> -\t\t\t\t\t 10, -10, 100, AccessExclusiveLock);\n> -\tdi_relopt_tab[0].optname = \"option_int\";\n> -\tdi_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> -\tdi_relopt_tab[0].offset = offsetof(DummyIndexOptions, option_int);\n> -\n> -\tadd_real_reloption(di_relopt_kind, \"option_real\",\n> -\t\t\t\t\t \"Real option for dummy_index_am\",\n> -\t\t\t\t\t 3.1415, -10, 100, AccessExclusiveLock);\n> -\tdi_relopt_tab[1].optname = \"option_real\";\n> -\tdi_relopt_tab[1].opttype = RELOPT_TYPE_REAL;\n> -\tdi_relopt_tab[1].offset = offsetof(DummyIndexOptions, option_real);\n> -\n> -\tadd_bool_reloption(di_relopt_kind, \"option_bool\",\n> -\t\t\t\t\t \"Boolean option for dummy_index_am\",\n> -\t\t\t\t\t true, AccessExclusiveLock);\n> -\tdi_relopt_tab[2].optname = \"option_bool\";\n> -\tdi_relopt_tab[2].opttype = RELOPT_TYPE_BOOL;\n> -\tdi_relopt_tab[2].offset = offsetof(DummyIndexOptions, option_bool);\n> -\n> -\tadd_enum_reloption(di_relopt_kind, \"option_enum\",\n> -\t\t\t\t\t \"Enum option for dummy_index_am\",\n> -\t\t\t\t\t dummyAmEnumValues,\n> -\t\t\t\t\t DUMMY_AM_ENUM_ONE,\n> -\t\t\t\t\t \"Valid values are \\\"one\\\" and \\\"two\\\".\",\n> -\t\t\t\t\t AccessExclusiveLock);\n> -\tdi_relopt_tab[3].optname = \"option_enum\";\n> -\tdi_relopt_tab[3].opttype = RELOPT_TYPE_ENUM;\n> -\tdi_relopt_tab[3].offset = offsetof(DummyIndexOptions, option_enum);\n> -\n> 
-\tadd_string_reloption(di_relopt_kind, \"option_string_val\",\n> -\t\t\t\t\t\t \"String option for dummy_index_am with non-NULL default\",\n> -\t\t\t\t\t\t \"DefaultValue\", &validate_string_option,\n> -\t\t\t\t\t\t AccessExclusiveLock);\n> -\tdi_relopt_tab[4].optname = \"option_string_val\";\n> -\tdi_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n> -\tdi_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n> -\t\t\t\t\t\t\t\t\t option_string_val_offset);\n> +\tif (di_relopt_specset)\n> +\t\treturn di_relopt_specset;\n> +\n> +\tdi_relopt_specset = allocateOptionsSpecSet(NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t sizeof(DummyIndexOptions), 6);\n> +\n> +\toptionsSpecSetAddInt(\n> +\t\tdi_relopt_specset, \"option_int\",\n> +\t\t\"Integer option for dummy_index_am\",\n> +\t\tAccessExclusiveLock,\n> +\t\t0, offsetof(DummyIndexOptions, option_int),\n> +\t\t10, -10, 100\n> +\t);\n> +\n> +\n> +\toptionsSpecSetAddReal(\n> +\t\tdi_relopt_specset, \"option_real\",\n> +\t\t\"Real option for dummy_index_am\",\n> +\t\tAccessExclusiveLock,\n> +\t\t0, offsetof(DummyIndexOptions, option_real),\n> +\t\t3.1415, -10, 100\n> +\t);\n> +\n> +\toptionsSpecSetAddBool(\n> +\t\tdi_relopt_specset, \"option_bool\",\n> +\t\t\"Boolean option for dummy_index_am\",\n> +\t\tAccessExclusiveLock,\n> +\t\t0, offsetof(DummyIndexOptions, option_bool), true\n> +\t);\n> +\n> +\toptionsSpecSetAddEnum(di_relopt_specset, \"option_enum\",\n> +\t\t\"Enum option for dummy_index_am\",\n> +\t\tAccessExclusiveLock,\n> +\t\t0,\n> +\t\toffsetof(DummyIndexOptions, option_enum),\n> +\t\tdummyAmEnumValues,\n> +\t\tDUMMY_AM_ENUM_ONE,\n> +\t\t\"Valid values are \\\"one\\\" and \\\"two\\\".\"\n> +\t);\n> +\n> +\toptionsSpecSetAddString(di_relopt_specset, \"option_string_val\",\n> +\t\t\"String option for dummy_index_am with non-NULL default\",\n> +\t\tAccessExclusiveLock,\n> +\t\t0,\n> +\t\toffsetof(DummyIndexOptions, option_string_val_offset),\n> +\t\t\"DefaultValue\", &divalidate_string_option\n> +\t);\n> \n> \t/*\n> \t * String 
option for dummy_index_am with NULL default, and without\n> \t * description.\n> \t */\n> -\tadd_string_reloption(di_relopt_kind, \"option_string_null\",\n> -\t\t\t\t\t\t NULL,\t/* description */\n> -\t\t\t\t\t\t NULL, &validate_string_option,\n> -\t\t\t\t\t\t AccessExclusiveLock);\n> -\tdi_relopt_tab[5].optname = \"option_string_null\";\n> -\tdi_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n> -\tdi_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n> -\t\t\t\t\t\t\t\t\t option_string_null_offset);\n> +\n> +\toptionsSpecSetAddString(di_relopt_specset, \"option_string_null\",\n> +\t\tNULL,\t/* description */\n> +\t\tAccessExclusiveLock,\n> +\t\t0,\n> +\t\toffsetof(DummyIndexOptions, option_string_null_offset),\n> +\t\tNULL, &divalidate_string_option\n> +\t);\n> +\n> +\treturn di_relopt_specset;\n> }\n> \n> \n> +\n> /*\n> * Build a new index.\n> */\n> @@ -219,19 +221,6 @@ dicostestimate(PlannerInfo *root, IndexPath *path, double loop_count,\n> }\n> \n> /*\n> - * Parse relation options for index AM, returning a DummyIndexOptions\n> - * structure filled with option values.\n> - */\n> -static bytea *\n> -dioptions(Datum reloptions, bool validate)\n> -{\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t di_relopt_kind,\n> -\t\t\t\t\t\t\t\t\t sizeof(DummyIndexOptions),\n> -\t\t\t\t\t\t\t\t\t di_relopt_tab, lengthof(di_relopt_tab));\n> -}\n> -\n> -/*\n> * Validator for index AM.\n> */\n> static bool\n> @@ -308,7 +297,6 @@ dihandler(PG_FUNCTION_ARGS)\n> \tamroutine->amvacuumcleanup = divacuumcleanup;\n> \tamroutine->amcanreturn = NULL;\n> \tamroutine->amcostestimate = dicostestimate;\n> -\tamroutine->amoptions = dioptions;\n> \tamroutine->amproperty = NULL;\n> \tamroutine->ambuildphasename = NULL;\n> \tamroutine->amvalidate = divalidate;\n> @@ -322,12 +310,7 @@ dihandler(PG_FUNCTION_ARGS)\n> \tamroutine->amestimateparallelscan = NULL;\n> \tamroutine->aminitparallelscan = NULL;\n> \tamroutine->amparallelrescan = NULL;\n> 
+\tamroutine->amreloptspecset = digetreloptspecset;\n> \n> \tPG_RETURN_POINTER(amroutine);\n> }\n> -\n> -void\n> -_PG_init(void)\n> -{\n> -\tcreate_reloptions_table();\n> -}\n\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 10:25:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion: Unified options API. Need help from core team" }, { "msg_contents": "In a message of Tuesday, 26 October 2021 17:25:32 MSK, Bruce \nMomjian wrote:\n> Uh, the core team does not get involved in development issues, unless\n> there is an issue that clearly cannot be resolved by discussion on the\n> hackers list.\nThen maybe I used the wrong term. Maybe I should say \"experienced Postgres \ndevelopers\". \n\n> \n> ---------------------------------------------------------------------------\n> \n> On Mon, Oct 18, 2021 at 04:24:23PM +0300, Nikolay Shaplov wrote:\n> > Hi!\n> > \n> > I am still hoping to finish my work on reloptions that I started some years\n> > ago.\n> > \n> > I have renewed my patch, and I think I need help from the core team to finish it.\n> > \n> > General idea of the patch: right now we have three ways to define options for\n> > different objects, with more or less different code used for each. 
It would be\n> > better to have a unified, context-independent API for processing options\n> > instead.\n> > \n> > Long story short:\n> > \n> > There is an Option Specification object that holds all the information about a\n> > single option: how it should be parsed and validated.\n> > \n> > There is an Option Specification Set object, an array of Option Specs, that\n> > defines all the options available for a certain object (the access method of\n> > some index, for example).\n> > \n> > When some object (relation, opclass, etc.) wants to have options, it\n> > creates an Option Spec Set for these options, and uses it for converting\n> > options between different representations (to get them from SQL, to store them\n> > in pg_class, to pass them to the core code as bytea, etc.)\n> > \n> > For indexes, the Option Spec Set is available via the Access Method API.\n> > \n> > For non-index relations, all Option Spec Sets are left in the reloptions.c file,\n> > and should be moved to the heap AM later. (They are not in an AM now, so I will\n> > not change that now.)\n> > \n> > Main problem:\n> > \n> > There are LockModes. The LockMode for each option is also stored in the Option\n> > Spec Set. For indexes, the Option Spec Set is accessible via the AM. So to get\n> > the LockMode for an option of an index, you need access to its Relation object\n> > (so you can call the proper AM method to fetch the spec set). 
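To make the spec-set idea described above concrete, here is a tiny, self-contained C sketch of the pattern: a static array of per-option specifications (name, offset into a result struct, default value, and allowed range) drives both defaulting and range validation. All names here (`OptionSpec`, `ToyOptions`, `parse_options`, the option names and limits) are invented for illustration — this is a toy model of the approach, not the patch's actual API.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* One specification per option: where its value lands and how to validate it */
typedef struct OptionSpec
{
	const char *name;
	size_t		offset;			/* offset of the value in the result struct */
	int			default_val;
	int			min_val;
	int			max_val;
} OptionSpec;

/* The struct the parsed options are packed into (cf. a *Options struct) */
typedef struct ToyOptions
{
	int			length;
	int			col1;
} ToyOptions;

/* The "spec set": every option this object understands */
static const OptionSpec toy_spec_set[] = {
	{"length", offsetof(ToyOptions, length), 5, 1, 4096},
	{"col1", offsetof(ToyOptions, col1), 2, 1, 4095},
};
#define TOY_NSPECS (sizeof(toy_spec_set) / sizeof(toy_spec_set[0]))

/* Fill in defaults, then apply "name=value" definitions with range checks */
ToyOptions
parse_options(const char **defs, int ndefs)
{
	ToyOptions	opts;
	size_t		i;
	int			j;

	/* Every option starts at its spec's default */
	for (i = 0; i < TOY_NSPECS; i++)
		*(int *) ((char *) &opts + toy_spec_set[i].offset) =
			toy_spec_set[i].default_val;

	for (j = 0; j < ndefs; j++)
	{
		char		name[64];
		int			value;

		if (sscanf(defs[j], "%63[^=]=%d", name, &value) != 2)
			continue;			/* ignore malformed input in this toy model */
		for (i = 0; i < TOY_NSPECS; i++)
		{
			const OptionSpec *spec = &toy_spec_set[i];

			if (strcmp(name, spec->name) != 0)
				continue;
			if (value < spec->min_val || value > spec->max_val)
			{
				fprintf(stderr, "value %d out of bounds for option \"%s\"\n",
						value, name);
				exit(1);
			}
			*(int *) ((char *) &opts + spec->offset) = value;
		}
	}
	return opts;
}
```

Calling `parse_options` with `{"length=80"}` yields a struct with `length` taken from the input and `col1` left at its default — loosely analogous to the convert-and-validate round trip the patch routes through its `transformOptions` / `optionsTextArrayToBytea` entry points, but with all catalog handling stripped out.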
So you need\n> > \"Relation rel\" in AlterTableGetRelOptionsLockLevel, where the lock level is\n> > determined (src/backend/access/common/reloptions.c).\n> > AlterTableGetRelOptionsLockLevel is called from AlterTableGetLockLevel\n> > (src/backend/commands/tablecmds.c), so we need \"Relation rel\" there too.\n> > AlterTableGetLockLevel is called from AlterTableInternal\n> > (src/backend/commands/tablecmds.c). There we have \"Oid relid\", so we can try\n> > to open the relation like this:\n> > \n> > Relation rel = relation_open(relid, NoLock);\n> > cmd_lockmode = AlterTableGetRelOptionsLockLevel(rel,\n> >                                                 castNode(List, cmd->def));\n> > relation_close(rel, NoLock);\n> > break;\n> > \n> > but this will trigger the assertion\n> > \n> > Assert(lockmode != NoLock ||\n> >        IsBootstrapProcessingMode() ||\n> >        CheckRelationLockedByMe(r, AccessShareLock, true));\n> > \n> > in relation_open (src/backend/access/common/relation.c).\n> > \n> > For now I have commented this assertion out. I also tried to open the relation\n> > with\n> > AccessShareLock, but this caused one test to fail, and I am not sure that\n> > solution is better.\n> > \n> > What I have done here I consider a hack, so I need help from the core team\n> > here to do it in the right way.\n> > \n> > General problems:\n> > \n> > I guess I need a coauthor, or a supervisor from the core team, to finish this\n> > patch. The amount of code is big, and I guess there are parts that could be\n> > made more in the Postgres way than I did them. I would need advice there, and\n> > I guess it would be better to get it before sending the patch to the\n> > commitfest.\n> > \n> > \n> > Current patch status:\n> > \n> > 1. It is beta. Some minor issues and FIXMEs are not solved. Some code\n> > comments need revising, but in general it does what it is intended to do.\n> > \n> > 2. 
This patch does not intend to change Postgres behavior at all: everything\n> > should work as before; all changes are internal only.\n> > \n> > The only exception is the error message for a nonexistent option name in the\n> > toast namespace:\n> > \n> > CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option =\n> > 42);\n> > -ERROR:  unrecognized parameter \"not_existing_option\"\n> > +ERROR:  unrecognized parameter \"toast.not_existing_option\"\n> > \n> > The new message is better, I guess, though I can change it back if needed.\n> > \n> > 3. I am doing my development in this branch:\n> > https://gitlab.com/dhyannataraj/postgres/-/tree/new_options_take_two I\n> > am making changes every day, so the latest version will be available there.\n> > \n> > I would be glad to hear from the core team before I finish this patch and\n> > make it ready for the commitfest.\n> > \n> > \n> > \n> > diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h\n> > index a22a6df..8f2d5e7 100644\n> > --- a/contrib/bloom/bloom.h\n> > +++ b/contrib/bloom/bloom.h\n> > @@ -17,6 +17,7 @@\n> > \n> > #include \"access/generic_xlog.h\"\n> > #include \"access/itup.h\"\n> > #include \"access/xlog.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > #include \"fmgr.h\"\n> > #include \"nodes/pathnodes.h\"\n> > \n> > @@ -207,7 +208,8 @@ extern IndexBulkDeleteResult\n> > *blbulkdelete(IndexVacuumInfo *info,\n> > \t\t\t\t\t\t\t\t\t\t\n void *callback_state);\n> > \n> > extern IndexBulkDeleteResult *blvacuumcleanup(IndexVacuumInfo *info,\n> > \n> > \t\t\t\t\t\t\t\t\t\t\n\t IndexBulkDeleteResult *stats);\n> > \n> > -extern bytea *bloptions(Datum reloptions, bool validate);\n> > +extern void *blrelopt_specset(void);\n> > +extern void blReloptionPostprocess(void *, bool validate);\n> > \n> > extern void blcostestimate(PlannerInfo *root, IndexPath *path,\n> > \n> > \t\t\t\t\t\t double loop_count, Cost \n*indexStartupCost,\n> > \t\t\t\t\t\t Cost *indexTotalCost, \nSelectivity *indexSelectivity,\n> > \n> > diff --git 
a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\n> > index 754de00..54dad16 100644\n> > --- a/contrib/bloom/blutils.c\n> > +++ b/contrib/bloom/blutils.c\n> > @@ -15,7 +15,7 @@\n> > \n> > #include \"access/amapi.h\"\n> > #include \"access/generic_xlog.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"bloom.h\"\n> > #include \"catalog/index.h\"\n> > #include \"commands/vacuum.h\"\n> > \n> > @@ -34,53 +34,13 @@\n> > \n> > PG_FUNCTION_INFO_V1(blhandler);\n> > \n> > -/* Kind of relation options for bloom index */\n> > -static relopt_kind bl_relopt_kind;\n> > -\n> > -/* parse table for fillRelOptions */\n> > -static relopt_parse_elt bl_relopt_tab[INDEX_MAX_KEYS + 1];\n> > +/* Catalog of relation options for bloom index */\n> > +static options_spec_set *bl_relopt_specset;\n> > \n> > static int32 myRand(void);\n> > static void mySrand(uint32 seed);\n> > \n> > /*\n> > \n> > - * Module initialize function: initialize info about Bloom relation\n> > options. - *\n> > - * Note: keep this in sync with makeDefaultBloomOptions().\n> > - */\n> > -void\n> > -_PG_init(void)\n> > -{\n> > -\tint\t\t\ti;\n> > -\tchar\t\tbuf[16];\n> > -\n> > -\tbl_relopt_kind = add_reloption_kind();\n> > -\n> > -\t/* Option for length of signature */\n> > -\tadd_int_reloption(bl_relopt_kind, \"length\",\n> > -\t\t\t\t\t \"Length of signature in bits\",\n> > -\t\t\t\t\t DEFAULT_BLOOM_LENGTH, 1, \nMAX_BLOOM_LENGTH,\n> > -\t\t\t\t\t AccessExclusiveLock);\n> > -\tbl_relopt_tab[0].optname = \"length\";\n> > -\tbl_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> > -\tbl_relopt_tab[0].offset = offsetof(BloomOptions, bloomLength);\n> > -\n> > -\t/* Number of bits for each possible index column: col1, col2, ... 
*/\n> > -\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > -\t{\n> > -\t\tsnprintf(buf, sizeof(buf), \"col%d\", i + 1);\n> > -\t\tadd_int_reloption(bl_relopt_kind, buf,\n> > -\t\t\t\t\t\t \"Number of bits generated \nfor each index column\",\n> > -\t\t\t\t\t\t DEFAULT_BLOOM_BITS, 1, \nMAX_BLOOM_BITS,\n> > -\t\t\t\t\t\t AccessExclusiveLock);\n> > -\t\tbl_relopt_tab[i + 1].optname = \nMemoryContextStrdup(TopMemoryContext,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t buf);\n> > -\t\tbl_relopt_tab[i + 1].opttype = RELOPT_TYPE_INT;\n> > -\t\tbl_relopt_tab[i + 1].offset = offsetof(BloomOptions, \nbitSize[0]) +\n> > sizeof(int) * i; -\t}\n> > -}\n> > -\n> > -/*\n> > \n> > * Construct a default set of Bloom options.\n> > */\n> > \n> > static BloomOptions *\n> > \n> > @@ -135,7 +95,7 @@ blhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = blvacuumcleanup;\n> > \tamroutine->amcanreturn = NULL;\n> > \tamroutine->amcostestimate = blcostestimate;\n> > \n> > -\tamroutine->amoptions = bloptions;\n> > +\tamroutine->amreloptspecset = blrelopt_specset;\n> > \n> > \tamroutine->amproperty = NULL;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = blvalidate;\n> > \n> > @@ -154,6 +114,28 @@ blhandler(PG_FUNCTION_ARGS)\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > +void\n> > +blReloptionPostprocess(void *data, bool validate)\n> > +{\n> > +\tBloomOptions *opts = (BloomOptions *) data;\n> > +\tint\t\t\ti;\n> > +\n> > +\tif (validate)\n> > +\t\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > +\t\t{\n> > +\t\t\tif (opts->bitSize[i] >= opts->bloomLength)\n> > +\t\t\t{\n> > +\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t errmsg(\"col%i should not be grater \nthan length\", i)));\n> > +\t\t\t}\n> > +\t\t}\n> > +\n> > +\t/* Convert signature length from # of bits to # to words, rounding up \n*/\n> > +\topts->bloomLength = (opts->bloomLength + SIGNWORDBITS - 1) /\n> > SIGNWORDBITS; +}\n> > +\n> > 
+\n> > \n> > /*\n> > \n> > * Fill BloomState structure for particular index.\n> > */\n> > \n> > @@ -474,24 +456,39 @@ BloomInitMetapage(Relation index)\n> > \n> > \tUnlockReleaseBuffer(metaBuffer);\n> > \n> > }\n> > \n> > -/*\n> > - * Parse reloptions for bloom index, producing a BloomOptions struct.\n> > - */\n> > -bytea *\n> > -bloptions(Datum reloptions, bool validate)\n> > +void *\n> > +blrelopt_specset(void)\n> > \n> > {\n> > \n> > -\tBloomOptions *rdopts;\n> > +\tint\t\t\ti;\n> > +\tchar\t\tbuf[16];\n> > \n> > -\t/* Parse the user-given reloptions */\n> > -\trdopts = (BloomOptions *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t bl_relopt_kind,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t sizeof(BloomOptions),\n> > -\t\t\t\t\t\t\t\t\t\t\n\t bl_relopt_tab,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t lengthof(bl_relopt_tab));\n> > +\tif (bl_relopt_specset)\n> > +\t\treturn bl_relopt_specset;\n> > \n> > -\t/* Convert signature length from # of bits to # to words, rounding \nup */\n> > -\tif (rdopts)\n> > -\t\trdopts->bloomLength = (rdopts->bloomLength + SIGNWORDBITS - \n1) /\n> > SIGNWORDBITS;\n> > \n> > -\treturn (bytea *) rdopts;\n> > +\tbl_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t \nsizeof(BloomOptions), INDEX_MAX_KEYS + 1);\n> > +\tbl_relopt_specset->postprocess_fun = blReloptionPostprocess;\n> > +\n> > +\toptionsSpecSetAddInt(bl_relopt_specset, \"length\",\n> > +\t\t\t\t\t\t\t \"Length of signature \nin bits\",\n> > +\t\t\t\t\t\t\t NoLock,\t\t/* \nNo lock as far as ALTER is\n> > +\t\t\t\t\t\t\t\t\t\t\n\t * forbidden */\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t \noffsetof(BloomOptions, bloomLength),\n> > +\t\t\t\t\t\t\t \nDEFAULT_BLOOM_LENGTH, 1, MAX_BLOOM_LENGTH);\n> > +\n> > +\t/* Number of bits for each possible index column: col1, col2, ... 
*/\n> > +\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > +\t{\n> > +\t\tsnprintf(buf, 16, \"col%d\", i + 1);\n> > +\t\toptionsSpecSetAddInt(bl_relopt_specset, buf,\n> > +\t\t\t\t\t\t\t \"Number of bits \nfor corresponding column\",\n> > +\t\t\t\t\t\t\t\t NoLock,\t/* \nNo lock as far as ALTER is\n> > +\t\t\t\t\t\t\t\t\t\t\n\t * forbidden */\n> > +\t\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t\t \noffsetof(BloomOptions, bitSize[i]),\n> > +\t\t\t\t\t\t\t\t \nDEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n> > +\t}\n> > +\treturn bl_relopt_specset;\n> > \n> > }\n> > \n> > diff --git a/contrib/bloom/expected/bloom.out\n> > b/contrib/bloom/expected/bloom.out index dae12a7..e79456d 100644\n> > --- a/contrib/bloom/expected/bloom.out\n> > +++ b/contrib/bloom/expected/bloom.out\n> > @@ -228,3 +228,6 @@ CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > (length=0);> \n> > ERROR: value 0 out of bounds for option \"length\"\n> > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> > ERROR: value 0 out of bounds for option \"col1\"\n> > \n> > +-- check post_validate for colN<lengh\n> > +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > (length=10,col1=11);\n> > +ERROR: col0 should not be grater than length\n> > diff --git a/contrib/bloom/sql/bloom.sql b/contrib/bloom/sql/bloom.sql\n> > index 4733e1e..0bfc767 100644\n> > --- a/contrib/bloom/sql/bloom.sql\n> > +++ b/contrib/bloom/sql/bloom.sql\n> > @@ -93,3 +93,6 @@ SELECT reloptions FROM pg_class WHERE oid =\n> > 'bloomidx'::regclass;> \n> > \\set VERBOSITY terse\n> > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0);\n> > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> > \n> > +\n> > +-- check post_validate for colN<lengh\n> > +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > (length=10,col1=11);\n> > diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c\n> > index 3a0beaa..a15a10b 100644\n> > --- a/contrib/dblink/dblink.c\n> > +++ b/contrib/dblink/dblink.c\n> > 
@@ -2005,7 +2005,7 @@ PG_FUNCTION_INFO_V1(dblink_fdw_validator);\n> > \n> > Datum\n> > dblink_fdw_validator(PG_FUNCTION_ARGS)\n> > {\n> > \n> > -\tList\t *options_list = \nuntransformRelOptions(PG_GETARG_DATUM(0));\n> > +\tList\t *options_list = \noptionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > \n> > \tOid\t\t\tcontext = PG_GETARG_OID(1);\n> > \tListCell *cell;\n> > \n> > diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c\n> > index 2c2f149..1194747 100644\n> > --- a/contrib/file_fdw/file_fdw.c\n> > +++ b/contrib/file_fdw/file_fdw.c\n> > @@ -195,7 +195,7 @@ file_fdw_handler(PG_FUNCTION_ARGS)\n> > \n> > Datum\n> > file_fdw_validator(PG_FUNCTION_ARGS)\n> > {\n> > \n> > -\tList\t *options_list = \nuntransformRelOptions(PG_GETARG_DATUM(0));\n> > +\tList\t *options_list = \noptionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > \n> > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > \tchar\t *filename = NULL;\n> > \tDefElem *force_not_null = NULL;\n> > \n> > diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c\n> > index 5bb1af4..bbd4167 100644\n> > --- a/contrib/postgres_fdw/option.c\n> > +++ b/contrib/postgres_fdw/option.c\n> > @@ -72,7 +72,7 @@ PG_FUNCTION_INFO_V1(postgres_fdw_validator);\n> > \n> > Datum\n> > postgres_fdw_validator(PG_FUNCTION_ARGS)\n> > {\n> > \n> > -\tList\t *options_list = \nuntransformRelOptions(PG_GETARG_DATUM(0));\n> > +\tList\t *options_list = \noptionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > \n> > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > \tListCell *cell;\n> > \n> > diff --git a/src/backend/access/brin/brin.c\n> > b/src/backend/access/brin/brin.c index ccc9fa0..5dd52a4 100644\n> > --- a/src/backend/access/brin/brin.c\n> > +++ b/src/backend/access/brin/brin.c\n> > @@ -20,7 +20,6 @@\n> > \n> > #include \"access/brin_pageops.h\"\n> > #include \"access/brin_xlog.h\"\n> > #include \"access/relation.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > \n> > #include \"access/relscan.h\"\n> > #include 
\"access/table.h\"\n> > #include \"access/tableam.h\"\n> > \n> > @@ -40,7 +39,6 @@\n> > \n> > #include \"utils/memutils.h\"\n> > #include \"utils/rel.h\"\n> > \n> > -\n> > \n> > /*\n> > \n> > * We use a BrinBuildState during initial construction of a BRIN index.\n> > * The running state is kept in a BrinMemTuple.\n> > \n> > @@ -119,7 +117,6 @@ brinhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = brinvacuumcleanup;\n> > \tamroutine->amcanreturn = NULL;\n> > \tamroutine->amcostestimate = brincostestimate;\n> > \n> > -\tamroutine->amoptions = brinoptions;\n> > \n> > \tamroutine->amproperty = NULL;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = brinvalidate;\n> > \n> > @@ -134,6 +131,7 @@ brinhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = bringetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > @@ -963,23 +961,6 @@ brinvacuumcleanup(IndexVacuumInfo *info,\n> > IndexBulkDeleteResult *stats)> \n> > }\n> > \n> > /*\n> > \n> > - * reloptions processor for BRIN indexes\n> > - */\n> > -bytea *\n> > -brinoptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"pages_per_range\", RELOPT_TYPE_INT, offsetof(BrinOptions,\n> > pagesPerRange)}, -\t\t{\"autosummarize\", RELOPT_TYPE_BOOL,\n> > offsetof(BrinOptions, autosummarize)} -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_BRIN,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(BrinOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -}\n> > -\n> > -/*\n> > \n> > * SQL-callable function to scan through an index and summarize all\n> > ranges\n> > * that are not currently summarized.\n> > */\n> > \n> > @@ -1765,3 +1746,32 @@ check_null_keys(BrinValues *bval, ScanKey\n> > 
*nullkeys, int nnullkeys)> \n> > \treturn true;\n> > \n> > }\n> > \n> > +\n> > +static options_spec_set *brin_relopt_specset = NULL;\n> > +\n> > +void *\n> > +bringetreloptspecset(void)\n> > +{\n> > +\tif (brin_relopt_specset)\n> > +\t\treturn brin_relopt_specset;\n> > +\tbrin_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t sizeof(BrinOptions), 2);\n> > +\n> > +\toptionsSpecSetAddInt(brin_relopt_specset, \"pages_per_range\",\n> > +\t\t \"Number of pages that each page range covers in a BRIN \nindex\",\n> > +\t\t\t\t\t\t\t NoLock,\t\t/* \nsince ALTER is not allowed\n> > +\t\t\t\t\t\t\t\t\t\t\n\t * no lock needed */\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t offsetof(BrinOptions, \npagesPerRange),\n> > +\t\t\t\t\t\t\t \nBRIN_DEFAULT_PAGES_PER_RANGE,\n> > +\t\t\t\t\t\t\t \nBRIN_MIN_PAGES_PER_RANGE,\n> > +\t\t\t\t\t\t\t \nBRIN_MAX_PAGES_PER_RANGE);\n> > +\t\toptionsSpecSetAddBool(brin_relopt_specset, \"autosummarize\",\n> > +\t\t\t\t\t\"Enables automatic summarization on \nthis BRIN index\",\n> > +\t\t\t\t\t\t\t \nAccessExclusiveLock,\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t \noffsetof(BrinOptions, autosummarize),\n> > +\t\t\t\t\t\t\t false);\n> > +\treturn brin_relopt_specset;\n> > +}\n> > +\n> > diff --git a/src/backend/access/brin/brin_pageops.c\n> > b/src/backend/access/brin/brin_pageops.c index df9ffc2..1940b3d 100644\n> > --- a/src/backend/access/brin/brin_pageops.c\n> > +++ b/src/backend/access/brin/brin_pageops.c\n> > @@ -420,6 +420,9 @@ brin_doinsert(Relation idxrel, BlockNumber\n> > pagesPerRange,> \n> > \t\tfreespace = br_page_get_freespace(page);\n> > \t\n> > \tItemPointerSet(&tid, blk, off);\n> > \n> > +\n> > +//elog(WARNING, \"pages_per_range = %i\", pagesPerRange);\n> > +\n> > \n> > \tbrinSetHeapBlockItemptr(revmapbuf, pagesPerRange, heapBlk, tid);\n> > \tMarkBufferDirty(revmapbuf);\n> > \n> > diff --git a/src/backend/access/common/Makefile\n> > b/src/backend/access/common/Makefile index b9aff0c..78c9c5a 100644\n> > 
--- a/src/backend/access/common/Makefile\n> > +++ b/src/backend/access/common/Makefile\n> > @@ -18,6 +18,7 @@ OBJS = \\\n> > \n> > \tdetoast.o \\\n> > \theaptuple.o \\\n> > \tindextuple.o \\\n> > \n> > +\toptions.o \\\n> > \n> > \tprintsimple.o \\\n> > \tprinttup.o \\\n> > \trelation.o \\\n> > \n> > diff --git a/src/backend/access/common/options.c\n> > b/src/backend/access/common/options.c new file mode 100644\n> > index 0000000..752cddc\n> > --- /dev/null\n> > +++ b/src/backend/access/common/options.c\n> > @@ -0,0 +1,1468 @@\n> > +/*-----------------------------------------------------------------------\n> > -- + *\n> > + * options.c\n> > + *\t A uniform, context-free API for processing name=value options. \nUsed\n> > + *\t to process relation options (reloptions), attribute options, \nopclass\n> > + *\t options, etc.\n> > + *\n> > + * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> > + * Portions Copyright (c) 1994, Regents of the University of California\n> > + *\n> > + *\n> > + * IDENTIFICATION\n> > + *\t src/backend/access/common/options.c\n> > + *\n> > +\n> > *------------------------------------------------------------------------\n> > - + */\n> > +\n> > +#include \"postgres.h\"\n> > +\n> > +#include \"access/options.h\"\n> > +#include \"catalog/pg_type.h\"\n> > +#include \"commands/defrem.h\"\n> > +#include \"nodes/makefuncs.h\"\n> > +#include \"utils/builtins.h\"\n> > +#include \"utils/guc.h\"\n> > +#include \"utils/memutils.h\"\n> > +#include \"mb/pg_wchar.h\"\n> > +\n> > +\n> > +/*\n> > + * OPTIONS SPECIFICATION and OPTION SPECIFICATION SET\n> > + *\n> > + * Each option is defined via an Option Specification object (Option Spec).\n> > + * An Option Spec should have all information that is needed for processing\n> > + * (parsing, validating, converting) of a single option.
Implemented via a\n> set of + * option_spec_* structures.\n> > + *\n> > + * A set of Option Specs (Options Spec Set) defines all options\n> > available for a + * certain object (a certain relation kind, for example). It\n> > is a list of + * Option Specs, plus validation functions that can be\n> > used to validate the whole + * option set, if needed. Implemented via the\n> > options_spec_set structure and a set of + * optionsSpecSetAdd* functions\n> > that are used for adding Option Spec items to + * a Set.\n> > + *\n> > + * NOTE: we chose the term \"specification\" instead of \"definition\" because\n> > the term + * \"definition\" is used for objects that came from the lexer. So to\n> > avoid confusion + * here we have Option Specifications, and all\n> > \"definitions\" are from the lexer. + */\n> > +\n> > +/*\n> > + * OPTION VALUES REPRESENTATIONS\n> > + *\n> > + * Option values usually come from the lexer in the form of a defList object, stored\n> > in + * pg_catalog as a text array, and used, when they are stored in memory,\n> > as + * a C-structure. These are different option value representations.\n> > Here goes + * a brief description of all representations used in the code.\n> > + *\n> > + * Values\n> > + *\n> > + * Values are an internal representation that is used while converting\n> > + * values between the other representations. A Value is called \"parsed\",\n> > + * when the Value's value is converted to a proper type and validated, or is\n> > called + * \"unparsed\", when the Value's value is stored as a raw string that\n> > was obtained + * from the source without any checks. In conversion\n> > function names the first case + * is referred to as Values, the second case is referred to\n> > as RawValues. Values is + * implemented as a List of option_value\n> > C-structures.\n> > + *\n> > + * defList\n> > + *\n> > + * Options in the form of a definition List that comes from the lexer. (For\n> > reloptions it + * is the part of the SQL query that goes after the WITH, SET or\n> > RESET keywords).
Can be + * converted to and from Values using the\n> > optionsDefListToRawValues and + * optionsTextArrayToRawValues functions.\n> > + *\n> > + * TEXT[]\n> > + *\n> > + * Options in a form suitable for storing in a TEXT[] field in the DB. (E.g.\n> > reloptions + * are stored in the pg_catalog.pg_class table in the reloptions\n> > field). Can be converted + * to and from Values using the\n> > optionsValuesToTextArray and optionsTextArrayToRawValues + * functions.\n> > + *\n> > + * Bytea\n> > + *\n> > + * Option data stored in a C-structure with a varlena header at the beginning\n> > of the + * structure. This representation is used to pass option values\n> > to the core + * postgres. It is fast to read, it can be cached and so on.\n> > The Bytea representation + * can be obtained from Values using the\n> > optionsValuesToBytea function, and can't be + * converted back.\n> > + */\n> > +\n> > +static option_spec_basic *allocateOptionSpec(int type, const char *name,\n> > +\t\t\t\t\t\t const char *desc, LOCKMODE \nlockmode,\n> > +\t\t\t\t\t\t option_spec_flags flags, int \nstruct_offset);\n> > +\n> > +static void parse_one_option(option_value * option, const char *text_str,\n> > +\t\t\t\t int text_len, bool validate);\n> > +static void *optionsAllocateBytea(options_spec_set * spec_set, List\n> > *options); +\n> > +\n> > +static List *\n> > +optionsDefListToRawValues(List *defList, options_parse_mode\n> > +\t\t\t\t\t\t parse_mode);\n> > +static Datum optionsValuesToTextArray(List *options_values);\n> > +static List *optionsMergeOptionValues(List *old_options, List\n> > *new_options); +static bytea *optionsValuesToBytea(List *options,\n> > options_spec_set * spec_set); +List *optionsTextArrayToRawValues(Datum\n> > array_datum);\n> > +List *optionsParseRawValues(List *raw_values, options_spec_set *\n> > spec_set,\n> > +\t\t\t\t\t options_parse_mode mode);\n> > +\n> > +\n> > +/*\n> > + * Options spec_set functions\n> > + */\n> > +\n> > +/*\n> > + * The options catalog describes options available for a certain
object.\n> > Catalog has + * all necessary information for parsing transforming and\n> > validating options + * for an object. All\n> > parsing/validation/transformation functions should not + * know any\n> > details of option implementation for certain object, all this + *\n> > information should be stored in catalog instead and interpreted by + *\n> > pars/valid/transf functions blindly.\n> > + *\n> > + * The heart of the option catalog is an array of option definitions. \n> > Options + * definition specifies name of option, type, range of\n> > acceptable values, and + * default value.\n> > + *\n> > + * Options values can be one of the following types: bool, int, real,\n> > enum, + * string. For more info see \"option_type\" and\n> > \"optionsCatalogAddItemYyyy\" + * functions.\n> > + *\n> > + * Option definition flags allows to define parser behavior for special\n> > (or not + * so special) cases. See option_spec_flags for more info.\n> > + *\n> > + * Options and Lock levels:\n> > + *\n> > + * The default choice for any new option should be AccessExclusiveLock.\n> > + * In some cases the lock level can be reduced from there, but the lock\n> > + * level chosen should always conflict with itself to ensure that\n> > multiple\n> > + * changes aren't lost when we attempt concurrent changes.\n> > + * The choice of lock level depends completely upon how that parameter\n> > + * is used within the server, not upon how and when you'd like to change\n> > it. + * Safety first. 
Existing choices are documented here, and elsewhere\n> > in + * backend code where the parameters are used.\n> > + *\n> > + * In general, anything that affects the results obtained from a SELECT\n> > must be + * protected by AccessExclusiveLock.\n> > + *\n> > + * Autovacuum related parameters can be set at ShareUpdateExclusiveLock\n> > + * since they are only used by the AV procs and don't change anything\n> > + * currently executing.\n> > + *\n> > + * Fillfactor can be set because it applies only to subsequent changes\n> > made to + * data blocks, as documented in heapio.c\n> > + *\n> > + * n_distinct options can be set at ShareUpdateExclusiveLock because they\n> > + * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n> > + * so the ANALYZE will not be affected by in-flight changes. Changing\n> > those + * values has no affect until the next ANALYZE, so no need for\n> > stronger lock. + *\n> > + * Planner-related parameters can be set with ShareUpdateExclusiveLock\n> > because + * they only affect planning and not the correctness of the\n> > execution. Plans + * cannot be changed in mid-flight, so changes here\n> > could not easily result in + * new improved plans in any case. So we\n> > allow existing queries to continue + * and existing plans to survive, a\n> > small price to pay for allowing better + * plans to be introduced\n> > concurrently without interfering with users. 
+ *\n> > + * Setting parallel_workers is safe, since it acts the same as\n> > + * max_parallel_workers_per_gather which is a USERSET parameter that\n> > doesn't + * affect existing plans or queries.\n> > +*/\n> > +\n> > +/*\n> > + * allocateOptionsSpecSet\n> > + *\t\tCreates new Option Spec Set object: Allocates memory and \ninitializes\n> > + *\t\tstructure members.\n> > + *\n> > + * Spec Set items can be add via allocateOptionSpec and\n> > optionSpecSetAddItem functions + * or by calling directly any of\n> > optionsSpecSetAdd* function (preferable way) + *\n> > + * namespace - Spec Set can be bind to certain namespace (E.g.\n> > + * namespace.option=value). Options from other namespaces will be ignored\n> > while + * processing. If set to NULL, no namespace will be used at all.\n> > + *\n> > + * size_of_bytea - size of target structure of Bytea options\n> > represenation\n> > + *\n> > + * num_items_expected - if you know expected number of Spec Set items set\n> > it here. + * Set to -1 in other cases. num_items_expected will be used\n> > for preallocating memory + * and will trigger error, if you try to add\n> > more items than you expected. 
+ */\n> > +\n> > +options_spec_set *\n> > +allocateOptionsSpecSet(const char *namespace, int size_of_bytea, int\n> > num_items_expected) +{\n> > +\tMemoryContext oldcxt;\n> > +\toptions_spec_set *spec_set;\n> > +\n> > +\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > +\tspec_set = palloc(sizeof(options_spec_set));\n> > +\tif (namespace)\n> > +\t{\n> > +\t\tspec_set->namespace = palloc(strlen(namespace) + 1);\n> > +\t\tstrcpy(spec_set->namespace, namespace);\n> > +\t}\n> > +\telse\n> > +\t\tspec_set->namespace = NULL;\n> > +\tif (num_items_expected > 0)\n> > +\t{\n> > +\t\tspec_set->num_allocated = num_items_expected;\n> > +\t\tspec_set->forbid_realloc = true;\n> > +\t\tspec_set->definitions = palloc(\n> > +\t\t\t\t spec_set->num_allocated * \nsizeof(option_spec_basic *));\n> > +\t}\n> > +\telse\n> > +\t{\n> > +\t\tspec_set->num_allocated = 0;\n> > +\t\tspec_set->forbid_realloc = false;\n> > +\t\tspec_set->definitions = NULL;\n> > +\t}\n> > +\tspec_set->num = 0;\n> > +\tspec_set->struct_size = size_of_bytea;\n> > +\tspec_set->postprocess_fun = NULL;\n> > +\tMemoryContextSwitchTo(oldcxt);\n> > +\treturn spec_set;\n> > +}\n> > +\n> > +/*\n> > + * allocateOptionSpec\n> > + *\t\tAllocates a new Option Specifiation object of desired type \nand\n> > + *\t\tinitialize the type-independent fields\n> > + */\n> > +static option_spec_basic *\n> > +allocateOptionSpec(int type, const char *name, const char *desc, LOCKMODE\n> > lockmode, +\t\t\t\t\t\t option_spec_flags \nflags, int struct_offset)\n> > +{\n> > +\tMemoryContext oldcxt;\n> > +\tsize_t\t\tsize;\n> > +\toption_spec_basic *newoption;\n> > +\n> > +\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > +\n> > +\tswitch (type)\n> > +\t{\n> > +\t\tcase OPTION_TYPE_BOOL:\n> > +\t\t\tsize = sizeof(option_spec_bool);\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_INT:\n> > +\t\t\tsize = sizeof(option_spec_int);\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_REAL:\n> > +\t\t\tsize = sizeof(option_spec_real);\n> > 
+\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_ENUM:\n> > +\t\t\tsize = sizeof(option_spec_enum);\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_STRING:\n> > +\t\t\tsize = sizeof(option_spec_string);\n> > +\t\t\tbreak;\n> > +\t\tdefault:\n> > +\t\t\telog(ERROR, \"unsupported reloption type %d\", type);\n> > +\t\t\treturn NULL;\t\t/* keep compiler quiet */\n> > +\t}\n> > +\n> > +\tnewoption = palloc(size);\n> > +\n> > +\tnewoption->name = pstrdup(name);\n> > +\tif (desc)\n> > +\t\tnewoption->desc = pstrdup(desc);\n> > +\telse\n> > +\t\tnewoption->desc = NULL;\n> > +\tnewoption->type = type;\n> > +\tnewoption->lockmode = lockmode;\n> > +\tnewoption->flags = flags;\n> > +\tnewoption->struct_offset = struct_offset;\n> > +\n> > +\tMemoryContextSwitchTo(oldcxt);\n> > +\n> > +\treturn newoption;\n> > +}\n> > +\n> > +/*\n> > + * optionSpecSetAddItem\n> > + *\t\tAdds pre-created Option Specification objec to the Spec Set\n> > + */\n> > +static void\n> > +optionSpecSetAddItem(option_spec_basic * newoption,\n> > +\t\t\t\t\t options_spec_set * spec_set)\n> > +{\n> > +\tif (spec_set->num >= spec_set->num_allocated)\n> > +\t{\n> > +\t\tMemoryContext oldcxt;\n> > +\n> > +\t\tAssert(!spec_set->forbid_realloc);\n> > +\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > +\n> > +\t\tif (spec_set->num_allocated == 0)\n> > +\t\t{\n> > +\t\t\tspec_set->num_allocated = 8;\n> > +\t\t\tspec_set->definitions = palloc(\n> > +\t\t\t\t spec_set->num_allocated * \nsizeof(option_spec_basic *));\n> > +\t\t}\n> > +\t\telse\n> > +\t\t{\n> > +\t\t\tspec_set->num_allocated *= 2;\n> > +\t\t\tspec_set->definitions = repalloc(spec_set->definitions,\n> > +\t\t\t\t spec_set->num_allocated * \nsizeof(option_spec_basic *));\n> > +\t\t}\n> > +\t\tMemoryContextSwitchTo(oldcxt);\n> > +\t}\n> > +\tspec_set->definitions[spec_set->num] = newoption;\n> > +\tspec_set->num++;\n> > +}\n> > +\n> > +\n> > +/*\n> > + * optionsSpecSetAddBool\n> > + *\t\tAdds boolean Option Specification entry to the Spec Set\n> > + */\n> 
> +void\n> > +optionsSpecSetAddBool(options_spec_set * spec_set, const char *name,\n> > const char *desc, +\t\t\t\t\t\t \nLOCKMODE lockmode, option_spec_flags flags,\n> > +\t\t\t\t\t\t int struct_offset, bool \ndefault_val)\n> > +{\n> > +\toption_spec_bool *spec_set_item;\n> > +\n> > +\tspec_set_item = (option_spec_bool *)\n> > +\t\tallocateOptionSpec(OPTION_TYPE_BOOL, name, desc, lockmode,\n> > +\t\t\t\t\t\t\t\t flags, \nstruct_offset);\n> > +\n> > +\tspec_set_item->default_val = default_val;\n> > +\n> > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > +}\n> > +\n> > +/*\n> > + * optionsSpecSetAddInt\n> > + *\t\tAdds integer Option Specification entry to the Spec Set\n> > + */\n> > +void\n> > +optionsSpecSetAddInt(options_spec_set * spec_set, const char *name,\n> > +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> > +\t\t\t\tint struct_offset, int default_val, int \nmin_val, int max_val)\n> > +{\n> > +\toption_spec_int *spec_set_item;\n> > +\n> > +\tspec_set_item = (option_spec_int *)\n> > +\t\tallocateOptionSpec(OPTION_TYPE_INT, name, desc, lockmode,\n> > +\t\t\t\t\t\t\t\t flags, \nstruct_offset);\n> > +\n> > +\tspec_set_item->default_val = default_val;\n> > +\tspec_set_item->min = min_val;\n> > +\tspec_set_item->max = max_val;\n> > +\n> > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > +}\n> > +\n> > +/*\n> > + * optionsSpecSetAddReal\n> > + *\t\tAdds float Option Specification entry to the Spec Set\n> > + */\n> > +void\n> > +optionsSpecSetAddReal(options_spec_set * spec_set, const char *name,\n> > const char *desc, +\t\t LOCKMODE lockmode, option_spec_flags \nflags, int\n> > struct_offset, +\t\t\t\t\t\t double \ndefault_val, double min_val, double\n> > max_val)\n> > +{\n> > +\toption_spec_real *spec_set_item;\n> > +\n> > +\tspec_set_item = (option_spec_real *)\n> > +\t\tallocateOptionSpec(OPTION_TYPE_REAL, name, desc, lockmode,\n> > +\t\t\t\t\t\t\t\t flags, \nstruct_offset);\n> > +\n> 
> +\tspec_set_item->default_val = default_val;\n> > +\tspec_set_item->min = min_val;\n> > +\tspec_set_item->max = max_val;\n> > +\n> > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > +}\n> > +\n> > +/*\n> > + * optionsSpecSetAddEnum\n> > + *\t\tAdds enum Option Specification entry to the Spec Set\n> > + *\n> > + * The members array must have a terminating NULL entry.\n> > + *\n> > + * The detailmsg is shown when unsupported values are passed, and has\n> > this\n> > + * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"bar\\\".\"\n> > + *\n> > + * The members array and detailmsg are not copied -- caller must ensure\n> > that + * they are valid throughout the life of the process.\n> > + */\n> > +\n> > +void\n> > +optionsSpecSetAddEnum(options_spec_set * spec_set, const char *name,\n> > const char *desc, +\t\tLOCKMODE lockmode, option_spec_flags flags, \nint\n> > struct_offset,\n> > +\t\topt_enum_elt_def * members, int default_val, const char \n*detailmsg)\n> > +{\n> > +\toption_spec_enum *spec_set_item;\n> > +\n> > +\tspec_set_item = (option_spec_enum *)\n> > +\t\tallocateOptionSpec(OPTION_TYPE_ENUM, name, desc, lockmode,\n> > +\t\t\t\t\t\t\t\t flags, \nstruct_offset);\n> > +\n> > +\tspec_set_item->default_val = default_val;\n> > +\tspec_set_item->members = members;\n> > +\tspec_set_item->detailmsg = detailmsg;\n> > +\n> > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > +}\n> > +\n> > +/*\n> > + * optionsSpecSetAddString\n> > + *\t\tAdds string Option Specification entry to the Spec Set\n> > + *\n> > + * \"validator\" is an optional function pointer that can be used to test\n> > the + * validity of the values. It must elog(ERROR) when the argument\n> > string is + * not acceptable for the variable. 
Note that the default\n> > value must pass + * the validation.\n> > + */\n> > +void\n> > +optionsSpecSetAddString(options_spec_set * spec_set, const char *name,\n> > const char *desc, +\t\t LOCKMODE lockmode, option_spec_flags \nflags, int\n> > struct_offset, +\t\t\t\t const char *default_val, \nvalidate_string_option\n> > validator) +{\n> > +\toption_spec_string *spec_set_item;\n> > +\n> > +\t/* make sure the validator/default combination is sane */\n> > +\tif (validator)\n> > +\t\t(validator) (default_val);\n> > +\n> > +\tspec_set_item = (option_spec_string *)\n> > +\t\tallocateOptionSpec(OPTION_TYPE_STRING, name, desc, lockmode,\n> > +\t\t\t\t\t\t\t\t flags, \nstruct_offset);\n> > +\tspec_set_item->validate_cb = validator;\n> > +\n> > +\tif (default_val)\n> > +\t\tspec_set_item->default_val = \nMemoryContextStrdup(TopMemoryContext,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\tdefault_val);\n> > +\telse\n> > +\t\tspec_set_item->default_val = NULL;\n> > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > +}\n> > +\n> > +\n> > +/*\n> > + * Options transform functions\n> > + */\n> > +\n> > +/* FIXME this comment should be updated\n> > + * Option values exists in five representations: DefList, TextArray,\n> > Values and + * Bytea:\n> > + *\n> > + * DefList: Is a List of DefElem structures, that comes from syntax\n> > analyzer. + * It can be transformed to Values representation for further\n> > parsing and + * validating\n> > + *\n> > + * Values: A List of option_value structures. Is divided into two\n> > subclasses: + * RawValues, when values are already transformed from\n> > DefList or TextArray, + * but not parsed yet. (In this case you should\n> > use raw_name and raw_value + * structure members to see option content).\n> > ParsedValues (or just simple + * Values) is crated after finding a\n> > definition for this option in a spec_set + * and after parsing of the raw\n> > value. 
For ParsedValues content is stored in + * values structure member,\n> > and name can be taken from option definition in gen + * structure member.\n> > Actually Value list can have both Raw and Parsed values, + * as we do\n> > not validate options that came from database, and db option that + * does\n> > not exist in spec_set is just ignored, and kept as RawValues + *\n> > + * TextArray: The representation in which options for existing object\n> > comes + * and goes from/to database; for example from\n> > pg_class.reloptions. It is a + * plain TEXT[] db object with name=value\n> > text inside. This representation can + * be transformed into Values for\n> > further processing, using options spec_set. + *\n> > + * Bytea: Is a binary representation of options. Each object that has\n> > code that + * uses options, should create a C-structure for this options,\n> > with varlen + * 4-byte header in front of the data; all items of options\n> > spec_set should have + * an offset of a corresponding binary data in this\n> > structure, so transform + * function can put this data in the correct\n> > place. One can transform options + * data from values representation into\n> > Bytea, using spec_set data, and then use + * it as a usual Datum object,\n> > when needed. 
This Datum should be cached + * somewhere (for example in\n> > rel->rd_options for relations) when object that + * has option is loaded\n> > from db.\n> > + */\n> > +\n> > +\n> > +/* optionsDefListToRawValues\n> > + *\t\tConverts option values that came from syntax analyzer \n(DefList) into\n> > + *\t\tValues List.\n> > + *\n> > + * No parsing is done here except for checking that RESET syntax is\n> > correct + * (syntax analyzer do not see difference between SET and RESET\n> > cases, we + * should treat it here manually\n> > + */\n> > +static List *\n> > +optionsDefListToRawValues(List *defList, options_parse_mode parse_mode)\n> > +{\n> > +\tListCell *cell;\n> > +\tList\t *result = NIL;\n> > +\n> > +\tforeach(cell, defList)\n> > +\t{\n> > +\t\toption_value *option_dst;\n> > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > +\t\tchar\t *value;\n> > +\n> > +\t\toption_dst = palloc(sizeof(option_value));\n> > +\n> > +\t\tif (def->defnamespace)\n> > +\t\t{\n> > +\t\t\toption_dst->namespace = palloc(strlen(def-\n>defnamespace) + 1);\n> > +\t\t\tstrcpy(option_dst->namespace, def->defnamespace);\n> > +\t\t}\n> > +\t\telse\n> > +\t\t{\n> > +\t\t\toption_dst->namespace = NULL;\n> > +\t\t}\n> > +\t\toption_dst->raw_name = palloc(strlen(def->defname) + 1);\n> > +\t\tstrcpy(option_dst->raw_name, def->defname);\n> > +\n> > +\t\tif (parse_mode & OPTIONS_PARSE_MODE_FOR_RESET)\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * If this option came from RESET statement we should \nthrow error\n> > +\t\t\t * it it brings us name=value data, as syntax \nanalyzer do not\n> > +\t\t\t * prevent it\n> > +\t\t\t */\n> > +\t\t\tif (def->arg != NULL)\n> > +\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\n(errcode(ERRCODE_SYNTAX_ERROR),\n> > +\t\t\t\t\terrmsg(\"RESET must not include values \nfor parameters\")));\n> > +\n> > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_FOR_RESET;\n> > +\t\t}\n> > +\t\telse\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * For SET statement we should treat (name) \nexpression 
as if it is\n> > +\t\t\t * actually (name=true) so do it here manually. In \nother cases\n> > +\t\t\t * just use value as we should use it\n> > +\t\t\t */\n> > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_RAW;\n> > +\t\t\tif (def->arg != NULL)\n> > +\t\t\t\tvalue = defGetString(def);\n> > +\t\t\telse\n> > +\t\t\t\tvalue = \"true\";\n> > +\t\t\toption_dst->raw_value = palloc(strlen(value) + 1);\n> > +\t\t\tstrcpy(option_dst->raw_value, value);\n> > +\t\t}\n> > +\n> > +\t\tresult = lappend(result, option_dst);\n> > +\t}\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * optionsValuesToTextArray\n> > + *\t\tConverts List of option_values into TextArray\n> > + *\n> > + *\tConvertation is made to put options into database (e.g. in\n> > + *\tpg_class.reloptions for all relation options)\n> > + */\n> > +\n> > +Datum\n> > +optionsValuesToTextArray(List *options_values)\n> > +{\n> > +\tArrayBuildState *astate = NULL;\n> > +\tListCell *cell;\n> > +\tDatum\t\tresult;\n> > +\n> > +\tforeach(cell, options_values)\n> > +\t{\n> > +\t\toption_value *option = (option_value *) lfirst(cell);\n> > +\t\tconst char *name;\n> > +\t\tchar\t *value;\n> > +\t\ttext\t *t;\n> > +\t\tint\t\t\tlen;\n> > +\n> > +\t\t/*\n> > +\t\t * Raw value were not cleared while parsing, so instead of \nconverting\n> > +\t\t * it back, just use it to store value as text\n> > +\t\t */\n> > +\t\tvalue = option->raw_value;\n> > +\n> > +\t\tAssert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> > +\n> > +\t\t/*\n> > +\t\t * Name will be taken from option definition, if option were \nparsed or\n> > +\t\t * from raw_name if option were not parsed for some reason\n> > +\t\t */\n> > +\t\tif (option->status == OPTION_VALUE_STATUS_PARSED)\n> > +\t\t\tname = option->gen->name;\n> > +\t\telse\n> > +\t\t\tname = option->raw_name;\n> > +\n> > +\t\t/*\n> > +\t\t * Now build \"name=value\" string and append it to the array\n> > +\t\t */\n> > +\t\tlen = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> > +\t\tt = (text 
*) palloc(len + 1);\n> > +\t\tSET_VARSIZE(t, len);\n> > +\t\tsprintf(VARDATA(t), \"%s=%s\", name, value);\n> > +\t\tastate = accumArrayResult(astate, PointerGetDatum(t), false,\n> > +\t\t\t\t\t\t\t\t TEXTOID, \nCurrentMemoryContext);\n> > +\t}\n> > +\tif (astate)\n> > +\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > +\telse\n> > +\t\tresult = (Datum) 0;\n> > +\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * optionsTextArrayToRawValues\n> > + *\t\tConverts options from TextArray format into RawValues list.\n> > + *\n> > + *\tThis function is used to convert options data that comes from \ndatabase\n> > to + *\tList of option_values, for further parsing, and, in the case of\n> > ALTER + *\tcommand, for merging with new option values.\n> > + */\n> > +List *\n> > +optionsTextArrayToRawValues(Datum array_datum)\n> > +{\n> > +\tList\t *result = NIL;\n> > +\n> > +\tif (PointerIsValid(DatumGetPointer(array_datum)))\n> > +\t{\n> > +\t\tArrayType *array = DatumGetArrayTypeP(array_datum);\n> > +\t\tDatum\t *options;\n> > +\t\tint\t\t\tnoptions;\n> > +\t\tint\t\t\ti;\n> > +\n> > +\t\tdeconstruct_array(array, TEXTOID, -1, false, 'i',\n> > +\t\t\t\t\t\t &options, NULL, &noptions);\n> > +\n> > +\t\tfor (i = 0; i < noptions; i++)\n> > +\t\t{\n> > +\t\t\toption_value *option_dst;\n> > +\t\t\tchar\t *text_str = VARDATA(options[i]);\n> > +\t\t\tint\t\t\ttext_len = \nVARSIZE(options[i]) - VARHDRSZ;\n> > +\t\t\tint\t\t\ti;\n> > +\t\t\tint\t\t\tname_len = -1;\n> > +\t\t\tchar\t *name;\n> > +\t\t\tint\t\t\traw_value_len;\n> > +\t\t\tchar\t *raw_value;\n> > +\n> > +\t\t\t/*\n> > +\t\t\t * Find position of '=' sign and treat id as a \nseparator between\n> > +\t\t\t * name and value in \"name=value\" item\n> > +\t\t\t */\n> > +\t\t\tfor (i = 0; i < text_len; i = i + pg_mblen(text_str))\n> > +\t\t\t{\n> > +\t\t\t\tif (text_str[i] == '=')\n> > +\t\t\t\t{\n> > +\t\t\t\t\tname_len = i;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\t\t\t}\n> > +\t\t\tAssert(name_len >= 
1);\t\t/* Just in case \n*/\n> > +\n> > +\t\t\traw_value_len = text_len - name_len - 1;\n> > +\n> > +\t\t\t/*\n> > +\t\t\t * Copy name from src\n> > +\t\t\t */\n> > +\t\t\tname = palloc(name_len + 1);\n> > +\t\t\tmemcpy(name, text_str, name_len);\n> > +\t\t\tname[name_len] = '\\0';\n> > +\n> > +\t\t\t/*\n> > +\t\t\t * Copy value from src\n> > +\t\t\t */\n> > +\t\t\traw_value = palloc(raw_value_len + 1);\n> > +\t\t\tmemcpy(raw_value, text_str + name_len + 1, \nraw_value_len);\n> > +\t\t\traw_value[raw_value_len] = '\\0';\n> > +\n> > +\t\t\t/*\n> > +\t\t\t * Create new option_value item\n> > +\t\t\t */\n> > +\t\t\toption_dst = palloc(sizeof(option_value));\n> > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_RAW;\n> > +\t\t\toption_dst->raw_name = name;\n> > +\t\t\toption_dst->raw_value = raw_value;\n> > +\t\t\toption_dst->namespace = NULL;\n> > +\n> > +\t\t\tresult = lappend(result, option_dst);\n> > +\t\t}\n> > +\t}\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * optionsMergeOptionValues\n> > + *\t\tMerges two lists of option_values into one list\n> > + *\n> > + * This function is used to merge two Values list into one. It is used\n> > for all + * kinds of ALTER commands when existing options are\n> > merged|replaced with new + * options list. This function also process\n> > RESET variant of ALTER command. It + * merges two lists as usual, and\n> > then removes all items with RESET flag on. 
+ *\n> > + * Both incoming lists will be destroyed while merging\n> > + */\n> > +static List *\n> > +optionsMergeOptionValues(List *old_options, List *new_options)\n> > +{\n> > +\tList\t *result = NIL;\n> > +\tListCell *old_cell;\n> > +\tListCell *new_cell;\n> > +\n> > +\t/*\n> > +\t * First add to result all old options that are not mentioned in new\n> > list\n> > +\t */\n> > +\tforeach(old_cell, old_options)\n> > +\t{\n> > +\t\tbool\t\tfound;\n> > +\t\tconst char *old_name;\n> > +\t\toption_value *old_option;\n> > +\n> > +\t\told_option = (option_value *) lfirst(old_cell);\n> > +\t\tif (old_option->status == OPTION_VALUE_STATUS_PARSED)\n> > +\t\t\told_name = old_option->gen->name;\n> > +\t\telse\n> > +\t\t\told_name = old_option->raw_name;\n> > +\n> > +\t\t/*\n> > +\t\t * Looking for a new option with same name\n> > +\t\t */\n> > +\t\tfound = false;\n> > +\t\tforeach(new_cell, new_options)\n> > +\t\t{\n> > +\t\t\toption_value *new_option;\n> > +\t\t\tconst char *new_name;\n> > +\n> > +\t\t\tnew_option = (option_value *) lfirst(new_cell);\n> > +\t\t\tif (new_option->status == OPTION_VALUE_STATUS_PARSED)\n> > +\t\t\t\tnew_name = new_option->gen->name;\n> > +\t\t\telse\n> > +\t\t\t\tnew_name = new_option->raw_name;\n> > +\n> > +\t\t\tif (strcmp(new_name, old_name) == 0)\n> > +\t\t\t{\n> > +\t\t\t\tfound = true;\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\tif (!found)\n> > +\t\t\tresult = lappend(result, old_option);\n> > +\t}\n> > +\t/*\n> > +\t * Now add all to result all new options that are not designated for\n> > reset +\t */\n> > +\tforeach(new_cell, new_options)\n> > +\t{\n> > +\t\toption_value *new_option;\n> > +\t\tnew_option = (option_value *) lfirst(new_cell);\n> > +\n> > +\t\tif(new_option->status != OPTION_VALUE_STATUS_FOR_RESET)\n> > +\t\t\tresult = lappend(result, new_option);\n> > +\t}\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * optionsDefListValdateNamespaces\n> > + *\t\tFunction checks that all options represented as DefList 
has \nno\n> > + *\t\tnamespaces or have namespaces only from allowed list\n> > + *\n> > + * Function accept options as DefList and NULL terminated list of allowed\n> > + * namespaces. It throws an error if not proper namespace was found.\n> > + *\n> > + * This function actually used only for tables with it's toast. namespace\n> > + */\n> > +void\n> > +optionsDefListValdateNamespaces(List *defList, char **allowed_namespaces)\n> > +{\n> > +\tListCell *cell;\n> > +\n> > +\tforeach(cell, defList)\n> > +\t{\n> > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > +\n> > +\t\t/*\n> > +\t\t * Checking namespace only for options that have namespaces. \nOptions\n> > +\t\t * with no namespaces are always accepted\n> > +\t\t */\n> > +\t\tif (def->defnamespace)\n> > +\t\t{\n> > +\t\t\tbool\t\tfound = false;\n> > +\t\t\tint\t\t\ti = 0;\n> > +\n> > +\t\t\twhile (allowed_namespaces[i])\n> > +\t\t\t{\n> > +\t\t\t\tif (strcmp(def->defnamespace,\n> > +\t\t\t\t\t\t\t\t \nallowed_namespaces[i]) == 0)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tfound = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\t\t\t\ti++;\n> > +\t\t\t}\n> > +\t\t\tif (!found)\n> > +\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t errmsg(\"unrecognized \nparameter namespace \\\"%s\\\"\",\n> > +\t\t\t\t\t\t\t\tdef-\n>defnamespace)));\n> > +\t\t}\n> > +\t}\n> > +}\n> > +\n> > +/*\n> > + * optionsDefListFilterNamespaces\n> > + *\t\tIterates over DefList, choose items with specified namespace \nand adds\n> > + *\t\tthem to a result List\n> > + *\n> > + * This function does not destroy source DefList but does not create\n> > copies + * of List nodes.\n> > + * It is actually used only for tables, in order to split toast and heap\n> > + * reloptions, so each one can be stored in on it's own pg_class record\n> > + */\n> > +List *\n> > +optionsDefListFilterNamespaces(List *defList, const char *namespace)\n> > +{\n> > +\tListCell *cell;\n> > +\tList\t *result = NIL;\n> > +\n> > 
+\tforeach(cell, defList)\n> > +\t{\n> > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > +\n> > +\t\tif ((!namespace && !def->defnamespace) ||\n> > +\t\t\t(namespace && def->defnamespace &&\n> > +\t\t\t strcmp(namespace, def->defnamespace) == 0))\n> > +\t\t{\n> > +\t\t\tresult = lappend(result, def);\n> > +\t\t}\n> > +\t}\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * optionsTextArrayToDefList\n> > + *\t\tConvert the text-array format of reloptions into a List of \nDefElem.\n> > + */\n> > +List *\n> > +optionsTextArrayToDefList(Datum options)\n> > +{\n> > +\tList\t *result = NIL;\n> > +\tArrayType *array;\n> > +\tDatum\t *optiondatums;\n> > +\tint\t\t\tnoptions;\n> > +\tint\t\t\ti;\n> > +\n> > +\t/* Nothing to do if no options */\n> > +\tif (!PointerIsValid(DatumGetPointer(options)))\n> > +\t\treturn result;\n> > +\n> > +\tarray = DatumGetArrayTypeP(options);\n> > +\n> > +\tdeconstruct_array(array, TEXTOID, -1, false, 'i',\n> > +\t\t\t\t\t &optiondatums, NULL, &noptions);\n> > +\n> > +\tfor (i = 0; i < noptions; i++)\n> > +\t{\n> > +\t\tchar\t *s;\n> > +\t\tchar\t *p;\n> > +\t\tNode\t *val = NULL;\n> > +\n> > +\t\ts = TextDatumGetCString(optiondatums[i]);\n> > +\t\tp = strchr(s, '=');\n> > +\t\tif (p)\n> > +\t\t{\n> > +\t\t\t*p++ = '\\0';\n> > +\t\t\tval = (Node *) makeString(pstrdup(p));\n> > +\t\t}\n> > +\t\tresult = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> > +\t}\n> > +\n> > +\treturn result;\n> > +}\n> > +\n> > +/* FIXME write comment here */\n> > +\n> > +Datum\n> > +optionsDefListToTextArray(List *defList)\n> > +{\n> > +\tListCell *cell;\n> > +\tDatum\t\tresult;\n> > +\tArrayBuildState *astate = NULL;\n> > +\n> > +\tforeach(cell, defList)\n> > +\t{\n> > +\t\tDefElem\t *def = (DefElem *) lfirst(cell);\n> > +\t\tconst char *name = def->defname;\n> > +\t\tconst char *value;\n> > +\t\ttext\t *t;\n> > +\t\tint\t\t\tlen;\n> > +\n> > +\t\tif (def->arg != NULL)\n> > +\t\t\tvalue = defGetString(def);\n> > +\t\telse\n> > +\t\t\tvalue = 
\"true\";\n> > +\n> > +\t\tif (def->defnamespace)\n> > +\t\t{\n> > +\t\t\tAssert(false); /* Should not get here */\n> > +\t\t\t/* This function is used for backward compatibility \nin the place where\n> > namespaces are not allowed */ +\t\t\treturn (Datum) 0;\n> > +\t\t}\n> > +\t\tlen = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> > +\t\tt = (text *) palloc(len + 1);\n> > +\t\tSET_VARSIZE(t, len);\n> > +\t\tsprintf(VARDATA(t), \"%s=%s\", name, value);\n> > +\t\tastate = accumArrayResult(astate, PointerGetDatum(t), false,\n> > +\t\t\t\t\t\t\t\t TEXTOID, \nCurrentMemoryContext);\n> > +\n> > +\t}\n> > +\tif (astate)\n> > +\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > +\telse\n> > +\t\tresult = (Datum) 0;\n> > +\treturn result;\n> > +}\n> > +\n> > +\n> > +/*\n> > + * optionsParseRawValues\n> > + *\t\tParses and validates (if proper flag is set) option_values. \nAs a\n> > result + *\t\tcaller will get the list of parsed (or partly \nparsed)\n> > option_values + *\n> > + * This function is used in cases when the caller gets raw values from db or\n> > + * syntax and wants to parse them.\n> > + * This function uses option_spec_set to get information about how each\n> > option + * should be parsed.\n> > + * If validate mode is off and the function finds an option that does not have\n> > a proper + * option_spec_set entry, this option is kept unparsed (if some\n> > garbage came from + * the DB, we should put it back there)\n> > + *\n> > + * This function destroys incoming list.\n> > + */\n> > +List *\n> > +optionsParseRawValues(List *raw_values, options_spec_set * spec_set,\n> > +\t\t\t\t\t options_parse_mode mode)\n> > +{\n> > +\tListCell *cell;\n> > +\tList\t *result = NIL;\n> > +\tbool\t *is_set;\n> > +\tint\t\t\ti;\n> > +\tbool\t\tvalidate = mode & OPTIONS_PARSE_MODE_VALIDATE;\n> > +\tbool\t\tfor_alter = mode & OPTIONS_PARSE_MODE_FOR_ALTER;\n> > +\n> > +\n> > +\tis_set = palloc0(sizeof(bool) * spec_set->num);\n> > +\tforeach(cell, raw_values)\n> > +\t{\n> >
+\t\toption_value *option = (option_value *) lfirst(cell);\n> > +\t\tbool\t\tfound = false;\n> > +\t\tbool\t\tskip = false;\n> > +\n> > +\n> > +\t\tif (option->status == OPTION_VALUE_STATUS_PARSED)\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * This can happen while ALTER, when new values were \nalready\n> > +\t\t\t * parsed, but old values merged from DB are still \nraw\n> > +\t\t\t */\n> > +\t\t\tresult = lappend(result, option);\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\t\tif (validate && option->namespace && (!spec_set->namespace ||\n> > +\t\t\t\t strcmp(spec_set->namespace, option-\n>namespace) != 0))\n> > +\t\t{\n> > +\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t errmsg(\"unrecognized parameter \nnamespace \\\"%s\\\"\",\n> > +\t\t\t\t\t\t\toption->namespace)));\n> > +\t\t}\n> > +\n> > +\t\tfor (i = 0; i < spec_set->num; i++)\n> > +\t\t{\n> > +\t\t\toption_spec_basic *definition = spec_set-\n>definitions[i];\n> > +\n> > +\t\t\tif (strcmp(option->raw_name,\n> > +\t\t\t\t\t\t\t definition->name) == \n0)\n> > +\t\t\t{\n> > +\t\t\t\t/*\n> > +\t\t\t\t * Skip option with \"ignore\" flag, as it is \nprocessed\n> > +\t\t\t\t * somewhere else. (WITH OIDS special case)\n> > +\t\t\t\t */\n> > +\t\t\t\tif (definition->flags & \nOPTION_DEFINITION_FLAG_IGNORE)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tfound = true;\n> > +\t\t\t\t\tskip = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\n> > +\t\t\t\t/*\n> > +\t\t\t\t * Reject option as if it was not in \nspec_set. 
Needed for cases\n> > +\t\t\t\t * when option should have default value, but \nshould not be\n> > +\t\t\t\t * changed\n> > +\t\t\t\t */\n> > +\t\t\t\tif (definition->flags & \nOPTION_DEFINITION_FLAG_REJECT)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tfound = false;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\n> > +\t\t\t\tif (validate && is_set[i])\n> > +\t\t\t\t{\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" \nspecified more than once\",\n> > +\t\t\t\t\t\t\t\t option-\n>raw_name)));\n> > +\t\t\t\t}\n> > +\t\t\t\tif ((for_alter) &&\n> > +\t\t\t\t\t(definition->flags & \nOPTION_DEFINITION_FLAG_FORBID_ALTER))\n> > +\t\t\t\t{\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t errmsg(\"changing parameter \n\\\"%s\\\" is not allowed\",\n> > +\t\t\t\t\t\t\t\t definition-\n>name)));\n> > +\t\t\t\t}\n> > +\t\t\t\tif (option->status == \nOPTION_VALUE_STATUS_FOR_RESET)\n> > +\t\t\t\t{\n> > +\t\t\t\t\t/*\n> > +\t\t\t\t\t * For RESET options do not need \nfurther processing so\n> > +\t\t\t\t\t * mark it found and stop searching\n> > +\t\t\t\t\t */\n> > +\t\t\t\t\tfound = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\t\t\t\tpfree(option->raw_name);\n> > +\t\t\t\toption->raw_name = NULL;\n> > +\t\t\t\toption->gen = definition;\n> > +\t\t\t\tparse_one_option(option, NULL, -1, validate);\n> > +\t\t\t\tis_set[i] = true;\n> > +\t\t\t\tfound = true;\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\tif (!found)\n> > +\t\t{\n> > +\t\t\tif (validate)\n> > +\t\t\t{\n> > +\t\t\t\tif (option->namespace)\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t\t errmsg(\"unrecognized \nparameter \\\"%s.%s\\\"\",\n> > +\t\t\t\t\t\t\t\t\t\noption->namespace, option->raw_name)));\n> > +\t\t\t\telse\n> > +\t\t\t\t\tereport(ERROR,\n> > 
+\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t\t errmsg(\"unrecognized \nparameter \\\"%s\\\"\",\n> > +\t\t\t\t\t\t\t\t\t\noption->raw_name)));\n> > +\t\t\t} else\n> > +\t\t\t{\n> > +\t\t\t\t/* RESET is always in non-validating mode, \nunknown names should\n> > +\t\t\t\t * be ignored. This is traditional behaviour \nof postgres.\n> > +\t\t\t\t * FIXME maybe it should be changed someday\n> > +\t\t\t\t */\n> > +\t\t\t\tif (option->status == \nOPTION_VALUE_STATUS_FOR_RESET)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tskip = true;\n> > +\t\t\t\t}\n> > +\t\t\t}\n> > +\t\t\t/*\n> > +\t\t\t * In other cases, if we are parsing not in validate \nmode, then\n> > +\t\t\t * we should keep the unknown node, because non-validate \nmode is for\n> > +\t\t\t * data that is already in the DB and should not be \nchanged after\n> > +\t\t\t * altering other entries\n> > +\t\t\t */\n> > +\t\t}\n> > +\t\tif (!skip)\n> > +\t\t\tresult = lappend(result, option);\n> > +\t}\n> > +\treturn result;\n> > +}\n> > +\n> > +/*\n> > + * parse_one_option\n> > + *\n> > + *\t\tSubroutine for optionsParseRawValues, to parse and validate \na\n> > + *\t\tsingle option's value\n> > + */\n> > +static void\n> > +parse_one_option(option_value * option, const char *text_str, int\n> > text_len, +\t\t\t\t bool validate)\n> > +{\n> > +\tchar\t *value;\n> > +\tbool\t\tparsed;\n> > +\n> > +\tvalue = option->raw_value;\n> > +\n> > +\tswitch (option->gen->type)\n> > +\t{\n> > +\t\tcase OPTION_TYPE_BOOL:\n> > +\t\t\t{\n> > +\t\t\t\tparsed = parse_bool(value, &option-\n>values.bool_val);\n> > +\t\t\t\tif (validate && !parsed)\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\terrmsg(\"invalid value for \nboolean option \\\"%s\\\": %s\",\n> > +\t\t\t\t\t\t\t option->gen->name, \nvalue)));\n> > +\t\t\t}\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_INT:\n> > +\t\t\t{\n> > +\t\t\t\toption_spec_int *optint =\n> > +\t\t\t\t(option_spec_int *) 
option->gen;\n> > +\n> > +\t\t\t\tparsed = parse_int(value, &option-\n>values.int_val, 0, NULL);\n> > +\t\t\t\tif (validate && !parsed)\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\terrmsg(\"invalid value for \ninteger option \\\"%s\\\": %s\",\n> > +\t\t\t\t\t\t\t option->gen->name, \nvalue)));\n> > +\t\t\t\tif (validate && (option->values.int_val < \noptint->min ||\n> > +\t\t\t\t\t\t\t\t option-\n>values.int_val > optint->max))\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t errmsg(\"value %s out of \nbounds for option \\\"%s\\\"\",\n> > +\t\t\t\t\t\t\t\t value, \noption->gen->name),\n> > +\t\t\t\t\t errdetail(\"Valid values are between \n\\\"%d\\\" and \\\"%d\\\".\",\n> > +\t\t\t\t\t\t\t optint->min, \noptint->max)));\n> > +\t\t\t}\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_REAL:\n> > +\t\t\t{\n> > +\t\t\t\toption_spec_real *optreal =\n> > +\t\t\t\t(option_spec_real *) option->gen;\n> > +\n> > +\t\t\t\tparsed = parse_real(value, &option-\n>values.real_val, 0, NULL);\n> > +\t\t\t\tif (validate && !parsed)\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for floating point option \\\"%s\\\": %s\",\n> > +\t\t\t\t\t\t\t\t\t\noption->gen->name, value)));\n> > +\t\t\t\tif (validate && (option->values.real_val < \noptreal->min ||\n> > +\t\t\t\t\t\t\t\t option-\n>values.real_val > optreal->max))\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t errmsg(\"value %s out of \nbounds for option \\\"%s\\\"\",\n> > +\t\t\t\t\t\t\t\t value, \noption->gen->name),\n> > +\t\t\t\t\t errdetail(\"Valid values are between \n\\\"%f\\\" and \\\"%f\\\".\",\n> > +\t\t\t\t\t\t\t optreal->min, \noptreal->max)));\n> > +\t\t\t}\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_ENUM:\n> > +\t\t\t{\n> > 
+\t\t\t\toption_spec_enum *optenum =\n> > +\t\t\t\t\t\t\t\t\t\t\n(option_spec_enum *) option->gen;\n> > +\t\t\t\topt_enum_elt_def *elt;\n> > +\t\t\t\tparsed = false;\n> > +\t\t\t\tfor (elt = optenum->members; elt->string_val; \nelt++)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tif (strcmp(value, elt->string_val) == \n0)\n> > +\t\t\t\t\t{\n> > +\t\t\t\t\t\toption->values.enum_val = \nelt->symbol_val;\n> > +\t\t\t\t\t\tparsed = true;\n> > +\t\t\t\t\t\tbreak;\n> > +\t\t\t\t\t}\n> > +\t\t\t\t}\n> > +\t\t\t\tif (!parsed)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for enum option \\\"%s\\\": %s\",\n> > +\t\t\t\t\t\t\t\t\t\noption->gen->name, value),\n> > +\t\t\t\t\t\t\t optenum->detailmsg ?\n> > +\t\t\t\t\t\t\t \nerrdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n> > +\t\t\t\t}\n> > +\t\t\t}\n> > +\t\t\tbreak;\n> > +\t\tcase OPTION_TYPE_STRING:\n> > +\t\t\t{\n> > +\t\t\t\toption_spec_string *optstring =\n> > +\t\t\t\t(option_spec_string *) option->gen;\n> > +\n> > +\t\t\t\toption->values.string_val = value;\n> > +\t\t\t\tif (validate && optstring->validate_cb)\n> > +\t\t\t\t\t(optstring->validate_cb) (value);\n> > +\t\t\t\tparsed = true;\n> > +\t\t\t}\n> > +\t\t\tbreak;\n> > +\t\tdefault:\n> > +\t\t\telog(ERROR, \"unsupported reloption type %d\", option-\n>gen->type);\n> > +\t\t\tparsed = true;\t\t/* quiet compiler */\n> > +\t\t\tbreak;\n> > +\t}\n> > +\n> > +\tif (parsed)\n> > +\t\toption->status = OPTION_VALUE_STATUS_PARSED;\n> > +\n> > +}\n> > +\n> > +/*\n> > + * optionsAllocateBytea\n> > + *\t\tAllocates memory for bytea options representation\n> > + *\n> > + * Function allocates memory for bytea structure of an option, plus adds\n> > space + * for values of string options. 
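The allocation scheme this comment starts to describe — one contiguous chunk holding the fixed struct followed by the string payloads, with only offsets kept in the struct, so the cache can relocate the whole blob with a flat copy — can be illustrated with a minimal standalone sketch. fixed_part and pack_with_string below are hypothetical names using plain malloc, not the patch's palloc-based code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fixed part of a bytea-style options struct. */
typedef struct
{
	int			fillfactor;
	int			str_off;		/* offset of string value, or -1 if unset */
} fixed_part;

/*
 * Allocate one chunk holding the fixed struct plus the string payload,
 * store the string right after the struct, and keep only its offset in
 * the struct, so the whole thing can be moved with a single memcpy.
 */
static fixed_part *
pack_with_string(int fillfactor, const char *strval)
{
	size_t		len = strval ? strlen(strval) + 1 : 0;
	fixed_part *p = malloc(sizeof(fixed_part) + len);

	p->fillfactor = fillfactor;
	p->str_off = strval ? (int) sizeof(fixed_part) : -1;
	if (strval)
		memcpy((char *) p + p->str_off, strval, len);
	return p;
}
```

The point of the offset (rather than a pointer) is exactly what the comment says next: the cache code copies the bytea between memory contexts without knowing its internal structure.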
We should keep all data including\n> > string + * values in the same memory chunk, because Cache code copies\n> > bytea option + * data from one MemoryContext to another without knowing\n> > about its internal + * structure, so it would not be able to copy string\n> > values if they are outside + * of bytea memory chunk.\n> > + */\n> > +static void *\n> > +optionsAllocateBytea(options_spec_set * spec_set, List *options)\n> > +{\n> > +\tSize\t\tsize;\n> > +\tint\t\t\ti;\n> > +\tListCell *cell;\n> > +\tint\t\t\tlength;\n> > +\tvoid\t *res;\n> > +\n> > +\tsize = spec_set->struct_size;\n> > +\n> > +\t/* Calculate size needed to store all string values for this option \n*/\n> > +\tfor (i = 0; i < spec_set->num; i++)\n> > +\t{\n> > +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> > +\t\tbool\t\tfound = false;\n> > +\t\toption_value *option;\n> > +\n> > +\t\t/* Not interested in non-string options, skipping */\n> > +\t\tif (definition->type != OPTION_TYPE_STRING)\n> > +\t\t\tcontinue;\n> > +\n> > +\t\t/*\n> > +\t\t * Trying to find option_value that references definition \nspec_set\n> > +\t\t * entry\n> > +\t\t */\n> > +\t\tforeach(cell, options)\n> > +\t\t{\n> > +\t\t\toption = (option_value *) lfirst(cell);\n> > +\t\t\tif (option->status == OPTION_VALUE_STATUS_PARSED &&\n> > +\t\t\t\tstrcmp(option->gen->name, definition->name) == \n0)\n> > +\t\t\t{\n> > +\t\t\t\tfound = true;\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\tif (found)\n> > +\t\t\t/* If found, its value will be stored */\n> > +\t\t\tlength = strlen(option->values.string_val) + 1;\n> > +\t\telse\n> > +\t\t\t/* If not found, then there would be default value \nthere */\n> > +\t\tif (((option_spec_string *) definition)->default_val)\n> > +\t\t\tlength = strlen(\n> > +\t\t\t\t ((option_spec_string *) definition)-\n>default_val) + 1;\n> > +\t\telse\n> > +\t\t\tlength = 0;\n> > +\t\t/* Add total length of all string values to basic size */\n> > +\t\tsize += length;\n> > +\t}\n> > +\n> >
+\tres = palloc0(size);\n> > +\tSET_VARSIZE(res, size);\n> > +\treturn res;\n> > +}\n> > +\n> > +/*\n> > + * optionsValuesToBytea\n> > + *\t\tConverts options from List of option_values to binary bytea \nstructure\n> > + *\n> > + * Conversion goes according to options_spec_set: each spec_set item\n> > + * has offset value, and option value in binary mode is written to the\n> > + * structure with that offset.\n> > + *\n> > + * String values are a special case. Memory for bytea structure is\n> > allocated + * by optionsAllocateBytea which adds some more space for\n> > string values to + * the size of original structure. All string values\n> > are copied there and + * inside the bytea structure an offset to that\n> > value is kept.\n> > + *\n> > + */\n> > +static bytea *\n> > +optionsValuesToBytea(List *options, options_spec_set * spec_set)\n> > +{\n> > +\tchar\t *data;\n> > +\tchar\t *string_values_buffer;\n> > +\tint\t\t\ti;\n> > +\n> > +\tdata = optionsAllocateBytea(spec_set, options);\n> > +\n> > +\t/* place for string data starts right after original structure */\n> > +\tstring_values_buffer = data + spec_set->struct_size;\n> > +\n> > +\tfor (i = 0; i < spec_set->num; i++)\n> > +\t{\n> > +\t\toption_value *found = NULL;\n> > +\t\tListCell *cell;\n> > +\t\tchar\t *item_pos;\n> > +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> > +\n> > +\t\tif (definition->flags & OPTION_DEFINITION_FLAG_IGNORE)\n> > +\t\t\tcontinue;\n> > +\n> > +\t\t/* Calculate the position of the item inside the structure */\n> > +\t\titem_pos = data + definition->struct_offset;\n> > +\n> > +\t\t/* Looking for the corresponding option from options list */\n> > +\t\tforeach(cell, options)\n> > +\t\t{\n> > +\t\t\toption_value *option = (option_value *) lfirst(cell);\n> > +\n> > +\t\t\tif (option->status == OPTION_VALUE_STATUS_RAW)\n> > +\t\t\t\tcontinue;\t\t/* raw can come from db. 
\nJust ignore them then */\n> > +\t\t\tAssert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> > +\n> > +\t\t\tif (strcmp(definition->name, option->gen->name) == 0)\n> > +\t\t\t{\n> > +\t\t\t\tfound = option;\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\t/* writing to the proper position either option value or \ndefault val */\n> > +\t\tswitch (definition->type)\n> > +\t\t{\n> > +\t\t\tcase OPTION_TYPE_BOOL:\n> > +\t\t\t\t*(bool *) item_pos = found ?\n> > +\t\t\t\t\tfound->values.bool_val :\n> > +\t\t\t\t\t((option_spec_bool *) definition)-\n>default_val;\n> > +\t\t\t\tbreak;\n> > +\t\t\tcase OPTION_TYPE_INT:\n> > +\t\t\t\t*(int *) item_pos = found ?\n> > +\t\t\t\t\tfound->values.int_val :\n> > +\t\t\t\t\t((option_spec_int *) definition)-\n>default_val;\n> > +\t\t\t\tbreak;\n> > +\t\t\tcase OPTION_TYPE_REAL:\n> > +\t\t\t\t*(double *) item_pos = found ?\n> > +\t\t\t\t\tfound->values.real_val :\n> > +\t\t\t\t\t((option_spec_real *) definition)-\n>default_val;\n> > +\t\t\t\tbreak;\n> > +\t\t\tcase OPTION_TYPE_ENUM:\n> > +\t\t\t\t*(int *) item_pos = found ?\n> > +\t\t\t\t\tfound->values.enum_val :\n> > +\t\t\t\t\t((option_spec_enum *) definition)-\n>default_val;\n> > +\t\t\t\tbreak;\n> > +\n> > +\t\t\tcase OPTION_TYPE_STRING:\n> > +\t\t\t\t{\n> > +\t\t\t\t\t/*\n> > +\t\t\t\t\t * For string options: writing string \nvalue at the string\n> > +\t\t\t\t\t * buffer after the structure, and \nstoring and offset to\n> > +\t\t\t\t\t * that value\n> > +\t\t\t\t\t */\n> > +\t\t\t\t\tchar\t *value = NULL;\n> > +\n> > +\t\t\t\t\tif (found)\n> > +\t\t\t\t\t\tvalue = found-\n>values.string_val;\n> > +\t\t\t\t\telse\n> > +\t\t\t\t\t\tvalue = ((option_spec_string \n*) definition)\n> > +\t\t\t\t\t\t\t->default_val;\n> > +\t\t\t\t\t*(int *) item_pos = value ?\n> > +\t\t\t\t\t\tstring_values_buffer - data :\n> > +\t\t\t\t\t\t\nOPTION_STRING_VALUE_NOT_SET_OFFSET;\n> > +\t\t\t\t\tif (value)\n> > +\t\t\t\t\t{\n> > +\t\t\t\t\t\tstrcpy(string_values_buffer, \nvalue);\n> > 
+\t\t\t\t\t\tstring_values_buffer += \nstrlen(value) + 1;\n> > +\t\t\t\t\t}\n> > +\t\t\t\t}\n> > +\t\t\t\tbreak;\n> > +\t\t\tdefault:\n> > +\t\t\t\telog(ERROR, \"unsupported reloption type %d\",\n> > +\t\t\t\t\t definition->type);\n> > +\t\t\t\tbreak;\n> > +\t\t}\n> > +\t}\n> > +\treturn (void *) data;\n> > +}\n> > +\n> > +\n> > +/*\n> > + * transformOptions\n> > + *\t\tThis function is used by src/backend/commands/Xxxx in order \nto\n> > process\n> > + *\t\tnew option values, merge them with existing values (in the \ncase of\n> > + *\t\tALTER command) and prepare to put them [back] into DB\n> > + */\n> > +\n> > +Datum\n> > +transformOptions(options_spec_set * spec_set, Datum oldOptions,\n> > +\t\t\t\t List *defList, options_parse_mode \nparse_mode)\n> > +{\n> > +\tDatum\t\tresult;\n> > +\tList\t *new_values;\n> > +\tList\t *old_values;\n> > +\tList\t *merged_values;\n> > +\n> > +\t/*\n> > +\t * Parse and validate new values\n> > +\t */\n> > +\tnew_values = optionsDefListToRawValues(defList, parse_mode);\n> > +\tif (! (parse_mode & OPTIONS_PARSE_MODE_FOR_RESET))\n> > +\t{\n> > +\t\t/* FIXME: postgres usual behaviour was not to validate names \nthat\n> > +\t\t * came from RESET command. This behaviour should probably be \nchanged,\n> > +\t\t * I guess. But for now we keep it as it was.\n> > +\t\t */\n> > +\t\tparse_mode |= OPTIONS_PARSE_MODE_VALIDATE;\n> > +\t}\n> > +\tnew_values = optionsParseRawValues(new_values, spec_set, parse_mode);\n> > +\n> > +\t/*\n> > +\t * Old values exist in the case of ALTER commands. 
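Since transformOptions is the entry point that drives the ALTER-time merge, here is a minimal standalone sketch of the rule implemented by optionsMergeOptionValues quoted earlier: old options not named in the new list survive, and new options are appended unless marked for RESET. The array-based types (opt, merge_opts) are hypothetical simplifications, not the patch's List-based code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical simplified option record: a name plus a RESET marker. */
typedef struct
{
	const char *name;
	int			for_reset;		/* 1 = RESET item: drop it after merging */
} opt;

/*
 * Sketch of the merge rule: keep each old option whose name does not
 * appear in the new list, then append each new option that is not
 * marked for RESET.  Returns the number of entries written to "out".
 */
static int
merge_opts(const opt *oldopts, int n_old,
		   const opt *newopts, int n_new, opt *out)
{
	int			n = 0;

	for (int i = 0; i < n_old; i++)
	{
		int			shadowed = 0;

		for (int j = 0; j < n_new; j++)
			if (strcmp(oldopts[i].name, newopts[j].name) == 0)
				shadowed = 1;
		if (!shadowed)
			out[n++] = oldopts[i];
	}
	for (int j = 0; j < n_new; j++)
		if (!newopts[j].for_reset)
			out[n++] = newopts[j];
	return n;
}
```

Note how a RESET item removes the old value twice over: it shadows the old entry and is itself dropped, which matches the patch's behavior of parsing options only after the merge.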
Transform them to raw\n> > +\t * values and merge them with new_values, and parse it.\n> > +\t */\n> > +\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> > +\t{\n> > +\t\told_values = optionsTextArrayToRawValues(oldOptions);\n> > +\t\tmerged_values = optionsMergeOptionValues(old_values, \nnew_values);\n> > +\n> > +\t\t/*\n> > +\t\t * Parse options only after merging in order not to parse \noptions that\n> > +\t\t * would be removed by merging later\n> > +\t\t */\n> > +\t\tmerged_values = optionsParseRawValues(merged_values, \nspec_set, 0);\n> > +\t}\n> > +\telse\n> > +\t{\n> > +\t\tmerged_values = new_values;\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If we have postprocess_fun function defined in spec_set, then there\n> > +\t * might be some custom options checks there, with error throwing. So \nwe\n> > +\t * should do it here to throw these errors while CREATing or ALTERing\n> > +\t * options\n> > +\t */\n> > +\tif (spec_set->postprocess_fun)\n> > +\t{\n> > +\t\tbytea\t *data = optionsValuesToBytea(merged_values, \nspec_set);\n> > +\n> > +\t\tspec_set->postprocess_fun(data, true);\n> > +\t\tpfree(data);\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * Convert options to TextArray format so caller can store them into\n> > +\t * database\n> > +\t */\n> > +\tresult = optionsValuesToTextArray(merged_values);\n> > +\treturn result;\n> > +}\n> > +\n> > +\n> > +/*\n> > + * optionsTextArrayToBytea\n> > + *\t\tA meta-function that transforms options stored as TextArray \ninto\n> > binary + *\t\t(bytea) representation.\n> > + *\n> > + *\tThis function runs other transform functions that leads to the \ndesired\n> > + *\tresult in no-validation mode. 
This function is used by cache\n> > mechanism,\n> > + *\tin order to load and cache options when object itself is loaded and\n> > cached + */\n> > +bytea *\n> > +optionsTextArrayToBytea(options_spec_set * spec_set, Datum data, bool\n> > validate) +{\n> > +\tList\t *values;\n> > +\tbytea\t *options;\n> > +\n> > +\tvalues = optionsTextArrayToRawValues(data);\n> > +\tvalues = optionsParseRawValues(values, spec_set,\n> > +\t\t\t\t\t\t\t\tvalidate ? \nOPTIONS_PARSE_MODE_VALIDATE : 0);\n> > +\toptions = optionsValuesToBytea(values, spec_set);\n> > +\n> > +\tif (spec_set->postprocess_fun)\n> > +\t{\n> > +\t\tspec_set->postprocess_fun(options, false);\n> > +\t}\n> > +\treturn options;\n> > +}\n> > diff --git a/src/backend/access/common/relation.c\n> > b/src/backend/access/common/relation.c index 632d13c..49ad197 100644\n> > --- a/src/backend/access/common/relation.c\n> > +++ b/src/backend/access/common/relation.c\n> > @@ -65,9 +65,13 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n> > \n> > \t * If we didn't get the lock ourselves, assert that caller holds \none,\n> > \t * except in bootstrap mode where no locks are used.\n> > \t */\n> > \n> > -\tAssert(lockmode != NoLock ||\n> > -\t\t IsBootstrapProcessingMode() ||\n> > -\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> > +\n> > +// FIXME We need NoLock mode to get AM data when choosing Lock for\n> > +// attoptions is changed. 
See ProcessUtilitySlow problems comes from\n> > there\n> > +// This is a dirty hack, we need better solution for this case;\n> > +//\tAssert(lockmode != NoLock ||\n> > +//\t\t IsBootstrapProcessingMode() ||\n> > +//\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> > \n> > \t/* Make note that we've accessed a temporary relation */\n> > \tif (RelationUsesLocalBuffers(r))\n> > \n> > diff --git a/src/backend/access/common/reloptions.c\n> > b/src/backend/access/common/reloptions.c index b5602f5..29ab98a 100644\n> > --- a/src/backend/access/common/reloptions.c\n> > +++ b/src/backend/access/common/reloptions.c\n> > @@ -1,7 +1,7 @@\n> > \n> > /*-----------------------------------------------------------------------\n> > --\n> > \n> > *\n> > * reloptions.c\n> > \n> > - *\t Core support for relation options (pg_class.reloptions)\n> > + *\t Support for relation options (pg_class.reloptions)\n> > \n> > *\n> > * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> > * Portions Copyright (c) 1994, Regents of the University of California\n> > \n> > @@ -17,13 +17,10 @@\n> > \n> > #include <float.h>\n> > \n> > -#include \"access/gist_private.h\"\n> > -#include \"access/hash.h\"\n> > \n> > #include \"access/heaptoast.h\"\n> > #include \"access/htup_details.h\"\n> > \n> > -#include \"access/nbtree.h\"\n> > \n> > #include \"access/reloptions.h\"\n> > \n> > -#include \"access/spgist_private.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"catalog/pg_type.h\"\n> > #include \"commands/defrem.h\"\n> > #include \"commands/tablespace.h\"\n> > \n> > @@ -36,6 +33,7 @@\n> > \n> > #include \"utils/guc.h\"\n> > #include \"utils/memutils.h\"\n> > #include \"utils/rel.h\"\n> > \n> > +#include \"storage/bufmgr.h\"\n> > \n> > /*\n> > \n> > * Contents of pg_class.reloptions\n> > \n> > @@ -93,380 +91,8 @@\n> > \n> > * value has no effect until the next VACUUM, so no need for stronger\n> > lock.\n> > */\n> > \n> > -static relopt_bool boolRelOpts[] =\n> > -{\n> 
> -\t{\n> > -\t\t{\n> > -\t\t\t\"autosummarize\",\n> > -\t\t\t\"Enables automatic summarization on this BRIN \nindex\",\n> > -\t\t\tRELOPT_KIND_BRIN,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\tfalse\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_enabled\",\n> > -\t\t\t\"Enables autovacuum in this relation\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\ttrue\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"user_catalog_table\",\n> > -\t\t\t\"Declare a table as an additional catalog table, \ne.g. for the purpose\n> > of logical replication\", -\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\tfalse\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fastupdate\",\n> > -\t\t\t\"Enables \\\"fast update\\\" feature for this GIN \nindex\",\n> > -\t\t\tRELOPT_KIND_GIN,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\ttrue\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"security_barrier\",\n> > -\t\t\t\"View acts as a row security barrier\",\n> > -\t\t\tRELOPT_KIND_VIEW,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\tfalse\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"vacuum_truncate\",\n> > -\t\t\t\"Enables vacuum to truncate empty pages at the end \nof this table\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\ttrue\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"deduplicate_items\",\n> > -\t\t\t\"Enables \\\"deduplicate items\\\" feature for this \nbtree index\",\n> > -\t\t\tRELOPT_KIND_BTREE,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\ttrue\n> > -\t},\n> > -\t/* list terminator */\n> > -\t{{NULL}}\n> > -};\n> > -\n> > -static relopt_int intRelOpts[] =\n> > -{\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fillfactor\",\n> > -\t\t\t\"Packs table pages only to this percentage\",\n> > 
-\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\tHEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fillfactor\",\n> > -\t\t\t\"Packs btree index pages only to this percentage\",\n> > -\t\t\tRELOPT_KIND_BTREE,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fillfactor\",\n> > -\t\t\t\"Packs hash index pages only to this percentage\",\n> > -\t\t\tRELOPT_KIND_HASH,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\tHASH_DEFAULT_FILLFACTOR, HASH_MIN_FILLFACTOR, 100\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fillfactor\",\n> > -\t\t\t\"Packs gist index pages only to this percentage\",\n> > -\t\t\tRELOPT_KIND_GIST,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\tGIST_DEFAULT_FILLFACTOR, GIST_MIN_FILLFACTOR, 100\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"fillfactor\",\n> > -\t\t\t\"Packs spgist index pages only to this percentage\",\n> > -\t\t\tRELOPT_KIND_SPGIST,\n> > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \nto later\n> > -\t\t\t\t\t\t\t\t\t\t\n * inserts */\n> > -\t\t},\n> > -\t\tSPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_threshold\",\n> > -\t\t\t\"Minimum number of tuple updates or deletes prior to \nvacuum\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0, INT_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_insert_threshold\",\n> > -\t\t\t\"Minimum number of 
tuple inserts prior to vacuum, or \n-1 to disable\n> > insert vacuums\", -\t\t\tRELOPT_KIND_HEAP | \nRELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-2, -1, INT_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_analyze_threshold\",\n> > -\t\t\t\"Minimum number of tuple inserts, updates or deletes \nprior to\n> > analyze\",\n> > -\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0, INT_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_cost_limit\",\n> > -\t\t\t\"Vacuum cost amount available before napping, for \nautovacuum\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 1, 10000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_freeze_min_age\",\n> > -\t\t\t\"Minimum age at which VACUUM should freeze a table \nrow, for\n> > autovacuum\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0, 1000000000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_multixact_freeze_min_age\",\n> > -\t\t\t\"Minimum multixact age at which VACUUM should freeze \na row\n> > multixact's, for autovacuum\", -\t\t\tRELOPT_KIND_HEAP | \nRELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0, 1000000000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_freeze_max_age\",\n> > -\t\t\t\"Age at which to autovacuum a table to prevent \ntransaction ID\n> > wraparound\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 100000, 2000000000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_multixact_freeze_max_age\",\n> > -\t\t\t\"Multixact age at which to autovacuum a table to \nprevent multixact\n> > wraparound\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 10000, 2000000000\n> > -\t},\n> > 
-\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_freeze_table_age\",\n> > -\t\t\t\"Age at which VACUUM should perform a full table \nsweep to freeze row\n> > versions\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t}, -1, 0, 2000000000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_multixact_freeze_table_age\",\n> > -\t\t\t\"Age of multixact at which VACUUM should perform a \nfull table sweep to\n> > freeze row versions\", -\t\t\tRELOPT_KIND_HEAP | \nRELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t}, -1, 0, 2000000000\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"log_autovacuum_min_duration\",\n> > -\t\t\t\"Sets the minimum execution time above which \nautovacuum actions will\n> > be logged\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, -1, INT_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"toast_tuple_target\",\n> > -\t\t\t\"Sets the target tuple length at which external \ncolumns will be\n> > toasted\", -\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\tTOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"pages_per_range\",\n> > -\t\t\t\"Number of pages that each page range covers in a \nBRIN index\",\n> > -\t\t\tRELOPT_KIND_BRIN,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t}, 128, 1, 131072\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"gin_pending_list_limit\",\n> > -\t\t\t\"Maximum size of the pending list for this GIN \nindex, in kilobytes.\",\n> > -\t\t\tRELOPT_KIND_GIN,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 64, MAX_KILOBYTES\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"effective_io_concurrency\",\n> > -\t\t\t\"Number of simultaneous requests that can be handled \nefficiently by\n> > the disk subsystem.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -#ifdef USE_PREFETCH\n> > -\t\t-1, 
0, MAX_IO_CONCURRENCY\n> > -#else\n> > -\t\t0, 0, 0\n> > -#endif\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"maintenance_io_concurrency\",\n> > -\t\t\t\"Number of simultaneous requests that can be handled \nefficiently by\n> > the disk subsystem for maintenance work.\", -\t\t\t\nRELOPT_KIND_TABLESPACE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -#ifdef USE_PREFETCH\n> > -\t\t-1, 0, MAX_IO_CONCURRENCY\n> > -#else\n> > -\t\t0, 0, 0\n> > -#endif\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"parallel_workers\",\n> > -\t\t\t\"Number of parallel processes that can be used per \nexecutor node for\n> > this relation.\", -\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0, 1024\n> > -\t},\n> > -\n> > -\t/* list terminator */\n> > -\t{{NULL}}\n> > -};\n> > -\n> > -static relopt_real realRelOpts[] =\n> > -{\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_cost_delay\",\n> > -\t\t\t\"Vacuum cost delay in milliseconds, for autovacuum\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, 100.0\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_scale_factor\",\n> > -\t\t\t\"Number of tuple updates or deletes prior to vacuum \nas a fraction of\n> > reltuples\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, 100.0\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_vacuum_insert_scale_factor\",\n> > -\t\t\t\"Number of tuple inserts prior to vacuum as a \nfraction of reltuples\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, 100.0\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"autovacuum_analyze_scale_factor\",\n> > -\t\t\t\"Number of tuple inserts, updates or deletes prior \nto analyze as a\n> > fraction of reltuples\", -\t\t\tRELOPT_KIND_HEAP,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > 
-\t\t-1, 0.0, 100.0\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"seq_page_cost\",\n> > -\t\t\t\"Sets the planner's estimate of the cost of a \nsequentially fetched\n> > disk page.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, DBL_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"random_page_cost\",\n> > -\t\t\t\"Sets the planner's estimate of the cost of a \nnonsequentially fetched\n> > disk page.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, DBL_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"n_distinct\",\n> > -\t\t\t\"Sets the planner's estimate of the number of \ndistinct values\n> > appearing in a column (excluding child relations).\",\n> > -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t0, -1.0, DBL_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"n_distinct_inherited\",\n> > -\t\t\t\"Sets the planner's estimate of the number of \ndistinct values\n> > appearing in a column (including child relations).\",\n> > -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t0, -1.0, DBL_MAX\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"vacuum_cleanup_index_scale_factor\",\n> > -\t\t\t\"Deprecated B-Tree parameter.\",\n> > -\t\t\tRELOPT_KIND_BTREE,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\t-1, 0.0, 1e10\n> > -\t},\n> > -\t/* list terminator */\n> > -\t{{NULL}}\n> > -};\n> > -\n> > \n> > /* values from StdRdOptIndexCleanup */\n> > \n> > -relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > +opt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > \n> > {\n> > \n> > \t{\"auto\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO},\n> > \t{\"on\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},\n> > \n> > @@ -480,17 +106,8 @@ relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > \n> > \t{(const char *) NULL}\t\t/* list terminator */\n> > \n> > };\n> > \n> > -/* values from 
GistOptBufferingMode */\n> > -relopt_enum_elt_def gistBufferingOptValues[] =\n> > -{\n> > -\t{\"auto\", GIST_OPTION_BUFFERING_AUTO},\n> > -\t{\"on\", GIST_OPTION_BUFFERING_ON},\n> > -\t{\"off\", GIST_OPTION_BUFFERING_OFF},\n> > -\t{(const char *) NULL}\t\t/* list terminator */\n> > -};\n> > -\n> > \n> > /* values from ViewOptCheckOption */\n> > \n> > -relopt_enum_elt_def viewCheckOptValues[] =\n> > +opt_enum_elt_def viewCheckOptValues[] =\n> > \n> > {\n> > \n> > \t/* no value for NOT_SET */\n> > \t{\"local\", VIEW_OPTION_CHECK_OPTION_LOCAL},\n> > \n> > @@ -498,61 +115,8 @@ relopt_enum_elt_def viewCheckOptValues[] =\n> > \n> > \t{(const char *) NULL}\t\t/* list terminator */\n> > \n> > };\n> > \n> > -static relopt_enum enumRelOpts[] =\n> > -{\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"vacuum_index_cleanup\",\n> > -\t\t\t\"Controls index vacuuming and index cleanup\",\n> > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > -\t\t\tShareUpdateExclusiveLock\n> > -\t\t},\n> > -\t\tStdRdOptIndexCleanupValues,\n> > -\t\tSTDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,\n> > -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \n\\\"auto\\\".\")\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"buffering\",\n> > -\t\t\t\"Enables buffering build for this GiST index\",\n> > -\t\t\tRELOPT_KIND_GIST,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\tgistBufferingOptValues,\n> > -\t\tGIST_OPTION_BUFFERING_AUTO,\n> > -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \n\\\"auto\\\".\")\n> > -\t},\n> > -\t{\n> > -\t\t{\n> > -\t\t\t\"check_option\",\n> > -\t\t\t\"View has WITH CHECK OPTION defined (local or \ncascaded).\",\n> > -\t\t\tRELOPT_KIND_VIEW,\n> > -\t\t\tAccessExclusiveLock\n> > -\t\t},\n> > -\t\tviewCheckOptValues,\n> > -\t\tVIEW_OPTION_CHECK_OPTION_NOT_SET,\n> > -\t\tgettext_noop(\"Valid values are \\\"local\\\" and \\\"cascaded\\\".\")\n> > -\t},\n> > -\t/* list terminator */\n> > -\t{{NULL}}\n> > -};\n> > -\n> > -static relopt_string stringRelOpts[] =\n> > 
-{\n> > -\t/* list terminator */\n> > -\t{{NULL}}\n> > -};\n> > -\n> > -static relopt_gen **relOpts = NULL;\n> > -static bits32 last_assigned_kind = RELOPT_KIND_LAST_DEFAULT;\n> > -\n> > -static int\tnum_custom_options = 0;\n> > -static relopt_gen **custom_options = NULL;\n> > -static bool need_initialization = true;\n> > \n> > -static void initialize_reloptions(void);\n> > -static void parse_one_reloption(relopt_value *option, char *text_str,\n> > -\t\t\t\t\t\t\t\tint \ntext_len, bool validate);\n> > +options_spec_set *get_stdrd_relopt_spec_set(relopt_kind kind);\n> > \n> > /*\n> > \n> > * Get the length of a string reloption (either default or the\n> > user-defined\n> > \n> > @@ -563,160 +127,6 @@ static void parse_one_reloption(relopt_value\n> > *option, char *text_str,> \n> > \t((option).isset ? strlen((option).values.string_val) : \\\n> > \t\n> > \t ((relopt_string *) (option).gen)->default_len)\n> > \n> > -/*\n> > - * initialize_reloptions\n> > - *\t\tinitialization routine, must be called before parsing\n> > - *\n> > - * Initialize the relOpts array and fill each variable's type and name\n> > length. 
- */\n> > -static void\n> > -initialize_reloptions(void)\n> > -{\n> > -\tint\t\t\ti;\n> > -\tint\t\t\tj;\n> > -\n> > -\tj = 0;\n> > -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\tAssert(DoLockModesConflict(boolRelOpts[i].gen.lockmode,\n> > -\t\t\t\t\t\t\t\t boolRelOpts[i].gen.lockmode));\n> > -\t\tj++;\n> > -\t}\n> > -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\tAssert(DoLockModesConflict(intRelOpts[i].gen.lockmode,\n> > -\t\t\t\t\t\t\t\t intRelOpts[i].gen.lockmode));\n> > -\t\tj++;\n> > -\t}\n> > -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\tAssert(DoLockModesConflict(realRelOpts[i].gen.lockmode,\n> > -\t\t\t\t\t\t\t\t realRelOpts[i].gen.lockmode));\n> > -\t\tj++;\n> > -\t}\n> > -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\tAssert(DoLockModesConflict(enumRelOpts[i].gen.lockmode,\n> > -\t\t\t\t\t\t\t\t enumRelOpts[i].gen.lockmode));\n> > -\t\tj++;\n> > -\t}\n> > -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\tAssert(DoLockModesConflict(stringRelOpts[i].gen.lockmode,\n> > -\t\t\t\t\t\t\t\t stringRelOpts[i].gen.lockmode));\n> > -\t\tj++;\n> > -\t}\n> > -\tj += num_custom_options;\n> > -\n> > -\tif (relOpts)\n> > -\t\tpfree(relOpts);\n> > -\trelOpts = MemoryContextAlloc(TopMemoryContext,\n> > -\t\t\t\t\t\t\t\t (j + 1) * sizeof(relopt_gen *));\n> > -\n> > -\tj = 0;\n> > -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\trelOpts[j] = &boolRelOpts[i].gen;\n> > -\t\trelOpts[j]->type = RELOPT_TYPE_BOOL;\n> > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\trelOpts[j] = &intRelOpts[i].gen;\n> > -\t\trelOpts[j]->type = RELOPT_TYPE_INT;\n> > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\trelOpts[j] = &realRelOpts[i].gen;\n> > -\t\trelOpts[j]->type = 
RELOPT_TYPE_REAL;\n> > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\trelOpts[j] = &enumRelOpts[i].gen;\n> > -\t\trelOpts[j]->type = RELOPT_TYPE_ENUM;\n> > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> > -\t{\n> > -\t\trelOpts[j] = &stringRelOpts[i].gen;\n> > -\t\trelOpts[j]->type = RELOPT_TYPE_STRING;\n> > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\tfor (i = 0; i < num_custom_options; i++)\n> > -\t{\n> > -\t\trelOpts[j] = custom_options[i];\n> > -\t\tj++;\n> > -\t}\n> > -\n> > -\t/* add a list terminator */\n> > -\trelOpts[j] = NULL;\n> > -\n> > -\t/* flag the work is complete */\n> > -\tneed_initialization = false;\n> > -}\n> > -\n> > -/*\n> > - * add_reloption_kind\n> > - *\t\tCreate a new relopt_kind value, to be used in custom \nreloptions by\n> > - *\t\tuser-defined AMs.\n> > - */\n> > -relopt_kind\n> > -add_reloption_kind(void)\n> > -{\n> > -\t/* don't hand out the last bit so that the enum's behavior is \nportable\n> > */\n> > -\tif (last_assigned_kind >= RELOPT_KIND_MAX)\n> > -\t\tereport(ERROR,\n> > -\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > -\t\t\t\t errmsg(\"user-defined relation parameter \ntypes limit exceeded\")));\n> > -\tlast_assigned_kind <<= 1;\n> > -\treturn (relopt_kind) last_assigned_kind;\n> > -}\n> > -\n> > -/*\n> > - * add_reloption\n> > - *\t\tAdd an already-created custom reloption to the list, and \nrecompute\n> > the\n> > - *\t\tmain parser table.\n> > - */\n> > -static void\n> > -add_reloption(relopt_gen *newoption)\n> > -{\n> > -\tstatic int\tmax_custom_options = 0;\n> > -\n> > -\tif (num_custom_options >= max_custom_options)\n> > -\t{\n> > -\t\tMemoryContext oldcxt;\n> > -\n> > -\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > -\n> > -\t\tif (max_custom_options == 
0)\n> > -\t\t{\n> > -\t\t\tmax_custom_options = 8;\n> > -\t\t\tcustom_options = palloc(max_custom_options * \nsizeof(relopt_gen *));\n> > -\t\t}\n> > -\t\telse\n> > -\t\t{\n> > -\t\t\tmax_custom_options *= 2;\n> > -\t\t\tcustom_options = repalloc(custom_options,\n> > -\t\t\t\t\t\t\t\t\t \nmax_custom_options * sizeof(relopt_gen *));\n> > -\t\t}\n> > -\t\tMemoryContextSwitchTo(oldcxt);\n> > -\t}\n> > -\tcustom_options[num_custom_options++] = newoption;\n> > -\n> > -\tneed_initialization = true;\n> > -}\n> > \n> > /*\n> > \n> > * init_local_reloptions\n> > \n> > @@ -729,6 +139,7 @@ init_local_reloptions(local_relopts *opts, Size\n> > relopt_struct_size)> \n> > \topts->options = NIL;\n> > \topts->validators = NIL;\n> > \topts->relopt_struct_size = relopt_struct_size;\n> > \n> > +\topts->spec_set = allocateOptionsSpecSet(NULL, relopt_struct_size, 0);\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -743,112 +154,6 @@ register_reloptions_validator(local_relopts *opts,\n> > relopts_validator validator)> \n> > }\n> > \n> > /*\n> > \n> > - * add_local_reloption\n> > - *\t\tAdd an already-created custom reloption to the local list.\n> > - */\n> > -static void\n> > -add_local_reloption(local_relopts *relopts, relopt_gen *newoption, int\n> > offset) -{\n> > -\tlocal_relopt *opt = palloc(sizeof(*opt));\n> > -\n> > -\tAssert(offset < relopts->relopt_struct_size);\n> > -\n> > -\topt->option = newoption;\n> > -\topt->offset = offset;\n> > -\n> > -\trelopts->options = lappend(relopts->options, opt);\n> > -}\n> > -\n> > -/*\n> > - * allocate_reloption\n> > - *\t\tAllocate a new reloption and initialize the type-agnostic \nfields\n> > - *\t\t(for types other than string)\n> > - */\n> > -static relopt_gen *\n> > -allocate_reloption(bits32 kinds, int type, const char *name, const char\n> > *desc, -\t\t\t\t LOCKMODE lockmode)\n> > -{\n> > -\tMemoryContext oldcxt;\n> > -\tsize_t\t\tsize;\n> > -\trelopt_gen *newoption;\n> > -\n> > -\tif (kinds != RELOPT_KIND_LOCAL)\n> > -\t\toldcxt = 
MemoryContextSwitchTo(TopMemoryContext);\n> > -\telse\n> > -\t\toldcxt = NULL;\n> > -\n> > -\tswitch (type)\n> > -\t{\n> > -\t\tcase RELOPT_TYPE_BOOL:\n> > -\t\t\tsize = sizeof(relopt_bool);\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_INT:\n> > -\t\t\tsize = sizeof(relopt_int);\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_REAL:\n> > -\t\t\tsize = sizeof(relopt_real);\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_ENUM:\n> > -\t\t\tsize = sizeof(relopt_enum);\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_STRING:\n> > -\t\t\tsize = sizeof(relopt_string);\n> > -\t\t\tbreak;\n> > -\t\tdefault:\n> > -\t\t\telog(ERROR, \"unsupported reloption type %d\", type);\n> > -\t\t\treturn NULL;\t\t/* keep compiler quiet */\n> > -\t}\n> > -\n> > -\tnewoption = palloc(size);\n> > -\n> > -\tnewoption->name = pstrdup(name);\n> > -\tif (desc)\n> > -\t\tnewoption->desc = pstrdup(desc);\n> > -\telse\n> > -\t\tnewoption->desc = NULL;\n> > -\tnewoption->kinds = kinds;\n> > -\tnewoption->namelen = strlen(name);\n> > -\tnewoption->type = type;\n> > -\tnewoption->lockmode = lockmode;\n> > -\n> > -\tif (oldcxt != NULL)\n> > -\t\tMemoryContextSwitchTo(oldcxt);\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -/*\n> > - * init_bool_reloption\n> > - *\t\tAllocate and initialize a new boolean reloption\n> > - */\n> > -static relopt_bool *\n> > -init_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t\tbool default_val, LOCKMODE lockmode)\n> > -{\n> > -\trelopt_bool *newoption;\n> > -\n> > -\tnewoption = (relopt_bool *) allocate_reloption(kinds, \nRELOPT_TYPE_BOOL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc, lockmode);\n> > -\tnewoption->default_val = default_val;\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -/*\n> > - * add_bool_reloption\n> > - *\t\tAdd a new boolean reloption\n> > - */\n> > -void\n> > -add_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t bool default_val, LOCKMODE lockmode)\n> > -{\n> > -\trelopt_bool 
*newoption = init_bool_reloption(kinds, name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t default_val, lockmode);\n> > -\n> > -\tadd_reloption((relopt_gen *) newoption);\n> > -}\n> > -\n> > -/*\n> > \n> > * add_local_bool_reloption\n> > *\t\tAdd a new boolean local reloption\n> > *\n> > \n> > @@ -858,47 +163,8 @@ void\n> > \n> > add_local_bool_reloption(local_relopts *relopts, const char *name,\n> > \n> > \t\t\t\t\t\t const char *desc, bool \ndefault_val, int offset)\n> > \n> > {\n> > \n> > -\trelopt_bool *newoption = init_bool_reloption(RELOPT_KIND_LOCAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t default_val, 0);\n> > -\n> > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > -}\n> > -\n> > -\n> > -/*\n> > - * init_real_reloption\n> > - *\t\tAllocate and initialize a new integer reloption\n> > - */\n> > -static relopt_int *\n> > -init_int_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t int default_val, int min_val, int \nmax_val,\n> > -\t\t\t\t LOCKMODE lockmode)\n> > -{\n> > -\trelopt_int *newoption;\n> > -\n> > -\tnewoption = (relopt_int *) allocate_reloption(kinds, \nRELOPT_TYPE_INT,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc, lockmode);\n> > -\tnewoption->default_val = default_val;\n> > -\tnewoption->min = min_val;\n> > -\tnewoption->max = max_val;\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -/*\n> > - * add_int_reloption\n> > - *\t\tAdd a new integer reloption\n> > - */\n> > -void\n> > -add_int_reloption(bits32 kinds, const char *name, const char *desc, int\n> > default_val, -\t\t\t\t int min_val, int max_val, \nLOCKMODE lockmode)\n> > -{\n> > -\trelopt_int *newoption = init_int_reloption(kinds, name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t default_val, min_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t max_val, lockmode);\n> > -\n> > -\tadd_reloption((relopt_gen *) newoption);\n> > +\toptionsSpecSetAddBool(relopts->spec_set, name, desc, NoLock, 0, \noffset,\n> > 
+\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\tdefault_val);\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -912,47 +178,8 @@ add_local_int_reloption(local_relopts *relopts, const\n> > char *name,> \n> > \t\t\t\t\t\tconst char *desc, int \ndefault_val, int min_val,\n> > \t\t\t\t\t\tint max_val, int offset)\n> > \n> > {\n> > \n> > -\trelopt_int *newoption = init_int_reloption(RELOPT_KIND_LOCAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t name, desc, default_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t min_val, max_val, 0);\n> > -\n> > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > -}\n> > -\n> > -/*\n> > - * init_real_reloption\n> > - *\t\tAllocate and initialize a new real reloption\n> > - */\n> > -static relopt_real *\n> > -init_real_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t\tdouble default_val, double min_val, \ndouble max_val,\n> > -\t\t\t\t\tLOCKMODE lockmode)\n> > -{\n> > -\trelopt_real *newoption;\n> > -\n> > -\tnewoption = (relopt_real *) allocate_reloption(kinds, \nRELOPT_TYPE_REAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc, lockmode);\n> > -\tnewoption->default_val = default_val;\n> > -\tnewoption->min = min_val;\n> > -\tnewoption->max = max_val;\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -/*\n> > - * add_real_reloption\n> > - *\t\tAdd a new float reloption\n> > - */\n> > -void\n> > -add_real_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t double default_val, double min_val, \ndouble max_val,\n> > -\t\t\t\t LOCKMODE lockmode)\n> > -{\n> > -\trelopt_real *newoption = init_real_reloption(kinds, name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t default_val, min_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t max_val, lockmode);\n> > -\n> > -\tadd_reloption((relopt_gen *) newoption);\n> > +\toptionsSpecSetAddInt(relopts->spec_set, name, desc, NoLock, 0, offset,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\tdefault_val, min_val, max_val);\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -966,57 +193,9 @@ 
add_local_real_reloption(local_relopts *relopts,\n> > const char *name,> \n> > \t\t\t\t\t\t const char *desc, double \ndefault_val,\n> > \t\t\t\t\t\t double min_val, double \nmax_val, int offset)\n> > \n> > {\n> > \n> > -\trelopt_real *newoption = init_real_reloption(RELOPT_KIND_LOCAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t default_val, min_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t max_val, 0);\n> > -\n> > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > -}\n> > -\n> > -/*\n> > - * init_enum_reloption\n> > - *\t\tAllocate and initialize a new enum reloption\n> > - */\n> > -static relopt_enum *\n> > -init_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t\trelopt_enum_elt_def *members, int \ndefault_val,\n> > -\t\t\t\t\tconst char *detailmsg, LOCKMODE \nlockmode)\n> > -{\n> > -\trelopt_enum *newoption;\n> > -\n> > -\tnewoption = (relopt_enum *) allocate_reloption(kinds, \nRELOPT_TYPE_ENUM,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc, lockmode);\n> > -\tnewoption->members = members;\n> > -\tnewoption->default_val = default_val;\n> > -\tnewoption->detailmsg = detailmsg;\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -\n> > -/*\n> > - * add_enum_reloption\n> > - *\t\tAdd a new enum reloption\n> > - *\n> > - * The members array must have a terminating NULL entry.\n> > - *\n> > - * The detailmsg is shown when unsupported values are passed, and has\n> > this\n> > - * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"bar\\\".\"\n> > - *\n> > - * The members array and detailmsg are not copied -- caller must ensure\n> > that - * they are valid throughout the life of the process.\n> > - */\n> > -void\n> > -add_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t relopt_enum_elt_def *members, int \ndefault_val,\n> > -\t\t\t\t const char *detailmsg, LOCKMODE lockmode)\n> > -{\n> > -\trelopt_enum *newoption = init_enum_reloption(kinds, name, desc,\n> > 
-\t\t\t\t\t\t\t\t\t\t\n\t\t members, default_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t detailmsg, lockmode);\n> > +\toptionsSpecSetAddReal(relopts->spec_set, name, desc, NoLock, 0, \noffset,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\tdefault_val, min_val, max_val);\n> > \n> > -\tadd_reloption((relopt_gen *) newoption);\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -1027,77 +206,11 @@ add_enum_reloption(bits32 kinds, const char *name,\n> > const char *desc,> \n> > */\n> > \n> > void\n> > add_local_enum_reloption(local_relopts *relopts, const char *name,\n> > \n> > -\t\t\t\t\t\t const char *desc, \nrelopt_enum_elt_def *members,\n> > +\t\t\t\t\t\t const char *desc, \nopt_enum_elt_def *members,\n> > \n> > \t\t\t\t\t\t int default_val, const char \n*detailmsg, int offset)\n> > \n> > {\n> > \n> > -\trelopt_enum *newoption = init_enum_reloption(RELOPT_KIND_LOCAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t members, default_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t detailmsg, 0);\n> > -\n> > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > -}\n> > -\n> > -/*\n> > - * init_string_reloption\n> > - *\t\tAllocate and initialize a new string reloption\n> > - */\n> > -static relopt_string *\n> > -init_string_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t\t const char *default_val,\n> > -\t\t\t\t\t validate_string_relopt validator,\n> > -\t\t\t\t\t fill_string_relopt filler,\n> > -\t\t\t\t\t LOCKMODE lockmode)\n> > -{\n> > -\trelopt_string *newoption;\n> > -\n> > -\t/* make sure the validator/default combination is sane */\n> > -\tif (validator)\n> > -\t\t(validator) (default_val);\n> > -\n> > -\tnewoption = (relopt_string *) allocate_reloption(kinds,\n> > RELOPT_TYPE_STRING, -\t\t\t\t\t\t\t\t\n\t\t\t\t\t name, desc, lockmode);\n> > -\tnewoption->validate_cb = validator;\n> > -\tnewoption->fill_cb = filler;\n> > -\tif (default_val)\n> > -\t{\n> > -\t\tif (kinds == RELOPT_KIND_LOCAL)\n> > -\t\t\tnewoption->default_val = 
strdup(default_val);\n> > -\t\telse\n> > -\t\t\tnewoption->default_val = \nMemoryContextStrdup(TopMemoryContext,\n> > default_val); -\t\tnewoption->default_len = strlen(default_val);\n> > -\t\tnewoption->default_isnull = false;\n> > -\t}\n> > -\telse\n> > -\t{\n> > -\t\tnewoption->default_val = \"\";\n> > -\t\tnewoption->default_len = 0;\n> > -\t\tnewoption->default_isnull = true;\n> > -\t}\n> > -\n> > -\treturn newoption;\n> > -}\n> > -\n> > -/*\n> > - * add_string_reloption\n> > - *\t\tAdd a new string reloption\n> > - *\n> > - * \"validator\" is an optional function pointer that can be used to test\n> > the - * validity of the values. It must elog(ERROR) when the argument\n> > string is - * not acceptable for the variable. Note that the default\n> > value must pass - * the validation.\n> > - */\n> > -void\n> > -add_string_reloption(bits32 kinds, const char *name, const char *desc,\n> > -\t\t\t\t\t const char *default_val, \nvalidate_string_relopt validator,\n> > -\t\t\t\t\t LOCKMODE lockmode)\n> > -{\n> > -\trelopt_string *newoption = init_string_reloption(kinds, name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t default_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t validator, NULL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t lockmode);\n> > -\n> > -\tadd_reloption((relopt_gen *) newoption);\n> > +\toptionsSpecSetAddEnum(relopts->spec_set, name, desc, NoLock, 0, \noffset,\n> > +\t\t\t\t\t\t\t\t\t\t\n\tmembers, default_val, detailmsg);\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -1113,249 +226,9 @@ add_local_string_reloption(local_relopts *relopts,\n> > const char *name,> \n> > \t\t\t\t\t\t validate_string_relopt \nvalidator,\n> > \t\t\t\t\t\t fill_string_relopt filler, \nint offset)\n> > \n> > {\n> > \n> > -\trelopt_string *newoption = init_string_reloption(RELOPT_KIND_LOCAL,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t name, desc,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t default_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t validator, filler,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t 0);\n> > -\n> > 
-\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > -}\n> > -\n> > -/*\n> > - * Transform a relation options list (list of DefElem) into the text array\n> > - * format that is kept in pg_class.reloptions, including only those options\n> > - * that are in the passed namespace. The output values do not include the\n> > - * namespace.\n> > - *\n> > - * This is used for three cases: CREATE TABLE/INDEX, ALTER TABLE SET, and\n> > - * ALTER TABLE RESET. In the ALTER cases, oldOptions is the existing\n> > - * reloptions value (possibly NULL), and we replace or remove entries\n> > - * as needed.\n> > - *\n> > - * If acceptOidsOff is true, then we allow oids = false, but throw error when\n> > - * on. This is solely needed for backwards compatibility.\n> > - *\n> > - * Note that this is not responsible for determining whether the options\n> > - * are valid, but it does check that namespaces for all the options given are\n> > - * listed in validnsps. The NULL namespace is always valid and need not be\n> > - * explicitly listed. Passing a NULL pointer means that only the NULL\n> > - * namespace is valid.\n> > - *\n> > - * Both oldOptions and the result are text arrays (or NULL for \"default\"),\n> > - * but we declare them as Datums to avoid including array.h in reloptions.h.
- */\n> > -Datum\n> > -transformRelOptions(Datum oldOptions, List *defList, const char *namspace,\n> > -\t\t\t\t\tchar *validnsps[], bool acceptOidsOff, bool isReset)\n> > -{\n> > -\tDatum\t\tresult;\n> > -\tArrayBuildState *astate;\n> > -\tListCell *cell;\n> > -\n> > -\t/* no change if empty list */\n> > -\tif (defList == NIL)\n> > -\t\treturn oldOptions;\n> > -\n> > -\t/* We build new array using accumArrayResult */\n> > -\tastate = NULL;\n> > -\n> > -\t/* Copy any oldOptions that aren't to be replaced */\n> > -\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> > -\t{\n> > -\t\tArrayType *array = DatumGetArrayTypeP(oldOptions);\n> > -\t\tDatum\t *oldoptions;\n> > -\t\tint\t\t\tnoldoptions;\n> > -\t\tint\t\t\ti;\n> > -\n> > -\t\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> > -\t\t\t\t\t\t &oldoptions, NULL, &noldoptions);\n> > -\n> > -\t\tfor (i = 0; i < noldoptions; i++)\n> > -\t\t{\n> > -\t\t\tchar\t *text_str = VARDATA(oldoptions[i]);\n> > -\t\t\tint\t\t\ttext_len = VARSIZE(oldoptions[i]) - VARHDRSZ;\n> > -\n> > -\t\t\t/* Search for a match in defList */\n> > -\t\t\tforeach(cell, defList)\n> > -\t\t\t{\n> > -\t\t\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > -\t\t\t\tint\t\t\tkw_len;\n> > -\n> > -\t\t\t\t/* ignore if not in the same namespace */\n> > -\t\t\t\tif (namspace == NULL)\n> > -\t\t\t\t{\n> > -\t\t\t\t\tif (def->defnamespace != NULL)\n> > -\t\t\t\t\t\tcontinue;\n> > -\t\t\t\t}\n> > -\t\t\t\telse if (def->defnamespace == NULL)\n> > -\t\t\t\t\tcontinue;\n> > -\t\t\t\telse if (strcmp(def->defnamespace, namspace) != 0)\n> > -\t\t\t\t\tcontinue;\n> > -\n> > -\t\t\t\tkw_len = strlen(def->defname);\n> > -\t\t\t\tif (text_len > kw_len && text_str[kw_len] == '=' &&\n> > -\t\t\t\t\tstrncmp(text_str, def->defname, kw_len) == 0)\n> > -\t\t\t\t\tbreak;\n> > -\t\t\t}\n> > -\t\t\tif (!cell)\n> > -\t\t\t{\n> > -\t\t\t\t/* No match, so keep old option */\n> > -\t\t\t\tastate = accumArrayResult(astate, oldoptions[i],
-\t\t\t\t\t\t\t\t\t\t\n false, TEXTOID,\n> > -\t\t\t\t\t\t\t\t\t\t\n CurrentMemoryContext);\n> > -\t\t\t}\n> > -\t\t}\n> > -\t}\n> > -\n> > -\t/*\n> > -\t * If CREATE/SET, add new options to array; if RESET, just check \nthat\n> > the\n> > -\t * user didn't say RESET (option=val). (Must do this because the\n> > grammar\n> > -\t * doesn't enforce it.)\n> > -\t */\n> > -\tforeach(cell, defList)\n> > -\t{\n> > -\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > -\n> > -\t\tif (isReset)\n> > -\t\t{\n> > -\t\t\tif (def->arg != NULL)\n> > -\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\n(errcode(ERRCODE_SYNTAX_ERROR),\n> > -\t\t\t\t\t\t errmsg(\"RESET must not \ninclude values for parameters\")));\n> > -\t\t}\n> > -\t\telse\n> > -\t\t{\n> > -\t\t\ttext\t *t;\n> > -\t\t\tconst char *value;\n> > -\t\t\tSize\t\tlen;\n> > -\n> > -\t\t\t/*\n> > -\t\t\t * Error out if the namespace is not valid. A NULL \nnamespace is\n> > -\t\t\t * always valid.\n> > -\t\t\t */\n> > -\t\t\tif (def->defnamespace != NULL)\n> > -\t\t\t{\n> > -\t\t\t\tbool\t\tvalid = false;\n> > -\t\t\t\tint\t\t\ti;\n> > -\n> > -\t\t\t\tif (validnsps)\n> > -\t\t\t\t{\n> > -\t\t\t\t\tfor (i = 0; validnsps[i]; i++)\n> > -\t\t\t\t\t{\n> > -\t\t\t\t\t\tif (strcmp(def-\n>defnamespace, validnsps[i]) == 0)\n> > -\t\t\t\t\t\t{\n> > -\t\t\t\t\t\t\tvalid = true;\n> > -\t\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\t\t}\n> > -\t\t\t\t\t}\n> > -\t\t\t\t}\n> > -\n> > -\t\t\t\tif (!valid)\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t \nerrmsg(\"unrecognized parameter namespace \\\"%s\\\"\",\n> > -\t\t\t\t\t\t\t\t\tdef-\n>defnamespace)));\n> > -\t\t\t}\n> > -\n> > -\t\t\t/* ignore if not in the same namespace */\n> > -\t\t\tif (namspace == NULL)\n> > -\t\t\t{\n> > -\t\t\t\tif (def->defnamespace != NULL)\n> > -\t\t\t\t\tcontinue;\n> > -\t\t\t}\n> > -\t\t\telse if (def->defnamespace == NULL)\n> > -\t\t\t\tcontinue;\n> > -\t\t\telse if (strcmp(def->defnamespace, namspace) != 
0)\n> > -\t\t\t\tcontinue;\n> > -\n> > -\t\t\t/*\n> > -\t\t\t * Flatten the DefElem into a text string like \n\"name=arg\". If we\n> > -\t\t\t * have just \"name\", assume \"name=true\" is meant. \nNote: the\n> > -\t\t\t * namespace is not output.\n> > -\t\t\t */\n> > -\t\t\tif (def->arg != NULL)\n> > -\t\t\t\tvalue = defGetString(def);\n> > -\t\t\telse\n> > -\t\t\t\tvalue = \"true\";\n> > -\n> > -\t\t\t/*\n> > -\t\t\t * This is not a great place for this test, but \nthere's no other\n> > -\t\t\t * convenient place to filter the option out. As WITH \n(oids =\n> > -\t\t\t * false) will be removed someday, this seems like \nan acceptable\n> > -\t\t\t * amount of ugly.\n> > -\t\t\t */\n> > -\t\t\tif (acceptOidsOff && def->defnamespace == NULL &&\n> > -\t\t\t\tstrcmp(def->defname, \"oids\") == 0)\n> > -\t\t\t{\n> > -\t\t\t\tif (defGetBoolean(def))\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > -\t\t\t\t\t\t\t errmsg(\"tables \ndeclared WITH OIDS are not supported\")));\n> > -\t\t\t\t/* skip over option, reloptions machinery \ndoesn't know it */\n> > -\t\t\t\tcontinue;\n> > -\t\t\t}\n> > -\n> > -\t\t\tlen = VARHDRSZ + strlen(def->defname) + 1 + \nstrlen(value);\n> > -\t\t\t/* +1 leaves room for sprintf's trailing null */\n> > -\t\t\tt = (text *) palloc(len + 1);\n> > -\t\t\tSET_VARSIZE(t, len);\n> > -\t\t\tsprintf(VARDATA(t), \"%s=%s\", def->defname, value);\n> > -\n> > -\t\t\tastate = accumArrayResult(astate, \nPointerGetDatum(t),\n> > -\t\t\t\t\t\t\t\t\t \nfalse, TEXTOID,\n> > -\t\t\t\t\t\t\t\t\t \nCurrentMemoryContext);\n> > -\t\t}\n> > -\t}\n> > -\n> > -\tif (astate)\n> > -\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > -\telse\n> > -\t\tresult = (Datum) 0;\n> > -\n> > -\treturn result;\n> > -}\n> > -\n> > -\n> > -/*\n> > - * Convert the text-array format of reloptions into a List of DefElem.\n> > - * This is the inverse of transformRelOptions().\n> > - */\n> > -List *\n> > 
-untransformRelOptions(Datum options)\n> > -{\n> > -\tList\t *result = NIL;\n> > -\tArrayType *array;\n> > -\tDatum\t *optiondatums;\n> > -\tint\t\t\tnoptions;\n> > -\tint\t\t\ti;\n> > -\n> > -\t/* Nothing to do if no options */\n> > -\tif (!PointerIsValid(DatumGetPointer(options)))\n> > -\t\treturn result;\n> > -\n> > -\tarray = DatumGetArrayTypeP(options);\n> > -\n> > -\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> > -\t\t\t\t\t &optiondatums, NULL, &noptions);\n> > -\n> > -\tfor (i = 0; i < noptions; i++)\n> > -\t{\n> > -\t\tchar\t *s;\n> > -\t\tchar\t *p;\n> > -\t\tNode\t *val = NULL;\n> > -\n> > -\t\ts = TextDatumGetCString(optiondatums[i]);\n> > -\t\tp = strchr(s, '=');\n> > -\t\tif (p)\n> > -\t\t{\n> > -\t\t\t*p++ = '\\0';\n> > -\t\t\tval = (Node *) makeString(pstrdup(p));\n> > -\t\t}\n> > -\t\tresult = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> > -\t}\n> > -\n> > -\treturn result;\n> > +\toptionsSpecSetAddString(relopts->spec_set, name, desc, NoLock, 0,\n> > offset,\n> > +\t\t\t\t\t\t\t\t\t\t\n\tdefault_val, validator);\n> > +/* FIXME solve mistery with filler option! 
*/\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -1372,12 +245,13 @@ untransformRelOptions(Datum options)\n> > \n> > */\n> > \n> > bytea *\n> > extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> > \n> > -\t\t\t\t amoptions_function amoptions)\n> > +\t\t\t\t amreloptspecset_function \namoptionsspecsetfn)\n> > \n> > {\n> > \n> > \tbytea\t *options;\n> > \tbool\t\tisnull;\n> > \tDatum\t\tdatum;\n> > \tForm_pg_class classForm;\n> > \n> > +\toptions_spec_set *spec_set;\n> > \n> > \tdatum = fastgetattr(tuple,\n> > \t\n> > \t\t\t\t\t\tAnum_pg_class_reloptions,\n> > \n> > @@ -1394,702 +268,341 @@ extractRelOptions(HeapTuple tuple, TupleDesc\n> > tupdesc,> \n> > \t\tcase RELKIND_RELATION:\n> > \t\tcase RELKIND_TOASTVALUE:\n> > \n> > \t\tcase RELKIND_MATVIEW:\n> > -\t\t\toptions = heap_reloptions(classForm->relkind, datum, \nfalse);\n> > +\t\t\tspec_set = get_heap_relopt_spec_set();\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > -\t\t\toptions = partitioned_table_reloptions(datum, \nfalse);\n> > +\t\t\tspec_set = get_partitioned_relopt_spec_set();\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_VIEW:\n> > -\t\t\toptions = view_reloptions(datum, false);\n> > +\t\t\tspec_set = get_view_relopt_spec_set();\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_INDEX:\n> > \n> > \t\tcase RELKIND_PARTITIONED_INDEX:\n> > -\t\t\toptions = index_reloptions(amoptions, datum, false);\n> > +\t\t\tif (amoptionsspecsetfn)\n> > +\t\t\t\tspec_set = amoptionsspecsetfn();\n> > +\t\t\telse\n> > +\t\t\t\tspec_set = NULL;\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_FOREIGN_TABLE:\n> > -\t\t\toptions = NULL;\n> > +\t\t\tspec_set = NULL;\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tdefault:\n> > \t\t\tAssert(false);\t\t/* can't get here */\n> > \n> > -\t\t\toptions = NULL;\t\t/* keep compiler quiet */\n> > +\t\t\tspec_set = NULL;\t\t/* keep compiler quiet */\n> > \n> > \t\t\tbreak;\n> > \t\n> > \t}\n> > \n> > +\tif (spec_set)\n> > +\t\toptions = 
optionsTextArrayToBytea(spec_set, datum, 0);\n> > +\telse\n> > +\t\toptions = NULL;\n> > \n> > \treturn options;\n> > \n> > }\n> > \n> > -static void\n> > -parseRelOptionsInternal(Datum options, bool validate,\n> > -\t\t\t\t\t\trelopt_value *reloptions, \nint numoptions)\n> > -{\n> > -\tArrayType *array = DatumGetArrayTypeP(options);\n> > -\tDatum\t *optiondatums;\n> > -\tint\t\t\tnoptions;\n> > -\tint\t\t\ti;\n> > -\n> > -\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> > -\t\t\t\t\t &optiondatums, NULL, &noptions);\n> > +options_spec_set *\n> > +get_stdrd_relopt_spec_set(relopt_kind kind)\n> > +{\n> > +\tbool is_for_toast = (kind == RELOPT_KIND_TOAST);\n> > +\n> > +\toptions_spec_set * stdrd_relopt_spec_set = allocateOptionsSpecSet(\n> > +\t\t\t\t\tis_for_toast ? \"toast\" : NULL, \nsizeof(StdRdOptions), 0); //FIXME\n> > change 0 to actual value (may be)\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \"fillfactor\",\n> > +\t\t\t\t\t\t\t\t \"Packs table \npages only to this percentag\",\n> > +\t\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\t\t/* since it applies only\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t * to later inserts */\n> > +\t\t\t\t\t\t\t\tis_for_toast \n? 
OPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t\t\t\t\t\noffsetof(StdRdOptions, fillfactor),\n> > +\t\t\t\t\t\t HEAP_DEFAULT_FILLFACTOR, \nHEAP_MIN_FILLFACTOR, 100);\n> > +\toptionsSpecSetAddBool(stdrd_relopt_spec_set, \"autovacuum_enabled\",\n> > +\t\t\t\t\t\t\t \"Enables autovacuum \nin this relation\",\n> > +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock, 0,\n> > +\t\t\toffsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts, enabled),\n> > +\t\t\t\t\t\t\t true);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_vacuum_threshold\", +\t\t\t\t\"Minimum number \nof tuple updates or\n> > deletes prior to vacuum\", +\t\t\t\t\t\t\n\t ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t0, offsetof(StdRdOptions, autovacuum) \n+ offsetof(AutoVacOpts,\n> > vacuum_threshold), +\t\t\t\t\t\t\t \n-1, 0, INT_MAX);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_analyze_threshold\", +\t\t\t\t\"Minimum number \nof tuple updates or\n> > deletes prior to vacuum\", +\t\t\t\t\t\t\n\t ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t\t\t is_for_toast ? 
\nOPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t\t offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > analyze_threshold), +\t\t\t\t\t\t\t \n-1, 0, INT_MAX);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_vacuum_cost_limit\", +\t\t\t \"Vacuum cost amount \navailable\n> > before napping, for autovacuum\", +\t\t\t\t\t\t\n\t ShareUpdateExclusiveLock,\n> > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > vacuum_cost_limit), +\t\t\t\t\t\t\t \n-1, 0, 10000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \n\"autovacuum_freeze_min_age\",\n> > +\t \"Minimum age at which VACUUM should freeze a table row, for\n> > autovacuum\",\n> > +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\n> > +\t\t\t\t\t 0, offsetof(StdRdOptions, \nautovacuum) + offsetof(AutoVacOpts,\n> > freeze_min_age), +\t\t\t\t\t\t\t \n-1, 0, 1000000000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \n\"autovacuum_freeze_max_age\",\n> > +\t\"Age at which to autovacuum a table to prevent transaction ID\n> > wraparound\", +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\n> > +\t\t\t\t\t 0, offsetof(StdRdOptions, \nautovacuum) + offsetof(AutoVacOpts,\n> > freeze_max_age), +\t\t\t\t\t\t\t \n-1, 100000, 2000000000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_freeze_table_age\", +\t\t\t\t\t\t\t\n \"Age at which VACUUM should\n> > perform a full table sweep to freeze row versions\", +\t\t\t\t\n\t\t\t\n> > ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t0, offsetof(StdRdOptions, autovacuum) \n+ offsetof(AutoVacOpts,\n> > freeze_table_age), +\t\t\t\t\t\t\t \n-1, 0, 2000000000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_multixact_freeze_min_age\", +\t\t\t\t\t\n\t\t \"Minimum multixact age at\n> > which VACUUM should freeze a row multixact's, for autovacuum\", +\t\t\n\t\t\t\t\t\n> > ShareUpdateExclusiveLock,\n> > +\t\t\t0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > 
multixact_freeze_min_age), +\t\t\t\t\t\t\t\n -1, 0, 1000000000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_multixact_freeze_max_age\", +\t\t\t\t\t\t\n\t \"Multixact age at which\n> > to autovacuum a table to prevent multixact wraparound\", +\t\t\t\n\t\t\t\t\n> > ShareUpdateExclusiveLock,\n> > +\t\t\t0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > multixact_freeze_max_age), +\t\t\t\t\t\t\t\n -1, 10000, 2000000000);\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set,\n> > \"autovacuum_multixact_freeze_table_age\", +\t\t\t\t\t\n\t\t \"Age of multixact at\n> > which VACUUM should perform a full table sweep to freeze row versions\",\n> > +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\n> > +\t\t 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > multixact_freeze_table_age), +\t\t\t\t\t\t\n\t -1, 0, 2000000000);\n> > +\t\noptionsSpecSetAddInt(stdrd_relopt_spec_set,\"log_autovacuum_min_duration\"\n> > ,\n> > +\t\t\t\t\t\t\t \"Sets the minimum \nexecution time above which autovacuum actions\n> > will be logged\", +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\n> > +\t\t\t\t\t0, offsetof(StdRdOptions, autovacuum) \n+ offsetof(AutoVacOpts,\n> > log_min_duration), +\t\t\t\t\t\t\t \n-1, -1, INT_MAX);\n> > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > \"autovacuum_vacuum_cost_delay\", +\t\t\t\t\t\t\n \"Vacuum cost delay in\n> > milliseconds, for autovacuum\",\n> > +\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\n> > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > vacuum_cost_delay), +\t\t\t\t\t\t\t \n-1, 0.0, 100.0);\n> > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > \"autovacuum_vacuum_scale_factor\", +\t\t\t\t\t\t\n\t \"Number of tuple updates or\n> > deletes prior to vacuum as a fraction of reltuples\", +\t\t\t\t\n\t\t\t \n> > ShareUpdateExclusiveLock,\n> > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > vacuum_scale_factor), +\t\t\t\t\t\t\t \n-1, 
0.0, 100.0);\n> > +\n> > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > \"autovacuum_vacuum_insert_scale_factor\", +\t\t\t\t\t\n\t\t \"Number of tuple\n> > inserts prior to vacuum as a fraction of reltuples\", +\t\t\t\t\n\t\t\t \n> > ShareUpdateExclusiveLock,\n> > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > vacuum_ins_scale_factor), +\t\t\t\t\t\t\t\n -1, 0.0, 100.0);\n> > +\n> > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > \"autovacuum_analyze_scale_factor\", +\t\t\t\t\t\n\t\t \"Number of tuple inserts,\n> > updates or deletes prior to analyze as a fraction of reltuples\", +\t\t\n\t\t\t\t\t\n> > ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t\t\t is_for_toast ? \nOPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t offsetof(StdRdOptions, autovacuum) + \noffsetof(AutoVacOpts,\n> > analyze_scale_factor), +\t\t\t\t\t\t\t\n -1, 0.0, 100.0);\n> > +\n> > +\n> > +\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \"toast_tuple_target\",\n> > +\t\t\t\t\t\t\t\t \"Sets the \ntarget tuple length at which external columns will be\n> > toasted\", +\t\t\t\t\t\t\t\t\nShareUpdateExclusiveLock,\n> > +\t\t\t\t\t\t\t\tis_for_toast \n? OPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t\t\t\t\t\noffsetof(StdRdOptions, toast_tuple_target),\n> > +\t\t\t\t\t\t TOAST_TUPLE_TARGET, 128, \nTOAST_TUPLE_TARGET_MAIN);\n> > +\n> > +\toptionsSpecSetAddBool(stdrd_relopt_spec_set, \"user_catalog_table\",\n> > +\t\t\t\t\t\t\t\t \"Declare a \ntable as an additional catalog table, e.g. for the\n> > purpose of logical replication\", +\t\t\t\t\t\t\n\t\t AccessExclusiveLock,\n> > +\t\t\t\t\t\t\t\tis_for_toast \n? 
OPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t\t\t\t\t \noffsetof(StdRdOptions, user_catalog_table),\n> > +\t\t\t\t\t\t\t\t false);\n> > +\n> > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \"parallel_workers\",\n> > +\t\t\t\t\t\t\t\t\"Number of \nparallel processes that can be used per executor node\n> > for this relation.\", +\t\t\t\t\t\t\t\n\tShareUpdateExclusiveLock,\n> > +\t\t\t\t\t\t\t\tis_for_toast \n? OPTION_DEFINITION_FLAG_REJECT : 0,\n> > +\t\t\t\t\t\t\t\t\noffsetof(StdRdOptions, parallel_workers),\n> > +\t\t\t\t\t\t\t\t-1, 0, 1024);\n> > +\n> > +\toptionsSpecSetAddEnum(stdrd_relopt_spec_set, \"vacuum_index_cleanup\",\n> > +\t\t\t\t\t\t\t\t\"Controls \nindex vacuuming and index cleanup\",\n> > +\t\t\t\t\t\t\t\t\nShareUpdateExclusiveLock, 0,\n> > +\t\t\t\t\t\t\t\t\noffsetof(StdRdOptions, vacuum_index_cleanup),\n> > +\t\t\t\t\t\t\t\t\nStdRdOptIndexCleanupValues,\n> > +\t\t\t\t\t\t\t\t\nSTDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,\n> > +\t\t\t\t\t\t\t\t\ngettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \\\"auto\\\".\"));\n> > +\n> > +\toptionsSpecSetAddBool(stdrd_relopt_spec_set, \"vacuum_truncate\",\n> > +\t\t\t\t\t\t\t\t\"Enables \nvacuum to truncate empty pages at the end of this\n> > table\",\n> > +\t\t\t\t\t\t\t\t\nShareUpdateExclusiveLock, 0,\n> > +\t\t\t\t\t\t\t\t\noffsetof(StdRdOptions, vacuum_truncate),\n> > +\t\t\t\t\t\t\t\ttrue);\n> > +\n> > +// FIXME Do something with OIDS\n> > +\n> > +\treturn stdrd_relopt_spec_set;\n> > +}\n> > +\n> > +\n> > +static options_spec_set *heap_relopt_spec_set = NULL;\n> > +\n> > +options_spec_set *\n> > +get_heap_relopt_spec_set(void)\n> > +{\n> > +\tif (heap_relopt_spec_set)\n> > +\t\treturn heap_relopt_spec_set;\n> > +\theap_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_HEAP);\n> > +\treturn heap_relopt_spec_set;\n> > +}\n> > +\n> > +static options_spec_set *toast_relopt_spec_set = NULL;\n> > +\n> > +options_spec_set *\n> > +get_toast_relopt_spec_set(void)\n> > +{\n> > +\tif 
(toast_relopt_spec_set)\n> > +\t\treturn toast_relopt_spec_set;\n> > +\ttoast_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_TOAST);\n> > +\treturn toast_relopt_spec_set;\n> > +}\n> > +\n> > +static options_spec_set *partitioned_relopt_spec_set = NULL;\n> > \n> > -\tfor (i = 0; i < noptions; i++)\n> > -\t{\n> > -\t\tchar\t *text_str = VARDATA(optiondatums[i]);\n> > -\t\tint\t\t\ttext_len = VARSIZE(optiondatums[i]) \n- VARHDRSZ;\n> > -\t\tint\t\t\tj;\n> > -\n> > -\t\t/* Search for a match in reloptions */\n> > -\t\tfor (j = 0; j < numoptions; j++)\n> > -\t\t{\n> > -\t\t\tint\t\t\tkw_len = reloptions[j].gen-\n>namelen;\n> > -\n> > -\t\t\tif (text_len > kw_len && text_str[kw_len] == '=' &&\n> > -\t\t\t\tstrncmp(text_str, reloptions[j].gen->name, \nkw_len) == 0)\n> > -\t\t\t{\n> > -\t\t\t\tparse_one_reloption(&reloptions[j], \ntext_str, text_len,\n> > -\t\t\t\t\t\t\t\t\t\nvalidate);\n> > -\t\t\t\tbreak;\n> > -\t\t\t}\n> > -\t\t}\n> > -\n> > -\t\tif (j >= numoptions && validate)\n> > -\t\t{\n> > -\t\t\tchar\t *s;\n> > -\t\t\tchar\t *p;\n> > -\n> > -\t\t\ts = TextDatumGetCString(optiondatums[i]);\n> > -\t\t\tp = strchr(s, '=');\n> > -\t\t\tif (p)\n> > -\t\t\t\t*p = '\\0';\n> > -\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t errmsg(\"unrecognized parameter \n\\\"%s\\\"\", s)));\n> > -\t\t}\n> > -\t}\n> > -\n> > -\t/* It's worth avoiding memory leaks in this function */\n> > -\tpfree(optiondatums);\n> > +options_spec_set *\n> > +get_partitioned_relopt_spec_set(void)\n> > +{\n> > +\tif (partitioned_relopt_spec_set)\n> > +\t\treturn partitioned_relopt_spec_set;\n> > +\tpartitioned_relopt_spec_set = allocateOptionsSpecSet(\n> > +\t\t\t\t\tNULL, sizeof(StdRdOptions), 0);\n> > +\t/* No options for now, so spec set is empty */\n> > \n> > -\tif (((void *) array) != DatumGetPointer(options))\n> > -\t\tpfree(array);\n> > +\treturn partitioned_relopt_spec_set;\n> > \n> > }\n> > \n> > /*\n> > \n> > - * Interpret reloptions 
that are given in text-array format.\n> > - *\n> > - * options is a reloption text array as constructed by\n> > transformRelOptions. - * kind specifies the family of options to be\n> > processed.\n> > - *\n> > - * The return value is a relopt_value * array on which the options\n> > actually - * set in the options array are marked with isset=true. The\n> > length of this - * array is returned in *numrelopts. Options not set are\n> > also present in the - * array; this is so that the caller can easily\n> > locate the default values. - *\n> > - * If there are no options of the given kind, numrelopts is set to 0 and\n> > NULL - * is returned (unless options are illegally supplied despite none\n> > being - * defined, in which case an error occurs).\n> > - *\n> > - * Note: values of type int, bool and real are allocated as part of the\n> > - * returned array. Values of type string are allocated separately and\n> > must - * be freed by the caller.\n> > + * Parse local options, allocate a bytea struct that's of the specified\n> > + * 'base_size' plus any extra space that's needed for string variables,\n> > + * fill its option's fields located at the given offsets and return it.\n> > \n> > */\n> > \n> > -static relopt_value *\n> > -parseRelOptions(Datum options, bool validate, relopt_kind kind,\n> > -\t\t\t\tint *numrelopts)\n> > -{\n> > -\trelopt_value *reloptions = NULL;\n> > -\tint\t\t\tnumoptions = 0;\n> > -\tint\t\t\ti;\n> > -\tint\t\t\tj;\n> > -\n> > -\tif (need_initialization)\n> > -\t\tinitialize_reloptions();\n> > -\n> > -\t/* Build a list of expected options, based on kind */\n> > -\n> > -\tfor (i = 0; relOpts[i]; i++)\n> > -\t\tif (relOpts[i]->kinds & kind)\n> > -\t\t\tnumoptions++;\n> > -\n> > -\tif (numoptions > 0)\n> > -\t{\n> > -\t\treloptions = palloc(numoptions * sizeof(relopt_value));\n> > -\n> > -\t\tfor (i = 0, j = 0; relOpts[i]; i++)\n> > -\t\t{\n> > -\t\t\tif (relOpts[i]->kinds & kind)\n> > -\t\t\t{\n> > -\t\t\t\treloptions[j].gen = relOpts[i];\n> > 
-\t\t\t\treloptions[j].isset = false;\n> > -\t\t\t\tj++;\n> > -\t\t\t}\n> > -\t\t}\n> > -\t}\n> > -\n> > -\t/* Done if no options */\n> > -\tif (PointerIsValid(DatumGetPointer(options)))\n> > -\t\tparseRelOptionsInternal(options, validate, reloptions, \nnumoptions);\n> > -\n> > -\t*numrelopts = numoptions;\n> > -\treturn reloptions;\n> > -}\n> > -\n> > -/* Parse local unregistered options. */\n> > -static relopt_value *\n> > -parseLocalRelOptions(local_relopts *relopts, Datum options, bool\n> > validate)\n> > +void *\n> > +build_local_reloptions(local_relopts *relopts, Datum options, bool\n> > validate)> \n> > {\n> > \n> > -\tint\t\t\tnopts = list_length(relopts->options);\n> > -\trelopt_value *values = palloc(sizeof(*values) * nopts);\n> > +\tvoid\t *opts;\n> > \n> > \tListCell *lc;\n> > \n> > -\tint\t\t\ti = 0;\n> > -\n> > -\tforeach(lc, relopts->options)\n> > -\t{\n> > -\t\tlocal_relopt *opt = lfirst(lc);\n> > -\n> > -\t\tvalues[i].gen = opt->option;\n> > -\t\tvalues[i].isset = false;\n> > -\n> > -\t\ti++;\n> > -\t}\n> > -\n> > -\tif (options != (Datum) 0)\n> > -\t\tparseRelOptionsInternal(options, validate, values, nopts);\n> > +\topts = (void *) optionsTextArrayToBytea(relopts->spec_set, options,\n> > validate);\n> > \n> > -\treturn values;\n> > -}\n> > -\n> > -/*\n> > - * Subroutine for parseRelOptions, to parse and validate a single\n> > option's\n> > - * value\n> > - */\n> > -static void\n> > -parse_one_reloption(relopt_value *option, char *text_str, int text_len,\n> > -\t\t\t\t\tbool validate)\n> > -{\n> > -\tchar\t *value;\n> > -\tint\t\t\tvalue_len;\n> > -\tbool\t\tparsed;\n> > -\tbool\t\tnofree = false;\n> > -\n> > -\tif (option->isset && validate)\n> > -\t\tereport(ERROR,\n> > -\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t errmsg(\"parameter \\\"%s\\\" specified more than \nonce\",\n> > -\t\t\t\t\t\toption->gen->name)));\n> > -\n> > -\tvalue_len = text_len - option->gen->namelen - 1;\n> > -\tvalue = (char *) palloc(value_len + 
1);\n> > -\tmemcpy(value, text_str + option->gen->namelen + 1, value_len);\n> > -\tvalue[value_len] = '\\0';\n> > -\n> > -\tswitch (option->gen->type)\n> > -\t{\n> > -\t\tcase RELOPT_TYPE_BOOL:\n> > -\t\t\t{\n> > -\t\t\t\tparsed = parse_bool(value, &option-\n>values.bool_val);\n> > -\t\t\t\tif (validate && !parsed)\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for boolean option \\\"%s\\\": %s\",\n> > -\t\t\t\t\t\t\t\t\t\noption->gen->name, value)));\n> > -\t\t\t}\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_INT:\n> > -\t\t\t{\n> > -\t\t\t\trelopt_int *optint = (relopt_int *) option-\n>gen;\n> > -\n> > -\t\t\t\tparsed = parse_int(value, &option-\n>values.int_val, 0, NULL);\n> > -\t\t\t\tif (validate && !parsed)\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for integer option \\\"%s\\\": %s\",\n> > -\t\t\t\t\t\t\t\t\t\noption->gen->name, value)));\n> > -\t\t\t\tif (validate && (option->values.int_val < \noptint->min ||\n> > -\t\t\t\t\t\t\t\t option-\n>values.int_val > optint->max))\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"value %s \nout of bounds for option \\\"%s\\\"\",\n> > -\t\t\t\t\t\t\t\t\t\nvalue, option->gen->name),\n> > -\t\t\t\t\t\t\t errdetail(\"Valid \nvalues are between \\\"%d\\\" and \\\"%d\\\".\",\n> > -\t\t\t\t\t\t\t\t\t \noptint->min, optint->max)));\n> > -\t\t\t}\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_REAL:\n> > -\t\t\t{\n> > -\t\t\t\trelopt_real *optreal = (relopt_real *) \noption->gen;\n> > -\n> > -\t\t\t\tparsed = parse_real(value, &option-\n>values.real_val, 0, NULL);\n> > -\t\t\t\tif (validate && !parsed)\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for floating point 
option \\\"%s\\\": %s\",\n> > -\t\t\t\t\t\t\t\t\t\noption->gen->name, value)));\n> > -\t\t\t\tif (validate && (option->values.real_val < \noptreal->min ||\n> > -\t\t\t\t\t\t\t\t option-\n>values.real_val > optreal->max))\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"value %s \nout of bounds for option \\\"%s\\\"\",\n> > -\t\t\t\t\t\t\t\t\t\nvalue, option->gen->name),\n> > -\t\t\t\t\t\t\t errdetail(\"Valid \nvalues are between \\\"%f\\\" and \\\"%f\\\".\",\n> > -\t\t\t\t\t\t\t\t\t \noptreal->min, optreal->max)));\n> > -\t\t\t}\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_ENUM:\n> > -\t\t\t{\n> > -\t\t\t\trelopt_enum *optenum = (relopt_enum *) \noption->gen;\n> > -\t\t\t\trelopt_enum_elt_def *elt;\n> > -\n> > -\t\t\t\tparsed = false;\n> > -\t\t\t\tfor (elt = optenum->members; elt-\n>string_val; elt++)\n> > -\t\t\t\t{\n> > -\t\t\t\t\tif (pg_strcasecmp(value, elt-\n>string_val) == 0)\n> > -\t\t\t\t\t{\n> > -\t\t\t\t\t\toption->values.enum_val = \nelt->symbol_val;\n> > -\t\t\t\t\t\tparsed = true;\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\t}\n> > -\t\t\t\t}\n> > -\t\t\t\tif (validate && !parsed)\n> > -\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > -\t\t\t\t\t\t\t errmsg(\"invalid \nvalue for enum option \\\"%s\\\": %s\",\n> > -\t\t\t\t\t\t\t\t\t\noption->gen->name, value),\n> > -\t\t\t\t\t\t\t optenum->detailmsg \n?\n> > -\t\t\t\t\t\t\t \nerrdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n> > -\n> > -\t\t\t\t/*\n> > -\t\t\t\t * If value is not among the allowed string \nvalues, but we are\n> > -\t\t\t\t * not asked to validate, just use the \ndefault numeric value.\n> > -\t\t\t\t */\n> > -\t\t\t\tif (!parsed)\n> > -\t\t\t\t\toption->values.enum_val = optenum-\n>default_val;\n> > -\t\t\t}\n> > -\t\t\tbreak;\n> > -\t\tcase RELOPT_TYPE_STRING:\n> > -\t\t\t{\n> > -\t\t\t\trelopt_string *optstring = (relopt_string *) \noption->gen;\n> > -\n> > 
-\t\t\t\toption->values.string_val = value;\n> > -\t\t\t\tnofree = true;\n> > -\t\t\t\tif (validate && optstring->validate_cb)\n> > -\t\t\t\t\t(optstring->validate_cb) (value);\n> > -\t\t\t\tparsed = true;\n> > -\t\t\t}\n> > -\t\t\tbreak;\n> > -\t\tdefault:\n> > -\t\t\telog(ERROR, \"unsupported reloption type %d\", option-\n>gen->type);\n> > -\t\t\tparsed = true;\t\t/* quiet compiler */\n> > -\t\t\tbreak;\n> > -\t}\n> > +\tforeach(lc, relopts->validators)\n> > +\t\t((relopts_validator) lfirst(lc)) (opts, NULL, 0);\n> > +//\t\t((relopts_validator) lfirst(lc)) (opts, vals, noptions);\n> > +// FIXME solve problem with validation of separate option values;\n> > +\treturn opts;\n> > \n> > -\tif (parsed)\n> > -\t\toption->isset = true;\n> > -\tif (!nofree)\n> > -\t\tpfree(value);\n> > \n> > }\n> > \n> > /*\n> > \n> > - * Given the result from parseRelOptions, allocate a struct that's of the\n> > - * specified base size plus any extra space that's needed for string\n> > variables. - *\n> > - * \"base\" should be sizeof(struct) of the reloptions struct (StdRdOptions\n> > or - * equivalent).\n> > + * get_view_relopt_spec_set\n> > + *\t\tReturns an options catalog for view relation.\n> > \n> > */\n> > \n> > -static void *\n> > -allocateReloptStruct(Size base, relopt_value *options, int numoptions)\n> > -{\n> > -\tSize\t\tsize = base;\n> > -\tint\t\t\ti;\n> > -\n> > -\tfor (i = 0; i < numoptions; i++)\n> > -\t{\n> > -\t\trelopt_value *optval = &options[i];\n> > -\n> > -\t\tif (optval->gen->type == RELOPT_TYPE_STRING)\n> > -\t\t{\n> > -\t\t\trelopt_string *optstr = (relopt_string *) optval-\n>gen;\n> > -\n> > -\t\t\tif (optstr->fill_cb)\n> > -\t\t\t{\n> > -\t\t\t\tconst char *val = optval->isset ? optval-\n>values.string_val :\n> > -\t\t\t\toptstr->default_isnull ? 
NULL : optstr-\n>default_val;\n> > -\n> > -\t\t\t\tsize += optstr->fill_cb(val, NULL);\n> > -\t\t\t}\n> > -\t\t\telse\n> > -\t\t\t\tsize += GET_STRING_RELOPTION_LEN(*optval) + \n1;\n> > -\t\t}\n> > -\t}\n> > -\n> > -\treturn palloc0(size);\n> > -}\n> > +static options_spec_set *view_relopt_spec_set = NULL;\n> > \n> > -/*\n> > - * Given the result of parseRelOptions and a parsing table, fill in the\n> > - * struct (previously allocated with allocateReloptStruct) with the\n> > parsed\n> > - * values.\n> > - *\n> > - * rdopts is the pointer to the allocated struct to be filled.\n> > - * basesize is the sizeof(struct) that was passed to\n> > allocateReloptStruct.\n> > - * options, of length numoptions, is parseRelOptions' output.\n> > - * elems, of length numelems, is the table describing the allowed\n> > options.\n> > - * When validate is true, it is expected that all options appear in\n> > elems.\n> > - */\n> > -static void\n> > -fillRelOptions(void *rdopts, Size basesize,\n> > -\t\t\t relopt_value *options, int numoptions,\n> > -\t\t\t bool validate,\n> > -\t\t\t const relopt_parse_elt *elems, int numelems)\n> > +options_spec_set *\n> > +get_view_relopt_spec_set(void)\n> > \n> > {\n> > \n> > -\tint\t\t\ti;\n> > -\tint\t\t\toffset = basesize;\n> > +\tif (view_relopt_spec_set)\n> > +\t\treturn view_relopt_spec_set;\n> > \n> > -\tfor (i = 0; i < numoptions; i++)\n> > -\t{\n> > -\t\tint\t\t\tj;\n> > -\t\tbool\t\tfound = false;\n> > +\tview_relopt_spec_set = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t sizeof(ViewOptions), 2);\n> > \n> > -\t\tfor (j = 0; j < numelems; j++)\n> > -\t\t{\n> > -\t\t\tif (strcmp(options[i].gen->name, elems[j].optname) \n== 0)\n> > -\t\t\t{\n> > -\t\t\t\trelopt_string *optstring;\n> > -\t\t\t\tchar\t *itempos = ((char *) rdopts) + \nelems[j].offset;\n> > -\t\t\t\tchar\t *string_val;\n> > -\n> > -\t\t\t\tswitch (options[i].gen->type)\n> > -\t\t\t\t{\n> > -\t\t\t\t\tcase RELOPT_TYPE_BOOL:\n> > -\t\t\t\t\t\t*(bool *) itempos = 
\noptions[i].isset ?\n> > -\t\t\t\t\t\t\t\noptions[i].values.bool_val :\n> > -\t\t\t\t\t\t\t((relopt_bool *) \noptions[i].gen)->default_val;\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\tcase RELOPT_TYPE_INT:\n> > -\t\t\t\t\t\t*(int *) itempos = \noptions[i].isset ?\n> > -\t\t\t\t\t\t\t\noptions[i].values.int_val :\n> > -\t\t\t\t\t\t\t((relopt_int *) \noptions[i].gen)->default_val;\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\tcase RELOPT_TYPE_REAL:\n> > -\t\t\t\t\t\t*(double *) itempos = \noptions[i].isset ?\n> > -\t\t\t\t\t\t\t\noptions[i].values.real_val :\n> > -\t\t\t\t\t\t\t((relopt_real *) \noptions[i].gen)->default_val;\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\tcase RELOPT_TYPE_ENUM:\n> > -\t\t\t\t\t\t*(int *) itempos = \noptions[i].isset ?\n> > -\t\t\t\t\t\t\t\noptions[i].values.enum_val :\n> > -\t\t\t\t\t\t\t((relopt_enum *) \noptions[i].gen)->default_val;\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\tcase RELOPT_TYPE_STRING:\n> > -\t\t\t\t\t\toptstring = (relopt_string \n*) options[i].gen;\n> > -\t\t\t\t\t\tif (options[i].isset)\n> > -\t\t\t\t\t\t\tstring_val = \noptions[i].values.string_val;\n> > -\t\t\t\t\t\telse if (!optstring-\n>default_isnull)\n> > -\t\t\t\t\t\t\tstring_val = \noptstring->default_val;\n> > -\t\t\t\t\t\telse\n> > -\t\t\t\t\t\t\tstring_val = NULL;\n> > -\n> > -\t\t\t\t\t\tif (optstring->fill_cb)\n> > -\t\t\t\t\t\t{\n> > -\t\t\t\t\t\t\tSize\t\t\nsize =\n> > -\t\t\t\t\t\t\toptstring-\n>fill_cb(string_val,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t (char *) rdopts + offset);\n> > -\n> > -\t\t\t\t\t\t\tif (size)\n> > -\t\t\t\t\t\t\t{\n> > -\t\t\t\t\t\t\t\t*(int *) \nitempos = offset;\n> > -\t\t\t\t\t\t\t\toffset += \nsize;\n> > -\t\t\t\t\t\t\t}\n> > -\t\t\t\t\t\t\telse\n> > -\t\t\t\t\t\t\t\t*(int *) \nitempos = 0;\n> > -\t\t\t\t\t\t}\n> > -\t\t\t\t\t\telse if (string_val == NULL)\n> > -\t\t\t\t\t\t\t*(int *) itempos = \n0;\n> > -\t\t\t\t\t\telse\n> > -\t\t\t\t\t\t{\n> > -\t\t\t\t\t\t\tstrcpy((char *) \nrdopts + offset, string_val);\n> > -\t\t\t\t\t\t\t*(int *) 
itempos = \noffset;\n> > -\t\t\t\t\t\t\toffset += \nstrlen(string_val) + 1;\n> > -\t\t\t\t\t\t}\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t\tdefault:\n> > -\t\t\t\t\t\telog(ERROR, \"unsupported \nreloption type %d\",\n> > -\t\t\t\t\t\t\t options[i].gen-\n>type);\n> > -\t\t\t\t\t\tbreak;\n> > -\t\t\t\t}\n> > -\t\t\t\tfound = true;\n> > -\t\t\t\tbreak;\n> > -\t\t\t}\n> > -\t\t}\n> > -\t\tif (validate && !found)\n> > -\t\t\telog(ERROR, \"reloption \\\"%s\\\" not found in parse \ntable\",\n> > -\t\t\t\t options[i].gen->name);\n> > -\t}\n> > -\tSET_VARSIZE(rdopts, offset);\n> > -}\n> > +\toptionsSpecSetAddBool(view_relopt_spec_set, \"security_barrier\",\n> > +\t\t\t\t\t\t\t \"View acts as a row \nsecurity barrier\",\n> > +\t\t\t\t\t\t\t \nAccessExclusiveLock,\n> > +\t\t\t\t\t 0, offsetof(ViewOptions, \nsecurity_barrier), false);\n> > \n> > +\toptionsSpecSetAddEnum(view_relopt_spec_set, \"check_option\",\n> > +\t\t\t\t\t\t \"View has WITH CHECK \nOPTION defined (local or cascaded)\",\n> > +\t\t\t\t\t\t\t \nAccessExclusiveLock, 0,\n> > +\t\t\t\t\t\t\t \noffsetof(ViewOptions, check_option),\n> > +\t\t\t\t\t\t\t viewCheckOptValues,\n> > +\t\t\t\t\t\t\t \nVIEW_OPTION_CHECK_OPTION_NOT_SET,\n> > +\t\t\t\t\t\t\t gettext_noop(\"Valid \nvalues are \\\"local\\\" and \\\"cascaded\\\".\"));\n> > \n> > -/*\n> > - * Option parser for anything that uses StdRdOptions.\n> > - */\n> > -bytea *\n> > -default_reloptions(Datum reloptions, bool validate, relopt_kind kind)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(StdRdOptions, \nfillfactor)},\n> > -\t\t{\"autovacuum_enabled\", RELOPT_TYPE_BOOL,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, \nenabled)},\n> > -\t\t{\"autovacuum_vacuum_threshold\", RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_threshold)}, -\t\t{\"autovacuum_vacuum_insert_threshold\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, 
autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_ins_threshold)}, -\t\t{\"autovacuum_analyze_threshold\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > analyze_threshold)}, -\t\t{\"autovacuum_vacuum_cost_limit\", \nRELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_cost_limit)}, -\t\t{\"autovacuum_freeze_min_age\", \nRELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > freeze_min_age)}, -\t\t{\"autovacuum_freeze_max_age\", \nRELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > freeze_max_age)}, -\t\t{\"autovacuum_freeze_table_age\", \nRELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > freeze_table_age)}, -\t\t{\"autovacuum_multixact_freeze_min_age\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > multixact_freeze_min_age)}, -\t\t\n{\"autovacuum_multixact_freeze_max_age\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > multixact_freeze_max_age)}, -\t\t\n{\"autovacuum_multixact_freeze_table_age\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > multixact_freeze_table_age)}, -\t\t\n{\"log_autovacuum_min_duration\",\n> > RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > log_min_duration)}, -\t\t{\"toast_tuple_target\", RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, toast_tuple_target)},\n> > -\t\t{\"autovacuum_vacuum_cost_delay\", RELOPT_TYPE_REAL,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_cost_delay)}, -\t\t{\"autovacuum_vacuum_scale_factor\",\n> > RELOPT_TYPE_REAL,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_scale_factor)}, -\t\t\n{\"autovacuum_vacuum_insert_scale_factor\",\n> > 
RELOPT_TYPE_REAL,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > vacuum_ins_scale_factor)}, -\t\t\n{\"autovacuum_analyze_scale_factor\",\n> > RELOPT_TYPE_REAL,\n> > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > analyze_scale_factor)}, -\t\t{\"user_catalog_table\", \nRELOPT_TYPE_BOOL,\n> > -\t\toffsetof(StdRdOptions, user_catalog_table)},\n> > -\t\t{\"parallel_workers\", RELOPT_TYPE_INT,\n> > -\t\toffsetof(StdRdOptions, parallel_workers)},\n> > -\t\t{\"vacuum_index_cleanup\", RELOPT_TYPE_ENUM,\n> > -\t\toffsetof(StdRdOptions, vacuum_index_cleanup)},\n> > -\t\t{\"vacuum_truncate\", RELOPT_TYPE_BOOL,\n> > -\t\toffsetof(StdRdOptions, vacuum_truncate)}\n> > -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate, kind,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(StdRdOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > +\treturn view_relopt_spec_set;\n> > \n> > }\n> > \n> > /*\n> > \n> > - * build_reloptions\n> > - *\n> > - * Parses \"reloptions\" provided by the caller, returning them in a\n> > - * structure containing the parsed options. 
The parsing is done with
> > - * the help of a parsing table describing the allowed options, defined
> > - * by "relopt_elems" of length "num_relopt_elems".
> > - *
> > - * "validate" must be true if reloptions value is freshly built by
> > - * transformRelOptions(), as opposed to being read from the catalog, in which
> > - * case the values contained in it must already be valid.
> > - *
> > - * NULL is returned if the passed-in options did not match any of the options
> > - * in the parsing table, unless validate is true in which case an error would
> > - * be reported.
> > + * get_attribute_options_spec_set
> > + *		Returns an options spec set for heap attributes
> >  */
> > -void *
> > -build_reloptions(Datum reloptions, bool validate,
> > -				 relopt_kind kind,
> > -				 Size relopt_struct_size,
> > -				 const relopt_parse_elt *relopt_elems,
> > -				 int num_relopt_elems)
> > -{
> > -	int			numoptions;
> > -	relopt_value *options;
> > -	void	   *rdopts;
> > -
> > -	/* parse options specific to given relation option kind */
> > -	options = parseRelOptions(reloptions, validate, kind, &numoptions);
> > -	Assert(numoptions <= num_relopt_elems);
> > -
> > -	/* if none set, we're done */
> > -	if (numoptions == 0)
> > -	{
> > -		Assert(options == NULL);
> > -		return NULL;
> > -	}
> > -
> > -	/* allocate and fill the structure */
> > -	rdopts = allocateReloptStruct(relopt_struct_size, options, numoptions);
> > -	fillRelOptions(rdopts, relopt_struct_size, options, numoptions,
> > -				   validate, relopt_elems, num_relopt_elems);
> > +static options_spec_set *attribute_options_spec_set = NULL;
> > 
> > -	pfree(options);
> > -
> > -	return rdopts;
> > -}
> > -
> > -/*
> > - * Parse local options, allocate a bytea struct that's of the specified
> > - * 'base_size' plus any extra space that's needed for string variables,
> > - * fill its option's fields
located at the given offsets and return it.\n> > - */\n> > -void *\n> > -build_local_reloptions(local_relopts *relopts, Datum options, bool\n> > validate) +options_spec_set *\n> > +get_attribute_options_spec_set(void)\n> > \n> > {\n> > \n> > -\tint\t\t\tnoptions = list_length(relopts->options);\n> > -\trelopt_parse_elt *elems = palloc(sizeof(*elems) * noptions);\n> > -\trelopt_value *vals;\n> > -\tvoid\t *opts;\n> > -\tint\t\t\ti = 0;\n> > -\tListCell *lc;\n> > +\tif (attribute_options_spec_set)\n> > +\t\t\treturn attribute_options_spec_set;\n> > \n> > -\tforeach(lc, relopts->options)\n> > -\t{\n> > -\t\tlocal_relopt *opt = lfirst(lc);\n> > -\n> > -\t\telems[i].optname = opt->option->name;\n> > -\t\telems[i].opttype = opt->option->type;\n> > -\t\telems[i].offset = opt->offset;\n> > -\n> > -\t\ti++;\n> > -\t}\n> > +\tattribute_options_spec_set = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t sizeof(AttributeOpts), 2);\n> > \n> > -\tvals = parseLocalRelOptions(relopts, options, validate);\n> > -\topts = allocateReloptStruct(relopts->relopt_struct_size, vals,\n> > noptions);\n> > -\tfillRelOptions(opts, relopts->relopt_struct_size, vals, noptions,\n> > validate, -\t\t\t\t elems, noptions);\n> > +\toptionsSpecSetAddReal(attribute_options_spec_set, \"n_distinct\",\n> > +\t\t\t\t\t\t \"Sets the planner's \nestimate of the number of distinct values\n> > appearing in a column (excluding child relations).\", +\t\t\t\t\n\t\t \n> > ShareUpdateExclusiveLock,\n> > +\t\t\t 0, offsetof(AttributeOpts, n_distinct), 0, -1.0, \nDBL_MAX);\n> > \n> > -\tforeach(lc, relopts->validators)\n> > -\t\t((relopts_validator) lfirst(lc)) (opts, vals, noptions);\n> > -\n> > -\tif (elems)\n> > -\t\tpfree(elems);\n> > +\toptionsSpecSetAddReal(attribute_options_spec_set,\n> > +\t\t\t\t\t\t \"n_distinct_inherited\",\n> > +\t\t\t\t\t\t \"Sets the planner's \nestimate of the number of distinct values\n> > appearing in a column (including child relations).\", +\t\t\t\t\n\t\t \n> > 
ShareUpdateExclusiveLock,\n> > +\t 0, offsetof(AttributeOpts, n_distinct_inherited), 0, -1.0, DBL_MAX);\n> > \n> > -\treturn opts;\n> > +\treturn attribute_options_spec_set;\n> > \n> > }\n> > \n> > -/*\n> > - * Option parser for partitioned tables\n> > - */\n> > -bytea *\n> > -partitioned_table_reloptions(Datum reloptions, bool validate)\n> > -{\n> > -\t/*\n> > -\t * There are no options for partitioned tables yet, but this is able \nto\n> > do -\t * some validation.\n> > -\t */\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_PARTITIONED,\n> > -\t\t\t\t\t\t\t\t\t 0, \nNULL, 0);\n> > -}\n> > \n> > /*\n> > \n> > - * Option parser for views\n> > - */\n> > -bytea *\n> > -view_reloptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"security_barrier\", RELOPT_TYPE_BOOL,\n> > -\t\toffsetof(ViewOptions, security_barrier)},\n> > -\t\t{\"check_option\", RELOPT_TYPE_ENUM,\n> > -\t\toffsetof(ViewOptions, check_option)}\n> > -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_VIEW,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(ViewOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -}\n> > + * get_tablespace_options_spec_set\n> > + *\t\tReturns an options spec set for tablespaces\n> > +*/\n> > +static options_spec_set *tablespace_options_spec_set = NULL;\n> > \n> > -/*\n> > - * Parse options for heaps, views and toast tables.\n> > - */\n> > -bytea *\n> > -heap_reloptions(char relkind, Datum reloptions, bool validate)\n> > +options_spec_set *\n> > +get_tablespace_options_spec_set(void)\n> > \n> > {\n> > \n> > -\tStdRdOptions *rdopts;\n> > -\n> > -\tswitch (relkind)\n> > +\tif (!tablespace_options_spec_set)\n> > \n> > \t{\n> > \n> > -\t\tcase RELKIND_TOASTVALUE:\n> > -\t\t\trdopts = (StdRdOptions *)\n> > -\t\t\t\tdefault_reloptions(reloptions, validate, \nRELOPT_KIND_TOAST);\n> > -\t\t\tif (rdopts != 
NULL)\n> > -\t\t\t{\n> > -\t\t\t\t/* adjust default-only parameters for TOAST \nrelations */\n> > -\t\t\t\trdopts->fillfactor = 100;\n> > -\t\t\t\trdopts->autovacuum.analyze_threshold = -1;\n> > -\t\t\t\trdopts->autovacuum.analyze_scale_factor = \n-1;\n> > -\t\t\t}\n> > -\t\t\treturn (bytea *) rdopts;\n> > -\t\tcase RELKIND_RELATION:\n> > -\t\tcase RELKIND_MATVIEW:\n> > -\t\t\treturn default_reloptions(reloptions, validate, \nRELOPT_KIND_HEAP);\n> > -\t\tdefault:\n> > -\t\t\t/* other relkinds are not supported */\n> > -\t\t\treturn NULL;\n> > -\t}\n> > -}\n> > -\n> > -\n> > -/*\n> > - * Parse options for indexes.\n> > - *\n> > - *\tamoptions\tindex AM's option parser function\n> > - *\treloptions\toptions as text[] datum\n> > - *\tvalidate\terror flag\n> > - */\n> > -bytea *\n> > -index_reloptions(amoptions_function amoptions, Datum reloptions, bool\n> > validate) -{\n> > -\tAssert(amoptions != NULL);\n> > +\t\ttablespace_options_spec_set = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t sizeof(TableSpaceOpts), 4);\n> > \n> > -\t/* Assume function is strict */\n> > -\tif (!PointerIsValid(DatumGetPointer(reloptions)))\n> > -\t\treturn NULL;\n> > +\t\toptionsSpecSetAddReal(tablespace_options_spec_set,\n> > +\t\t\t\t\t\t\t\t \n\"random_page_cost\",\n> > +\t\t\t\t\t\t\t\t \"Sets the \nplanner's estimate of the cost of a nonsequentially\n> > fetched disk page\", +\t\t\t\t\t\t\t\t\n ShareUpdateExclusiveLock,\n> > +\t\t\t0, offsetof(TableSpaceOpts, random_page_cost), -1, \n0.0, DBL_MAX);\n> > \n> > -\treturn amoptions(reloptions, validate);\n> > -}\n> > +\t\toptionsSpecSetAddReal(tablespace_options_spec_set, \n\"seq_page_cost\",\n> > +\t\t\t\t\t\t\t\t \"Sets the \nplanner's estimate of the cost of a sequentially\n> > fetched disk page\", +\t\t\t\t\t\t\t\t\n ShareUpdateExclusiveLock,\n> > +\t\t\t 0, offsetof(TableSpaceOpts, seq_page_cost), -1, \n0.0, DBL_MAX);\n> > \n> > -/*\n> > - * Option parser for attribute reloptions\n> > - */\n> > -bytea *\n> > 
-attribute_reloptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"n_distinct\", RELOPT_TYPE_REAL, offsetof(AttributeOpts, \nn_distinct)},\n> > -\t\t{\"n_distinct_inherited\", RELOPT_TYPE_REAL, \noffsetof(AttributeOpts,\n> > n_distinct_inherited)} -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_ATTRIBUTE,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(AttributeOpts),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -}\n> > +\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> > +\t\t\t\t\t\t\t\t \n\"effective_io_concurrency\",\n> > +\t\t\t\t\t\t\t\t \"Number of \nsimultaneous requests that can be handled efficiently\n> > by the disk subsystem\", +\t\t\t\t\t\t\t\t\n ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t 0, offsetof(TableSpaceOpts, \neffective_io_concurrency),\n> > +#ifdef USE_PREFETCH\n> > +\t\t\t\t\t\t\t\t -1, 0, \nMAX_IO_CONCURRENCY\n> > +#else\n> > +\t\t\t\t\t\t\t\t 0, 0, 0\n> > +#endif\n> > +\t\t\t);\n> > \n> > -/*\n> > - * Option parser for tablespace reloptions\n> > - */\n> > -bytea *\n> > -tablespace_reloptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"random_page_cost\", RELOPT_TYPE_REAL, \noffsetof(TableSpaceOpts,\n> > random_page_cost)}, -\t\t{\"seq_page_cost\", RELOPT_TYPE_REAL,\n> > offsetof(TableSpaceOpts, seq_page_cost)}, -\t\t\n{\"effective_io_concurrency\",\n> > RELOPT_TYPE_INT, offsetof(TableSpaceOpts, effective_io_concurrency)},\n> > -\t\t{\"maintenance_io_concurrency\", RELOPT_TYPE_INT,\n> > offsetof(TableSpaceOpts, maintenance_io_concurrency)} -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_TABLESPACE,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(TableSpaceOpts),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > +\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> > +\t\t\t\t\t\t\t\t 
\n\"maintenance_io_concurrency\",\n> > +\t\t\t\t\t\t\t\t \"Number of \nsimultaneous requests that can be handled efficiently\n> > by the disk subsystem for maintenance work.\", +\t\t\t\t\n\t\t\t\t\n> > ShareUpdateExclusiveLock,\n> > +\t\t\t\t\t 0, offsetof(TableSpaceOpts, \nmaintenance_io_concurrency),\n> > +#ifdef USE_PREFETCH\n> > +\t\t\t\t\t\t\t\t -1, 0, \nMAX_IO_CONCURRENCY\n> > +#else\n> > +\t\t\t\t\t\t\t\t 0, 0, 0\n> > +#endif\n> > +\t\t\t);\n> > +\t}\n> > +\treturn tablespace_options_spec_set;\n> > \n> > }\n> > \n> > /*\n> > \n> > @@ -2099,33 +612,55 @@ tablespace_reloptions(Datum reloptions, bool\n> > validate)> \n> > * for a longer explanation of how this works.\n> > */\n> > \n> > LOCKMODE\n> > \n> > -AlterTableGetRelOptionsLockLevel(List *defList)\n> > +AlterTableGetRelOptionsLockLevel(Relation rel, List *defList)\n> > \n> > {\n> > \n> > \tLOCKMODE\tlockmode = NoLock;\n> > \tListCell *cell;\n> > \n> > +\toptions_spec_set *spec_set = NULL;\n> > \n> > \tif (defList == NIL)\n> > \t\n> > \t\treturn AccessExclusiveLock;\n> > \n> > -\tif (need_initialization)\n> > -\t\tinitialize_reloptions();\n> > +\tswitch (rel->rd_rel->relkind)\n> > +\t{\n> > +\t\tcase RELKIND_TOASTVALUE:\n> > +\t\t\tspec_set = get_toast_relopt_spec_set();\n> > +\t\t\tbreak;\n> > +\t\tcase RELKIND_RELATION:\n> > +\t\tcase RELKIND_MATVIEW:\n> > +\t\t\tspec_set = get_heap_relopt_spec_set();\n> > +\t\t\tbreak;\n> > +\t\tcase RELKIND_INDEX:\n> > +\t\t\tspec_set = rel->rd_indam->amreloptspecset();\n> > +\t\t\tbreak;\n> > +\t\tcase RELKIND_VIEW:\n> > +\t\t\tspec_set = get_view_relopt_spec_set();\n> > +\t\t\tbreak;\n> > +\t\tcase RELKIND_PARTITIONED_TABLE:\n> > +\t\t\tspec_set = get_partitioned_relopt_spec_set();\n> > +\t\t\tbreak;\n> > +\t\tdefault:\n> > +\t\t\tAssert(false);\t\t/* can't get here */\n> > +\t\t\tbreak;\n> > +\t}\n> > +\tAssert(spec_set);\t\t\t/* No spec set - no reloption \nchange. 
Should\n> > +\t\t\t\t\t\t\t\t * never get \nhere */\n> > \n> > \tforeach(cell, defList)\n> > \t{\n> > \t\n> > \t\tDefElem *def = (DefElem *) lfirst(cell);\n> > \n> > +\n> > \n> > \t\tint\t\t\ti;\n> > \n> > -\t\tfor (i = 0; relOpts[i]; i++)\n> > +\t\tfor (i = 0; i < spec_set->num; i++)\n> > \n> > \t\t{\n> > \n> > -\t\t\tif (strncmp(relOpts[i]->name,\n> > -\t\t\t\t\t\tdef->defname,\n> > -\t\t\t\t\t\trelOpts[i]->namelen + 1) == \n0)\n> > -\t\t\t{\n> > -\t\t\t\tif (lockmode < relOpts[i]->lockmode)\n> > -\t\t\t\t\tlockmode = relOpts[i]->lockmode;\n> > -\t\t\t}\n> > +\t\t\toption_spec_basic *gen = spec_set->definitions[i];\n> > +\n> > +\t\t\tif (pg_strcasecmp(gen->name,\n> > +\t\t\t\t\t\t\t def->defname) == 0)\n> > +\t\t\t\tif (lockmode < gen->lockmode)\n> > +\t\t\t\t\tlockmode = gen->lockmode;\n> > \n> > \t\t}\n> > \t\n> > \t}\n> > \n> > -\n> > \n> > \treturn lockmode;\n> > \n> > -}\n> > +}\n> > \\ No newline at end of file\n> > diff --git a/src/backend/access/gin/gininsert.c\n> > b/src/backend/access/gin/gininsert.c index 0e8672c..0cbffad 100644\n> > --- a/src/backend/access/gin/gininsert.c\n> > +++ b/src/backend/access/gin/gininsert.c\n> > @@ -512,6 +512,8 @@ gininsert(Relation index, Datum *values, bool *isnull,\n> > \n> > \toldCtx = MemoryContextSwitchTo(insertCtx);\n> > \n> > +// elog(WARNING, \"GinGetUseFastUpdate = %i\", GinGetUseFastUpdate(index));\n> > +\n> > \n> > \tif (GinGetUseFastUpdate(index))\n> > \t{\n> > \t\n> > \t\tGinTupleCollector collector;\n> > \n> > diff --git a/src/backend/access/gin/ginutil.c\n> > b/src/backend/access/gin/ginutil.c index 6d2d71b..d1fa3a0 100644\n> > --- a/src/backend/access/gin/ginutil.c\n> > +++ b/src/backend/access/gin/ginutil.c\n> > @@ -16,7 +16,7 @@\n> > \n> > #include \"access/gin_private.h\"\n> > #include \"access/ginxlog.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"access/xloginsert.h\"\n> > #include \"catalog/pg_collation.h\"\n> > #include 
\"catalog/pg_type.h\"\n> > \n> > @@ -28,6 +28,7 @@\n> > \n> > #include \"utils/builtins.h\"\n> > #include \"utils/index_selfuncs.h\"\n> > #include \"utils/typcache.h\"\n> > \n> > +#include \"utils/guc.h\"\n> > \n> > /*\n> > \n> > @@ -67,7 +68,6 @@ ginhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = ginvacuumcleanup;\n> > \tamroutine->amcanreturn = NULL;\n> > \tamroutine->amcostestimate = gincostestimate;\n> > \n> > -\tamroutine->amoptions = ginoptions;\n> > \n> > \tamroutine->amproperty = NULL;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = ginvalidate;\n> > \n> > @@ -82,6 +82,7 @@ ginhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = gingetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > @@ -604,6 +605,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber\n> > attnum,> \n> > \treturn entries;\n> > \n> > }\n> > \n> > +/*\n> > \n> > bytea *\n> > ginoptions(Datum reloptions, bool validate)\n> > {\n> > \n> > @@ -618,6 +620,7 @@ ginoptions(Datum reloptions, bool validate)\n> > \n> > \t\t\t\t\t\t\t\t\t \nsizeof(GinOptions),\n> > \t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > \n> > }\n> > \n> > +*/\n> > \n> > /*\n> > \n> > * Fetch index's statistical data into *stats\n> > \n> > @@ -705,3 +708,31 @@ ginUpdateStats(Relation index, const GinStatsData\n> > *stats, bool is_build)> \n> > \tEND_CRIT_SECTION();\n> > \n> > }\n> > \n> > +\n> > +static options_spec_set *gin_relopt_specset = NULL;\n> > +\n> > +void *\n> > +gingetreloptspecset(void)\n> > +{\n> > +\tif (gin_relopt_specset)\n> > +\t\treturn gin_relopt_specset;\n> > +\n> > +\tgin_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\tsizeof(GinOptions), 2);\n> > +\n> > +\toptionsSpecSetAddBool(gin_relopt_specset, \"fastupdate\",\n> > +\t\t\t\t\t\t\"Enables 
\\\"fast update\\\" \nfeature for this GIN index\",\n> > +\t\t\t\t\t\t\t \nAccessExclusiveLock,\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t offsetof(GinOptions, \nuseFastUpdate),\n> > +\t\t\t\t\t\t\t \nGIN_DEFAULT_USE_FASTUPDATE);\n> > +\n> > +\toptionsSpecSetAddInt(gin_relopt_specset, \"gin_pending_list_limit\",\n> > +\t\t \"Maximum size of the pending list for this GIN index, in \nkilobytes\",\n> > +\t\t\t\t\t\t\t AccessExclusiveLock,\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t offsetof(GinOptions, \npendingListCleanupSize),\n> > +\t\t\t\t\t\t\t -1, 64, \nMAX_KILOBYTES);\n> > +\n> > +\treturn gin_relopt_specset;\n> > +}\n> > diff --git a/src/backend/access/gist/gist.c\n> > b/src/backend/access/gist/gist.c index 0683f42..cbbc6a5 100644\n> > --- a/src/backend/access/gist/gist.c\n> > +++ b/src/backend/access/gist/gist.c\n> > @@ -88,7 +88,6 @@ gisthandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = gistvacuumcleanup;\n> > \tamroutine->amcanreturn = gistcanreturn;\n> > \tamroutine->amcostestimate = gistcostestimate;\n> > \n> > -\tamroutine->amoptions = gistoptions;\n> > \n> > \tamroutine->amproperty = gistproperty;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = gistvalidate;\n> > \n> > @@ -103,6 +102,7 @@ gisthandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = gistgetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > diff --git a/src/backend/access/gist/gistbuild.c\n> > b/src/backend/access/gist/gistbuild.c index baad28c..931d249 100644\n> > --- a/src/backend/access/gist/gistbuild.c\n> > +++ b/src/backend/access/gist/gistbuild.c\n> > @@ -215,6 +215,7 @@ gistbuild(Relation heap, Relation index, IndexInfo\n> > *indexInfo)> \n> > \t\t\tbuildstate.buildMode = GIST_BUFFERING_DISABLED;\n> > \t\t\n> > \t\telse\t\t\t\t\t/* must be \"auto\" 
*/
> > 			buildstate.buildMode = GIST_BUFFERING_AUTO;
> > 
> > +//elog(WARNING, "buffering_mode = %i", options->buffering_mode);
> > 
> > 	}
> > 	else
> > 	{
> > 
> > diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c
> > index 43ba03b..0391915 100644
> > --- a/src/backend/access/gist/gistutil.c
> > +++ b/src/backend/access/gist/gistutil.c
> > @@ -17,7 +17,7 @@
> >  #include "access/gist_private.h"
> >  #include "access/htup_details.h"
> > -#include "access/reloptions.h"
> > +#include "access/options.h"
> >  #include "catalog/pg_opclass.h"
> >  #include "storage/indexfsm.h"
> >  #include "storage/lmgr.h"
> > @@ -916,20 +916,6 @@ gistPageRecyclable(Page page)
> >  	return false;
> >  }
> > -bytea *
> > -gistoptions(Datum reloptions, bool validate)
> > -{
> > -	static const relopt_parse_elt tab[] = {
> > -		{"fillfactor", RELOPT_TYPE_INT, offsetof(GiSTOptions, fillfactor)},
> > -		{"buffering", RELOPT_TYPE_ENUM, offsetof(GiSTOptions, buffering_mode)}
> > -	};
> > -
> > -	return (bytea *) build_reloptions(reloptions, validate,
> > -									  RELOPT_KIND_GIST,
> > -									  sizeof(GiSTOptions),
> > -									  tab, lengthof(tab));
> > -}
> > -
> >  /*
> >  *	gistproperty() -- Check boolean properties of indexes.
> >  *
> > @@ -1064,3 +1050,42 @@ gistGetFakeLSN(Relation rel)
> >  		return GetFakeLSNForUnloggedRel();
> >  	}
> >  }
> > +
> > +/* values from GistOptBufferingMode */
> > +opt_enum_elt_def gistBufferingOptValues[] =
> > +{
> > +	{"auto", GIST_OPTION_BUFFERING_AUTO},
> > +	{"on", GIST_OPTION_BUFFERING_ON},
> > +	{"off", GIST_OPTION_BUFFERING_OFF},
> > +	{(const char *) NULL}		/* list terminator */
> > +};
> > +
> > +static options_spec_set *gist_relopt_specset = NULL;
> > +
> > +void *
> > 
+gistgetreloptspecset(void)\n> > +{\n> > +\tif (gist_relopt_specset)\n> > +\t\treturn gist_relopt_specset;\n> > +\n> > +\tgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t sizeof(GiSTOptions), 2);\n> > +\n> > +\toptionsSpecSetAddInt(gist_relopt_specset, \"fillfactor\",\n> > +\t\t\t\t\t\t\"Packs gist index pages only \nto this percentage\",\n> > +\t\t\t\t\t\t\t NoLock,\t\t/* \nNo ALTER, no lock */\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t offsetof(GiSTOptions, \nfillfactor),\n> > +\t\t\t\t\t\t\t \nGIST_DEFAULT_FILLFACTOR,\n> > +\t\t\t\t\t\t\t GIST_MIN_FILLFACTOR, \n100);\n> > +\n> > +\toptionsSpecSetAddEnum(gist_relopt_specset, \"buffering\",\n> > +\t\t\t\t\t\t \"Enables buffering build \nfor this GiST index\",\n> > +\t\t\t\t\t\t\t NoLock,\t\t/* \nNo ALTER, no lock */\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t \noffsetof(GiSTOptions, buffering_mode),\n> > +\t\t\t\t\t\t\t \ngistBufferingOptValues,\n> > +\t\t\t\t\t\t\t \nGIST_OPTION_BUFFERING_AUTO,\n> > +\t\t\t\t\t\t\t gettext_noop(\"Valid \nvalues are \\\"on\\\", \\\"off\\\", and\n> > \\\"auto\\\".\"));\n> > +\treturn gist_relopt_specset;\n> > +}\n> > diff --git a/src/backend/access/hash/hash.c\n> > b/src/backend/access/hash/hash.c index eb38104..8dc4ca7 100644\n> > --- a/src/backend/access/hash/hash.c\n> > +++ b/src/backend/access/hash/hash.c\n> > @@ -85,7 +85,6 @@ hashhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = hashvacuumcleanup;\n> > \tamroutine->amcanreturn = NULL;\n> > \tamroutine->amcostestimate = hashcostestimate;\n> > \n> > -\tamroutine->amoptions = hashoptions;\n> > \n> > \tamroutine->amproperty = NULL;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = hashvalidate;\n> > \n> > @@ -100,6 +99,7 @@ hashhandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = 
hashgetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > diff --git a/src/backend/access/hash/hashpage.c\n> > b/src/backend/access/hash/hashpage.c index 159646c..38f64ef 100644\n> > --- a/src/backend/access/hash/hashpage.c\n> > +++ b/src/backend/access/hash/hashpage.c\n> > @@ -359,6 +359,8 @@ _hash_init(Relation rel, double num_tuples, ForkNumber\n> > forkNum)> \n> > \tdata_width = sizeof(uint32);\n> > \titem_width = MAXALIGN(sizeof(IndexTupleData)) + MAXALIGN(data_width) \n+\n> > \t\n> > \t\tsizeof(ItemIdData);\t\t/* include the line pointer */\n> > \n> > +//elog(WARNING, \"fillfactor = %i\", HashGetFillFactor(rel));\n> > +\n> > \n> > \tffactor = HashGetTargetPageUsage(rel) / item_width;\n> > \t/* keep to a sane range */\n> > \tif (ffactor < 10)\n> > \n> > diff --git a/src/backend/access/hash/hashutil.c\n> > b/src/backend/access/hash/hashutil.c index 5198728..826beab 100644\n> > --- a/src/backend/access/hash/hashutil.c\n> > +++ b/src/backend/access/hash/hashutil.c\n> > @@ -15,7 +15,7 @@\n> > \n> > #include \"postgres.h\"\n> > \n> > #include \"access/hash.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"access/relscan.h\"\n> > #include \"port/pg_bitutils.h\"\n> > #include \"storage/buf_internals.h\"\n> > \n> > @@ -272,19 +272,6 @@ _hash_checkpage(Relation rel, Buffer buf, int flags)\n> > \n> > \t}\n> > \n> > }\n> > \n> > -bytea *\n> > -hashoptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(HashOptions, \nfillfactor)},\n> > -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_HASH,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(HashOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -}\n> > -\n> > \n> > /*\n> > \n> > * _hash_get_indextuple_hashkey - get the hash index tuple's hash key\n> > value\n> > */\n> > \n> > @@ 
-620,3 +607,24 @@ _hash_kill_items(IndexScanDesc scan)\n> > \n> > \telse\n> > \t\n> > \t\t_hash_relbuf(rel, buf);\n> > \n> > }\n> > \n> > +\n> > +static options_spec_set *hash_relopt_specset = NULL;\n> > +\n> > +void *\n> > +hashgetreloptspecset(void)\n> > +{\n> > +\tif (hash_relopt_specset)\n> > +\t\treturn hash_relopt_specset;\n> > +\n> > +\thash_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t sizeof(HashOptions), 1);\n> > +\toptionsSpecSetAddInt(hash_relopt_specset, \"fillfactor\",\n> > +\t\t\t\t\t\t\"Packs hash index pages only \nto this percentage\",\n> > +\t\t\t\t\t\t\t NoLock,\t\t/* \nNo ALTER -- no lock */\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t offsetof(HashOptions, \nfillfactor),\n> > +\t\t\t\t\t\t\t \nHASH_DEFAULT_FILLFACTOR,\n> > +\t\t\t\t\t\t\t HASH_MIN_FILLFACTOR, \n100);\n> > +\n> > +\treturn hash_relopt_specset;\n> > +}\n> > diff --git a/src/backend/access/nbtree/nbtinsert.c\n> > b/src/backend/access/nbtree/nbtinsert.c index 7355e1d..f7b117e 100644\n> > --- a/src/backend/access/nbtree/nbtinsert.c\n> > +++ b/src/backend/access/nbtree/nbtinsert.c\n> > @@ -2745,6 +2745,8 @@ _bt_delete_or_dedup_one_page(Relation rel, Relation\n> > heapRel,> \n> > \t\t_bt_bottomupdel_pass(rel, buffer, heapRel, insertstate-\n>itemsz))\n> > \t\treturn;\n> > \n> > +// elog(WARNING, \"Deduplicate_items = %i\", BTGetDeduplicateItems(rel));\n> > +\n> > \n> > \t/* Perform deduplication pass (when enabled and index-is-\nallequalimage)\n> > \t*/\n> > \tif (BTGetDeduplicateItems(rel) && itup_key->allequalimage)\n> > \t\n> > \t\t_bt_dedup_pass(rel, buffer, heapRel, insertstate->itup,\n> > \n> > diff --git a/src/backend/access/nbtree/nbtree.c\n> > b/src/backend/access/nbtree/nbtree.c index 40ad095..f171c54 100644\n> > --- a/src/backend/access/nbtree/nbtree.c\n> > +++ b/src/backend/access/nbtree/nbtree.c\n> > @@ -22,6 +22,7 @@\n> > \n> > #include \"access/nbtxlog.h\"\n> > #include \"access/relscan.h\"\n> > #include \"access/xlog.h\"\n> > \n> > 
+#include \"access/options.h\"\n> > \n> > #include \"commands/progress.h\"\n> > #include \"commands/vacuum.h\"\n> > #include \"miscadmin.h\"\n> > \n> > @@ -124,7 +125,6 @@ bthandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = btvacuumcleanup;\n> > \tamroutine->amcanreturn = btcanreturn;\n> > \tamroutine->amcostestimate = btcostestimate;\n> > \n> > -\tamroutine->amoptions = btoptions;\n> > \n> > \tamroutine->amproperty = btproperty;\n> > \tamroutine->ambuildphasename = btbuildphasename;\n> > \tamroutine->amvalidate = btvalidate;\n> > \n> > @@ -139,6 +139,7 @@ bthandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = btestimateparallelscan;\n> > \tamroutine->aminitparallelscan = btinitparallelscan;\n> > \tamroutine->amparallelrescan = btparallelrescan;\n> > \n> > +\tamroutine->amreloptspecset = btgetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > @@ -1418,3 +1419,37 @@ btcanreturn(Relation index, int attno)\n> > \n> > {\n> > \n> > \treturn true;\n> > \n> > }\n> > \n> > +\n> > +static options_spec_set *bt_relopt_specset = NULL;\n> > +\n> > +void *\n> > +btgetreloptspecset(void)\n> > +{\n> > +\tif (bt_relopt_specset)\n> > +\t\treturn bt_relopt_specset;\n> > +\n> > +\tbt_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t sizeof(BTOptions), 3);\n> > +\n> > +\toptionsSpecSetAddInt(\n> > +\t\tbt_relopt_specset, \"fillfactor\",\n> > +\t\t\"Packs btree index pages only to this percentage\",\n> > +\t\tShareUpdateExclusiveLock, /* since it applies only to later \ninserts */\n> > +\t\t0, offsetof(BTOptions, fillfactor),\n> > +\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> > +\t);\n> > +\toptionsSpecSetAddReal(\n> > +\t\tbt_relopt_specset, \"vacuum_cleanup_index_scale_factor\",\n> > +\t\t\"Number of tuple inserts prior to index cleanup as a fraction \nof\n> > reltuples\", +\t\tShareUpdateExclusiveLock,\n> > +\t\t0, offsetof(BTOptions,vacuum_cleanup_index_scale_factor),\n> > 
+\t\t-1, 0.0, 1e10\n> > +\t);\n> > +\toptionsSpecSetAddBool(\n> > +\t\tbt_relopt_specset, \"deduplicate_items\",\n> > +\t\t\"Enables \\\"deduplicate items\\\" feature for this btree index\",\n> > +\t\tShareUpdateExclusiveLock, /* since it applies only to later \ninserts */\n> > +\t\t0, offsetof(BTOptions,deduplicate_items), true\n> > +\t);\n> > +\treturn bt_relopt_specset;\n> > +}\n> > diff --git a/src/backend/access/nbtree/nbtutils.c\n> > b/src/backend/access/nbtree/nbtutils.c index c72b456..2588a30 100644\n> > --- a/src/backend/access/nbtree/nbtutils.c\n> > +++ b/src/backend/access/nbtree/nbtutils.c\n> > @@ -18,7 +18,7 @@\n> > \n> > #include <time.h>\n> > \n> > #include \"access/nbtree.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"storage/lock.h\"\n> > \n> > #include \"access/relscan.h\"\n> > #include \"catalog/catalog.h\"\n> > #include \"commands/progress.h\"\n> > \n> > @@ -2100,25 +2100,6 @@ BTreeShmemInit(void)\n> > \n> > \t\tAssert(found);\n> > \n> > }\n> > \n> > -bytea *\n> > -btoptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(BTOptions, \nfillfactor)},\n> > -\t\t{\"vacuum_cleanup_index_scale_factor\", RELOPT_TYPE_REAL,\n> > -\t\toffsetof(BTOptions, vacuum_cleanup_index_scale_factor)},\n> > -\t\t{\"deduplicate_items\", RELOPT_TYPE_BOOL,\n> > -\t\toffsetof(BTOptions, deduplicate_items)}\n> > -\n> > -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_BTREE,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(BTOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -\n> > -}\n> > -\n> > \n> > /*\n> > \n> > *\tbtproperty() -- Check boolean properties of indexes.\n> > *\n> > \n> > diff --git a/src/backend/access/spgist/spgutils.c\n> > b/src/backend/access/spgist/spgutils.c index 03a9cd3..14429ad 100644\n> > --- a/src/backend/access/spgist/spgutils.c\n> > +++ 
b/src/backend/access/spgist/spgutils.c\n> > @@ -17,7 +17,7 @@\n> > \n> > #include \"access/amvalidate.h\"\n> > #include \"access/htup_details.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"access/spgist_private.h\"\n> > #include \"access/toast_compression.h\"\n> > #include \"access/transam.h\"\n> > \n> > @@ -72,7 +72,6 @@ spghandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = spgvacuumcleanup;\n> > \tamroutine->amcanreturn = spgcanreturn;\n> > \tamroutine->amcostestimate = spgcostestimate;\n> > \n> > -\tamroutine->amoptions = spgoptions;\n> > \n> > \tamroutine->amproperty = spgproperty;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = spgvalidate;\n> > \n> > @@ -87,6 +86,7 @@ spghandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = spggetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > @@ -550,6 +550,7 @@ SpGistGetBuffer(Relation index, int flags, int\n> > needSpace, bool *isNew)> \n> > \t * related to the ones already on it. 
But fillfactor mustn't cause \nan\n> > \t * error for requests that would otherwise be legal.\n> > \t */\n> > \n> > +//elog(WARNING, \"fillfactor = %i\", SpGistGetFillFactor(index));\n> > \n> > \tneedSpace += SpGistGetTargetPageFreeSpace(index);\n> > \tneedSpace = Min(needSpace, SPGIST_PAGE_CAPACITY);\n> > \n> > @@ -721,23 +722,6 @@ SpGistInitMetapage(Page page)\n> > \n> > }\n> > \n> > /*\n> > \n> > - * reloptions processing for SPGiST\n> > - */\n> > -bytea *\n> > -spgoptions(Datum reloptions, bool validate)\n> > -{\n> > -\tstatic const relopt_parse_elt tab[] = {\n> > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(SpGistOptions, \nfillfactor)},\n> > -\t};\n> > -\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \nRELOPT_KIND_SPGIST,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(SpGistOptions),\n> > -\t\t\t\t\t\t\t\t\t \ntab, lengthof(tab));\n> > -\n> > -}\n> > -\n> > -/*\n> > \n> > * Get the space needed to store a non-null datum of the indicated type\n> > * in an inner tuple (that is, as a prefix or node label).\n> > * Note the result is already rounded up to a MAXALIGN boundary.\n> > \n> > @@ -1336,3 +1320,25 @@ spgproperty(Oid index_oid, int attno,\n> > \n> > \treturn true;\n> > \n> > }\n> > \n> > +\n> > +static options_spec_set *spgist_relopt_specset = NULL;\n> > +\n> > +void *\n> > +spggetreloptspecset(void)\n> > +{\n> > +\tif (!spgist_relopt_specset)\n> > +\t{\n> > +\t\tspgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\tsizeof(SpGistOptions), 1);\n> > +\n> > +\t\toptionsSpecSetAddInt(spgist_relopt_specset, \"fillfactor\",\n> > +\t\t\t\t\t\t \"Packs spgist index pages \nonly to this percentage\",\n> > +\t\t\t\t\t\t\t\t \nShareUpdateExclusiveLock,\t\t/* since it applies only\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t * to later inserts */\n> > +\t\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t\t \noffsetof(SpGistOptions, fillfactor),\n> > +\t\t\t\t\t\t\t\t \nSPGIST_DEFAULT_FILLFACTOR,\n> > +\t\t\t\t\t\t\t\t 
\nSPGIST_MIN_FILLFACTOR, 100);\n> > +\t}\n> > +\treturn spgist_relopt_specset;\n> > +}\n> > diff --git a/src/backend/commands/createas.c\n> > b/src/backend/commands/createas.c index 0982851..4f3dbb8 100644\n> > --- a/src/backend/commands/createas.c\n> > +++ b/src/backend/commands/createas.c\n> > @@ -90,6 +90,7 @@ create_ctas_internal(List *attrList, IntoClause *into)\n> > \n> > \tDatum\t\ttoast_options;\n> > \tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > \tObjectAddress intoRelationAddr;\n> > \n> > +\tList\t *toastDefList;\n> > \n> > \t/* This code supports both CREATE TABLE AS and CREATE MATERIALIZED \nVIEW\n> > \t*/\n> > \tis_matview = (into->viewQuery != NULL);\n> > \n> > @@ -124,14 +125,12 @@ create_ctas_internal(List *attrList, IntoClause\n> > *into)> \n> > \tCommandCounterIncrement();\n> > \t\n> > \t/* parse and validate reloptions for the toast table */\n> > \n> > -\ttoast_options = transformRelOptions((Datum) 0,\n> > -\t\t\t\t\t\t\t\t\t\t\ncreate->options,\n> > -\t\t\t\t\t\t\t\t\t\t\n\"toast\",\n> > -\t\t\t\t\t\t\t\t\t\t\nvalidnsps,\n> > -\t\t\t\t\t\t\t\t\t\t\ntrue, false);\n> > \n> > -\t(void) heap_reloptions(RELKIND_TOASTVALUE, toast_options, true);\n> > +\toptionsDefListValdateNamespaces(create->options, validnsps);\n> > +\ttoastDefList = optionsDefListFilterNamespaces(create->options, \n\"toast\");\n> > \n> > +\ttoast_options = transformOptions(get_toast_relopt_spec_set(), (Datum) \n0,\n> > +\t\t\t\t\t\t\t\t\t \ntoastDefList, 0);\n> > \n> > \tNewRelationCreateToastTable(intoRelationAddr.objectId, \ntoast_options);\n> > \t\n> > \t/* Create the \"view\" part of a materialized view. 
*/\n> > \n> > diff --git a/src/backend/commands/foreigncmds.c\n> > b/src/backend/commands/foreigncmds.c index 146fa57..758ca34 100644\n> > --- a/src/backend/commands/foreigncmds.c\n> > +++ b/src/backend/commands/foreigncmds.c\n> > @@ -112,7 +112,7 @@ transformGenericOptions(Oid catalogId,\n> > \n> > \t\t\t\t\t\tList *options,\n> > \t\t\t\t\t\tOid fdwvalidator)\n> > \n> > {\n> > \n> > -\tList\t *resultOptions = untransformRelOptions(oldOptions);\n> > +\tList\t *resultOptions = optionsTextArrayToDefList(oldOptions);\n> > \n> > \tListCell *optcell;\n> > \tDatum\t\tresult;\n> > \n> > diff --git a/src/backend/commands/indexcmds.c\n> > b/src/backend/commands/indexcmds.c index c14ca27..96d465a 100644\n> > --- a/src/backend/commands/indexcmds.c\n> > +++ b/src/backend/commands/indexcmds.c\n> > @@ -19,6 +19,7 @@\n> > \n> > #include \"access/heapam.h\"\n> > #include \"access/htup_details.h\"\n> > #include \"access/reloptions.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > #include \"access/sysattr.h\"\n> > #include \"access/tableam.h\"\n> > #include \"access/xact.h\"\n> > \n> > @@ -531,7 +532,7 @@ DefineIndex(Oid relationId,\n> > \n> > \tForm_pg_am\taccessMethodForm;\n> > \tIndexAmRoutine *amRoutine;\n> > \tbool\t\tamcanorder;\n> > \n> > -\tamoptions_function amoptions;\n> > +\tamreloptspecset_function amreloptspecsetfn;\n> > \n> > \tbool\t\tpartitioned;\n> > \tbool\t\tsafe_index;\n> > \tDatum\t\treloptions;\n> > \n> > @@ -837,7 +838,7 @@ DefineIndex(Oid relationId,\n> > \n> > \t\t\t\t\t\taccessMethodName)));\n> > \t\n> > \tamcanorder = amRoutine->amcanorder;\n> > \n> > -\tamoptions = amRoutine->amoptions;\n> > +\tamreloptspecsetfn = amRoutine->amreloptspecset;\n> > \n> > \tpfree(amRoutine);\n> > \tReleaseSysCache(tuple);\n> > \n> > @@ -851,10 +852,19 @@ DefineIndex(Oid relationId,\n> > \n> > \t/*\n> > \t\n> > \t * Parse AM-specific options, convert to text array form, validate.\n> > \t */\n> > \n> > -\treloptions = transformRelOptions((Datum) 0, stmt->options,\n> > 
-\t\t\t\t\t\t\t\t\t \nNULL, NULL, false, false);\n> > \n> > -\t(void) index_reloptions(amoptions, reloptions, true);\n> > +\tif (amreloptspecsetfn)\n> > +\t{\n> > +\t\treloptions = transformOptions(amreloptspecsetfn(),\n> > +\t\t\t\t\t\t\t\t\t \n(Datum) 0, stmt->options, 0);\n> > +\t}\n> > +\telse\n> > +\t{\n> > +\t\tereport(ERROR,\n> > +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > +\t\t\t\t errmsg(\"access method %s does not support \noptions\",\n> > +\t\t\t\t\t\taccessMethodName)));\n> > +\t}\n> > \n> > \t/*\n> > \t\n> > \t * Prepare arguments for index_create, primarily an IndexInfo \nstructure.\n> > \n> > @@ -1986,8 +1996,7 @@ ComputeIndexAttrs(IndexInfo *indexInfo,\n> > \n> > \t\t\t\t\tpalloc0(sizeof(Datum) * indexInfo-\n>ii_NumIndexAttrs);\n> > \t\t\t\n> > \t\t\tindexInfo->ii_OpclassOptions[attn] =\n> > \n> > -\t\t\t\ttransformRelOptions((Datum) 0, attribute-\n>opclassopts,\n> > -\t\t\t\t\t\t\t\t\t\nNULL, NULL, false, false);\n> > +\t\t\t\toptionsDefListToTextArray(attribute-\n>opclassopts);\n> > \n> > \t\t}\n> > \t\t\n> > \t\tattn++;\n> > \n> > diff --git a/src/backend/commands/tablecmds.c\n> > b/src/backend/commands/tablecmds.c index 1c2ebe1..7f3004f 100644\n> > --- a/src/backend/commands/tablecmds.c\n> > +++ b/src/backend/commands/tablecmds.c\n> > @@ -20,6 +20,7 @@\n> > \n> > #include \"access/heapam_xlog.h\"\n> > #include \"access/multixact.h\"\n> > #include \"access/reloptions.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > #include \"access/relscan.h\"\n> > #include \"access/sysattr.h\"\n> > #include \"access/tableam.h\"\n> > \n> > @@ -641,7 +642,6 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\n> > ownerId,> \n> > \tListCell *listptr;\n> > \tAttrNumber\tattnum;\n> > \tbool\t\tpartitioned;\n> > \n> > -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > \n> > \tOid\t\t\tofTypeId;\n> > \tObjectAddress address;\n> > \tLOCKMODE\tparentLockmode;\n> > \n> > @@ -789,19 +789,37 @@ DefineRelation(CreateStmt *stmt, char relkind, 
Oid\n> > ownerId,> \n> > \t/*\n> > \t\n> > \t * Parse and validate reloptions, if any.\n> > \t */\n> > \n> > -\treloptions = transformRelOptions((Datum) 0, stmt->options, NULL,\n> > validnsps, -\t\t\t\t\t\t\t\t\t\n true, false);\n> > \n> > \tswitch (relkind)\n> > \t{\n> > \t\n> > \t\tcase RELKIND_VIEW:\n> > -\t\t\t(void) view_reloptions(reloptions, true);\n> > +\t\t\treloptions = transformOptions(\n> > +\t\t\t\t\t\t\t\t\t \nget_view_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t \n(Datum) 0, stmt->options, 0);\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > -\t\t\t(void) partitioned_table_reloptions(reloptions, \ntrue);\n> > +\t\t{\n> > +\t\t\t/* If it is not all listed above, then it if heap */\n> > +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> > +\t\t\tList\t *heapDefList;\n> > +\n> > +\t\t\toptionsDefListValdateNamespaces(stmt->options, \nnamespaces);\n> > +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt-\n>options, NULL);\n> > +\t\t\treloptions = \ntransformOptions(get_partitioned_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t \n(Datum) 0, heapDefList, 0);\n> > \n> > \t\t\tbreak;\n> > \n> > +\t\t}\n> > \n> > \t\tdefault:\n> > -\t\t\t(void) heap_reloptions(relkind, reloptions, true);\n> > +\t\t{\n> > +\t\t\t/* If it is not all listed above, then it if heap */\n> > +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> > +\t\t\tList\t *heapDefList;\n> > +\n> > +\t\t\toptionsDefListValdateNamespaces(stmt->options, \nnamespaces);\n> > +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt-\n>options, NULL);\n> > +\t\t\treloptions = \ntransformOptions(get_heap_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t \n(Datum) 0, heapDefList, 0);\n> > +\t\t}\n> > \n> > \t}\n> > \t\n> > \tif (stmt->ofTypename)\n> > \n> > @@ -4022,7 +4040,7 @@ void\n> > \n> > AlterTableInternal(Oid relid, List *cmds, bool recurse)\n> > {\n> > \n> > \tRelation\trel;\n> > \n> > -\tLOCKMODE\tlockmode = AlterTableGetLockLevel(cmds);\n> > 
+\tLOCKMODE\tlockmode = AlterTableGetLockLevel(relid, cmds);\n> > \n> > \trel = relation_open(relid, lockmode);\n> > \n> > @@ -4064,7 +4082,7 @@ AlterTableInternal(Oid relid, List *cmds, bool\n> > recurse)> \n> > * otherwise we might end up with an inconsistent dump that can't\n> > restore.\n> > */\n> > \n> > LOCKMODE\n> > \n> > -AlterTableGetLockLevel(List *cmds)\n> > +AlterTableGetLockLevel(Oid relid, List *cmds)\n> > \n> > {\n> > \n> > \t/*\n> > \t\n> > \t * This only works if we read catalog tables using MVCC snapshots.\n> > \n> > @@ -4285,9 +4303,13 @@ AlterTableGetLockLevel(List *cmds)\n> > \n> > \t\t\t\t\t\t\t\t\t * \ngetTables() */\n> > \t\t\t\n> > \t\t\tcase AT_ResetRelOptions:\t/* Uses MVCC in \ngetIndexes() and\n> > \t\t\t\n> > \t\t\t\t\t\t\t\t\t\t\n * getTables() */\n> > \n> > -\t\t\t\tcmd_lockmode = \nAlterTableGetRelOptionsLockLevel((List *) cmd->def);\n> > -\t\t\t\tbreak;\n> > -\n> > +\t\t\t\t{\n> > +\t\t\t\t\tRelation rel = relation_open(relid, \nNoLock); // FIXME I am not sure\n> > how wise it is +\t\t\t\t\tcmd_lockmode = \nAlterTableGetRelOptionsLockLevel(rel,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\tcastNode(List, cmd->def));\n> > +\t\t\t\t\trelation_close(rel,NoLock);\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > \n> > \t\t\tcase AT_AttachPartition:\n> > \t\t\t\tcmd_lockmode = ShareUpdateExclusiveLock;\n> > \t\t\t\tbreak;\n> > \n> > @@ -8062,11 +8084,11 @@ ATExecSetOptions(Relation rel, const char\n> > *colName, Node *options,> \n> > \t/* Generate new proposed attoptions (text array) */\n> > \tdatum = SysCacheGetAttr(ATTNAME, tuple, \nAnum_pg_attribute_attoptions,\n> > \t\n> > \t\t\t\t\t\t\t&isnull);\n> > \n> > -\tnewOptions = transformRelOptions(isnull ? 
(Datum) 0 : datum,\n> > -\t\t\t\t\t\t\t\t\t \ncastNode(List, options), NULL, NULL,\n> > -\t\t\t\t\t\t\t\t\t \nfalse, isReset);\n> > -\t/* Validate new options */\n> > -\t(void) attribute_reloptions(newOptions, true);\n> > +\n> > +\tnewOptions = transformOptions(get_attribute_options_spec_set(),\n> > +\t\t\t\t\t\t\t\t isnull ? \n(Datum) 0 : datum,\n> > +\t\t\t\t\t castNode(List, options), \nOPTIONS_PARSE_MODE_FOR_ALTER |\n> > +\t\t\t\t\t\t\t (isReset ? \nOPTIONS_PARSE_MODE_FOR_RESET : 0));\n> > \n> > \t/* Build new tuple. */\n> > \tmemset(repl_null, false, sizeof(repl_null));\n> > \n> > @@ -13704,7 +13726,8 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > AlterTableType operation,> \n> > \tDatum\t\trepl_val[Natts_pg_class];\n> > \tbool\t\trepl_null[Natts_pg_class];\n> > \tbool\t\trepl_repl[Natts_pg_class];\n> > \n> > -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > +\tList\t *toastDefList;\n> > +\toptions_parse_mode parse_mode;\n> > \n> > \tif (defList == NIL && operation != AT_ReplaceRelOptions)\n> > \t\n> > \t\treturn;\t\t\t\t\t/* nothing to do \n*/\n> > \n> > @@ -13734,27 +13757,68 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > AlterTableType operation,> \n> > \t}\n> > \t\n> > \t/* Generate new proposed reloptions (text array) */\n> > \n> > -\tnewOptions = transformRelOptions(isnull ? 
(Datum) 0 : datum,\n> > -\t\t\t\t\t\t\t\t\t \ndefList, NULL, validnsps, false,\n> > -\t\t\t\t\t\t\t\t\t \noperation == AT_ResetRelOptions);\n> > \n> > \t/* Validate */\n> > \n> > +\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > +\tif (operation == AT_ResetRelOptions)\n> > +\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > +\n> > \n> > \tswitch (rel->rd_rel->relkind)\n> > \t{\n> > \t\n> > \t\tcase RELKIND_RELATION:\n> > -\t\tcase RELKIND_TOASTVALUE:\n> > +\t\tcase RELKIND_TOASTVALUE: // FIXME why it is here???\n> > \n> > \t\tcase RELKIND_MATVIEW:\n> > -\t\t\t(void) heap_reloptions(rel->rd_rel->relkind, \nnewOptions, true);\n> > +\t\t\t{\n> > +\t\t\t\tchar\t *namespaces[] = \nHEAP_RELOPT_NAMESPACES;\n> > +\t\t\t\tList\t *heapDefList;\n> > +\n> > +\t\t\t\toptionsDefListValdateNamespaces(defList, \nnamespaces);\n> > +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t defList, NULL);\n> > +\t\t\t\tnewOptions = \ntransformOptions(get_heap_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t\t\n\t isnull ? (Datum) 0 : datum,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t heapDefList, parse_mode);\n> > +\t\t\t}\n> > \n> > \t\t\tbreak;\n> > \n> > +\n> > \n> > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > -\t\t\t(void) partitioned_table_reloptions(newOptions, \ntrue);\n> > -\t\t\tbreak;\n> > +\t\t\t{\n> > +\t\t\t\tchar\t *namespaces[] = \nHEAP_RELOPT_NAMESPACES;\n> > +\t\t\t\tList\t *heapDefList;\n> > +\n> > +\t\t\t\toptionsDefListValdateNamespaces(defList, \nnamespaces);\n> > +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t defList, NULL);\n> > +\t\t\t\tnewOptions = \ntransformOptions(get_partitioned_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t\t\n\t isnull ? 
(Datum) 0 : datum,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t heapDefList, parse_mode);\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > \n> > \t\tcase RELKIND_VIEW:\n> > -\t\t\t(void) view_reloptions(newOptions, true);\n> > -\t\t\tbreak;\n> > +\t\t\t{\n> > +\n> > +\t\t\t\tnewOptions = transformOptions(\n> > +\t\t\t\t\t\t\t\t\t \nget_view_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t \ndatum, defList, parse_mode);\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > \n> > \t\tcase RELKIND_INDEX:\n> > \n> > \t\tcase RELKIND_PARTITIONED_INDEX:\n> > -\t\t\t(void) index_reloptions(rel->rd_indam->amoptions, \nnewOptions, true);\n> > +\t\t\tif (! rel->rd_indam->amreloptspecset)\n> > +\t\t\t{\n> > +\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t\n(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > +\t\t\t\t\t\t errmsg(\"index %s does not \nsupport options\",\n> > +\t\t\t\t\t\t\t\t\nRelationGetRelationName(rel))));\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > +\t\t\tif (operation == AT_ResetRelOptions)\n> > +\t\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > +\t\t\tnewOptions = transformOptions(\n> > +\t\t\t\t\t\t\t\t\trel-\n>rd_indam->amreloptspecset(),\n> > +\t\t\t\t\t\t\t\t\t\t\n\tisnull ? 
(Datum) 0 : datum,\n> > +\t\t\t\t\t\t\t\t\t\t\n\tdefList, parse_mode);\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tdefault:\n> > \t\t\tereport(ERROR,\n> > \n> > @@ -13769,7 +13833,7 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > AlterTableType operation,> \n> > \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> > \t{\n> > \t\n> > \t\tQuery\t *view_query = get_view_query(rel);\n> > \n> > -\t\tList\t *view_options = \nuntransformRelOptions(newOptions);\n> > +\t\tList\t *view_options = \noptionsTextArrayToDefList(newOptions);\n> > \n> > \t\tListCell *cell;\n> > \t\tbool\t\tcheck_option = false;\n> > \n> > @@ -13853,11 +13917,15 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > AlterTableType operation,> \n> > \t\t\t\t\t\t\t\t\t\n&isnull);\n> > \t\t\n> > \t\t}\n> > \n> > -\t\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> > -\t\t\t\t\t\t\t\t\t\t\n defList, \"toast\", validnsps, false,\n> > -\t\t\t\t\t\t\t\t\t\t\n operation == AT_ResetRelOptions);\n> > +\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > +\t\tif (operation == AT_ResetRelOptions)\n> > +\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > +\n> > +\t\ttoastDefList = optionsDefListFilterNamespaces(defList, \n\"toast\");\n> > \n> > -\t\t(void) heap_reloptions(RELKIND_TOASTVALUE, newOptions, \ntrue);\n> > +\t\tnewOptions = transformOptions(get_toast_relopt_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t \nisnull ? 
(Datum) 0 : datum,\n> > +\t\t\t\t\t\t\t\t\t \ntoastDefList, parse_mode);\n> > \n> > \t\tmemset(repl_val, 0, sizeof(repl_val));\n> > \t\tmemset(repl_null, false, sizeof(repl_null));\n> > \n> > diff --git a/src/backend/commands/tablespace.c\n> > b/src/backend/commands/tablespace.c index 4b96eec..912699b 100644\n> > --- a/src/backend/commands/tablespace.c\n> > +++ b/src/backend/commands/tablespace.c\n> > @@ -345,10 +345,9 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)\n> > \n> > \tnulls[Anum_pg_tablespace_spcacl - 1] = true;\n> > \t\n> > \t/* Generate new proposed spcoptions (text array) */\n> > \n> > -\tnewOptions = transformRelOptions((Datum) 0,\n> > -\t\t\t\t\t\t\t\t\t \nstmt->options,\n> > -\t\t\t\t\t\t\t\t\t \nNULL, NULL, false, false);\n> > -\t(void) tablespace_reloptions(newOptions, true);\n> > +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t(Datum) 0, stmt->options, 0);\n> > +\n> > \n> > \tif (newOptions != (Datum) 0)\n> > \t\n> > \t\tvalues[Anum_pg_tablespace_spcoptions - 1] = newOptions;\n> > \t\n> > \telse\n> > \n> > @@ -1053,10 +1052,11 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt\n> > *stmt)> \n> > \t/* Generate new proposed spcoptions (text array) */\n> > \tdatum = heap_getattr(tup, Anum_pg_tablespace_spcoptions,\n> > \t\n> > \t\t\t\t\t\t RelationGetDescr(rel), \n&isnull);\n> > \n> > -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> > -\t\t\t\t\t\t\t\t\t \nstmt->options, NULL, NULL, false,\n> > -\t\t\t\t\t\t\t\t\t \nstmt->isReset);\n> > -\t(void) tablespace_reloptions(newOptions, true);\n> > +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> > +\t\t\t\t\t\t\t\t isnull ? \n(Datum) 0 : datum,\n> > +\t\t\t\t\t\t\t\t stmt-\n>options,\n> > +\t\t\t\t\t\t\t\t \nOPTIONS_PARSE_MODE_FOR_ALTER |\n> > +\t\t\t\t\t\t (stmt->isReset ? \nOPTIONS_PARSE_MODE_FOR_RESET : 0));\n> > \n> > \t/* Build new tuple. 
*/\n> > \tmemset(repl_null, false, sizeof(repl_null));\n> > \n> > diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c\n> > index 5564dc3..0370be7 100644\n> > --- a/src/backend/foreign/foreign.c\n> > +++ b/src/backend/foreign/foreign.c\n> > @@ -78,7 +78,7 @@ GetForeignDataWrapperExtended(Oid fdwid, bits16 flags)\n> > \n> > \tif (isnull)\n> > \t\n> > \t\tfdw->options = NIL;\n> > \t\n> > \telse\n> > \n> > -\t\tfdw->options = untransformRelOptions(datum);\n> > +\t\tfdw->options = optionsTextArrayToDefList(datum);\n> > \n> > \tReleaseSysCache(tp);\n> > \n> > @@ -165,7 +165,7 @@ GetForeignServerExtended(Oid serverid, bits16 flags)\n> > \n> > \tif (isnull)\n> > \t\n> > \t\tserver->options = NIL;\n> > \t\n> > \telse\n> > \n> > -\t\tserver->options = untransformRelOptions(datum);\n> > +\t\tserver->options = optionsTextArrayToDefList(datum);\n> > \n> > \tReleaseSysCache(tp);\n> > \n> > @@ -233,7 +233,7 @@ GetUserMapping(Oid userid, Oid serverid)\n> > \n> > \tif (isnull)\n> > \t\n> > \t\tum->options = NIL;\n> > \t\n> > \telse\n> > \n> > -\t\tum->options = untransformRelOptions(datum);\n> > +\t\tum->options = optionsTextArrayToDefList(datum);\n> > \n> > \tReleaseSysCache(tp);\n> > \n> > @@ -270,7 +270,7 @@ GetForeignTable(Oid relid)\n> > \n> > \tif (isnull)\n> > \t\n> > \t\tft->options = NIL;\n> > \t\n> > \telse\n> > \n> > -\t\tft->options = untransformRelOptions(datum);\n> > +\t\tft->options = optionsTextArrayToDefList(datum);\n> > \n> > \tReleaseSysCache(tp);\n> > \n> > @@ -303,7 +303,7 @@ GetForeignColumnOptions(Oid relid, AttrNumber attnum)\n> > \n> > \tif (isnull)\n> > \t\n> > \t\toptions = NIL;\n> > \t\n> > \telse\n> > \n> > -\t\toptions = untransformRelOptions(datum);\n> > +\t\toptions = optionsTextArrayToDefList(datum);\n> > \n> > \tReleaseSysCache(tp);\n> > \n> > @@ -572,7 +572,7 @@ pg_options_to_table(PG_FUNCTION_ARGS)\n> > \n> > \tDatum\t\tarray = PG_GETARG_DATUM(0);\n> > \t\n> > \tdeflist_to_tuplestore((ReturnSetInfo *) fcinfo->resultinfo,\n> 
> \n> > -\t\t\t\t\t\t \nuntransformRelOptions(array));\n> > +\t\t\t\t\t\t \noptionsTextArrayToDefList(array));\n> > \n> > \treturn (Datum) 0;\n> > \n> > }\n> > \n> > @@ -643,7 +643,7 @@ is_conninfo_option(const char *option, Oid context)\n> > \n> > Datum\n> > postgresql_fdw_validator(PG_FUNCTION_ARGS)\n> > {\n> > \n> > -\tList\t *options_list = \nuntransformRelOptions(PG_GETARG_DATUM(0));\n> > +\tList\t *options_list = \noptionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > \n> > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > \t\n> > \tListCell *cell;\n> > \n> > diff --git a/src/backend/parser/parse_utilcmd.c\n> > b/src/backend/parser/parse_utilcmd.c index 313d7b6..1fe41b4 100644\n> > --- a/src/backend/parser/parse_utilcmd.c\n> > +++ b/src/backend/parser/parse_utilcmd.c\n> > @@ -1757,7 +1757,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation\n> > source_idx,> \n> > \t\t/* Add the operator class name, if non-default */\n> > \t\tiparam->opclass = get_opclass(indclass->values[keyno], \nkeycoltype);\n> > \t\tiparam->opclassopts =\n> > \n> > -\t\t\tuntransformRelOptions(get_attoptions(source_relid, \nkeyno + 1));\n> > +\t\t\t\noptionsTextArrayToDefList(get_attoptions(source_relid, keyno + 1));\n> > \n> > \t\tiparam->ordering = SORTBY_DEFAULT;\n> > \t\tiparam->nulls_ordering = SORTBY_NULLS_DEFAULT;\n> > \n> > @@ -1821,7 +1821,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation\n> > source_idx,> \n> > \tdatum = SysCacheGetAttr(RELOID, ht_idxrel,\n> > \t\n> > \t\t\t\t\t\t\t\nAnum_pg_class_reloptions, &isnull);\n> > \t\n> > \tif (!isnull)\n> > \n> > -\t\tindex->options = untransformRelOptions(datum);\n> > +\t\tindex->options = optionsTextArrayToDefList(datum);\n> > \n> > \t/* If it's a partial index, decompile and append the predicate */\n> > \tdatum = SysCacheGetAttr(INDEXRELID, ht_idx,\n> > \n> > diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n> > index bf085aa..d12ab1a 100644\n> > --- a/src/backend/tcop/utility.c\n> > +++ 
b/src/backend/tcop/utility.c\n> > @@ -1155,6 +1155,7 @@ ProcessUtilitySlow(ParseState *pstate,\n> > \n> > \t\t\t\t\t\t\tCreateStmt *cstmt = \n(CreateStmt *) stmt;\n> > \t\t\t\t\t\t\tDatum\t\t\ntoast_options;\n> > \t\t\t\t\t\t\tstatic char \n*validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > \n> > +\t\t\t\t\t\t\tList\t \n*toastDefList;\n> > \n> > \t\t\t\t\t\t\t/* Remember \ntransformed RangeVar for LIKE */\n> > \t\t\t\t\t\t\ttable_rv = cstmt-\n>relation;\n> > \n> > @@ -1178,15 +1179,17 @@ ProcessUtilitySlow(ParseState *pstate,\n> > \n> > \t\t\t\t\t\t\t * parse and \nvalidate reloptions for the toast\n> > \t\t\t\t\t\t\t * table\n> > \t\t\t\t\t\t\t */\n> > \n> > -\t\t\t\t\t\t\ttoast_options = \ntransformRelOptions((Datum) 0,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\tcstmt->options,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\"toast\",\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\tvalidnsps,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\ttrue,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\tfalse);\n> > -\t\t\t\t\t\t\t(void) \nheap_reloptions(RELKIND_TOASTVALUE,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t toast_options,\n> > -\t\t\t\t\t\t\t\t\t\t\n\t\t true);\n> > +\n> > +\t\t\t\t\t\t\t\noptionsDefListValdateNamespaces(\n> > +\t\t\t\t\t\t\t\t\t\t\n\t ((CreateStmt *) stmt)->options,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\tvalidnsps);\n> > +\n> > +\t\t\t\t\t\t\ttoastDefList = \noptionsDefListFilterNamespaces(\n> > +\t\t\t\t\t\t\t\t\t\n((CreateStmt *) stmt)->options, \"toast\");\n> > +\n> > +\t\t\t\t\t\t\ttoast_options = \ntransformOptions(\n> > +\t\t\t\t\t\t\t\t\t \nget_toast_relopt_spec_set(), (Datum) 0,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t toastDefList, 0);\n> > \n> > \t\t\t\t\t\t\t\nNewRelationCreateToastTable(address.objectId,\n> > \t\t\t\t\t\t\t\n> > \t\t\t\t\t\t\t\t\t\t\n\t\t\t\ttoast_options);\n> > \n> > @@ -1295,9 +1298,12 @@ ProcessUtilitySlow(ParseState *pstate,\n> > \n> > \t\t\t\t\t * lock on (for example) a relation \non which we have no\n> > \t\t\t\t\t * permissions.\n> > \t\t\t\t\t */\n> > \n> > 
-\t\t\t\t\tlockmode = \nAlterTableGetLockLevel(atstmt->cmds);\n> > -\t\t\t\t\trelid = \nAlterTableLookupRelation(atstmt, lockmode);\n> > -\n> > +\t\t\t\t\trelid = \nAlterTableLookupRelation(atstmt, NoLock); // FIXME!\n> > +\t\t\t\t\tif (OidIsValid(relid))\n> > +\t\t\t\t\t{\n> > +\t\t\t\t\t\tlockmode = \nAlterTableGetLockLevel(relid, atstmt->cmds);\n> > +\t\t\t\t\t\trelid = \nAlterTableLookupRelation(atstmt, lockmode);\n> > +\t\t\t\t\t}\n> > \n> > \t\t\t\t\tif (OidIsValid(relid))\n> > \t\t\t\t\t{\n> > \t\t\t\t\t\n> > \t\t\t\t\t\tAlterTableUtilityContext \natcontext;\n> > \n> > diff --git a/src/backend/utils/cache/attoptcache.c\n> > b/src/backend/utils/cache/attoptcache.c index 72d89cb..f651129 100644\n> > --- a/src/backend/utils/cache/attoptcache.c\n> > +++ b/src/backend/utils/cache/attoptcache.c\n> > @@ -16,6 +16,7 @@\n> > \n> > */\n> > \n> > #include \"postgres.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > #include \"access/reloptions.h\"\n> > #include \"utils/attoptcache.h\"\n> > #include \"utils/catcache.h\"\n> > \n> > @@ -148,7 +149,8 @@ get_attribute_options(Oid attrelid, int attnum)\n> > \n> > \t\t\t\topts = NULL;\n> > \t\t\t\n> > \t\t\telse\n> > \t\t\t{\n> > \n> > -\t\t\t\tbytea\t *bytea_opts = \nattribute_reloptions(datum, false);\n> > +\t\t\t\tbytea *bytea_opts = \noptionsTextArrayToBytea(\n> > +\t\t\t\t\t\t\t\t\t\nget_attribute_options_spec_set(), datum, 0);\n> > \n> > \t\t\t\topts = \nMemoryContextAlloc(CacheMemoryContext,\n> > \t\t\t\t\n> > \t\t\t\t\t\t\t\t\t\t\n VARSIZE(bytea_opts));\n> > \n> > diff --git a/src/backend/utils/cache/relcache.c\n> > b/src/backend/utils/cache/relcache.c index 13d9994..f22c2d9 100644\n> > --- a/src/backend/utils/cache/relcache.c\n> > +++ b/src/backend/utils/cache/relcache.c\n> > @@ -441,7 +441,7 @@ static void\n> > \n> > RelationParseRelOptions(Relation relation, HeapTuple tuple)\n> > {\n> > \n> > \tbytea\t *options;\n> > \n> > -\tamoptions_function amoptsfn;\n> > +\tamreloptspecset_function amoptspecsetfn;\n> > 
\n> > \trelation->rd_options = NULL;\n> > \n> > @@ -456,11 +456,11 @@ RelationParseRelOptions(Relation relation, HeapTuple\n> > tuple)> \n> > \t\tcase RELKIND_VIEW:\n> > \t\tcase RELKIND_MATVIEW:\n> > \n> > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > -\t\t\tamoptsfn = NULL;\n> > +\t\t\tamoptspecsetfn = NULL;\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tcase RELKIND_INDEX:\n> > \n> > \t\tcase RELKIND_PARTITIONED_INDEX:\n> > -\t\t\tamoptsfn = relation->rd_indam->amoptions;\n> > +\t\t\tamoptspecsetfn = relation->rd_indam->amreloptspecset;\n> > \n> > \t\t\tbreak;\n> > \t\t\n> > \t\tdefault:\n> > \t\t\treturn;\n> > \n> > @@ -471,7 +471,7 @@ RelationParseRelOptions(Relation relation, HeapTuple\n> > tuple)> \n> > \t * we might not have any other for pg_class yet (consider executing \nthis\n> > \t * code for pg_class itself)\n> > \t */\n> > \n> > -\toptions = extractRelOptions(tuple, GetPgClassDescriptor(), \namoptsfn);\n> > +\toptions = extractRelOptions(tuple, GetPgClassDescriptor(),\n> > amoptspecsetfn);> \n> > \t/*\n> > \t\n> > \t * Copy parsed data into CacheMemoryContext. 
To guard against the\n> > \n> > diff --git a/src/backend/utils/cache/spccache.c\n> > b/src/backend/utils/cache/spccache.c index 5870f43..87f2fa5 100644\n> > --- a/src/backend/utils/cache/spccache.c\n> > +++ b/src/backend/utils/cache/spccache.c\n> > @@ -148,7 +148,8 @@ get_tablespace(Oid spcid)\n> > \n> > \t\t\topts = NULL;\n> > \t\t\n> > \t\telse\n> > \t\t{\n> > \n> > -\t\t\tbytea\t *bytea_opts = \ntablespace_reloptions(datum, false);\n> > +\t\t\tbytea *bytea_opts = optionsTextArrayToBytea(\n> > +\t\t\t\t\t\t\t\t\nget_tablespace_options_spec_set(), datum, 0);\n> > \n> > \t\t\topts = MemoryContextAlloc(CacheMemoryContext, \nVARSIZE(bytea_opts));\n> > \t\t\tmemcpy(opts, bytea_opts, VARSIZE(bytea_opts));\n> > \n> > diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h\n> > index d357ebb..b8fb6b9 100644\n> > --- a/src/include/access/amapi.h\n> > +++ b/src/include/access/amapi.h\n> > @@ -136,10 +136,6 @@ typedef void (*amcostestimate_function) (struct\n> > PlannerInfo *root,> \n> > \t\t\t\t\t\t\t\t\t\t\n double *indexCorrelation,\n> > \t\t\t\t\t\t\t\t\t\t\n double *indexPages);\n> > \n> > -/* parse index reloptions */\n> > -typedef bytea *(*amoptions_function) (Datum reloptions,\n> > -\t\t\t\t\t\t\t\t\t \nbool validate);\n> > -\n> > \n> > /* report AM, index, or index column property */\n> > typedef bool (*amproperty_function) (Oid index_oid, int attno,\n> > \n> > \t\t\t\t\t\t\t\t\t \nIndexAMProperty prop, const char *propname,\n> > \n> > @@ -186,6 +182,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc\n> > scan);> \n> > /* restore marked scan position */\n> > typedef void (*amrestrpos_function) (IndexScanDesc scan);\n> > \n> > +/* get catalog of reloptions definitions */\n> > +typedef void *(*amreloptspecset_function) ();\n> > +\n> > \n> > /*\n> > \n> > * Callback function signatures - for parallel index scans.\n> > */\n> > \n> > @@ -263,7 +262,6 @@ typedef struct IndexAmRoutine\n> > \n> > \tamvacuumcleanup_function amvacuumcleanup;\n> > 
\tamcanreturn_function amcanreturn;\t/* can be NULL */\n> > \tamcostestimate_function amcostestimate;\n> > \n> > -\tamoptions_function amoptions;\n> > \n> > \tamproperty_function amproperty; /* can be NULL */\n> > \tambuildphasename_function ambuildphasename; /* can be NULL */\n> > \tamvalidate_function amvalidate;\n> > \n> > @@ -275,6 +273,7 @@ typedef struct IndexAmRoutine\n> > \n> > \tamendscan_function amendscan;\n> > \tammarkpos_function ammarkpos;\t/* can be NULL */\n> > \tamrestrpos_function amrestrpos; /* can be NULL */\n> > \n> > +\tamreloptspecset_function amreloptspecset; /* can be NULL */\n> > \n> > \t/* interface functions to support parallel index scans */\n> > \tamestimateparallelscan_function amestimateparallelscan; /* can be \nNULL\n> > \t*/\n> > \n> > diff --git a/src/include/access/brin.h b/src/include/access/brin.h\n> > index 4e2be13..25b3456 100644\n> > --- a/src/include/access/brin.h\n> > +++ b/src/include/access/brin.h\n> > @@ -36,6 +36,8 @@ typedef struct BrinStatsData\n> > \n> > #define BRIN_DEFAULT_PAGES_PER_RANGE\t128\n> > \n> > +#define BRIN_MIN_PAGES_PER_RANGE\t\t1\n> > +#define BRIN_MAX_PAGES_PER_RANGE\t\t131072\n> > \n> > #define BrinGetPagesPerRange(relation) \\\n> > \n> > \t(AssertMacro(relation->rd_rel->relkind == RELKIND_INDEX && \\\n> > \t\n> > \t\t\t\t relation->rd_rel->relam == BRIN_AM_OID), \\\n> > \n> > diff --git a/src/include/access/brin_internal.h\n> > b/src/include/access/brin_internal.h index 79440eb..a798a96 100644\n> > --- a/src/include/access/brin_internal.h\n> > +++ b/src/include/access/brin_internal.h\n> > @@ -14,6 +14,7 @@\n> > \n> > #include \"access/amapi.h\"\n> > #include \"storage/bufpage.h\"\n> > #include \"utils/typcache.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > /*\n> > \n> > @@ -108,6 +109,7 @@ extern IndexBulkDeleteResult\n> > *brinbulkdelete(IndexVacuumInfo *info,> \n> > extern IndexBulkDeleteResult *brinvacuumcleanup(IndexVacuumInfo *info,\n> > \n> > 
\t\t\t\t\t\t\t\t\t\t\n\t\tIndexBulkDeleteResult *stats);\n> > \n> > extern bytea *brinoptions(Datum reloptions, bool validate);\n> > \n> > +extern void * bringetreloptspecset (void);\n> > \n> > /* brin_validate.c */\n> > extern bool brinvalidate(Oid opclassoid);\n> > \n> > diff --git a/src/include/access/gin_private.h\n> > b/src/include/access/gin_private.h index 670a40b..2b7c25c 100644\n> > --- a/src/include/access/gin_private.h\n> > +++ b/src/include/access/gin_private.h\n> > @@ -108,6 +108,7 @@ extern Datum *ginExtractEntries(GinState *ginstate,\n> > OffsetNumber attnum,> \n> > extern OffsetNumber gintuple_get_attrnum(GinState *ginstate, IndexTuple\n> > tuple); extern Datum gintuple_get_key(GinState *ginstate, IndexTuple\n> > tuple,> \n> > \t\t\t\t\t\t\t GinNullCategory \n*category);\n> > \n> > +extern void *gingetreloptspecset(void);\n> > \n> > /* gininsert.c */\n> > extern IndexBuildResult *ginbuild(Relation heap, Relation index,\n> > \n> > diff --git a/src/include/access/gist_private.h\n> > b/src/include/access/gist_private.h index 553d364..015b75a 100644\n> > --- a/src/include/access/gist_private.h\n> > +++ b/src/include/access/gist_private.h\n> > @@ -22,6 +22,7 @@\n> > \n> > #include \"storage/buffile.h\"\n> > #include \"utils/hsearch.h\"\n> > #include \"access/genam.h\"\n> > \n> > +#include \"access/reloptions.h\" //FIXME! 
should be replaced with options.h\n> > finally> \n> > /*\n> > \n> > * Maximum number of \"halves\" a page can be split into in one operation.\n> > \n> > @@ -388,6 +389,7 @@ typedef enum GistOptBufferingMode\n> > \n> > \tGIST_OPTION_BUFFERING_OFF\n> > \n> > } GistOptBufferingMode;\n> > \n> > +\n> > \n> > /*\n> > \n> > * Storage type for GiST's reloptions\n> > */\n> > \n> > @@ -478,7 +480,7 @@ extern void gistadjustmembers(Oid opfamilyoid,\n> > \n> > #define GIST_MIN_FILLFACTOR\t\t\t10\n> > #define GIST_DEFAULT_FILLFACTOR\t\t90\n> > \n> > -extern bytea *gistoptions(Datum reloptions, bool validate);\n> > +extern void *gistgetreloptspecset(void);\n> > \n> > extern bool gistproperty(Oid index_oid, int attno,\n> > \n> > \t\t\t\t\t\t IndexAMProperty prop, const \nchar *propname,\n> > \t\t\t\t\t\t bool *res, bool *isnull);\n> > \n> > diff --git a/src/include/access/hash.h b/src/include/access/hash.h\n> > index 1cce865..91922ef 100644\n> > --- a/src/include/access/hash.h\n> > +++ b/src/include/access/hash.h\n> > @@ -378,7 +378,6 @@ extern IndexBulkDeleteResult\n> > *hashbulkdelete(IndexVacuumInfo *info,> \n> > \t\t\t\t\t\t\t\t\t\t\n\t void *callback_state);\n> > \n> > extern IndexBulkDeleteResult *hashvacuumcleanup(IndexVacuumInfo *info,\n> > \n> > \t\t\t\t\t\t\t\t\t\t\n\t\tIndexBulkDeleteResult *stats);\n> > \n> > -extern bytea *hashoptions(Datum reloptions, bool validate);\n> > \n> > extern bool hashvalidate(Oid opclassoid);\n> > extern void hashadjustmembers(Oid opfamilyoid,\n> > \n> > \t\t\t\t\t\t\t Oid opclassoid,\n> > \n> > @@ -470,6 +469,7 @@ extern BlockNumber\n> > _hash_get_newblock_from_oldbucket(Relation rel, Bucket old_bu> \n> > extern Bucket _hash_get_newbucket_from_oldbucket(Relation rel, Bucket\n> > old_bucket,> \n> > \t\t\t\t\t\t\t\t\t\t\n\t\t uint32 lowmask, uint32 maxbucket);\n> > \n> > extern void _hash_kill_items(IndexScanDesc scan);\n> > \n> > +extern void *hashgetreloptspecset(void);\n> > \n> > /* hash.c */\n> > extern void hashbucketcleanup(Relation 
rel, Bucket cur_bucket,\n> > \n> > diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h\n> > index 30a216e..1fcb5f5 100644\n> > --- a/src/include/access/nbtree.h\n> > +++ b/src/include/access/nbtree.h\n> > @@ -1252,7 +1252,7 @@ extern void _bt_end_vacuum(Relation rel);\n> > \n> > extern void _bt_end_vacuum_callback(int code, Datum arg);\n> > extern Size BTreeShmemSize(void);\n> > extern void BTreeShmemInit(void);\n> > \n> > -extern bytea *btoptions(Datum reloptions, bool validate);\n> > +extern void * btgetreloptspecset (void);\n> > \n> > extern bool btproperty(Oid index_oid, int attno,\n> > \n> > \t\t\t\t\t IndexAMProperty prop, const char \n*propname,\n> > \t\t\t\t\t bool *res, bool *isnull);\n> > \n> > diff --git a/src/include/access/options.h b/src/include/access/options.h\n> > new file mode 100644\n> > index 0000000..34e2917\n> > --- /dev/null\n> > +++ b/src/include/access/options.h\n> > @@ -0,0 +1,245 @@\n> > +/*-----------------------------------------------------------------------\n> > -- + *\n> > + * options.h\n> > + *\t Core support for relation and tablespace options\n> > (pg_class.reloptions\n> > + *\t and pg_tablespace.spcoptions)\n> > + *\n> > + * Note: the functions dealing with text-array options values declare\n> > + * them as Datum, not ArrayType *, to avoid needing to include array.h\n> > + * into a lot of low-level code.\n> > + *\n> > + *\n> > + * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group\n> > + * Portions Copyright (c) 1994, Regents of the University of California\n> > + *\n> > + * src/include/access/options.h\n> > + *\n> > +\n> > *------------------------------------------------------------------------\n> > - + */\n> > +#ifndef OPTIONS_H\n> > +#define OPTIONS_H\n> > +\n> > +#include \"storage/lock.h\"\n> > +#include \"nodes/pg_list.h\"\n> > +\n> > +\n> > +/* supported option types */\n> > +typedef enum option_type\n> > +{\n> > +\tOPTION_TYPE_BOOL,\n> > +\tOPTION_TYPE_INT,\n> > 
+\tOPTION_TYPE_REAL,\n> > +\tOPTION_TYPE_ENUM,\n> > +\tOPTION_TYPE_STRING\n> > +}\toption_type;\n> > +\n> > +\n> > +typedef enum option_value_status\n> > +{\n> > +\tOPTION_VALUE_STATUS_EMPTY,\t/* Option was just initialized */\n> > +\tOPTION_VALUE_STATUS_RAW,\t/* Option just came from syntax analyzer in\n> > +\t\t\t\t\t\t\t\t * has name, \nand raw (unparsed) value */\n> > +\tOPTION_VALUE_STATUS_PARSED, /* Option was parsed and has link to \ncatalog\n> > +\t\t\t\t\t\t\t\t * entry and \nproper value */\n> > +\tOPTION_VALUE_STATUS_FOR_RESET\t\t/* This option came from \nALTER xxx\n> > +\t\t\t\t\t\t\t\t\t\t\n * RESET */\n> > +}\toption_value_status;\n> > +\n> > +/* flags for reloptinon definition */\n> > +typedef enum option_spec_flags\n> > +{\n> > +\tOPTION_DEFINITION_FLAG_FORBID_ALTER = (1 << 0),\t\t/* \nAltering this option\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * is forbidden */\n> > +\tOPTION_DEFINITION_FLAG_IGNORE = (1 << 1),\t/* Skip this option while\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t * parsing. 
Used for WITH OIDS\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t * special case */\n> > +\tOPTION_DEFINITION_FLAG_REJECT = (1 << 2)\t/* Option will be \nrejected\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t * when comes from syntax\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t * analyzer, but still have\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t * default value and offset */\n> > +} option_spec_flags;\n> > +\n> > +/* flags that tells reloption parser how to parse*/\n> > +typedef enum options_parse_mode\n> > +{\n> > +\tOPTIONS_PARSE_MODE_VALIDATE = (1 << 0),\n> > +\tOPTIONS_PARSE_MODE_FOR_ALTER = (1 << 1),\n> > +\tOPTIONS_PARSE_MODE_FOR_RESET = (1 << 2)\n> > +} options_parse_mode;\n> > +\n> > +\n> > +\n> > +/*\n> > + * opt_enum_elt_def -- One member of the array of acceptable values\n> > + * of an enum reloption.\n> > + */\n> > +typedef struct opt_enum_elt_def\n> > +{\n> > +\tconst char *string_val;\n> > +\tint\t\t\tsymbol_val;\n> > +} opt_enum_elt_def;\n> > +\n> > +\n> > +/* generic structure to store Option Spec information */\n> > +typedef struct option_spec_basic\n> > +{\n> > +\tconst char *name;\t\t\t/* must be first (used as list \ntermination\n> > +\t\t\t\t\t\t\t\t * marker) */\n> > +\tconst char *desc;\n> > +\tLOCKMODE\tlockmode;\n> > +\toption_spec_flags flags;\n> > +\toption_type type;\n> > +\tint\t\t\tstruct_offset;\t/* offset of the value in \nBytea representation */\n> > +}\toption_spec_basic;\n> > +\n> > +\n> > +/* reloptions records for specific variable types */\n> > +typedef struct option_spec_bool\n> > +{\n> > +\toption_spec_basic base;\n> > +\tbool\t\tdefault_val;\n> > +}\toption_spec_bool;\n> > +\n> > +typedef struct option_spec_int\n> > +{\n> > +\toption_spec_basic base;\n> > +\tint\t\t\tdefault_val;\n> > +\tint\t\t\tmin;\n> > +\tint\t\t\tmax;\n> > +}\toption_spec_int;\n> > +\n> > +typedef struct option_spec_real\n> > +{\n> > +\toption_spec_basic base;\n> > +\tdouble\t\tdefault_val;\n> > +\tdouble\t\tmin;\n> > +\tdouble\t\tmax;\n> > +}\toption_spec_real;\n> > +\n> > +typedef struct 
option_spec_enum\n> > +{\n> > +\toption_spec_basic base;\n> > +\topt_enum_elt_def *members;/* FIXME rewrite. Null terminated array of\n> > allowed values for +\t\t\t\t\t\t\t\t\n * the option */\n> > +\tint\t\t\tdefault_val;\t/* Number of item of \nallowed_values array */\n> > +\tconst char *detailmsg;\n> > +}\toption_spec_enum;\n> > +\n> > +/* validation routines for strings */\n> > +typedef void (*validate_string_option) (const char *value);\n> > +\n> > +/*\n> > + * When storing sting reloptions, we shoud deal with special case when\n> > + * option value is not set. For fixed length options, we just copy\n> > default\n> > + * option value into the binary structure. For varlen value, there can be\n> > + * \"not set\" special case, with no default value offered.\n> > + * In this case we will set offset value to -1, so code that use\n> > relptions\n> > + * can deal this case. For better readability it was defined as a\n> > constant. + */\n> > +#define OPTION_STRING_VALUE_NOT_SET_OFFSET -1\n> > +\n> > +typedef struct option_spec_string\n> > +{\n> > +\toption_spec_basic base;\n> > +\tvalidate_string_option validate_cb;\n> > +\tchar\t *default_val;\n> > +}\toption_spec_string;\n> > +\n> > +typedef void (*postprocess_bytea_options_function) (void *data, bool\n> > validate); +\n> > +typedef struct options_spec_set\n> > +{\n> > +\toption_spec_basic **definitions;\n> > +\tint\t\t\tnum;\t\t\t/* Number of \nspec_set items in use */\n> > +\tint\t\t\tnum_allocated;\t/* Number of spec_set \nitems allocated */\n> > +\tbool\t\tforbid_realloc; /* If number of items of the \nspec_set were\n> > +\t\t\t\t\t\t\t\t * strictly \nset to certain value do no allow\n> > +\t\t\t\t\t\t\t\t * adding \nmore idems */\n> > +\tSize\t\tstruct_size;\t/* Size of a structure for \noptions in binary\n> > +\t\t\t\t\t\t\t\t * \nrepresentation */\n> > +\tpostprocess_bytea_options_function postprocess_fun; /* This function \nis\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * called after options\n> > 
+\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * were converted in\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * Bytea represenation.\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * Can be used for extra\n> > +\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t * validation and so on */\n> > +\tchar\t *namespace;\t\t/* spec_set is used for options \nfrom this\n> > +\t\t\t\t\t\t\t\t * namespase \n*/\n> > +}\toptions_spec_set;\n> > +\n> > +\n> > +/* holds an option value parsed or unparsed */\n> > +typedef struct option_value\n> > +{\n> > +\toption_spec_basic *gen;\n> > +\tchar\t *namespace;\n> > +\toption_value_status status;\n> > +\tchar\t *raw_value;\t\t/* allocated separately */\n> > +\tchar\t *raw_name;\n> > +\tunion\n> > +\t{\n> > +\t\tbool\t\tbool_val;\n> > +\t\tint\t\t\tint_val;\n> > +\t\tdouble\t\treal_val;\n> > +\t\tint\t\t\tenum_val;\n> > +\t\tchar\t *string_val; /* allocated separately */\n> > +\t}\t\t\tvalues;\n> > +}\toption_value;\n> > +\n> > +\n> > +\n> > +\n> > +/*\n> > + * Options spec_set related functions\n> > + */\n> > +extern options_spec_set *allocateOptionsSpecSet(const char *namespace,\n> > +\t\t\t\t\t\t\t\t int \nsize_of_bytea, int num_items_expected);\n> > +extern void optionsSpecSetAddBool(options_spec_set * spec_set, const char\n> > *name, +\t\t\t\t const char *desc, LOCKMODE \nlockmode, option_spec_flags\n> > flags, +\t\t\t\t\t\t\t\t\t\nint struct_offset, bool default_val);\n> > +extern void optionsSpecSetAddInt(options_spec_set * spec_set, const char\n> > *name, +\t\t\t\t\tconst char *desc, LOCKMODE \nlockmode, option_spec_flags\n> > flags, +\t\t\t\t\tint struct_offset, int \ndefault_val, int min_val, int\n> > max_val); +extern void optionsSpecSetAddReal(options_spec_set * spec_set,\n> > const char *name, +\t\t const char *desc, LOCKMODE lockmode,\n> > option_spec_flags flags, +\t int struct_offset, double default_val,\n> > double min_val, double max_val); +extern void\n> > optionsSpecSetAddEnum(options_spec_set * spec_set,\n> > +\t\t\t\t\t\t const char *name, const \nchar *desc, 
LOCKMODE lockmode,\n> > option_spec_flags flags, +\t\t\tint struct_offset, \nopt_enum_elt_def*\n> > members, int default_val, const char *detailmsg); +extern void\n> > optionsSpecSetAddString(options_spec_set * spec_set, const char *name,\n> > +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags, \n+int\n> > struct_offset, const char *default_val, validate_string_option\n> > validator); +\n> > +\n> > +/*\n> > + * This macro allows to get string option value from bytea\n> > representation.\n> > + * \"optstruct\" - is a structure that is stored in bytea options\n> > representation + * \"member\" - member of this structure that has string\n> > option value + * (actually string values are stored in bytea after the\n> > structure, and + * and \"member\" will contain an offset to this value.\n> > This macro do all + * the math\n> > + */\n> > +#define GET_STRING_OPTION(optstruct, member) \\\n> > +\t((optstruct)->member == OPTION_STRING_VALUE_NOT_SET_OFFSET ? NULL : \\\n> > +\t (char *)(optstruct) + (optstruct)->member)\n> > +\n> > +/*\n> > + * Functions related to option convertation, parsing, manipulation\n> > + * and validation\n> > + */\n> > +extern void optionsDefListValdateNamespaces(List *defList,\n> > +\t\t\t\t\t\t\t\tchar \n**allowed_namespaces);\n> > +extern List *optionsDefListFilterNamespaces(List *defList, const char\n> > *namespace); +extern List *optionsTextArrayToDefList(Datum options);\n> > +extern Datum optionsDefListToTextArray(List *defList);\n> > +/*\n> > + * Meta functions that uses functions above to get options for relations,\n> > + * tablespaces, views and so on\n> > + */\n> > +\n> > +extern bytea *optionsTextArrayToBytea(options_spec_set * spec_set, Datum\n> > data, +\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\tbool validate);\n> > +extern Datum transformOptions(options_spec_set * spec_set, Datum\n> > oldOptions, +\t\t\t\t List *defList, options_parse_mode \nparse_mode);\n> > +\n> > +#endif /* OPTIONS_H */\n> > diff --git 
a/src/include/access/reloptions.h\n> > b/src/include/access/reloptions.h index 7c5fbeb..21b91df 100644\n> > --- a/src/include/access/reloptions.h\n> > +++ b/src/include/access/reloptions.h\n> > @@ -22,6 +22,7 @@\n> > \n> > #include \"access/amapi.h\"\n> > #include \"access/htup.h\"\n> > #include \"access/tupdesc.h\"\n> > \n> > +#include \"access/options.h\"\n> > \n> > #include \"nodes/pg_list.h\"\n> > #include \"storage/lock.h\"\n> > \n> > @@ -110,20 +111,10 @@ typedef struct relopt_real\n> > \n> > \tdouble\t\tmax;\n> > \n> > } relopt_real;\n> > \n> > -/*\n> > - * relopt_enum_elt_def -- One member of the array of acceptable values\n> > - * of an enum reloption.\n> > - */\n> > -typedef struct relopt_enum_elt_def\n> > -{\n> > -\tconst char *string_val;\n> > -\tint\t\t\tsymbol_val;\n> > -} relopt_enum_elt_def;\n> > -\n> > \n> > typedef struct relopt_enum\n> > {\n> > \n> > \trelopt_gen\tgen;\n> > \n> > -\trelopt_enum_elt_def *members;\n> > +\topt_enum_elt_def *members;\n> > \n> > \tint\t\t\tdefault_val;\n> > \tconst char *detailmsg;\n> > \t/* null-terminated array of members */\n> > \n> > @@ -167,6 +158,7 @@ typedef struct local_relopts\n> > \n> > \tList\t *options;\t\t/* list of local_relopt \ndefinitions */\n> > \tList\t *validators;\t\t/* list of relopts_validator \ncallbacks */\n> > \tSize\t\trelopt_struct_size; /* size of parsed bytea \nstructure */\n> > \n> > +\toptions_spec_set * spec_set; /* FIXME */\n> > \n> > } local_relopts;\n> > \n> > /*\n> > \n> > @@ -179,21 +171,6 @@ typedef struct local_relopts\n> > \n> > \t((optstruct)->member == 0 ? 
NULL : \\\n> > \t\n> > \t (char *)(optstruct) + (optstruct)->member)\n> > \n> > -extern relopt_kind add_reloption_kind(void);\n> > -extern void add_bool_reloption(bits32 kinds, const char *name, const char\n> > *desc, -\t\t\t\t\t\t\t bool \ndefault_val, LOCKMODE lockmode);\n> > -extern void add_int_reloption(bits32 kinds, const char *name, const char\n> > *desc, -\t\t\t\t\t\t\t int \ndefault_val, int min_val, int max_val,\n> > -\t\t\t\t\t\t\t LOCKMODE \nlockmode);\n> > -extern void add_real_reloption(bits32 kinds, const char *name, const char\n> > *desc, -\t\t\t\t\t\t\t double \ndefault_val, double min_val, double max_val,\n> > -\t\t\t\t\t\t\t LOCKMODE \nlockmode);\n> > -extern void add_enum_reloption(bits32 kinds, const char *name, const char\n> > *desc, -\t\t\t\t\t\t\t \nrelopt_enum_elt_def *members, int default_val,\n> > -\t\t\t\t\t\t\t const char \n*detailmsg, LOCKMODE lockmode);\n> > -extern void add_string_reloption(bits32 kinds, const char *name, const\n> > char *desc, -\t\t\t\t\t\t\t\t \nconst char *default_val, validate_string_relopt\n> > validator, -\t\t\t\t\t\t\t\t \nLOCKMODE lockmode);\n> > \n> > extern void init_local_reloptions(local_relopts *opts, Size\n> > relopt_struct_size); extern void\n> > register_reloptions_validator(local_relopts *opts,\n> > \n> > @@ -210,7 +187,7 @@ extern void add_local_real_reloption(local_relopts\n> > *opts, const char *name,> \n> > \t\t\t\t\t\t\t\t\t int \noffset);\n> > \n> > extern void add_local_enum_reloption(local_relopts *relopts,\n> > \n> > \t\t\t\t\t\t\t\t\t \nconst char *name, const char *desc,\n> > \n> > -\t\t\t\t\t\t\t\t\t \nrelopt_enum_elt_def *members,\n> > +\t\t\t\t\t\t\t\t\t \nopt_enum_elt_def *members,\n> > \n> > \t\t\t\t\t\t\t\t\t int \ndefault_val, const char *detailmsg,\n> > \t\t\t\t\t\t\t\t\t int \noffset);\n> > \n> > extern void add_local_string_reloption(local_relopts *opts, const char\n> > *name,> \n> > @@ -219,29 +196,17 @@ extern void add_local_string_reloption(local_relopts\n> > *opts, const char 
*name,> \n> > \t\t\t\t\t\t\t\t\t \nvalidate_string_relopt validator,\n> > \t\t\t\t\t\t\t\t\t \nfill_string_relopt filler, int offset);\n> > \n> > -extern Datum transformRelOptions(Datum oldOptions, List *defList,\n> > -\t\t\t\t\t\t\t\t const char \n*namspace, char *validnsps[],\n> > -\t\t\t\t\t\t\t\t bool \nacceptOidsOff, bool isReset);\n> > -extern List *untransformRelOptions(Datum options);\n> > \n> > extern bytea *extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> > \n> > -\t\t\t\t\t\t\t\t\namoptions_function amoptions);\n> > -extern void *build_reloptions(Datum reloptions, bool validate,\n> > -\t\t\t\t\t\t\t relopt_kind kind,\n> > -\t\t\t\t\t\t\t Size \nrelopt_struct_size,\n> > -\t\t\t\t\t\t\t const \nrelopt_parse_elt *relopt_elems,\n> > -\t\t\t\t\t\t\t int \nnum_relopt_elems);\n> > +\t\t\t\t\t\t\t\t\namreloptspecset_function amoptions_def_set);\n> > \n> > extern void *build_local_reloptions(local_relopts *relopts, Datum\n> > options,\n> > \n> > \t\t\t\t\t\t\t\t\tbool \nvalidate);\n> > \n> > -extern bytea *default_reloptions(Datum reloptions, bool validate,\n> > -\t\t\t\t\t\t\t\t relopt_kind \nkind);\n> > -extern bytea *heap_reloptions(char relkind, Datum reloptions, bool\n> > validate); -extern bytea *view_reloptions(Datum reloptions, bool\n> > validate);\n> > -extern bytea *partitioned_table_reloptions(Datum reloptions, bool\n> > validate); -extern bytea *index_reloptions(amoptions_function amoptions,\n> > Datum reloptions, -\t\t\t\t\t\t\t \nbool validate);\n> > -extern bytea *attribute_reloptions(Datum reloptions, bool validate);\n> > -extern bytea *tablespace_reloptions(Datum reloptions, bool validate);\n> > -extern LOCKMODE AlterTableGetRelOptionsLockLevel(List *defList);\n> > +options_spec_set *get_heap_relopt_spec_set(void);\n> > +options_spec_set *get_toast_relopt_spec_set(void);\n> > +options_spec_set *get_partitioned_relopt_spec_set(void);\n> > +options_spec_set *get_view_relopt_spec_set(void);\n> > +options_spec_set 
*get_attribute_options_spec_set(void);\n> > +options_spec_set *get_tablespace_options_spec_set(void);\n> > +extern LOCKMODE AlterTableGetRelOptionsLockLevel(Relation rel, List\n> > *defList);> \n> > #endif\t\t\t\t\t\t\t/* \nRELOPTIONS_H */\n> > \n> > diff --git a/src/include/access/spgist.h b/src/include/access/spgist.h\n> > index 2eb2f42..d9a9b2d 100644\n> > --- a/src/include/access/spgist.h\n> > +++ b/src/include/access/spgist.h\n> > @@ -189,9 +189,6 @@ typedef struct spgLeafConsistentOut\n> > \n> > } spgLeafConsistentOut;\n> > \n> > -/* spgutils.c */\n> > -extern bytea *spgoptions(Datum reloptions, bool validate);\n> > -\n> > \n> > /* spginsert.c */\n> > extern IndexBuildResult *spgbuild(Relation heap, Relation index,\n> > \n> > \t\t\t\t\t\t\t\t struct \nIndexInfo *indexInfo);\n> > \n> > diff --git a/src/include/access/spgist_private.h\n> > b/src/include/access/spgist_private.h index 40d3b71..dd9a05a 100644\n> > --- a/src/include/access/spgist_private.h\n> > +++ b/src/include/access/spgist_private.h\n> > @@ -529,6 +529,7 @@ extern OffsetNumber SpGistPageAddNewItem(SpGistState\n> > *state, Page page,> \n> > extern bool spgproperty(Oid index_oid, int attno,\n> > \n> > \t\t\t\t\t\tIndexAMProperty prop, const \nchar *propname,\n> > \t\t\t\t\t\tbool *res, bool *isnull);\n> > \n> > +extern void *spggetreloptspecset(void);\n> > \n> > /* spgdoinsert.c */\n> > extern void spgUpdateNodeLink(SpGistInnerTuple tup, int nodeN,\n> > \n> > diff --git a/src/include/commands/tablecmds.h\n> > b/src/include/commands/tablecmds.h index 336549c..3f87f98 100644\n> > --- a/src/include/commands/tablecmds.h\n> > +++ b/src/include/commands/tablecmds.h\n> > @@ -34,7 +34,7 @@ extern Oid\tAlterTableLookupRelation(AlterTableStmt\n> > *stmt, LOCKMODE lockmode);> \n> > extern void AlterTable(AlterTableStmt *stmt, LOCKMODE lockmode,\n> > \n> > \t\t\t\t\t struct AlterTableUtilityContext \n*context);\n> > \n> > -extern LOCKMODE AlterTableGetLockLevel(List *cmds);\n> > +extern LOCKMODE 
AlterTableGetLockLevel(Oid relid, List *cmds);\n> > \n> > extern void ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool\n> > recursing, LOCKMODE lockmode);> \n> > diff --git a/src/test/modules/dummy_index_am/dummy_index_am.c\n> > b/src/test/modules/dummy_index_am/dummy_index_am.c index\n> > 5365b063..80b39e8 100644\n> > --- a/src/test/modules/dummy_index_am/dummy_index_am.c\n> > +++ b/src/test/modules/dummy_index_am/dummy_index_am.c\n> > @@ -14,7 +14,7 @@\n> > \n> > #include \"postgres.h\"\n> > \n> > #include \"access/amapi.h\"\n> > \n> > -#include \"access/reloptions.h\"\n> > +#include \"access/options.h\"\n> > \n> > #include \"catalog/index.h\"\n> > #include \"commands/vacuum.h\"\n> > #include \"nodes/pathnodes.h\"\n> > \n> > @@ -25,12 +25,6 @@ PG_MODULE_MAGIC;\n> > \n> > void\t\t_PG_init(void);\n> > \n> > -/* parse table for fillRelOptions */\n> > -relopt_parse_elt di_relopt_tab[6];\n> > -\n> > -/* Kind of relation options for dummy index */\n> > -relopt_kind di_relopt_kind;\n> > -\n> > \n> > typedef enum DummyAmEnum\n> > {\n> > \n> > \tDUMMY_AM_ENUM_ONE,\n> > \n> > @@ -49,7 +43,7 @@ typedef struct DummyIndexOptions\n> > \n> > \tint\t\t\toption_string_null_offset;\n> > \n> > }\t\t\tDummyIndexOptions;\n> > \n> > -relopt_enum_elt_def dummyAmEnumValues[] =\n> > +opt_enum_elt_def dummyAmEnumValues[] =\n> > \n> > {\n> > \n> > \t{\"one\", DUMMY_AM_ENUM_ONE},\n> > \t{\"two\", DUMMY_AM_ENUM_TWO},\n> > \n> > @@ -63,77 +57,85 @@ PG_FUNCTION_INFO_V1(dihandler);\n> > \n> > * Validation function for string relation options.\n> > */\n> > \n> > static void\n> > \n> > -validate_string_option(const char *value)\n> > +divalidate_string_option(const char *value)\n> > \n> > {\n> > \n> > \tereport(NOTICE,\n> > \t\n> > \t\t\t(errmsg(\"new option value for string parameter %s\",\n> > \t\t\t\n> > \t\t\t\t\tvalue ? 
value : \"NULL\")));\n> > \n> > }\n> > \n> > -/*\n> > - * This function creates a full set of relation option types,\n> > - * with various patterns.\n> > - */\n> > -static void\n> > -create_reloptions_table(void)\n> > +static options_spec_set *di_relopt_specset = NULL;\n> > +void * digetreloptspecset(void);\n> > +\n> > +void *\n> > +digetreloptspecset(void)\n> > \n> > {\n> > \n> > -\tdi_relopt_kind = add_reloption_kind();\n> > -\n> > -\tadd_int_reloption(di_relopt_kind, \"option_int\",\n> > -\t\t\t\t\t \"Integer option for \ndummy_index_am\",\n> > -\t\t\t\t\t 10, -10, 100, \nAccessExclusiveLock);\n> > -\tdi_relopt_tab[0].optname = \"option_int\";\n> > -\tdi_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> > -\tdi_relopt_tab[0].offset = offsetof(DummyIndexOptions, option_int);\n> > -\n> > -\tadd_real_reloption(di_relopt_kind, \"option_real\",\n> > -\t\t\t\t\t \"Real option for dummy_index_am\",\n> > -\t\t\t\t\t 3.1415, -10, 100, \nAccessExclusiveLock);\n> > -\tdi_relopt_tab[1].optname = \"option_real\";\n> > -\tdi_relopt_tab[1].opttype = RELOPT_TYPE_REAL;\n> > -\tdi_relopt_tab[1].offset = offsetof(DummyIndexOptions, option_real);\n> > -\n> > -\tadd_bool_reloption(di_relopt_kind, \"option_bool\",\n> > -\t\t\t\t\t \"Boolean option for \ndummy_index_am\",\n> > -\t\t\t\t\t true, AccessExclusiveLock);\n> > -\tdi_relopt_tab[2].optname = \"option_bool\";\n> > -\tdi_relopt_tab[2].opttype = RELOPT_TYPE_BOOL;\n> > -\tdi_relopt_tab[2].offset = offsetof(DummyIndexOptions, option_bool);\n> > -\n> > -\tadd_enum_reloption(di_relopt_kind, \"option_enum\",\n> > -\t\t\t\t\t \"Enum option for dummy_index_am\",\n> > -\t\t\t\t\t dummyAmEnumValues,\n> > -\t\t\t\t\t DUMMY_AM_ENUM_ONE,\n> > -\t\t\t\t\t \"Valid values are \\\"one\\\" and \n\\\"two\\\".\",\n> > -\t\t\t\t\t AccessExclusiveLock);\n> > -\tdi_relopt_tab[3].optname = \"option_enum\";\n> > -\tdi_relopt_tab[3].opttype = RELOPT_TYPE_ENUM;\n> > -\tdi_relopt_tab[3].offset = offsetof(DummyIndexOptions, option_enum);\n> > -\n> > 
-\tadd_string_reloption(di_relopt_kind, \"option_string_val\",\n> > -\t\t\t\t\t\t \"String option for \ndummy_index_am with non-NULL default\",\n> > -\t\t\t\t\t\t \"DefaultValue\", \n&validate_string_option,\n> > -\t\t\t\t\t\t AccessExclusiveLock);\n> > -\tdi_relopt_tab[4].optname = \"option_string_val\";\n> > -\tdi_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n> > -\tdi_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n> > -\t\t\t\t\t\t\t\t\t \noption_string_val_offset);\n> > +\tif (di_relopt_specset)\n> > +\t\treturn di_relopt_specset;\n> > +\n> > +\tdi_relopt_specset = allocateOptionsSpecSet(NULL,\n> > +\t\t\t\t\t\t\t\t\t\t\n\t sizeof(DummyIndexOptions), 6);\n> > +\n> > +\toptionsSpecSetAddInt(\n> > +\t\tdi_relopt_specset, \"option_int\",\n> > +\t\t\"Integer option for dummy_index_am\",\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0, offsetof(DummyIndexOptions, option_int),\n> > +\t\t10, -10, 100\n> > +\t);\n> > +\n> > +\n> > +\toptionsSpecSetAddReal(\n> > +\t\tdi_relopt_specset, \"option_real\",\n> > +\t\t\"Real option for dummy_index_am\",\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0, offsetof(DummyIndexOptions, option_real),\n> > +\t\t3.1415, -10, 100\n> > +\t);\n> > +\n> > +\toptionsSpecSetAddBool(\n> > +\t\tdi_relopt_specset, \"option_bool\",\n> > +\t\t\"Boolean option for dummy_index_am\",\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0, offsetof(DummyIndexOptions, option_bool), true\n> > +\t);\n> > +\n> > +\toptionsSpecSetAddEnum(di_relopt_specset, \"option_enum\",\n> > +\t\t\"Enum option for dummy_index_am\",\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0,\n> > +\t\toffsetof(DummyIndexOptions, option_enum),\n> > +\t\tdummyAmEnumValues,\n> > +\t\tDUMMY_AM_ENUM_ONE,\n> > +\t\t\"Valid values are \\\"one\\\" and \\\"two\\\".\"\n> > +\t);\n> > +\n> > +\toptionsSpecSetAddString(di_relopt_specset, \"option_string_val\",\n> > +\t\t\"String option for dummy_index_am with non-NULL default\",\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0,\n> > +\t\toffsetof(DummyIndexOptions, 
option_string_val_offset),\n> > +\t\t\"DefaultValue\", &divalidate_string_option\n> > +\t);\n> > \n> > \t/*\n> > \t\n> > \t * String option for dummy_index_am with NULL default, and without\n> > \t * description.\n> > \t */\n> > \n> > -\tadd_string_reloption(di_relopt_kind, \"option_string_null\",\n> > -\t\t\t\t\t\t NULL,\t/* description */\n> > -\t\t\t\t\t\t NULL, \n&validate_string_option,\n> > -\t\t\t\t\t\t AccessExclusiveLock);\n> > -\tdi_relopt_tab[5].optname = \"option_string_null\";\n> > -\tdi_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n> > -\tdi_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n> > -\t\t\t\t\t\t\t\t\t \noption_string_null_offset);\n> > +\n> > +\toptionsSpecSetAddString(di_relopt_specset, \"option_string_null\",\n> > +\t\tNULL,\t/* description */\n> > +\t\tAccessExclusiveLock,\n> > +\t\t0,\n> > +\t\toffsetof(DummyIndexOptions, option_string_null_offset),\n> > +\t\tNULL, &divalidate_string_option\n> > +\t);\n> > +\n> > +\treturn di_relopt_specset;\n> > \n> > }\n> > \n> > +\n> > \n> > /*\n> > \n> > * Build a new index.\n> > */\n> > \n> > @@ -219,19 +221,6 @@ dicostestimate(PlannerInfo *root, IndexPath *path,\n> > double loop_count,> \n> > }\n> > \n> > /*\n> > \n> > - * Parse relation options for index AM, returning a DummyIndexOptions\n> > - * structure filled with option values.\n> > - */\n> > -static bytea *\n> > -dioptions(Datum reloptions, bool validate)\n> > -{\n> > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > -\t\t\t\t\t\t\t\t\t \ndi_relopt_kind,\n> > -\t\t\t\t\t\t\t\t\t \nsizeof(DummyIndexOptions),\n> > -\t\t\t\t\t\t\t\t\t \ndi_relopt_tab, lengthof(di_relopt_tab));\n> > -}\n> > -\n> > -/*\n> > \n> > * Validator for index AM.\n> > */\n> > \n> > static bool\n> > \n> > @@ -308,7 +297,6 @@ dihandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amvacuumcleanup = divacuumcleanup;\n> > \tamroutine->amcanreturn = NULL;\n> > \tamroutine->amcostestimate = dicostestimate;\n> > \n> > -\tamroutine->amoptions = dioptions;\n> > \n> 
> \tamroutine->amproperty = NULL;\n> > \tamroutine->ambuildphasename = NULL;\n> > \tamroutine->amvalidate = divalidate;\n> > \n> > @@ -322,12 +310,7 @@ dihandler(PG_FUNCTION_ARGS)\n> > \n> > \tamroutine->amestimateparallelscan = NULL;\n> > \tamroutine->aminitparallelscan = NULL;\n> > \tamroutine->amparallelrescan = NULL;\n> > \n> > +\tamroutine->amreloptspecset = digetreloptspecset;\n> > \n> > \tPG_RETURN_POINTER(amroutine);\n> > \n> > }\n> > \n> > -\n> > -void\n> > -_PG_init(void)\n> > -{\n> > -\tcreate_reloptions_table();\n> > -}\n\n\n\n\n\n\n", "msg_date": "Fri, 26 Nov 2021 11:19:16 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: Suggestion: Unified options API. Need help from core team" }, { "msg_contents": "On Fri, Nov 26, 2021 at 11:19:16AM +0300, Nikolay Shaplov wrote:\n> В письме от вторник, 26 октября 2021 г. 17:25:32 MSK пользователь Bruce \n> Momjian написал:\n> > Uh, the core team does not get involved in development issues, unless\n> > there is a issue that clearly cannot be resolved by discussion on the\n> > hackers list.\n> Then may be I used wrong therm. May be I should say \"experienced postgres \n> developers\". \n\nOK, but \"experienced Postgres developers\" are by definition on the\nhackers email list, not necessarily on the core team. In fact, some\ncore team members are not Postgres backend developers.\n\n---------------------------------------------------------------------------\n\n> > \n> > On Mon, Oct 18, 2021 at 04:24:23PM +0300, Nikolay Shaplov wrote:\n> > > Hi!\n> > > \n> > > I am still hoping to finish my work on reloptions I've started some years\n> > > ago.\n> > > \n> > > I've renewed my patch and I think I need help from core team to finish it.\n> > > \n> > > General idea of the patch: Now we have three ways to define options for\n> > > different objects, with more or less different code used for it. 
It would\n> > > be\n> > > better to have unified context independent API for processing options,\n> > > instead.\n> > > \n> > > Long story short:\n> > > \n> > > There is Option Specification object, that has all information about\n> > > single\n> > > option, how it should be parsed and validated.\n> > > \n> > > There is Option Specification Set object, an array of Option Specs, that\n> > > defines all options available for certain object (am of some index for\n> > > example).\n> > > \n> > > When some object (relation, opclass, etc) wants to have options, it\n> > > creates an Option Spec Set for their options, and uses it for converting\n> > > options between different representations (to get it from SQL, to store it\n> > > in pg_class, to pass it to the core code as bytea etc)\n> > > \n> > > For indexes Option Spec Set is available via Access Method API.\n> > > \n> > > For non-index relations all Option Spec Sets are left in reloption.c file,\n> > > and should be moved to heap AM later. (They are not in AM now so will not\n> > > change it now)\n> > > \n> > > Main problem:\n> > > \n> > > There are LockModes. LockModes for options are also stored in Option Spec\n> > > Set. For indexes Option Spec Set is accessible via AM. So to get LockMode\n> > > for option of an index you need to have access to its relation object\n> > > (so you can call proper AM method to fetch spec set). 
So you need\n> > > \"Relation rel\" in AlterTableGetRelOptionsLockLevel where Lock Level is\n> > > determined (src/ backend/access/common/reloptions.c)\n> > > AlterTableGetRelOptionsLockLevel is called from AlterTableGetLockLevel\n> > > (src/ backend/commands/tablecmds.c) so we need \"Relation rel\" there too.\n> > > AlterTableGetLockLevel is called from AlterTableInternal (/src/backend/\n> > > commands/tablecmds.c) There we have \"Oid relid\" so we can try to open\n> > > relation like this\n> > > \n> > > Relation rel = relation_open(relid, NoLock);\n> > > cmd_lockmode = AlterTableGetRelOptionsLockLevel(rel,\n> > > \n> > > castNode(List,\n> > > cmd->def));\n> > > \n> > > relation_close(rel,NoLock);\n> > > break;\n> > > \n> > > but this will trigger the assertion\n> > > \n> > > Assert(lockmode != NoLock ||\n> > > \n> > > IsBootstrapProcessingMode() ||\n> > > CheckRelationLockedByMe(r, c, true));\n> > > \n> > > in relation_open (b/src/backend/access/common/relation.c)\n> > > \n> > > For now I've commented this assertion out. I've tried to open relation\n> > > with\n> > > AccessShareLock but this caused one test to fail, and I am not sure this\n> > > solution is better.\n> > > \n> > > What I have done here I consider a hack, so I need help from the core team\n> > > here to do it in the right way.\n> > > \n> > > General problems:\n> > > \n> > > I guess I need a coauthor, or supervisor from core team, to finish this\n> > > patch. The amount of code is big, and I guess there are parts that can be\n> > > made more in the Postgres way than I did them. And I would need advice\n> > > there, and I guess it would be better to do it before sending it to\n> > > commitfest.\n> > > \n> > > \n> > > Current patch status:\n> > > \n> > > 1. It is Beta. Some minor issues and FIXMEs are not solved. Some code\n> > > comments need revising, but in general it does what it is intended to do.\n> > > \n> > > 2. 
This patch does not intend to change postgres behavior at all, all\n> > > should work as before, all changes are internal only.\n> > > \n> > > The only exception is error message for nonexistent option name in toast\n> > > namespace\n> > > \n> > > CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option =\n> > > 42);> \n> > > -ERROR: unrecognized parameter \"not_existing_option\"\n> > > +ERROR: unrecognized parameter \"toast.not_existing_option\"\n> > > \n> > > New message is better I guess, though I can change it back if needed.\n> > > \n> > > 3. I am doing my development in this branch\n> > > https://gitlab.com/dhyannataraj/ postgres/-/tree/new_options_take_two I\n> > > am making changes every day, so the latest version will be available there\n> > > \n> > > Would be glad to hear from the core team before I finish with this patch and\n> > > make it ready for commit-fest.\n> > > \n> > > \n> > > \n> > > diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h\n> > > index a22a6df..8f2d5e7 100644\n> > > --- a/contrib/bloom/bloom.h\n> > > +++ b/contrib/bloom/bloom.h\n> > > @@ -17,6 +17,7 @@\n> > > \n> > > #include \"access/generic_xlog.h\"\n> > > #include \"access/itup.h\"\n> > > #include \"access/xlog.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"fmgr.h\"\n> > > #include \"nodes/pathnodes.h\"\n> > > \n> > > @@ -207,7 +208,8 @@ extern IndexBulkDeleteResult\n> > > *blbulkdelete(IndexVacuumInfo *info,> \n> > > \t\t\t\t\t\t\t\t\t\t\n> void *callback_state);\n> > > \n> > > extern IndexBulkDeleteResult *blvacuumcleanup(IndexVacuumInfo *info,\n> > > \n> > > \t\t\t\t\t\t\t\t\t\t\n> \t IndexBulkDeleteResult *stats);\n> > > \n> > > -extern bytea *bloptions(Datum reloptions, bool validate);\n> > > +extern void *blrelopt_specset(void);\n> > > +extern void blReloptionPostprocess(void *, bool validate);\n> > > \n> > > extern void blcostestimate(PlannerInfo *root, IndexPath *path,\n> > > \n> > > \t\t\t\t\t\t double loop_count, Cost \n> 
*indexStartupCost,\n> > > \t\t\t\t\t\t Cost *indexTotalCost, \n> Selectivity *indexSelectivity,\n> > > \n> > > diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\n> > > index 754de00..54dad16 100644\n> > > --- a/contrib/bloom/blutils.c\n> > > +++ b/contrib/bloom/blutils.c\n> > > @@ -15,7 +15,7 @@\n> > > \n> > > #include \"access/amapi.h\"\n> > > #include \"access/generic_xlog.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"bloom.h\"\n> > > #include \"catalog/index.h\"\n> > > #include \"commands/vacuum.h\"\n> > > \n> > > @@ -34,53 +34,13 @@\n> > > \n> > > PG_FUNCTION_INFO_V1(blhandler);\n> > > \n> > > -/* Kind of relation options for bloom index */\n> > > -static relopt_kind bl_relopt_kind;\n> > > -\n> > > -/* parse table for fillRelOptions */\n> > > -static relopt_parse_elt bl_relopt_tab[INDEX_MAX_KEYS + 1];\n> > > +/* Catalog of relation options for bloom index */\n> > > +static options_spec_set *bl_relopt_specset;\n> > > \n> > > static int32 myRand(void);\n> > > static void mySrand(uint32 seed);\n> > > \n> > > /*\n> > > \n> > > - * Module initialize function: initialize info about Bloom relation\n> > > options. - *\n> > > - * Note: keep this in sync with makeDefaultBloomOptions().\n> > > - */\n> > > -void\n> > > -_PG_init(void)\n> > > -{\n> > > -\tint\t\t\ti;\n> > > -\tchar\t\tbuf[16];\n> > > -\n> > > -\tbl_relopt_kind = add_reloption_kind();\n> > > -\n> > > -\t/* Option for length of signature */\n> > > -\tadd_int_reloption(bl_relopt_kind, \"length\",\n> > > -\t\t\t\t\t \"Length of signature in bits\",\n> > > -\t\t\t\t\t DEFAULT_BLOOM_LENGTH, 1, \n> MAX_BLOOM_LENGTH,\n> > > -\t\t\t\t\t AccessExclusiveLock);\n> > > -\tbl_relopt_tab[0].optname = \"length\";\n> > > -\tbl_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> > > -\tbl_relopt_tab[0].offset = offsetof(BloomOptions, bloomLength);\n> > > -\n> > > -\t/* Number of bits for each possible index column: col1, col2, ... 
*/\n> > > -\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > > -\t{\n> > > -\t\tsnprintf(buf, sizeof(buf), \"col%d\", i + 1);\n> > > -\t\tadd_int_reloption(bl_relopt_kind, buf,\n> > > -\t\t\t\t\t\t \"Number of bits generated \n> for each index column\",\n> > > -\t\t\t\t\t\t DEFAULT_BLOOM_BITS, 1, \n> MAX_BLOOM_BITS,\n> > > -\t\t\t\t\t\t AccessExclusiveLock);\n> > > -\t\tbl_relopt_tab[i + 1].optname = \n> MemoryContextStrdup(TopMemoryContext,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t buf);\n> > > -\t\tbl_relopt_tab[i + 1].opttype = RELOPT_TYPE_INT;\n> > > -\t\tbl_relopt_tab[i + 1].offset = offsetof(BloomOptions, \n> bitSize[0]) +\n> > > sizeof(int) * i; -\t}\n> > > -}\n> > > -\n> > > -/*\n> > > \n> > > * Construct a default set of Bloom options.\n> > > */\n> > > \n> > > static BloomOptions *\n> > > \n> > > @@ -135,7 +95,7 @@ blhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = blvacuumcleanup;\n> > > \tamroutine->amcanreturn = NULL;\n> > > \tamroutine->amcostestimate = blcostestimate;\n> > > \n> > > -\tamroutine->amoptions = bloptions;\n> > > +\tamroutine->amreloptspecset = blrelopt_specset;\n> > > \n> > > \tamroutine->amproperty = NULL;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = blvalidate;\n> > > \n> > > @@ -154,6 +114,28 @@ blhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > +void\n> > > +blReloptionPostprocess(void *data, bool validate)\n> > > +{\n> > > +\tBloomOptions *opts = (BloomOptions *) data;\n> > > +\tint\t\t\ti;\n> > > +\n> > > +\tif (validate)\n> > > +\t\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > > +\t\t{\n> > > +\t\t\tif (opts->bitSize[i] >= opts->bloomLength)\n> > > +\t\t\t{\n> > > +\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t errmsg(\"col%i should not be grater \n> than length\", i)));\n> > > +\t\t\t}\n> > > +\t\t}\n> > > +\n> > > +\t/* Convert signature length from # of bits to # to 
words, rounding up \n> */\n> > > +\topts->bloomLength = (opts->bloomLength + SIGNWORDBITS - 1) /\n> > > SIGNWORDBITS; +}\n> > > +\n> > > +\n> > > \n> > > /*\n> > > \n> > > * Fill BloomState structure for particular index.\n> > > */\n> > > \n> > > @@ -474,24 +456,39 @@ BloomInitMetapage(Relation index)\n> > > \n> > > \tUnlockReleaseBuffer(metaBuffer);\n> > > \n> > > }\n> > > \n> > > -/*\n> > > - * Parse reloptions for bloom index, producing a BloomOptions struct.\n> > > - */\n> > > -bytea *\n> > > -bloptions(Datum reloptions, bool validate)\n> > > +void *\n> > > +blrelopt_specset(void)\n> > > \n> > > {\n> > > \n> > > -\tBloomOptions *rdopts;\n> > > +\tint\t\t\ti;\n> > > +\tchar\t\tbuf[16];\n> > > \n> > > -\t/* Parse the user-given reloptions */\n> > > -\trdopts = (BloomOptions *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t bl_relopt_kind,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t sizeof(BloomOptions),\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t bl_relopt_tab,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t lengthof(bl_relopt_tab));\n> > > +\tif (bl_relopt_specset)\n> > > +\t\treturn bl_relopt_specset;\n> > > \n> > > -\t/* Convert signature length from # of bits to # to words, rounding \n> up */\n> > > -\tif (rdopts)\n> > > -\t\trdopts->bloomLength = (rdopts->bloomLength + SIGNWORDBITS - \n> 1) /\n> > > SIGNWORDBITS;\n> > > \n> > > -\treturn (bytea *) rdopts;\n> > > +\tbl_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t \n> sizeof(BloomOptions), INDEX_MAX_KEYS + 1);\n> > > +\tbl_relopt_specset->postprocess_fun = blReloptionPostprocess;\n> > > +\n> > > +\toptionsSpecSetAddInt(bl_relopt_specset, \"length\",\n> > > +\t\t\t\t\t\t\t \"Length of signature \n> in bits\",\n> > > +\t\t\t\t\t\t\t NoLock,\t\t/* \n> No lock as far as ALTER is\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t * forbidden */\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t \n> offsetof(BloomOptions, bloomLength),\n> > > +\t\t\t\t\t\t\t \n> DEFAULT_BLOOM_LENGTH, 1, MAX_BLOOM_LENGTH);\n> > > 
+\n> > > +\t/* Number of bits for each possible index column: col1, col2, ... */\n> > > +\tfor (i = 0; i < INDEX_MAX_KEYS; i++)\n> > > +\t{\n> > > +\t\tsnprintf(buf, 16, \"col%d\", i + 1);\n> > > +\t\toptionsSpecSetAddInt(bl_relopt_specset, buf,\n> > > +\t\t\t\t\t\t\t \"Number of bits \n> for corresponding column\",\n> > > +\t\t\t\t\t\t\t\t NoLock,\t/* \n> No lock as far as ALTER is\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t * forbidden */\n> > > +\t\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t\t \n> offsetof(BloomOptions, bitSize[i]),\n> > > +\t\t\t\t\t\t\t\t \n> DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n> > > +\t}\n> > > +\treturn bl_relopt_specset;\n> > > \n> > > }\n> > > \n> > > diff --git a/contrib/bloom/expected/bloom.out\n> > > b/contrib/bloom/expected/bloom.out index dae12a7..e79456d 100644\n> > > --- a/contrib/bloom/expected/bloom.out\n> > > +++ b/contrib/bloom/expected/bloom.out\n> > > @@ -228,3 +228,6 @@ CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > > (length=0);> \n> > > ERROR: value 0 out of bounds for option \"length\"\n> > > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> > > ERROR: value 0 out of bounds for option \"col1\"\n> > > \n> > > +-- check post_validate for colN<lengh\n> > > +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > > (length=10,col1=11);\n> > > +ERROR: col0 should not be grater than length\n> > > diff --git a/contrib/bloom/sql/bloom.sql b/contrib/bloom/sql/bloom.sql\n> > > index 4733e1e..0bfc767 100644\n> > > --- a/contrib/bloom/sql/bloom.sql\n> > > +++ b/contrib/bloom/sql/bloom.sql\n> > > @@ -93,3 +93,6 @@ SELECT reloptions FROM pg_class WHERE oid =\n> > > 'bloomidx'::regclass;> \n> > > \\set VERBOSITY terse\n> > > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0);\n> > > CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0);\n> > > \n> > > +\n> > > +-- check post_validate for colN<lengh\n> > > +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH\n> > > (length=10,col1=11);\n> > 
> diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c\n> > > index 3a0beaa..a15a10b 100644\n> > > --- a/contrib/dblink/dblink.c\n> > > +++ b/contrib/dblink/dblink.c\n> > > @@ -2005,7 +2005,7 @@ PG_FUNCTION_INFO_V1(dblink_fdw_validator);\n> > > \n> > > Datum\n> > > dblink_fdw_validator(PG_FUNCTION_ARGS)\n> > > {\n> > > \n> > > -\tList\t *options_list = \n> untransformRelOptions(PG_GETARG_DATUM(0));\n> > > +\tList\t *options_list = \n> optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > > \n> > > \tOid\t\t\tcontext = PG_GETARG_OID(1);\n> > > \tListCell *cell;\n> > > \n> > > diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c\n> > > index 2c2f149..1194747 100644\n> > > --- a/contrib/file_fdw/file_fdw.c\n> > > +++ b/contrib/file_fdw/file_fdw.c\n> > > @@ -195,7 +195,7 @@ file_fdw_handler(PG_FUNCTION_ARGS)\n> > > \n> > > Datum\n> > > file_fdw_validator(PG_FUNCTION_ARGS)\n> > > {\n> > > \n> > > -\tList\t *options_list = \n> untransformRelOptions(PG_GETARG_DATUM(0));\n> > > +\tList\t *options_list = \n> optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > > \n> > > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > > \tchar\t *filename = NULL;\n> > > \tDefElem *force_not_null = NULL;\n> > > \n> > > diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c\n> > > index 5bb1af4..bbd4167 100644\n> > > --- a/contrib/postgres_fdw/option.c\n> > > +++ b/contrib/postgres_fdw/option.c\n> > > @@ -72,7 +72,7 @@ PG_FUNCTION_INFO_V1(postgres_fdw_validator);\n> > > \n> > > Datum\n> > > postgres_fdw_validator(PG_FUNCTION_ARGS)\n> > > {\n> > > \n> > > -\tList\t *options_list = \n> untransformRelOptions(PG_GETARG_DATUM(0));\n> > > +\tList\t *options_list = \n> optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > > \n> > > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > > \tListCell *cell;\n> > > \n> > > diff --git a/src/backend/access/brin/brin.c\n> > > b/src/backend/access/brin/brin.c index ccc9fa0..5dd52a4 100644\n> > > --- 
a/src/backend/access/brin/brin.c\n> > > +++ b/src/backend/access/brin/brin.c\n> > > @@ -20,7 +20,6 @@\n> > > \n> > > #include \"access/brin_pageops.h\"\n> > > #include \"access/brin_xlog.h\"\n> > > #include \"access/relation.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > \n> > > #include \"access/relscan.h\"\n> > > #include \"access/table.h\"\n> > > #include \"access/tableam.h\"\n> > > \n> > > @@ -40,7 +39,6 @@\n> > > \n> > > #include \"utils/memutils.h\"\n> > > #include \"utils/rel.h\"\n> > > \n> > > -\n> > > \n> > > /*\n> > > \n> > > * We use a BrinBuildState during initial construction of a BRIN index.\n> > > * The running state is kept in a BrinMemTuple.\n> > > \n> > > @@ -119,7 +117,6 @@ brinhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = brinvacuumcleanup;\n> > > \tamroutine->amcanreturn = NULL;\n> > > \tamroutine->amcostestimate = brincostestimate;\n> > > \n> > > -\tamroutine->amoptions = brinoptions;\n> > > \n> > > \tamroutine->amproperty = NULL;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = brinvalidate;\n> > > \n> > > @@ -134,6 +131,7 @@ brinhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = bringetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > @@ -963,23 +961,6 @@ brinvacuumcleanup(IndexVacuumInfo *info,\n> > > IndexBulkDeleteResult *stats)> \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * reloptions processor for BRIN indexes\n> > > - */\n> > > -bytea *\n> > > -brinoptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"pages_per_range\", RELOPT_TYPE_INT, offsetof(BrinOptions,\n> > > pagesPerRange)}, -\t\t{\"autosummarize\", RELOPT_TYPE_BOOL,\n> > > offsetof(BrinOptions, autosummarize)} -\t};\n> > > -\n> > > 
-\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_BRIN,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(BrinOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -}\n> > > -\n> > > -/*\n> > > \n> > > * SQL-callable function to scan through an index and summarize all\n> > > ranges\n> > > * that are not currently summarized.\n> > > */\n> > > \n> > > @@ -1765,3 +1746,32 @@ check_null_keys(BrinValues *bval, ScanKey\n> > > *nullkeys, int nnullkeys)> \n> > > \treturn true;\n> > > \n> > > }\n> > > \n> > > +\n> > > +static options_spec_set *brin_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +bringetreloptspecset(void)\n> > > +{\n> > > +\tif (brin_relopt_specset)\n> > > +\t\treturn brin_relopt_specset;\n> > > +\tbrin_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t sizeof(BrinOptions), 2);\n> > > +\n> > > +\toptionsSpecSetAddInt(brin_relopt_specset, \"pages_per_range\",\n> > > +\t\t \"Number of pages that each page range covers in a BRIN \n> index\",\n> > > +\t\t\t\t\t\t\t NoLock,\t\t/* \n> since ALTER is not allowed\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t * no lock needed */\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t offsetof(BrinOptions, \n> pagesPerRange),\n> > > +\t\t\t\t\t\t\t \n> BRIN_DEFAULT_PAGES_PER_RANGE,\n> > > +\t\t\t\t\t\t\t \n> BRIN_MIN_PAGES_PER_RANGE,\n> > > +\t\t\t\t\t\t\t \n> BRIN_MAX_PAGES_PER_RANGE);\n> > > +\t\toptionsSpecSetAddBool(brin_relopt_specset, \"autosummarize\",\n> > > +\t\t\t\t\t\"Enables automatic summarization on \n> this BRIN index\",\n> > > +\t\t\t\t\t\t\t \n> AccessExclusiveLock,\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t \n> offsetof(BrinOptions, autosummarize),\n> > > +\t\t\t\t\t\t\t false);\n> > > +\treturn brin_relopt_specset;\n> > > +}\n> > > +\n> > > diff --git a/src/backend/access/brin/brin_pageops.c\n> > > b/src/backend/access/brin/brin_pageops.c index df9ffc2..1940b3d 100644\n> > > --- 
a/src/backend/access/brin/brin_pageops.c\n> > > +++ b/src/backend/access/brin/brin_pageops.c\n> > > @@ -420,6 +420,9 @@ brin_doinsert(Relation idxrel, BlockNumber\n> > > pagesPerRange,> \n> > > \t\tfreespace = br_page_get_freespace(page);\n> > > \t\n> > > \tItemPointerSet(&tid, blk, off);\n> > > \n> > > +\n> > > +//elog(WARNING, \"pages_per_range = %i\", pagesPerRange);\n> > > +\n> > > \n> > > \tbrinSetHeapBlockItemptr(revmapbuf, pagesPerRange, heapBlk, tid);\n> > > \tMarkBufferDirty(revmapbuf);\n> > > \n> > > diff --git a/src/backend/access/common/Makefile\n> > > b/src/backend/access/common/Makefile index b9aff0c..78c9c5a 100644\n> > > --- a/src/backend/access/common/Makefile\n> > > +++ b/src/backend/access/common/Makefile\n> > > @@ -18,6 +18,7 @@ OBJS = \\\n> > > \n> > > \tdetoast.o \\\n> > > \theaptuple.o \\\n> > > \tindextuple.o \\\n> > > \n> > > +\toptions.o \\\n> > > \n> > > \tprintsimple.o \\\n> > > \tprinttup.o \\\n> > > \trelation.o \\\n> > > \n> > > diff --git a/src/backend/access/common/options.c\n> > > b/src/backend/access/common/options.c new file mode 100644\n> > > index 0000000..752cddc\n> > > --- /dev/null\n> > > +++ b/src/backend/access/common/options.c\n> > > @@ -0,0 +1,1468 @@\n> > > +/*-----------------------------------------------------------------------\n> > > -- + *\n> > > + * options.c\n> > > + *\t An unifom, context-free API for processing name=value options. 
\n> Used\n> > > + *\t to process relation options (reloptions), attribute options, \n> opclass\n> > > + *\t options, etc.\n> > > + *\n> > > + * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> > > + * Portions Copyright (c) 1994, Regents of the University of California\n> > > + *\n> > > + *\n> > > + * IDENTIFICATION\n> > > + *\t src/backend/access/common/options.c\n> > > + *\n> > > +\n> > > *------------------------------------------------------------------------\n> > > - + */\n> > > +\n> > > +#include \"postgres.h\"\n> > > +\n> > > +#include \"access/options.h\"\n> > > +#include \"catalog/pg_type.h\"\n> > > +#include \"commands/defrem.h\"\n> > > +#include \"nodes/makefuncs.h\"\n> > > +#include \"utils/builtins.h\"\n> > > +#include \"utils/guc.h\"\n> > > +#include \"utils/memutils.h\"\n> > > +#include \"mb/pg_wchar.h\"\n> > > +\n> > > +\n> > > +/*\n> > > + * OPTIONS SPECIFICATION and OPTION SPECIFICATION SET\n> > > + *\n> > > + * Each option is defined via Option Specification object (Option Spec).\n> > > + * Option Spec should have all information that is needed for processing\n> > > + * (parsing, validating, converting) of a single option. Implemented via\n> > > set of + * option_spec_* structures.\n> > > + *\n> > > + * A set of Option Specs (Options Spec Set) defines all options\n> > > available for + * certain object (certain relation kind for example). It\n> > > is a list of + * Option Specs, plus validation functions that can be\n> > > used to validate whole + * option set, if needed. Implemented via\n> > > options_spec_set structure and set of + * optionsSpecSetAdd* functions\n> > > that are used for adding Option Spec items to + * a Set.\n> > > + *\n> > > + * NOTE: we chose the term \"specification\" instead of \"definition\" because\n> > > the term + * \"definition\" is used for objects that came from the lexer. So to\n> > > avoid confusion + * here we have Option Specifications, and all\n> > > \"definitions\" are from the lexer. 
+ */\n> > > +\n> > > +/*\n> > > + * OPTION VALUES REPRESENTATIONS\n> > > + *\n> > > + * Option values usually come from the lexer in form of a defList object, stored\n> > > in + * pg_catalog as text array, and used when they are stored in memory\n> > > as + * C-structure. These are different option values representations.\n> > > Here goes + * brief description of all representations used in the code.\n> > > + *\n> > > + * Values\n> > > + *\n> > > + * Values are an internal representation that is used while converting\n> > > + * Values between other representations. Value is called \"parsed\",\n> > > + * when Value's value is converted to a proper type and validated, or is\n> > > called + * \"unparsed\", when Value's value is stored as raw string that\n> > > was obtained + * from the source without any checks. In conversion\n> > > function names the first case + * is referred to as Values, the second case is referred\n> > > to as RawValues. Values is + * implemented as List of option_value\n> > > C-structures.\n> > > + *\n> > > + * defList\n> > > + *\n> > > + * Options in form of definition List that comes from lexer. (For\n> > > reloptions it + * is a part of SQL query that goes after WITH, SET or\n> > > RESET keywords). Can be + * converted to and from Values using\n> > > optionsDefListToRawValues and + * optionsTextArrayToRawValues functions.\n> > > + *\n> > > + * TEXT[]\n> > > + *\n> > > + * Options in form suitable for storing in TEXT[] field in DB. (E.g.\n> > > reloptions + * are stored in pg_catalog.pg_class table in reloptions\n> > > field). Can be converted + * to and from Values using\n> > > optionsValuesToTextArray and optionsTextArrayToRawValues + * functions.\n> > > + *\n> > > + * Bytea\n> > > + *\n> > > + * Option data stored in C-structure with varlena header in the beginning\n> > > of the + * structure. This representation is used to pass option values\n> > > to the core + * postgres. 
It is fast to read, it can be cached and so on.\n> > > Bytea representation + * can be obtained from Values using\n> > > optionsValuesToBytea function, and can't be + * converted back.\n> > > + */\n> > > +\n> > > +static option_spec_basic *allocateOptionSpec(int type, const char *name,\n> > > +\t\t\t\t\t\t const char *desc, LOCKMODE \n> lockmode,\n> > > +\t\t\t\t\t\t option_spec_flags flags, int \n> struct_offset);\n> > > +\n> > > +static void parse_one_option(option_value * option, const char *text_str,\n> > > +\t\t\t\t int text_len, bool validate);\n> > > +static void *optionsAllocateBytea(options_spec_set * spec_set, List\n> > > *options); +\n> > > +\n> > > +static List *\n> > > +optionsDefListToRawValues(List *defList, options_parse_mode\n> > > +\t\t\t\t\t\t parse_mode);\n> > > +static Datum optionsValuesToTextArray(List *options_values);\n> > > +static List *optionsMergeOptionValues(List *old_options, List\n> > > *new_options); +static bytea *optionsValuesToBytea(List *options,\n> > > options_spec_set * spec_set); +List *optionsTextArrayToRawValues(Datum\n> > > array_datum);\n> > > +List *optionsParseRawValues(List *raw_values, options_spec_set *\n> > > spec_set,\n> > > +\t\t\t\t\t options_parse_mode mode);\n> > > +\n> > > +\n> > > +/*\n> > > + * Options spec_set functions\n> > > + */\n> > > +\n> > > +/*\n> > > + * Options catalog describes options available for certain object.\n> > > Catalog has + * all necessary information for parsing, transforming and\n> > > validating options + * for an object. All\n> > > parsing/validation/transformation functions should not + * know any\n> > > details of option implementation for certain object, all this + *\n> > > information should be stored in catalog instead and interpreted by + *\n> > > pars/valid/transf functions blindly.\n> > > + *\n> > > + * The heart of the option catalog is an array of option definitions. 
\n> > > Options + * definition specifies name of option, type, range of\n> > > acceptable values, and + * default value.\n> > > + *\n> > > + * Options values can be one of the following types: bool, int, real,\n> > > enum, + * string. For more info see \"option_type\" and\n> > > \"optionsCatalogAddItemYyyy\" + * functions.\n> > > + *\n> > > + * Option definition flags allows to define parser behavior for special\n> > > (or not + * so special) cases. See option_spec_flags for more info.\n> > > + *\n> > > + * Options and Lock levels:\n> > > + *\n> > > + * The default choice for any new option should be AccessExclusiveLock.\n> > > + * In some cases the lock level can be reduced from there, but the lock\n> > > + * level chosen should always conflict with itself to ensure that\n> > > multiple\n> > > + * changes aren't lost when we attempt concurrent changes.\n> > > + * The choice of lock level depends completely upon how that parameter\n> > > + * is used within the server, not upon how and when you'd like to change\n> > > it. + * Safety first. Existing choices are documented here, and elsewhere\n> > > in + * backend code where the parameters are used.\n> > > + *\n> > > + * In general, anything that affects the results obtained from a SELECT\n> > > must be + * protected by AccessExclusiveLock.\n> > > + *\n> > > + * Autovacuum related parameters can be set at ShareUpdateExclusiveLock\n> > > + * since they are only used by the AV procs and don't change anything\n> > > + * currently executing.\n> > > + *\n> > > + * Fillfactor can be set because it applies only to subsequent changes\n> > > made to + * data blocks, as documented in heapio.c\n> > > + *\n> > > + * n_distinct options can be set at ShareUpdateExclusiveLock because they\n> > > + * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n> > > + * so the ANALYZE will not be affected by in-flight changes. 
Changing\n> > > those + * values has no effect until the next ANALYZE, so no need for\n> > > stronger lock. + *\n> > > + * Planner-related parameters can be set with ShareUpdateExclusiveLock\n> > > because + * they only affect planning and not the correctness of the\n> > > execution. Plans + * cannot be changed in mid-flight, so changes here\n> > > could not easily result in + * new improved plans in any case. So we\n> > > allow existing queries to continue + * and existing plans to survive, a\n> > > small price to pay for allowing better + * plans to be introduced\n> > > concurrently without interfering with users. + *\n> > > + * Setting parallel_workers is safe, since it acts the same as\n> > > + * max_parallel_workers_per_gather which is a USERSET parameter that\n> > > doesn't + * affect existing plans or queries.\n> > > +*/\n> > > +\n> > > +/*\n> > > + * allocateOptionsSpecSet\n> > > + *\t\tCreates new Option Spec Set object: Allocates memory and \n> initializes\n> > > + *\t\tstructure members.\n> > > + *\n> > > + * Spec Set items can be added via allocateOptionSpec and\n> > > optionSpecSetAddItem functions + * or by directly calling any of the\n> > > optionsSpecSetAdd* functions (the preferable way) + *\n> > > + * namespace - Spec Set can be bound to a certain namespace (E.g.\n> > > + * namespace.option=value). Options from other namespaces will be ignored\n> > > while + * processing. If set to NULL, no namespace will be used at all.\n> > > + *\n> > > + * size_of_bytea - size of target structure of Bytea options\n> > > representation\n> > > + *\n> > > + * num_items_expected - if you know expected number of Spec Set items set\n> > > it here. + * Set to -1 in other cases. num_items_expected will be used\n> > > for preallocating memory + * and will trigger an error, if you try to add\n> > > more items than you expected. 
+ */\n> > > +\n> > > +options_spec_set *\n> > > +allocateOptionsSpecSet(const char *namespace, int size_of_bytea, int\n> > > num_items_expected) +{\n> > > +\tMemoryContext oldcxt;\n> > > +\toptions_spec_set *spec_set;\n> > > +\n> > > +\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > > +\tspec_set = palloc(sizeof(options_spec_set));\n> > > +\tif (namespace)\n> > > +\t{\n> > > +\t\tspec_set->namespace = palloc(strlen(namespace) + 1);\n> > > +\t\tstrcpy(spec_set->namespace, namespace);\n> > > +\t}\n> > > +\telse\n> > > +\t\tspec_set->namespace = NULL;\n> > > +\tif (num_items_expected > 0)\n> > > +\t{\n> > > +\t\tspec_set->num_allocated = num_items_expected;\n> > > +\t\tspec_set->forbid_realloc = true;\n> > > +\t\tspec_set->definitions = palloc(\n> > > +\t\t\t\t spec_set->num_allocated * \n> sizeof(option_spec_basic *));\n> > > +\t}\n> > > +\telse\n> > > +\t{\n> > > +\t\tspec_set->num_allocated = 0;\n> > > +\t\tspec_set->forbid_realloc = false;\n> > > +\t\tspec_set->definitions = NULL;\n> > > +\t}\n> > > +\tspec_set->num = 0;\n> > > +\tspec_set->struct_size = size_of_bytea;\n> > > +\tspec_set->postprocess_fun = NULL;\n> > > +\tMemoryContextSwitchTo(oldcxt);\n> > > +\treturn spec_set;\n> > > +}\n> > > +\n> > > +/*\n> > > + * allocateOptionSpec\n> > > + *\t\tAllocates a new Option Specification object of desired type \n> and\n> > > + *\t\tinitializes the type-independent fields\n> > > + */\n> > > +static option_spec_basic *\n> > > +allocateOptionSpec(int type, const char *name, const char *desc, LOCKMODE\n> > > lockmode, +\t\t\t\t\t\t option_spec_flags \n> flags, int struct_offset)\n> > > +{\n> > > +\tMemoryContext oldcxt;\n> > > +\tsize_t\t\tsize;\n> > > +\toption_spec_basic *newoption;\n> > > +\n> > > +\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > > +\n> > > +\tswitch (type)\n> > > +\t{\n> > > +\t\tcase OPTION_TYPE_BOOL:\n> > > +\t\t\tsize = sizeof(option_spec_bool);\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_INT:\n> > > +\t\t\tsize = 
sizeof(option_spec_int);\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_REAL:\n> > > +\t\t\tsize = sizeof(option_spec_real);\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_ENUM:\n> > > +\t\t\tsize = sizeof(option_spec_enum);\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_STRING:\n> > > +\t\t\tsize = sizeof(option_spec_string);\n> > > +\t\t\tbreak;\n> > > +\t\tdefault:\n> > > +\t\t\telog(ERROR, \"unsupported reloption type %d\", type);\n> > > +\t\t\treturn NULL;\t\t/* keep compiler quiet */\n> > > +\t}\n> > > +\n> > > +\tnewoption = palloc(size);\n> > > +\n> > > +\tnewoption->name = pstrdup(name);\n> > > +\tif (desc)\n> > > +\t\tnewoption->desc = pstrdup(desc);\n> > > +\telse\n> > > +\t\tnewoption->desc = NULL;\n> > > +\tnewoption->type = type;\n> > > +\tnewoption->lockmode = lockmode;\n> > > +\tnewoption->flags = flags;\n> > > +\tnewoption->struct_offset = struct_offset;\n> > > +\n> > > +\tMemoryContextSwitchTo(oldcxt);\n> > > +\n> > > +\treturn newoption;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionSpecSetAddItem\n> > > + *\t\tAdds a pre-created Option Specification object to the Spec Set\n> > > + */\n> > > +static void\n> > > +optionSpecSetAddItem(option_spec_basic * newoption,\n> > > +\t\t\t\t\t options_spec_set * spec_set)\n> > > +{\n> > > +\tif (spec_set->num >= spec_set->num_allocated)\n> > > +\t{\n> > > +\t\tMemoryContext oldcxt;\n> > > +\n> > > +\t\tAssert(!spec_set->forbid_realloc);\n> > > +\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > > +\n> > > +\t\tif (spec_set->num_allocated == 0)\n> > > +\t\t{\n> > > +\t\t\tspec_set->num_allocated = 8;\n> > > +\t\t\tspec_set->definitions = palloc(\n> > > +\t\t\t\t spec_set->num_allocated * \n> sizeof(option_spec_basic *));\n> > > +\t\t}\n> > > +\t\telse\n> > > +\t\t{\n> > > +\t\t\tspec_set->num_allocated *= 2;\n> > > +\t\t\tspec_set->definitions = repalloc(spec_set->definitions,\n> > > +\t\t\t\t spec_set->num_allocated * \n> sizeof(option_spec_basic *));\n> > > +\t\t}\n> > > 
+\t\tMemoryContextSwitchTo(oldcxt);\n> > > +\t}\n> > > +\tspec_set->definitions[spec_set->num] = newoption;\n> > > +\tspec_set->num++;\n> > > +}\n> > > +\n> > > +\n> > > +/*\n> > > + * optionsSpecSetAddBool\n> > > + *\t\tAdds boolean Option Specification entry to the Spec Set\n> > > + */\n> > > +void\n> > > +optionsSpecSetAddBool(options_spec_set * spec_set, const char *name,\n> > > const char *desc, +\t\t\t\t\t\t \n> LOCKMODE lockmode, option_spec_flags flags,\n> > > +\t\t\t\t\t\t int struct_offset, bool \n> default_val)\n> > > +{\n> > > +\toption_spec_bool *spec_set_item;\n> > > +\n> > > +\tspec_set_item = (option_spec_bool *)\n> > > +\t\tallocateOptionSpec(OPTION_TYPE_BOOL, name, desc, lockmode,\n> > > +\t\t\t\t\t\t\t\t flags, \n> struct_offset);\n> > > +\n> > > +\tspec_set_item->default_val = default_val;\n> > > +\n> > > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsSpecSetAddInt\n> > > + *\t\tAdds integer Option Specification entry to the Spec Set\n> > > + */\n> > > +void\n> > > +optionsSpecSetAddInt(options_spec_set * spec_set, const char *name,\n> > > +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags,\n> > > +\t\t\t\tint struct_offset, int default_val, int \n> min_val, int max_val)\n> > > +{\n> > > +\toption_spec_int *spec_set_item;\n> > > +\n> > > +\tspec_set_item = (option_spec_int *)\n> > > +\t\tallocateOptionSpec(OPTION_TYPE_INT, name, desc, lockmode,\n> > > +\t\t\t\t\t\t\t\t flags, \n> struct_offset);\n> > > +\n> > > +\tspec_set_item->default_val = default_val;\n> > > +\tspec_set_item->min = min_val;\n> > > +\tspec_set_item->max = max_val;\n> > > +\n> > > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsSpecSetAddReal\n> > > + *\t\tAdds float Option Specification entry to the Spec Set\n> > > + */\n> > > +void\n> > > +optionsSpecSetAddReal(options_spec_set * spec_set, const char *name,\n> > > 
const char *desc, +\t\t LOCKMODE lockmode, option_spec_flags \n> flags, int\n> > > struct_offset, +\t\t\t\t\t\t double \n> default_val, double min_val, double\n> > > max_val)\n> > > +{\n> > > +\toption_spec_real *spec_set_item;\n> > > +\n> > > +\tspec_set_item = (option_spec_real *)\n> > > +\t\tallocateOptionSpec(OPTION_TYPE_REAL, name, desc, lockmode,\n> > > +\t\t\t\t\t\t\t\t flags, \n> struct_offset);\n> > > +\n> > > +\tspec_set_item->default_val = default_val;\n> > > +\tspec_set_item->min = min_val;\n> > > +\tspec_set_item->max = max_val;\n> > > +\n> > > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsSpecSetAddEnum\n> > > + *\t\tAdds enum Option Specification entry to the Spec Set\n> > > + *\n> > > + * The members array must have a terminating NULL entry.\n> > > + *\n> > > + * The detailmsg is shown when unsupported values are passed, and has\n> > > this\n> > > + * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"baz\\\".\"\n> > > + *\n> > > + * The members array and detailmsg are not copied -- caller must ensure\n> > > that + * they are valid throughout the life of the process.\n> > > + */\n> > > +\n> > > +void\n> > > +optionsSpecSetAddEnum(options_spec_set * spec_set, const char *name,\n> > > const char *desc, +\t\tLOCKMODE lockmode, option_spec_flags flags, \n> int\n> > > struct_offset,\n> > > +\t\topt_enum_elt_def * members, int default_val, const char \n> *detailmsg)\n> > > +{\n> > > +\toption_spec_enum *spec_set_item;\n> > > +\n> > > +\tspec_set_item = (option_spec_enum *)\n> > > +\t\tallocateOptionSpec(OPTION_TYPE_ENUM, name, desc, lockmode,\n> > > +\t\t\t\t\t\t\t\t flags, \n> struct_offset);\n> > > +\n> > > +\tspec_set_item->default_val = default_val;\n> > > +\tspec_set_item->members = members;\n> > > +\tspec_set_item->detailmsg = detailmsg;\n> > > +\n> > > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > > +}\n> > > +\n> > > +/*\n> > > + * 
optionsSpecSetAddString\n> > > + *\t\tAdds string Option Specification entry to the Spec Set\n> > > + *\n> > > + * \"validator\" is an optional function pointer that can be used to test\n> > > the + * validity of the values. It must elog(ERROR) when the argument\n> > > string is + * not acceptable for the variable. Note that the default\n> > > value must pass + * the validation.\n> > > + */\n> > > +void\n> > > +optionsSpecSetAddString(options_spec_set * spec_set, const char *name,\n> > > const char *desc, +\t\t LOCKMODE lockmode, option_spec_flags \n> flags, int\n> > > struct_offset, +\t\t\t\t const char *default_val, \n> validate_string_option\n> > > validator) +{\n> > > +\toption_spec_string *spec_set_item;\n> > > +\n> > > +\t/* make sure the validator/default combination is sane */\n> > > +\tif (validator)\n> > > +\t\t(validator) (default_val);\n> > > +\n> > > +\tspec_set_item = (option_spec_string *)\n> > > +\t\tallocateOptionSpec(OPTION_TYPE_STRING, name, desc, lockmode,\n> > > +\t\t\t\t\t\t\t\t flags, \n> struct_offset);\n> > > +\tspec_set_item->validate_cb = validator;\n> > > +\n> > > +\tif (default_val)\n> > > +\t\tspec_set_item->default_val = \n> MemoryContextStrdup(TopMemoryContext,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\tdefault_val);\n> > > +\telse\n> > > +\t\tspec_set_item->default_val = NULL;\n> > > +\toptionSpecSetAddItem((option_spec_basic *) spec_set_item, spec_set);\n> > > +}\n> > > +\n> > > +\n> > > +/*\n> > > + * Options transform functions\n> > > + */\n> > > +\n> > > +/* FIXME this comment should be updated\n> > > + * Option values exist in four representations: DefList, TextArray,\n> > > Values and + * Bytea:\n> > > + *\n> > > + * DefList: Is a List of DefElem structures, that comes from the syntax\n> > > analyzer. + * It can be transformed to Values representation for further\n> > > parsing and + * validating\n> > > + *\n> > > + * Values: A List of option_value structures. 
Is divided into two\n> > > subclasses: + * RawValues, when values are already transformed from\n> > > DefList or TextArray, + * but not parsed yet. (In this case you should\n> > > use raw_name and raw_value + * structure members to see option content).\n> > > ParsedValues (or just simple + * Values) is created after finding a\n> > > definition for this option in a spec_set + * and after parsing of the raw\n> > > value. For ParsedValues, content is stored in + * the values structure member,\n> > > and the name can be taken from the option definition in the gen + * structure member.\n> > > Actually a Values list can have both Raw and Parsed values, + * as we do\n> > > not validate options that came from the database, and a db option that + * does\n> > > not exist in spec_set is just ignored, and kept as RawValues + *\n> > > + * TextArray: The representation in which options for an existing object\n> > > come + * and go from/to the database; for example from\n> > > pg_class.reloptions. It is a + * plain TEXT[] db object with name=value\n> > > text inside. This representation can + * be transformed into Values for\n> > > further processing, using options spec_set. + *\n> > > + * Bytea: Is a binary representation of options. Each object that has\n> > > code that + * uses options, should create a C-structure for these options,\n> > > with a varlen + * 4-byte header in front of the data; all items of options\n> > > spec_set should have + * the offset of the corresponding binary data in this\n> > > structure, so the transform + * function can put this data in the correct\n> > > place. One can transform options + * data from Values representation into\n> > > Bytea, using spec_set data, and then use + * it as a usual Datum object,\n> > > when needed. 
This Datum should be cached + * somewhere (for example in\n> > > rel->rd_options for relations) when the object that + * has options is loaded\n> > > from the db.\n> > > + */\n> > > +\n> > > +\n> > > +/* optionsDefListToRawValues\n> > > + *\t\tConverts option values that came from the syntax analyzer \n> (DefList) into\n> > > + *\t\ta Values List.\n> > > + *\n> > > + * No parsing is done here except for checking that RESET syntax is\n> > > correct + * (the syntax analyzer does not see the difference between SET and RESET\n> > > cases, so we + * should treat it here manually)\n> > > + */\n> > > +static List *\n> > > +optionsDefListToRawValues(List *defList, options_parse_mode parse_mode)\n> > > +{\n> > > +\tListCell *cell;\n> > > +\tList\t *result = NIL;\n> > > +\n> > > +\tforeach(cell, defList)\n> > > +\t{\n> > > +\t\toption_value *option_dst;\n> > > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > +\t\tchar\t *value;\n> > > +\n> > > +\t\toption_dst = palloc(sizeof(option_value));\n> > > +\n> > > +\t\tif (def->defnamespace)\n> > > +\t\t{\n> > > +\t\t\toption_dst->namespace = palloc(strlen(def-\n> >defnamespace) + 1);\n> > > +\t\t\tstrcpy(option_dst->namespace, def->defnamespace);\n> > > +\t\t}\n> > > +\t\telse\n> > > +\t\t{\n> > > +\t\t\toption_dst->namespace = NULL;\n> > > +\t\t}\n> > > +\t\toption_dst->raw_name = palloc(strlen(def->defname) + 1);\n> > > +\t\tstrcpy(option_dst->raw_name, def->defname);\n> > > +\n> > > +\t\tif (parse_mode & OPTIONS_PARSE_MODE_FOR_RESET)\n> > > +\t\t{\n> > > +\t\t\t/*\n> > > +\t\t\t * If this option came from a RESET statement we should \n> throw an error\n> > > +\t\t\t * if it brings us name=value data, as the syntax \n> analyzer does not\n> > > +\t\t\t * prevent it\n> > > +\t\t\t */\n> > > +\t\t\tif (def->arg != NULL)\n> > > +\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> > > +\t\t\t\t\terrmsg(\"RESET must not include values \n> for parameters\")));\n> > > +\n> > > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_FOR_RESET;\n> > > 
+\t\t}\n> > > +\t\telse\n> > > +\t\t{\n> > > +\t\t\t/*\n> > > +\t\t\t * For a SET statement we should treat a (name) \n> expression as if it is\n> > > +\t\t\t * actually (name=true) so do it here manually. In \n> other cases\n> > > +\t\t\t * just use the value as given\n> > > +\t\t\t */\n> > > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_RAW;\n> > > +\t\t\tif (def->arg != NULL)\n> > > +\t\t\t\tvalue = defGetString(def);\n> > > +\t\t\telse\n> > > +\t\t\t\tvalue = \"true\";\n> > > +\t\t\toption_dst->raw_value = palloc(strlen(value) + 1);\n> > > +\t\t\tstrcpy(option_dst->raw_value, value);\n> > > +\t\t}\n> > > +\n> > > +\t\tresult = lappend(result, option_dst);\n> > > +\t}\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsValuesToTextArray\n> > > + *\t\tConverts a List of option_values into TextArray\n> > > + *\n> > > + *\tConversion is made to put options into the database (e.g. in\n> > > + *\tpg_class.reloptions for all relation options)\n> > > + */\n> > > +\n> > > +Datum\n> > > +optionsValuesToTextArray(List *options_values)\n> > > +{\n> > > +\tArrayBuildState *astate = NULL;\n> > > +\tListCell *cell;\n> > > +\tDatum\t\tresult;\n> > > +\n> > > +\tforeach(cell, options_values)\n> > > +\t{\n> > > +\t\toption_value *option = (option_value *) lfirst(cell);\n> > > +\t\tconst char *name;\n> > > +\t\tchar\t *value;\n> > > +\t\ttext\t *t;\n> > > +\t\tint\t\t\tlen;\n> > > +\n> > > +\t\t/*\n> > > +\t\t * The raw value was not cleared while parsing, so instead of \n> converting\n> > > +\t\t * it back, just use it to store the value as text\n> > > +\t\t */\n> > > +\t\tvalue = option->raw_value;\n> > > +\n> > > +\t\tAssert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Name will be taken from the option definition, if the option was \n> parsed, or\n> > > +\t\t * from raw_name if the option was not parsed for some reason\n> > > +\t\t */\n> > > +\t\tif (option->status == OPTION_VALUE_STATUS_PARSED)\n> > > +\t\t\tname = 
option->gen->name;\n> > > +\t\telse\n> > > +\t\t\tname = option->raw_name;\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Now build \"name=value\" string and append it to the array\n> > > +\t\t */\n> > > +\t\tlen = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> > > +\t\tt = (text *) palloc(len + 1);\n> > > +\t\tSET_VARSIZE(t, len);\n> > > +\t\tsprintf(VARDATA(t), \"%s=%s\", name, value);\n> > > +\t\tastate = accumArrayResult(astate, PointerGetDatum(t), false,\n> > > +\t\t\t\t\t\t\t\t TEXTOID, \n> CurrentMemoryContext);\n> > > +\t}\n> > > +\tif (astate)\n> > > +\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > > +\telse\n> > > +\t\tresult = (Datum) 0;\n> > > +\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsTextArrayToRawValues\n> > > + *\t\tConverts options from TextArray format into a RawValues list.\n> > > + *\n> > > + *\tThis function is used to convert options data that comes from \n> the database\n> > > to + *\ta List of option_values, for further parsing, and, in the case of\n> > > an ALTER + *\tcommand, for merging with new option values.\n> > > + */\n> > > +List *\n> > > +optionsTextArrayToRawValues(Datum array_datum)\n> > > +{\n> > > +\tList\t *result = NIL;\n> > > +\n> > > +\tif (PointerIsValid(DatumGetPointer(array_datum)))\n> > > +\t{\n> > > +\t\tArrayType *array = DatumGetArrayTypeP(array_datum);\n> > > +\t\tDatum\t *options;\n> > > +\t\tint\t\t\tnoptions;\n> > > +\t\tint\t\t\ti;\n> > > +\n> > > +\t\tdeconstruct_array(array, TEXTOID, -1, false, 'i',\n> > > +\t\t\t\t\t\t &options, NULL, &noptions);\n> > > +\n> > > +\t\tfor (i = 0; i < noptions; i++)\n> > > +\t\t{\n> > > +\t\t\toption_value *option_dst;\n> > > +\t\t\tchar\t *text_str = VARDATA(options[i]);\n> > > +\t\t\tint\t\t\ttext_len = \n> VARSIZE(options[i]) - VARHDRSZ;\n> > > +\t\t\tint\t\t\ti;\n> > > +\t\t\tint\t\t\tname_len = -1;\n> > > +\t\t\tchar\t *name;\n> > > +\t\t\tint\t\t\traw_value_len;\n> > > +\t\t\tchar\t *raw_value;\n> > > +\n> > > +\t\t\t/*\n> > > +\t\t\t * Find 
position of '=' sign and treat it as a \n> separator between\n> > > +\t\t\t * name and value in \"name=value\" item\n> > > +\t\t\t */\n> > > +\t\t\tfor (i = 0; i < text_len; i = i + pg_mblen(text_str + i))\n> > > +\t\t\t{\n> > > +\t\t\t\tif (text_str[i] == '=')\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tname_len = i;\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > +\t\t\t}\n> > > +\t\t\tAssert(name_len >= 1);\t\t/* Just in case \n> */\n> > > +\n> > > +\t\t\traw_value_len = text_len - name_len - 1;\n> > > +\n> > > +\t\t\t/*\n> > > +\t\t\t * Copy name from src\n> > > +\t\t\t */\n> > > +\t\t\tname = palloc(name_len + 1);\n> > > +\t\t\tmemcpy(name, text_str, name_len);\n> > > +\t\t\tname[name_len] = '\\0';\n> > > +\n> > > +\t\t\t/*\n> > > +\t\t\t * Copy value from src\n> > > +\t\t\t */\n> > > +\t\t\traw_value = palloc(raw_value_len + 1);\n> > > +\t\t\tmemcpy(raw_value, text_str + name_len + 1, \n> raw_value_len);\n> > > +\t\t\traw_value[raw_value_len] = '\\0';\n> > > +\n> > > +\t\t\t/*\n> > > +\t\t\t * Create new option_value item\n> > > +\t\t\t */\n> > > +\t\t\toption_dst = palloc(sizeof(option_value));\n> > > +\t\t\toption_dst->status = OPTION_VALUE_STATUS_RAW;\n> > > +\t\t\toption_dst->raw_name = name;\n> > > +\t\t\toption_dst->raw_value = raw_value;\n> > > +\t\t\toption_dst->namespace = NULL;\n> > > +\n> > > +\t\t\tresult = lappend(result, option_dst);\n> > > +\t\t}\n> > > +\t}\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsMergeOptionValues\n> > > + *\t\tMerges two lists of option_values into one list\n> > > + *\n> > > + * This function is used to merge two Values lists into one. It is used\n> > > for all + * kinds of ALTER commands when existing options are\n> > > merged|replaced with a new + * options list. This function also processes the\n> > > RESET variant of the ALTER command. It + * merges two lists as usual, and\n> > > then removes all items with the RESET flag on. 
+ *\n> > > + * Both incoming lists will be destroyed while merging\n> > > + */\n> > > +static List *\n> > > +optionsMergeOptionValues(List *old_options, List *new_options)\n> > > +{\n> > > +\tList\t *result = NIL;\n> > > +\tListCell *old_cell;\n> > > +\tListCell *new_cell;\n> > > +\n> > > +\t/*\n> > > +\t * First add to result all old options that are not mentioned in the new\n> > > list\n> > > +\t */\n> > > +\tforeach(old_cell, old_options)\n> > > +\t{\n> > > +\t\tbool\t\tfound;\n> > > +\t\tconst char *old_name;\n> > > +\t\toption_value *old_option;\n> > > +\n> > > +\t\told_option = (option_value *) lfirst(old_cell);\n> > > +\t\tif (old_option->status == OPTION_VALUE_STATUS_PARSED)\n> > > +\t\t\told_name = old_option->gen->name;\n> > > +\t\telse\n> > > +\t\t\told_name = old_option->raw_name;\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Looking for a new option with the same name\n> > > +\t\t */\n> > > +\t\tfound = false;\n> > > +\t\tforeach(new_cell, new_options)\n> > > +\t\t{\n> > > +\t\t\toption_value *new_option;\n> > > +\t\t\tconst char *new_name;\n> > > +\n> > > +\t\t\tnew_option = (option_value *) lfirst(new_cell);\n> > > +\t\t\tif (new_option->status == OPTION_VALUE_STATUS_PARSED)\n> > > +\t\t\t\tnew_name = new_option->gen->name;\n> > > +\t\t\telse\n> > > +\t\t\t\tnew_name = new_option->raw_name;\n> > > +\n> > > +\t\t\tif (strcmp(new_name, old_name) == 0)\n> > > +\t\t\t{\n> > > +\t\t\t\tfound = true;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > +\t\t}\n> > > +\t\tif (!found)\n> > > +\t\t\tresult = lappend(result, old_option);\n> > > +\t}\n> > > +\t/*\n> > > +\t * Now add to result all new options that are not designated for\n> > > reset +\t */\n> > > +\tforeach(new_cell, new_options)\n> > > +\t{\n> > > +\t\toption_value *new_option;\n> > > +\t\tnew_option = (option_value *) lfirst(new_cell);\n> > > +\n> > > +\t\tif (new_option->status != OPTION_VALUE_STATUS_FOR_RESET)\n> > > +\t\t\tresult = lappend(result, new_option);\n> > > +\t}\n> > > +\treturn result;\n> > > 
+}\n> > > +\n> > > +/*\n> > > + * optionsDefListValdateNamespaces\n> > > + *\t\tFunction checks that all options represented as a DefList have \n> no\n> > > + *\t\tnamespaces or have namespaces only from the allowed list\n> > > + *\n> > > + * Function accepts options as a DefList and a NULL-terminated list of allowed\n> > > + * namespaces. It throws an error if an improper namespace is found.\n> > > + *\n> > > + * This function is actually used only for tables with their toast namespace\n> > > + */\n> > > +void\n> > > +optionsDefListValdateNamespaces(List *defList, char **allowed_namespaces)\n> > > +{\n> > > +\tListCell *cell;\n> > > +\n> > > +\tforeach(cell, defList)\n> > > +\t{\n> > > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Checking namespace only for options that have namespaces. \n> Options\n> > > +\t\t * with no namespaces are always accepted\n> > > +\t\t */\n> > > +\t\tif (def->defnamespace)\n> > > +\t\t{\n> > > +\t\t\tbool\t\tfound = false;\n> > > +\t\t\tint\t\t\ti = 0;\n> > > +\n> > > +\t\t\twhile (allowed_namespaces[i])\n> > > +\t\t\t{\n> > > +\t\t\t\tif (strcmp(def->defnamespace,\n> > > +\t\t\t\t\t\t\t\t \n> allowed_namespaces[i]) == 0)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tfound = true;\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > +\t\t\t\ti++;\n> > > +\t\t\t}\n> > > +\t\t\tif (!found)\n> > > +\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t errmsg(\"unrecognized \n> parameter namespace \\\"%s\\\"\",\n> > > +\t\t\t\t\t\t\t\tdef-\n> >defnamespace)));\n> > > +\t\t}\n> > > +\t}\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsDefListFilterNamespaces\n> > > + *\t\tIterates over a DefList, chooses items with the specified namespace \n> and adds\n> > > + *\t\tthem to a result List\n> > > + *\n> > > + * This function does not destroy the source DefList but does not create\n> > > copies + * of List nodes.\n> > > + * It is actually used only for tables, in order to split toast and 
heap\n> > > + * reloptions, so each one can be stored in its own pg_class record\n> > > + */\n> > > +List *\n> > > +optionsDefListFilterNamespaces(List *defList, const char *namespace)\n> > > +{\n> > > +\tListCell *cell;\n> > > +\tList\t *result = NIL;\n> > > +\n> > > +\tforeach(cell, defList)\n> > > +\t{\n> > > +\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > +\n> > > +\t\tif ((!namespace && !def->defnamespace) ||\n> > > +\t\t\t(namespace && def->defnamespace &&\n> > > +\t\t\t strcmp(namespace, def->defnamespace) == 0))\n> > > +\t\t{\n> > > +\t\t\tresult = lappend(result, def);\n> > > +\t\t}\n> > > +\t}\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsTextArrayToDefList\n> > > + *\t\tConvert the text-array format of reloptions into a List of \n> DefElem.\n> > > + */\n> > > +List *\n> > > +optionsTextArrayToDefList(Datum options)\n> > > +{\n> > > +\tList\t *result = NIL;\n> > > +\tArrayType *array;\n> > > +\tDatum\t *optiondatums;\n> > > +\tint\t\t\tnoptions;\n> > > +\tint\t\t\ti;\n> > > +\n> > > +\t/* Nothing to do if no options */\n> > > +\tif (!PointerIsValid(DatumGetPointer(options)))\n> > > +\t\treturn result;\n> > > +\n> > > +\tarray = DatumGetArrayTypeP(options);\n> > > +\n> > > +\tdeconstruct_array(array, TEXTOID, -1, false, 'i',\n> > > +\t\t\t\t\t &optiondatums, NULL, &noptions);\n> > > +\n> > > +\tfor (i = 0; i < noptions; i++)\n> > > +\t{\n> > > +\t\tchar\t *s;\n> > > +\t\tchar\t *p;\n> > > +\t\tNode\t *val = NULL;\n> > > +\n> > > +\t\ts = TextDatumGetCString(optiondatums[i]);\n> > > +\t\tp = strchr(s, '=');\n> > > +\t\tif (p)\n> > > +\t\t{\n> > > +\t\t\t*p++ = '\\0';\n> > > +\t\t\tval = (Node *) makeString(pstrdup(p));\n> > > +\t\t}\n> > > +\t\tresult = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> > > +\t}\n> > > +\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsDefListToTextArray\n> > > + *\t\tConvert a List of DefElem into the text-array format of options.\n> > > + */\n> > > +\n> > > +Datum\n> > > +optionsDefListToTextArray(List *defList)\n> > > +{\n> > > +\tListCell *cell;\n> > > 
+\tDatum\t\tresult;\n> > > +\tArrayBuildState *astate = NULL;\n> > > +\n> > > +\tforeach(cell, defList)\n> > > +\t{\n> > > +\t\tDefElem\t *def = (DefElem *) lfirst(cell);\n> > > +\t\tconst char *name = def->defname;\n> > > +\t\tconst char *value;\n> > > +\t\ttext\t *t;\n> > > +\t\tint\t\t\tlen;\n> > > +\n> > > +\t\tif (def->arg != NULL)\n> > > +\t\t\tvalue = defGetString(def);\n> > > +\t\telse\n> > > +\t\t\tvalue = \"true\";\n> > > +\n> > > +\t\tif (def->defnamespace)\n> > > +\t\t{\n> > > +\t\t\tAssert(false); /* Should not get here */\n> > > +\t\t\t/* This function is used for backward compatibility \n> in places where\n> > > namespaces are not allowed */ +\t\t\treturn (Datum) 0;\n> > > +\t\t}\n> > > +\t\tlen = VARHDRSZ + strlen(name) + strlen(value) + 1;\n> > > +\t\tt = (text *) palloc(len + 1);\n> > > +\t\tSET_VARSIZE(t, len);\n> > > +\t\tsprintf(VARDATA(t), \"%s=%s\", name, value);\n> > > +\t\tastate = accumArrayResult(astate, PointerGetDatum(t), false,\n> > > +\t\t\t\t\t\t\t\t TEXTOID, \n> CurrentMemoryContext);\n> > > +\n> > > +\t}\n> > > +\tif (astate)\n> > > +\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > > +\telse\n> > > +\t\tresult = (Datum) 0;\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +\n> > > +/*\n> > > + * optionsParseRawValues\n> > > + *\t\tParses and validates (if the proper flag is set) option_values. 
\n> As a\n> > > result + *\t\tthe caller will get the list of parsed (or partly \n> parsed)\n> > > option_values + *\n> > > + * This function is used in cases when the caller gets raw values from the db or\n> > > + * syntax and wants to parse them.\n> > > + * This function uses option_spec_set to get information about how each\n> > > option + * should be parsed.\n> > > + * If validate mode is off and the function finds an option that does not have\n> > > a proper + * option_spec_set entry, this option is kept unparsed (if some\n> > > garbage came from + * the DB, we should put it back there)\n> > > + *\n> > > + * This function destroys the incoming list.\n> > > + */\n> > > +List *\n> > > +optionsParseRawValues(List *raw_values, options_spec_set * spec_set,\n> > > +\t\t\t\t\t options_parse_mode mode)\n> > > +{\n> > > +\tListCell *cell;\n> > > +\tList\t *result = NIL;\n> > > +\tbool\t *is_set;\n> > > +\tint\t\t\ti;\n> > > +\tbool\t\tvalidate = mode & OPTIONS_PARSE_MODE_VALIDATE;\n> > > +\tbool\t\tfor_alter = mode & OPTIONS_PARSE_MODE_FOR_ALTER;\n> > > +\n> > > +\n> > > +\tis_set = palloc0(sizeof(bool) * spec_set->num);\n> > > +\tforeach(cell, raw_values)\n> > > +\t{\n> > > +\t\toption_value *option = (option_value *) lfirst(cell);\n> > > +\t\tbool\t\tfound = false;\n> > > +\t\tbool\t\tskip = false;\n> > > +\n> > > +\n> > > +\t\tif (option->status == OPTION_VALUE_STATUS_PARSED)\n> > > +\t\t{\n> > > +\t\t\t/*\n> > > +\t\t\t * This can happen during ALTER, when new values were \n> already\n> > > +\t\t\t * parsed, but old values merged from the DB are still \n> raw\n> > > +\t\t\t */\n> > > +\t\t\tresult = lappend(result, option);\n> > > +\t\t\tcontinue;\n> > > +\t\t}\n> > > +\t\tif (validate && option->namespace && (!spec_set->namespace ||\n> > > +\t\t\t\t strcmp(spec_set->namespace, option-\n> >namespace) != 0))\n> > > +\t\t{\n> > > +\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t errmsg(\"unrecognized parameter \n> namespace \\\"%s\\\"\",\n> > > 
+\t\t\t\t\t\t\toption->namespace)));\n> > > +\t\t}\n> > > +\n> > > +\t\tfor (i = 0; i < spec_set->num; i++)\n> > > +\t\t{\n> > > +\t\t\toption_spec_basic *definition = spec_set-\n> >definitions[i];\n> > > +\n> > > +\t\t\tif (strcmp(option->raw_name,\n> > > +\t\t\t\t\t\t\t definition->name) == \n> 0)\n> > > +\t\t\t{\n> > > +\t\t\t\t/*\n> > > +\t\t\t\t * Skip option with \"ignore\" flag, as it is \n> processed\n> > > +\t\t\t\t * somewhere else. (WITH OIDS special case)\n> > > +\t\t\t\t */\n> > > +\t\t\t\tif (definition->flags & \n> OPTION_DEFINITION_FLAG_IGNORE)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tfound = true;\n> > > +\t\t\t\t\tskip = true;\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > +\n> > > +\t\t\t\t/*\n> > > +\t\t\t\t * Reject option as if it was not in \n> spec_set. Needed for cases\n> > > +\t\t\t\t * when option should have default value, but \n> should not be\n> > > +\t\t\t\t * changed\n> > > +\t\t\t\t */\n> > > +\t\t\t\tif (definition->flags & \n> OPTION_DEFINITION_FLAG_REJECT)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tfound = false;\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > +\n> > > +\t\t\t\tif (validate && is_set[i])\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" \n> specified more than once\",\n> > > +\t\t\t\t\t\t\t\t option-\n> >raw_name)));\n> > > +\t\t\t\t}\n> > > +\t\t\t\tif ((for_alter) &&\n> > > +\t\t\t\t\t(definition->flags & \n> OPTION_DEFINITION_FLAG_FORBID_ALTER))\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t errmsg(\"changing parameter \n> \\\"%s\\\" is not allowed\",\n> > > +\t\t\t\t\t\t\t\t definition-\n> >name)));\n> > > +\t\t\t\t}\n> > > +\t\t\t\tif (option->status == \n> OPTION_VALUE_STATUS_FOR_RESET)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\t/*\n> > > +\t\t\t\t\t * For RESET options do not need \n> further processing so\n> 
> > +\t\t\t\t\t * mark it found and stop searching\n> > > +\t\t\t\t\t */\n> > > +\t\t\t\t\tfound = true;\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > +\t\t\t\tpfree(option->raw_name);\n> > > +\t\t\t\toption->raw_name = NULL;\n> > > +\t\t\t\toption->gen = definition;\n> > > +\t\t\t\tparse_one_option(option, NULL, -1, validate);\n> > > +\t\t\t\tis_set[i] = true;\n> > > +\t\t\t\tfound = true;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > +\t\t}\n> > > +\t\tif (!found)\n> > > +\t\t{\n> > > +\t\t\tif (validate)\n> > > +\t\t\t{\n> > > +\t\t\t\tif (option->namespace)\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t\t errmsg(\"unrecognized \n> parameter \\\"%s.%s\\\"\",\n> > > +\t\t\t\t\t\t\t\t\t\n> option->namespace, option->raw_name)));\n> > > +\t\t\t\telse\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t\t errmsg(\"unrecognized \n> parameter \\\"%s\\\"\",\n> > > +\t\t\t\t\t\t\t\t\t\n> option->raw_name)));\n> > > +\t\t\t} else\n> > > +\t\t\t{\n> > > +\t\t\t\t/* RESET is always in non-validating mode, \n> unknown names should\n> > > +\t\t\t\t * be ignored. 
This is the traditional behaviour \n> of postgres.\n> > > +\t\t\t\t * FIXME: maybe it should be changed someday\n> > > +\t\t\t\t */\n> > > +\t\t\t\tif (option->status == \n> OPTION_VALUE_STATUS_FOR_RESET)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tskip = true;\n> > > +\t\t\t\t}\n> > > +\t\t\t}\n> > > +\t\t\t/*\n> > > +\t\t\t * In other cases, if we are not parsing in validate \n> mode, then\n> > > +\t\t\t * we should keep the unknown node, because non-validate \n> mode is for\n> > > +\t\t\t * data that is already in the DB and should not be \n> changed after\n> > > +\t\t\t * altering other entries\n> > > +\t\t\t */\n> > > +\t\t}\n> > > +\t\tif (!skip)\n> > > +\t\t\tresult = lappend(result, option);\n> > > +\t}\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +/*\n> > > + * parse_one_option\n> > > + *\n> > > + *\t\tSubroutine for optionsParseRawValues, to parse and validate \n> a\n> > > + *\t\tsingle option's value\n> > > + */\n> > > +static void\n> > > +parse_one_option(option_value * option, const char *text_str, int\n> > > text_len, +\t\t\t\t bool validate)\n> > > +{\n> > > +\tchar\t *value;\n> > > +\tbool\t\tparsed;\n> > > +\n> > > +\tvalue = option->raw_value;\n> > > +\n> > > +\tswitch (option->gen->type)\n> > > +\t{\n> > > +\t\tcase OPTION_TYPE_BOOL:\n> > > +\t\t\t{\n> > > +\t\t\t\tparsed = parse_bool(value, &option-\n> >values.bool_val);\n> > > +\t\t\t\tif (validate && !parsed)\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\terrmsg(\"invalid value for \n> boolean option \\\"%s\\\": %s\",\n> > > +\t\t\t\t\t\t\t option->gen->name, \n> value)));\n> > > +\t\t\t}\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_INT:\n> > > +\t\t\t{\n> > > +\t\t\t\toption_spec_int *optint =\n> > > +\t\t\t\t(option_spec_int *) option->gen;\n> > > +\n> > > +\t\t\t\tparsed = parse_int(value, &option-\n> >values.int_val, 0, NULL);\n> > > +\t\t\t\tif (validate && !parsed)\n> > > +\t\t\t\t\tereport(ERROR,\n> > > 
+\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\terrmsg(\"invalid value for \n> integer option \\\"%s\\\": %s\",\n> > > +\t\t\t\t\t\t\t option->gen->name, \n> value)));\n> > > +\t\t\t\tif (validate && (option->values.int_val < \n> optint->min ||\n> > > +\t\t\t\t\t\t\t\t option-\n> >values.int_val > optint->max))\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t errmsg(\"value %s out of \n> bounds for option \\\"%s\\\"\",\n> > > +\t\t\t\t\t\t\t\t value, \n> option->gen->name),\n> > > +\t\t\t\t\t errdetail(\"Valid values are between \n> \\\"%d\\\" and \\\"%d\\\".\",\n> > > +\t\t\t\t\t\t\t optint->min, \n> optint->max)));\n> > > +\t\t\t}\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_REAL:\n> > > +\t\t\t{\n> > > +\t\t\t\toption_spec_real *optreal =\n> > > +\t\t\t\t(option_spec_real *) option->gen;\n> > > +\n> > > +\t\t\t\tparsed = parse_real(value, &option-\n> >values.real_val, 0, NULL);\n> > > +\t\t\t\tif (validate && !parsed)\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t\t errmsg(\"invalid \n> value for floating point option \\\"%s\\\": %s\",\n> > > +\t\t\t\t\t\t\t\t\t\n> option->gen->name, value)));\n> > > +\t\t\t\tif (validate && (option->values.real_val < \n> optreal->min ||\n> > > +\t\t\t\t\t\t\t\t option-\n> >values.real_val > optreal->max))\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t errmsg(\"value %s out of \n> bounds for option \\\"%s\\\"\",\n> > > +\t\t\t\t\t\t\t\t value, \n> option->gen->name),\n> > > +\t\t\t\t\t errdetail(\"Valid values are between \n> \\\"%f\\\" and \\\"%f\\\".\",\n> > > +\t\t\t\t\t\t\t optreal->min, \n> optreal->max)));\n> > > +\t\t\t}\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_ENUM:\n> > > +\t\t\t{\n> > > +\t\t\t\toption_spec_enum *optenum =\n> > > 
+\t\t\t\t\t\t\t\t\t\t\n> (option_spec_enum *) option->gen;\n> > > +\t\t\t\topt_enum_elt_def *elt;\n> > > +\t\t\t\tparsed = false;\n> > > +\t\t\t\tfor (elt = optenum->members; elt->string_val; \n> elt++)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tif (strcmp(value, elt->string_val) == \n> 0)\n> > > +\t\t\t\t\t{\n> > > +\t\t\t\t\t\toption->values.enum_val = \n> elt->symbol_val;\n> > > +\t\t\t\t\t\tparsed = true;\n> > > +\t\t\t\t\t\tbreak;\n> > > +\t\t\t\t\t}\n> > > +\t\t\t\t}\n> > > +\t\t\t\tif (!parsed)\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > +\t\t\t\t\t\t\t errmsg(\"invalid \n> value for enum option \\\"%s\\\": %s\",\n> > > +\t\t\t\t\t\t\t\t\t\n> option->gen->name, value),\n> > > +\t\t\t\t\t\t\t optenum->detailmsg ?\n> > > +\t\t\t\t\t\t\t \n> errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n> > > +\t\t\t\t}\n> > > +\t\t\t}\n> > > +\t\t\tbreak;\n> > > +\t\tcase OPTION_TYPE_STRING:\n> > > +\t\t\t{\n> > > +\t\t\t\toption_spec_string *optstring =\n> > > +\t\t\t\t(option_spec_string *) option->gen;\n> > > +\n> > > +\t\t\t\toption->values.string_val = value;\n> > > +\t\t\t\tif (validate && optstring->validate_cb)\n> > > +\t\t\t\t\t(optstring->validate_cb) (value);\n> > > +\t\t\t\tparsed = true;\n> > > +\t\t\t}\n> > > +\t\t\tbreak;\n> > > +\t\tdefault:\n> > > +\t\t\telog(ERROR, \"unsupported reloption type %d\", option-\n> >gen->type);\n> > > +\t\t\tparsed = true;\t\t/* quiet compiler */\n> > > +\t\t\tbreak;\n> > > +\t}\n> > > +\n> > > +\tif (parsed)\n> > > +\t\toption->status = OPTION_VALUE_STATUS_PARSED;\n> > > +\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsAllocateBytea\n> > > + *\t\tAllocates memory for bytea options representation\n> > > + *\n> > > + * Function allocates memory for byrea structure of an option, plus adds\n> > > space + * for values of string options. 
We should keep all data including\n> > > string + * values in the same memory chunk, because Cache code copies\n> > > bytea option + * data from one MemoryContext to another without knowing\n> > > about its internal + * structure, so it would not be able to copy string\n> > > values if they are outside + * of bytea memory chunk.\n> > > + */\n> > > +static void *\n> > > +optionsAllocateBytea(options_spec_set * spec_set, List *options)\n> > > +{\n> > > +\tSize\t\tsize;\n> > > +\tint\t\t\ti;\n> > > +\tListCell *cell;\n> > > +\tint\t\t\tlength;\n> > > +\tvoid\t *res;\n> > > +\n> > > +\tsize = spec_set->struct_size;\n> > > +\n> > > +\t/* Calculate size needed to store all string values for this option \n> */\n> > > +\tfor (i = 0; i < spec_set->num; i++)\n> > > +\t{\n> > > +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> > > +\t\tbool\t\tfound = false;\n> > > +\t\toption_value *option;\n> > > +\n> > > +\t\t/* Not interested in non-string options, skipping */\n> > > +\t\tif (definition->type != OPTION_TYPE_STRING)\n> > > +\t\t\tcontinue;\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Trying to find option_value that references definition \n> spec_set\n> > > +\t\t * entry\n> > > +\t\t */\n> > > +\t\tforeach(cell, options)\n> > > +\t\t{\n> > > +\t\t\toption = (option_value *) lfirst(cell);\n> > > +\t\t\tif (option->status == OPTION_VALUE_STATUS_PARSED &&\n> > > +\t\t\t\tstrcmp(option->gen->name, definition->name) == \n> 0)\n> > > +\t\t\t{\n> > > +\t\t\t\tfound = true;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > +\t\t}\n> > > +\t\tif (found)\n> > > +\t\t\t/* If found, its value will be stored */\n> > > +\t\t\tlength = strlen(option->values.string_val) + 1;\n> > > +\t\telse\n> > > +\t\t\t/* If not found, then there would be default value \n> there */\n> > > +\t\tif (((option_spec_string *) definition)->default_val)\n> > > +\t\t\tlength = strlen(\n> > > +\t\t\t\t ((option_spec_string *) definition)-\n> >default_val) + 1;\n> > > +\t\telse\n> > > +\t\t\tlength = 0;\n> 
> > +\t\t/* Add total length of all string values to basic size */\n> > > +\t\tsize += length;\n> > > +\t}\n> > > +\n> > > +\tres = palloc0(size);\n> > > +\tSET_VARSIZE(res, size);\n> > > +\treturn res;\n> > > +}\n> > > +\n> > > +/*\n> > > + * optionsValuesToBytea\n> > > + *\t\tConverts options from List of option_values to binary bytea \n> structure\n> > > + *\n> > > + * Conversion proceeds according to options_spec_set: each spec_set item\n> > > + * has offset value, and option value in binary mode is written to the\n> > > + * structure with that offset.\n> > > + *\n> > > + * A special case is string values. Memory for bytea structure is\n> > > allocated + * by optionsAllocateBytea which adds some more space for\n> > > string values to + * the size of original structure. All string values\n> > > are copied there and + * inside the bytea structure an offset to that\n> > > value is kept.\n> > > + *\n> > > + */\n> > > +static bytea *\n> > > +optionsValuesToBytea(List *options, options_spec_set * spec_set)\n> > > +{\n> > > +\tchar\t *data;\n> > > +\tchar\t *string_values_buffer;\n> > > +\tint\t\t\ti;\n> > > +\n> > > +\tdata = optionsAllocateBytea(spec_set, options);\n> > > +\n> > > +\t/* place for string data starts right after original structure */\n> > > +\tstring_values_buffer = data + spec_set->struct_size;\n> > > +\n> > > +\tfor (i = 0; i < spec_set->num; i++)\n> > > +\t{\n> > > +\t\toption_value *found = NULL;\n> > > +\t\tListCell *cell;\n> > > +\t\tchar\t *item_pos;\n> > > +\t\toption_spec_basic *definition = spec_set->definitions[i];\n> > > +\n> > > +\t\tif (definition->flags & OPTION_DEFINITION_FLAG_IGNORE)\n> > > +\t\t\tcontinue;\n> > > +\n> > > +\t\t/* Calculate the position of the item inside the structure */\n> > > +\t\titem_pos = data + definition->struct_offset;\n> > > +\n> > > +\t\t/* Looking for the corresponding option from options list */\n> > > +\t\tforeach(cell, options)\n> > > +\t\t{\n> > > +\t\t\toption_value *option = (option_value *) 
lfirst(cell);\n> > > +\n> > > +\t\t\tif (option->status == OPTION_VALUE_STATUS_RAW)\n> > > +\t\t\t\tcontinue;\t\t/* raw can come from db. \n> Just ignore them then */\n> > > +\t\t\tAssert(option->status != OPTION_VALUE_STATUS_EMPTY);\n> > > +\n> > > +\t\t\tif (strcmp(definition->name, option->gen->name) == 0)\n> > > +\t\t\t{\n> > > +\t\t\t\tfound = option;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > +\t\t}\n> > > +\t\t/* writing to the proper position either option value or \n> default val */\n> > > +\t\tswitch (definition->type)\n> > > +\t\t{\n> > > +\t\t\tcase OPTION_TYPE_BOOL:\n> > > +\t\t\t\t*(bool *) item_pos = found ?\n> > > +\t\t\t\t\tfound->values.bool_val :\n> > > +\t\t\t\t\t((option_spec_bool *) definition)-\n> >default_val;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\tcase OPTION_TYPE_INT:\n> > > +\t\t\t\t*(int *) item_pos = found ?\n> > > +\t\t\t\t\tfound->values.int_val :\n> > > +\t\t\t\t\t((option_spec_int *) definition)-\n> >default_val;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\tcase OPTION_TYPE_REAL:\n> > > +\t\t\t\t*(double *) item_pos = found ?\n> > > +\t\t\t\t\tfound->values.real_val :\n> > > +\t\t\t\t\t((option_spec_real *) definition)-\n> >default_val;\n> > > +\t\t\t\tbreak;\n> > > +\t\t\tcase OPTION_TYPE_ENUM:\n> > > +\t\t\t\t*(int *) item_pos = found ?\n> > > +\t\t\t\t\tfound->values.enum_val :\n> > > +\t\t\t\t\t((option_spec_enum *) definition)-\n> >default_val;\n> > > +\t\t\t\tbreak;\n> > > +\n> > > +\t\t\tcase OPTION_TYPE_STRING:\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\t/*\n> > > +\t\t\t\t\t * For string options: writing string \n> value at the string\n> > > +\t\t\t\t\t * buffer after the structure, and \n> storing and offset to\n> > > +\t\t\t\t\t * that value\n> > > +\t\t\t\t\t */\n> > > +\t\t\t\t\tchar\t *value = NULL;\n> > > +\n> > > +\t\t\t\t\tif (found)\n> > > +\t\t\t\t\t\tvalue = found-\n> >values.string_val;\n> > > +\t\t\t\t\telse\n> > > +\t\t\t\t\t\tvalue = ((option_spec_string \n> *) definition)\n> > > +\t\t\t\t\t\t\t->default_val;\n> > > 
+\t\t\t\t\t*(int *) item_pos = value ?\n> > > +\t\t\t\t\t\tstring_values_buffer - data :\n> > > +\t\t\t\t\t\t\n> OPTION_STRING_VALUE_NOT_SET_OFFSET;\n> > > +\t\t\t\t\tif (value)\n> > > +\t\t\t\t\t{\n> > > +\t\t\t\t\t\tstrcpy(string_values_buffer, \n> value);\n> > > +\t\t\t\t\t\tstring_values_buffer += \n> strlen(value) + 1;\n> > > +\t\t\t\t\t}\n> > > +\t\t\t\t}\n> > > +\t\t\t\tbreak;\n> > > +\t\t\tdefault:\n> > > +\t\t\t\telog(ERROR, \"unsupported reloption type %d\",\n> > > +\t\t\t\t\t definition->type);\n> > > +\t\t\t\tbreak;\n> > > +\t\t}\n> > > +\t}\n> > > +\treturn (void *) data;\n> > > +}\n> > > +\n> > > +\n> > > +/*\n> > > + * transformOptions\n> > > + *\t\tThis function is used by src/backend/commands/Xxxx in order \n> to\n> > > process\n> > > + *\t\tnew option values, merge them with existing values (in the \n> case of\n> > > + *\t\tALTER command) and prepare to put them [back] into DB\n> > > + */\n> > > +\n> > > +Datum\n> > > +transformOptions(options_spec_set * spec_set, Datum oldOptions,\n> > > +\t\t\t\t List *defList, options_parse_mode \n> parse_mode)\n> > > +{\n> > > +\tDatum\t\tresult;\n> > > +\tList\t *new_values;\n> > > +\tList\t *old_values;\n> > > +\tList\t *merged_values;\n> > > +\n> > > +\t/*\n> > > +\t * Parse and validate new values\n> > > +\t */\n> > > +\tnew_values = optionsDefListToRawValues(defList, parse_mode);\n> > > +\tif (! (parse_mode & OPTIONS_PARSE_MODE_FOR_RESET))\n> > > +\t{\n> > > +\t\t/* FIXME: postgres' usual behaviour was not to validate names \n> that\n> > > +\t\t * came from RESET command. Someday this behaviour should be \n> changed,\n> > > +\t\t * I guess. But for now we keep it as it was.\n> > > +\t\t */\n> > > +\t\tparse_mode |= OPTIONS_PARSE_MODE_VALIDATE;\n> > > +\t}\n> > > +\tnew_values = optionsParseRawValues(new_values, spec_set, parse_mode);\n> > > +\n> > > +\t/*\n> > > +\t * Old values exist in case of ALTER commands. 
Transform them to raw\n> > > +\t * values and merge them with new_values, and parse it.\n> > > +\t */\n> > > +\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> > > +\t{\n> > > +\t\told_values = optionsTextArrayToRawValues(oldOptions);\n> > > +\t\tmerged_values = optionsMergeOptionValues(old_values, \n> new_values);\n> > > +\n> > > +\t\t/*\n> > > +\t\t * Parse options only after merging in order not to parse \n> options that\n> > > +\t\t * would be removed by merging later\n> > > +\t\t */\n> > > +\t\tmerged_values = optionsParseRawValues(merged_values, \n> spec_set, 0);\n> > > +\t}\n> > > +\telse\n> > > +\t{\n> > > +\t\tmerged_values = new_values;\n> > > +\t}\n> > > +\n> > > +\t/*\n> > > +\t * If we have postprocess_fun function defined in spec_set, then there\n> > > +\t * might be some custom options checks there, with error throwing. So \n> we\n> > > +\t * should do it here to throw these errors while CREATing or ALTERing\n> > > +\t * options\n> > > +\t */\n> > > +\tif (spec_set->postprocess_fun)\n> > > +\t{\n> > > +\t\tbytea\t *data = optionsValuesToBytea(merged_values, \n> spec_set);\n> > > +\n> > > +\t\tspec_set->postprocess_fun(data, true);\n> > > +\t\tpfree(data);\n> > > +\t}\n> > > +\n> > > +\t/*\n> > > +\t * Convert options to TextArray format so caller can store them into\n> > > +\t * database\n> > > +\t */\n> > > +\tresult = optionsValuesToTextArray(merged_values);\n> > > +\treturn result;\n> > > +}\n> > > +\n> > > +\n> > > +/*\n> > > + * optionsTextArrayToBytea\n> > > + *\t\tA meta-function that transforms options stored as TextArray \n> into\n> > > binary + *\t\t(bytea) representation.\n> > > + *\n> > > + *\tThis function runs other transform functions that leads to the \n> desired\n> > > + *\tresult in no-validation mode. 
This function is used by cache\n> > > mechanism,\n> > > + *\tin order to load and cache options when object itself is loaded and\n> > > cached + */\n> > > +bytea *\n> > > +optionsTextArrayToBytea(options_spec_set * spec_set, Datum data, bool\n> > > validate) +{\n> > > +\tList\t *values;\n> > > +\tbytea\t *options;\n> > > +\n> > > +\tvalues = optionsTextArrayToRawValues(data);\n> > > +\tvalues = optionsParseRawValues(values, spec_set,\n> > > +\t\t\t\t\t\t\t\tvalidate ? \n> OPTIONS_PARSE_MODE_VALIDATE : 0);\n> > > +\toptions = optionsValuesToBytea(values, spec_set);\n> > > +\n> > > +\tif (spec_set->postprocess_fun)\n> > > +\t{\n> > > +\t\tspec_set->postprocess_fun(options, false);\n> > > +\t}\n> > > +\treturn options;\n> > > +}\n> > > diff --git a/src/backend/access/common/relation.c\n> > > b/src/backend/access/common/relation.c index 632d13c..49ad197 100644\n> > > --- a/src/backend/access/common/relation.c\n> > > +++ b/src/backend/access/common/relation.c\n> > > @@ -65,9 +65,13 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n> > > \n> > > \t * If we didn't get the lock ourselves, assert that caller holds \n> one,\n> > > \t * except in bootstrap mode where no locks are used.\n> > > \t */\n> > > \n> > > -\tAssert(lockmode != NoLock ||\n> > > -\t\t IsBootstrapProcessingMode() ||\n> > > -\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> > > +\n> > > +// FIXME We need NoLock mode to get AM data when choosing Lock for\n> > > +// attoptions is changed. 
See ProcessUtilitySlow problems comes from\n> > > there\n> > > +// This is a dirty hack, we need better solution for this case;\n> > > +//\tAssert(lockmode != NoLock ||\n> > > +//\t\t IsBootstrapProcessingMode() ||\n> > > +//\t\t CheckRelationLockedByMe(r, AccessShareLock, true));\n> > > \n> > > \t/* Make note that we've accessed a temporary relation */\n> > > \tif (RelationUsesLocalBuffers(r))\n> > > \n> > > diff --git a/src/backend/access/common/reloptions.c\n> > > b/src/backend/access/common/reloptions.c index b5602f5..29ab98a 100644\n> > > --- a/src/backend/access/common/reloptions.c\n> > > +++ b/src/backend/access/common/reloptions.c\n> > > @@ -1,7 +1,7 @@\n> > > \n> > > /*-----------------------------------------------------------------------\n> > > --\n> > > \n> > > *\n> > > * reloptions.c\n> > > \n> > > - *\t Core support for relation options (pg_class.reloptions)\n> > > + *\t Support for relation options (pg_class.reloptions)\n> > > \n> > > *\n> > > * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n> > > * Portions Copyright (c) 1994, Regents of the University of California\n> > > \n> > > @@ -17,13 +17,10 @@\n> > > \n> > > #include <float.h>\n> > > \n> > > -#include \"access/gist_private.h\"\n> > > -#include \"access/hash.h\"\n> > > \n> > > #include \"access/heaptoast.h\"\n> > > #include \"access/htup_details.h\"\n> > > \n> > > -#include \"access/nbtree.h\"\n> > > \n> > > #include \"access/reloptions.h\"\n> > > \n> > > -#include \"access/spgist_private.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"catalog/pg_type.h\"\n> > > #include \"commands/defrem.h\"\n> > > #include \"commands/tablespace.h\"\n> > > \n> > > @@ -36,6 +33,7 @@\n> > > \n> > > #include \"utils/guc.h\"\n> > > #include \"utils/memutils.h\"\n> > > #include \"utils/rel.h\"\n> > > \n> > > +#include \"storage/bufmgr.h\"\n> > > \n> > > /*\n> > > \n> > > * Contents of pg_class.reloptions\n> > > \n> > > @@ -93,380 +91,8 @@\n> > > \n> > > * value has no 
effect until the next VACUUM, so no need for stronger\n> > > lock.\n> > > */\n> > > \n> > > -static relopt_bool boolRelOpts[] =\n> > > -{\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autosummarize\",\n> > > -\t\t\t\"Enables automatic summarization on this BRIN \n> index\",\n> > > -\t\t\tRELOPT_KIND_BRIN,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\tfalse\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_enabled\",\n> > > -\t\t\t\"Enables autovacuum in this relation\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\ttrue\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"user_catalog_table\",\n> > > -\t\t\t\"Declare a table as an additional catalog table, \n> e.g. for the purpose\n> > > of logical replication\", -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\tfalse\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fastupdate\",\n> > > -\t\t\t\"Enables \\\"fast update\\\" feature for this GIN \n> index\",\n> > > -\t\t\tRELOPT_KIND_GIN,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\ttrue\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"security_barrier\",\n> > > -\t\t\t\"View acts as a row security barrier\",\n> > > -\t\t\tRELOPT_KIND_VIEW,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\tfalse\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"vacuum_truncate\",\n> > > -\t\t\t\"Enables vacuum to truncate empty pages at the end \n> of this table\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\ttrue\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"deduplicate_items\",\n> > > -\t\t\t\"Enables \\\"deduplicate items\\\" feature for this \n> btree index\",\n> > > -\t\t\tRELOPT_KIND_BTREE,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > 
-\t\t},\n> > > -\t\ttrue\n> > > -\t},\n> > > -\t/* list terminator */\n> > > -\t{{NULL}}\n> > > -};\n> > > -\n> > > -static relopt_int intRelOpts[] =\n> > > -{\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fillfactor\",\n> > > -\t\t\t\"Packs table pages only to this percentage\",\n> > > -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > -\t\t},\n> > > -\t\tHEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fillfactor\",\n> > > -\t\t\t\"Packs btree index pages only to this percentage\",\n> > > -\t\t\tRELOPT_KIND_BTREE,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > -\t\t},\n> > > -\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fillfactor\",\n> > > -\t\t\t\"Packs hash index pages only to this percentage\",\n> > > -\t\t\tRELOPT_KIND_HASH,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > -\t\t},\n> > > -\t\tHASH_DEFAULT_FILLFACTOR, HASH_MIN_FILLFACTOR, 100\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fillfactor\",\n> > > -\t\t\t\"Packs gist index pages only to this percentage\",\n> > > -\t\t\tRELOPT_KIND_GIST,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > -\t\t},\n> > > -\t\tGIST_DEFAULT_FILLFACTOR, GIST_MIN_FILLFACTOR, 100\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"fillfactor\",\n> > > -\t\t\t\"Packs spgist index pages only to this percentage\",\n> > > -\t\t\tRELOPT_KIND_SPGIST,\n> > > -\t\t\tShareUpdateExclusiveLock\t/* since it applies only \n> to later\n> > > -\t\t\t\t\t\t\t\t\t\t\n> * inserts */\n> > > -\t\t},\n> > > -\t\tSPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100\n> > > 
-\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_vacuum_threshold\",\n> > > -\t\t\t\"Minimum number of tuple updates or deletes prior to \n> vacuum\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0, INT_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_vacuum_insert_threshold\",\n> > > -\t\t\t\"Minimum number of tuple inserts prior to vacuum, or \n> -1 to disable\n> > > insert vacuums\", -\t\t\tRELOPT_KIND_HEAP | \n> RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-2, -1, INT_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_analyze_threshold\",\n> > > -\t\t\t\"Minimum number of tuple inserts, updates or deletes \n> prior to\n> > > analyze\",\n> > > -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0, INT_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_vacuum_cost_limit\",\n> > > -\t\t\t\"Vacuum cost amount available before napping, for \n> autovacuum\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 1, 10000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_freeze_min_age\",\n> > > -\t\t\t\"Minimum age at which VACUUM should freeze a table \n> row, for\n> > > autovacuum\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0, 1000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_multixact_freeze_min_age\",\n> > > -\t\t\t\"Minimum multixact age at which VACUUM should freeze \n> a row\n> > > multixact's, for autovacuum\", -\t\t\tRELOPT_KIND_HEAP | \n> RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0, 1000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_freeze_max_age\",\n> > > -\t\t\t\"Age at which 
to autovacuum a table to prevent \n> transaction ID\n> > > wraparound\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 100000, 2000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_multixact_freeze_max_age\",\n> > > -\t\t\t\"Multixact age at which to autovacuum a table to \n> prevent multixact\n> > > wraparound\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 10000, 2000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_freeze_table_age\",\n> > > -\t\t\t\"Age at which VACUUM should perform a full table \n> sweep to freeze row\n> > > versions\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t}, -1, 0, 2000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_multixact_freeze_table_age\",\n> > > -\t\t\t\"Age of multixact at which VACUUM should perform a \n> full table sweep to\n> > > freeze row versions\", -\t\t\tRELOPT_KIND_HEAP | \n> RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t}, -1, 0, 2000000000\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"log_autovacuum_min_duration\",\n> > > -\t\t\t\"Sets the minimum execution time above which \n> autovacuum actions will\n> > > be logged\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, -1, INT_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"toast_tuple_target\",\n> > > -\t\t\t\"Sets the target tuple length at which external \n> columns will be\n> > > toasted\", -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\tTOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"pages_per_range\",\n> > > -\t\t\t\"Number of pages that each page range covers in a \n> BRIN index\",\n> > > 
-\t\t\tRELOPT_KIND_BRIN,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t}, 128, 1, 131072\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"gin_pending_list_limit\",\n> > > -\t\t\t\"Maximum size of the pending list for this GIN \n> index, in kilobytes.\",\n> > > -\t\t\tRELOPT_KIND_GIN,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 64, MAX_KILOBYTES\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"effective_io_concurrency\",\n> > > -\t\t\t\"Number of simultaneous requests that can be handled \n> efficiently by\n> > > the disk subsystem.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -#ifdef USE_PREFETCH\n> > > -\t\t-1, 0, MAX_IO_CONCURRENCY\n> > > -#else\n> > > -\t\t0, 0, 0\n> > > -#endif\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"maintenance_io_concurrency\",\n> > > -\t\t\t\"Number of simultaneous requests that can be handled \n> efficiently by\n> > > the disk subsystem for maintenance work.\", -\t\t\t\n> RELOPT_KIND_TABLESPACE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -#ifdef USE_PREFETCH\n> > > -\t\t-1, 0, MAX_IO_CONCURRENCY\n> > > -#else\n> > > -\t\t0, 0, 0\n> > > -#endif\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"parallel_workers\",\n> > > -\t\t\t\"Number of parallel processes that can be used per \n> executor node for\n> > > this relation.\", -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0, 1024\n> > > -\t},\n> > > -\n> > > -\t/* list terminator */\n> > > -\t{{NULL}}\n> > > -};\n> > > -\n> > > -static relopt_real realRelOpts[] =\n> > > -{\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_vacuum_cost_delay\",\n> > > -\t\t\t\"Vacuum cost delay in milliseconds, for autovacuum\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, 100.0\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > 
-\t\t\t\"autovacuum_vacuum_scale_factor\",\n> > > -\t\t\t\"Number of tuple updates or deletes prior to vacuum \n> as a fraction of\n> > > reltuples\", -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, 100.0\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_vacuum_insert_scale_factor\",\n> > > -\t\t\t\"Number of tuple inserts prior to vacuum as a \n> fraction of reltuples\",\n> > > -\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, 100.0\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"autovacuum_analyze_scale_factor\",\n> > > -\t\t\t\"Number of tuple inserts, updates or deletes prior \n> to analyze as a\n> > > fraction of reltuples\", -\t\t\tRELOPT_KIND_HEAP,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, 100.0\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"seq_page_cost\",\n> > > -\t\t\t\"Sets the planner's estimate of the cost of a \n> sequentially fetched\n> > > disk page.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, DBL_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"random_page_cost\",\n> > > -\t\t\t\"Sets the planner's estimate of the cost of a \n> nonsequentially fetched\n> > > disk page.\", -\t\t\tRELOPT_KIND_TABLESPACE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, DBL_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"n_distinct\",\n> > > -\t\t\t\"Sets the planner's estimate of the number of \n> distinct values\n> > > appearing in a column (excluding child relations).\",\n> > > -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t0, -1.0, DBL_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"n_distinct_inherited\",\n> > > -\t\t\t\"Sets the planner's estimate of the number of \n> distinct values\n> > > 
appearing in a column (including child relations).\",\n> > > -\t\t\tRELOPT_KIND_ATTRIBUTE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t0, -1.0, DBL_MAX\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"vacuum_cleanup_index_scale_factor\",\n> > > -\t\t\t\"Deprecated B-Tree parameter.\",\n> > > -\t\t\tRELOPT_KIND_BTREE,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\t-1, 0.0, 1e10\n> > > -\t},\n> > > -\t/* list terminator */\n> > > -\t{{NULL}}\n> > > -};\n> > > -\n> > > \n> > > /* values from StdRdOptIndexCleanup */\n> > > \n> > > -relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > > +opt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > > \n> > > {\n> > > \n> > > \t{\"auto\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO},\n> > > \t{\"on\", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},\n> > > \n> > > @@ -480,17 +106,8 @@ relopt_enum_elt_def StdRdOptIndexCleanupValues[] =\n> > > \n> > > \t{(const char *) NULL}\t\t/* list terminator */\n> > > \n> > > };\n> > > \n> > > -/* values from GistOptBufferingMode */\n> > > -relopt_enum_elt_def gistBufferingOptValues[] =\n> > > -{\n> > > -\t{\"auto\", GIST_OPTION_BUFFERING_AUTO},\n> > > -\t{\"on\", GIST_OPTION_BUFFERING_ON},\n> > > -\t{\"off\", GIST_OPTION_BUFFERING_OFF},\n> > > -\t{(const char *) NULL}\t\t/* list terminator */\n> > > -};\n> > > -\n> > > \n> > > /* values from ViewOptCheckOption */\n> > > \n> > > -relopt_enum_elt_def viewCheckOptValues[] =\n> > > +opt_enum_elt_def viewCheckOptValues[] =\n> > > \n> > > {\n> > > \n> > > \t/* no value for NOT_SET */\n> > > \t{\"local\", VIEW_OPTION_CHECK_OPTION_LOCAL},\n> > > \n> > > @@ -498,61 +115,8 @@ relopt_enum_elt_def viewCheckOptValues[] =\n> > > \n> > > \t{(const char *) NULL}\t\t/* list terminator */\n> > > \n> > > };\n> > > \n> > > -static relopt_enum enumRelOpts[] =\n> > > -{\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"vacuum_index_cleanup\",\n> > > -\t\t\t\"Controls index vacuuming and index cleanup\",\n> > > -\t\t\tRELOPT_KIND_HEAP | 
RELOPT_KIND_TOAST,\n> > > -\t\t\tShareUpdateExclusiveLock\n> > > -\t\t},\n> > > -\t\tStdRdOptIndexCleanupValues,\n> > > -\t\tSTDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,\n> > > -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \n> \\\"auto\\\".\")\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"buffering\",\n> > > -\t\t\t\"Enables buffering build for this GiST index\",\n> > > -\t\t\tRELOPT_KIND_GIST,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\tgistBufferingOptValues,\n> > > -\t\tGIST_OPTION_BUFFERING_AUTO,\n> > > -\t\tgettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \n> \\\"auto\\\".\")\n> > > -\t},\n> > > -\t{\n> > > -\t\t{\n> > > -\t\t\t\"check_option\",\n> > > -\t\t\t\"View has WITH CHECK OPTION defined (local or \n> cascaded).\",\n> > > -\t\t\tRELOPT_KIND_VIEW,\n> > > -\t\t\tAccessExclusiveLock\n> > > -\t\t},\n> > > -\t\tviewCheckOptValues,\n> > > -\t\tVIEW_OPTION_CHECK_OPTION_NOT_SET,\n> > > -\t\tgettext_noop(\"Valid values are \\\"local\\\" and \\\"cascaded\\\".\")\n> > > -\t},\n> > > -\t/* list terminator */\n> > > -\t{{NULL}}\n> > > -};\n> > > -\n> > > -static relopt_string stringRelOpts[] =\n> > > -{\n> > > -\t/* list terminator */\n> > > -\t{{NULL}}\n> > > -};\n> > > -\n> > > -static relopt_gen **relOpts = NULL;\n> > > -static bits32 last_assigned_kind = RELOPT_KIND_LAST_DEFAULT;\n> > > -\n> > > -static int\tnum_custom_options = 0;\n> > > -static relopt_gen **custom_options = NULL;\n> > > -static bool need_initialization = true;\n> > > \n> > > -static void initialize_reloptions(void);\n> > > -static void parse_one_reloption(relopt_value *option, char *text_str,\n> > > -\t\t\t\t\t\t\t\tint \n> text_len, bool validate);\n> > > +options_spec_set *get_stdrd_relopt_spec_set(relopt_kind kind);\n> > > \n> > > /*\n> > > \n> > > * Get the length of a string reloption (either default or the\n> > > user-defined\n> > > \n> > > @@ -563,160 +127,6 @@ static void parse_one_reloption(relopt_value\n> > > *option, char 
*text_str,> \n> > > \t((option).isset ? strlen((option).values.string_val) : \\\n> > > \t\n> > > \t ((relopt_string *) (option).gen)->default_len)\n> > > \n> > > -/*\n> > > - * initialize_reloptions\n> > > - *\t\tinitialization routine, must be called before parsing\n> > > - *\n> > > - * Initialize the relOpts array and fill each variable's type and name\n> > > length. - */\n> > > -static void\n> > > -initialize_reloptions(void)\n> > > -{\n> > > -\tint\t\t\ti;\n> > > -\tint\t\t\tj;\n> > > -\n> > > -\tj = 0;\n> > > -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\tAssert(DoLockModesConflict(boolRelOpts[i].gen.lockmode,\n> > > -\t\t\t\t\t\t\t\t \n> boolRelOpts[i].gen.lockmode));\n> > > -\t\tj++;\n> > > -\t}\n> > > -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\tAssert(DoLockModesConflict(intRelOpts[i].gen.lockmode,\n> > > -\t\t\t\t\t\t\t\t \n> intRelOpts[i].gen.lockmode));\n> > > -\t\tj++;\n> > > -\t}\n> > > -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\tAssert(DoLockModesConflict(realRelOpts[i].gen.lockmode,\n> > > -\t\t\t\t\t\t\t\t \n> realRelOpts[i].gen.lockmode));\n> > > -\t\tj++;\n> > > -\t}\n> > > -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\tAssert(DoLockModesConflict(enumRelOpts[i].gen.lockmode,\n> > > -\t\t\t\t\t\t\t\t \n> enumRelOpts[i].gen.lockmode));\n> > > -\t\tj++;\n> > > -\t}\n> > > -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\tAssert(DoLockModesConflict(stringRelOpts[i].gen.lockmode,\n> > > -\t\t\t\t\t\t\t\t \n> stringRelOpts[i].gen.lockmode));\n> > > -\t\tj++;\n> > > -\t}\n> > > -\tj += num_custom_options;\n> > > -\n> > > -\tif (relOpts)\n> > > -\t\tpfree(relOpts);\n> > > -\trelOpts = MemoryContextAlloc(TopMemoryContext,\n> > > -\t\t\t\t\t\t\t\t (j + 1) * \n> sizeof(relopt_gen *));\n> > > -\n> > > -\tj = 0;\n> > > -\tfor (i = 0; boolRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = &boolRelOpts[i].gen;\n> > > -\t\trelOpts[j]->type 
= RELOPT_TYPE_BOOL;\n> > > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\tfor (i = 0; intRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = &intRelOpts[i].gen;\n> > > -\t\trelOpts[j]->type = RELOPT_TYPE_INT;\n> > > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\tfor (i = 0; realRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = &realRelOpts[i].gen;\n> > > -\t\trelOpts[j]->type = RELOPT_TYPE_REAL;\n> > > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\tfor (i = 0; enumRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = &enumRelOpts[i].gen;\n> > > -\t\trelOpts[j]->type = RELOPT_TYPE_ENUM;\n> > > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\tfor (i = 0; stringRelOpts[i].gen.name; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = &stringRelOpts[i].gen;\n> > > -\t\trelOpts[j]->type = RELOPT_TYPE_STRING;\n> > > -\t\trelOpts[j]->namelen = strlen(relOpts[j]->name);\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\tfor (i = 0; i < num_custom_options; i++)\n> > > -\t{\n> > > -\t\trelOpts[j] = custom_options[i];\n> > > -\t\tj++;\n> > > -\t}\n> > > -\n> > > -\t/* add a list terminator */\n> > > -\trelOpts[j] = NULL;\n> > > -\n> > > -\t/* flag the work is complete */\n> > > -\tneed_initialization = false;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_reloption_kind\n> > > - *\t\tCreate a new relopt_kind value, to be used in custom \n> reloptions by\n> > > - *\t\tuser-defined AMs.\n> > > - */\n> > > -relopt_kind\n> > > -add_reloption_kind(void)\n> > > -{\n> > > -\t/* don't hand out the last bit so that the enum's behavior is \n> portable\n> > > */\n> > > -\tif (last_assigned_kind >= RELOPT_KIND_MAX)\n> > > -\t\tereport(ERROR,\n> > > -\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > > -\t\t\t\t errmsg(\"user-defined relation parameter \n> 
types limit exceeded\")));\n> > > -\tlast_assigned_kind <<= 1;\n> > > -\treturn (relopt_kind) last_assigned_kind;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_reloption\n> > > - *\t\tAdd an already-created custom reloption to the list, and \n> recompute\n> > > the\n> > > - *\t\tmain parser table.\n> > > - */\n> > > -static void\n> > > -add_reloption(relopt_gen *newoption)\n> > > -{\n> > > -\tstatic int\tmax_custom_options = 0;\n> > > -\n> > > -\tif (num_custom_options >= max_custom_options)\n> > > -\t{\n> > > -\t\tMemoryContext oldcxt;\n> > > -\n> > > -\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > > -\n> > > -\t\tif (max_custom_options == 0)\n> > > -\t\t{\n> > > -\t\t\tmax_custom_options = 8;\n> > > -\t\t\tcustom_options = palloc(max_custom_options * \n> sizeof(relopt_gen *));\n> > > -\t\t}\n> > > -\t\telse\n> > > -\t\t{\n> > > -\t\t\tmax_custom_options *= 2;\n> > > -\t\t\tcustom_options = repalloc(custom_options,\n> > > -\t\t\t\t\t\t\t\t\t \n> max_custom_options * sizeof(relopt_gen *));\n> > > -\t\t}\n> > > -\t\tMemoryContextSwitchTo(oldcxt);\n> > > -\t}\n> > > -\tcustom_options[num_custom_options++] = newoption;\n> > > -\n> > > -\tneed_initialization = true;\n> > > -}\n> > > \n> > > /*\n> > > \n> > > * init_local_reloptions\n> > > \n> > > @@ -729,6 +139,7 @@ init_local_reloptions(local_relopts *opts, Size\n> > > relopt_struct_size)> \n> > > \topts->options = NIL;\n> > > \topts->validators = NIL;\n> > > \topts->relopt_struct_size = relopt_struct_size;\n> > > \n> > > +\topts->spec_set = allocateOptionsSpecSet(NULL, relopt_struct_size, 0);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -743,112 +154,6 @@ register_reloptions_validator(local_relopts *opts,\n> > > relopts_validator validator)> \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * add_local_reloption\n> > > - *\t\tAdd an already-created custom reloption to the local list.\n> > > - */\n> > > -static void\n> > > -add_local_reloption(local_relopts *relopts, relopt_gen *newoption, int\n> > > 
offset) -{\n> > > -\tlocal_relopt *opt = palloc(sizeof(*opt));\n> > > -\n> > > -\tAssert(offset < relopts->relopt_struct_size);\n> > > -\n> > > -\topt->option = newoption;\n> > > -\topt->offset = offset;\n> > > -\n> > > -\trelopts->options = lappend(relopts->options, opt);\n> > > -}\n> > > -\n> > > -/*\n> > > - * allocate_reloption\n> > > - *\t\tAllocate a new reloption and initialize the type-agnostic \n> fields\n> > > - *\t\t(for types other than string)\n> > > - */\n> > > -static relopt_gen *\n> > > -allocate_reloption(bits32 kinds, int type, const char *name, const char\n> > > *desc, -\t\t\t\t LOCKMODE lockmode)\n> > > -{\n> > > -\tMemoryContext oldcxt;\n> > > -\tsize_t\t\tsize;\n> > > -\trelopt_gen *newoption;\n> > > -\n> > > -\tif (kinds != RELOPT_KIND_LOCAL)\n> > > -\t\toldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> > > -\telse\n> > > -\t\toldcxt = NULL;\n> > > -\n> > > -\tswitch (type)\n> > > -\t{\n> > > -\t\tcase RELOPT_TYPE_BOOL:\n> > > -\t\t\tsize = sizeof(relopt_bool);\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_INT:\n> > > -\t\t\tsize = sizeof(relopt_int);\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_REAL:\n> > > -\t\t\tsize = sizeof(relopt_real);\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_ENUM:\n> > > -\t\t\tsize = sizeof(relopt_enum);\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_STRING:\n> > > -\t\t\tsize = sizeof(relopt_string);\n> > > -\t\t\tbreak;\n> > > -\t\tdefault:\n> > > -\t\t\telog(ERROR, \"unsupported reloption type %d\", type);\n> > > -\t\t\treturn NULL;\t\t/* keep compiler quiet */\n> > > -\t}\n> > > -\n> > > -\tnewoption = palloc(size);\n> > > -\n> > > -\tnewoption->name = pstrdup(name);\n> > > -\tif (desc)\n> > > -\t\tnewoption->desc = pstrdup(desc);\n> > > -\telse\n> > > -\t\tnewoption->desc = NULL;\n> > > -\tnewoption->kinds = kinds;\n> > > -\tnewoption->namelen = strlen(name);\n> > > -\tnewoption->type = type;\n> > > -\tnewoption->lockmode = lockmode;\n> > > -\n> > > -\tif (oldcxt != NULL)\n> > > 
-\t\tMemoryContextSwitchTo(oldcxt);\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -/*\n> > > - * init_bool_reloption\n> > > - *\t\tAllocate and initialize a new boolean reloption\n> > > - */\n> > > -static relopt_bool *\n> > > -init_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t\tbool default_val, LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_bool *newoption;\n> > > -\n> > > -\tnewoption = (relopt_bool *) allocate_reloption(kinds, \n> RELOPT_TYPE_BOOL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc, lockmode);\n> > > -\tnewoption->default_val = default_val;\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_bool_reloption\n> > > - *\t\tAdd a new boolean reloption\n> > > - */\n> > > -void\n> > > -add_bool_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t bool default_val, LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_bool *newoption = init_bool_reloption(kinds, name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t default_val, lockmode);\n> > > -\n> > > -\tadd_reloption((relopt_gen *) newoption);\n> > > -}\n> > > -\n> > > -/*\n> > > \n> > > * add_local_bool_reloption\n> > > *\t\tAdd a new boolean local reloption\n> > > *\n> > > \n> > > @@ -858,47 +163,8 @@ void\n> > > \n> > > add_local_bool_reloption(local_relopts *relopts, const char *name,\n> > > \n> > > \t\t\t\t\t\t const char *desc, bool \n> default_val, int offset)\n> > > \n> > > {\n> > > \n> > > -\trelopt_bool *newoption = init_bool_reloption(RELOPT_KIND_LOCAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t default_val, 0);\n> > > -\n> > > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > > -}\n> > > -\n> > > -\n> > > -/*\n> > > - * init_real_reloption\n> > > - *\t\tAllocate and initialize a new integer reloption\n> > > - */\n> > > -static relopt_int *\n> > > -init_int_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t 
int default_val, int min_val, int \n> max_val,\n> > > -\t\t\t\t LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_int *newoption;\n> > > -\n> > > -\tnewoption = (relopt_int *) allocate_reloption(kinds, \n> RELOPT_TYPE_INT,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc, lockmode);\n> > > -\tnewoption->default_val = default_val;\n> > > -\tnewoption->min = min_val;\n> > > -\tnewoption->max = max_val;\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_int_reloption\n> > > - *\t\tAdd a new integer reloption\n> > > - */\n> > > -void\n> > > -add_int_reloption(bits32 kinds, const char *name, const char *desc, int\n> > > default_val, -\t\t\t\t int min_val, int max_val, \n> LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_int *newoption = init_int_reloption(kinds, name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t default_val, min_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t max_val, lockmode);\n> > > -\n> > > -\tadd_reloption((relopt_gen *) newoption);\n> > > +\toptionsSpecSetAddBool(relopts->spec_set, name, desc, NoLock, 0, \n> offset,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\tdefault_val);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -912,47 +178,8 @@ add_local_int_reloption(local_relopts *relopts, const\n> > > char *name,> \n> > > \t\t\t\t\t\tconst char *desc, int \n> default_val, int min_val,\n> > > \t\t\t\t\t\tint max_val, int offset)\n> > > \n> > > {\n> > > \n> > > -\trelopt_int *newoption = init_int_reloption(RELOPT_KIND_LOCAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t name, desc, default_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t min_val, max_val, 0);\n> > > -\n> > > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > > -}\n> > > -\n> > > -/*\n> > > - * init_real_reloption\n> > > - *\t\tAllocate and initialize a new real reloption\n> > > - */\n> > > -static relopt_real *\n> > > -init_real_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t\tdouble default_val, double min_val, \n> double max_val,\n> > > 
-\t\t\t\t\tLOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_real *newoption;\n> > > -\n> > > -\tnewoption = (relopt_real *) allocate_reloption(kinds, \n> RELOPT_TYPE_REAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc, lockmode);\n> > > -\tnewoption->default_val = default_val;\n> > > -\tnewoption->min = min_val;\n> > > -\tnewoption->max = max_val;\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_real_reloption\n> > > - *\t\tAdd a new float reloption\n> > > - */\n> > > -void\n> > > -add_real_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t double default_val, double min_val, \n> double max_val,\n> > > -\t\t\t\t LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_real *newoption = init_real_reloption(kinds, name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t default_val, min_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t max_val, lockmode);\n> > > -\n> > > -\tadd_reloption((relopt_gen *) newoption);\n> > > +\toptionsSpecSetAddInt(relopts->spec_set, name, desc, NoLock, 0, offset,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\tdefault_val, min_val, max_val);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -966,57 +193,9 @@ add_local_real_reloption(local_relopts *relopts,\n> > > const char *name,> \n> > > \t\t\t\t\t\t const char *desc, double \n> default_val,\n> > > \t\t\t\t\t\t double min_val, double \n> max_val, int offset)\n> > > \n> > > {\n> > > \n> > > -\trelopt_real *newoption = init_real_reloption(RELOPT_KIND_LOCAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t default_val, min_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t max_val, 0);\n> > > -\n> > > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > > -}\n> > > -\n> > > -/*\n> > > - * init_enum_reloption\n> > > - *\t\tAllocate and initialize a new enum reloption\n> > > - */\n> > > -static relopt_enum *\n> > > -init_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t\trelopt_enum_elt_def 
*members, int \n> default_val,\n> > > -\t\t\t\t\tconst char *detailmsg, LOCKMODE \n> lockmode)\n> > > -{\n> > > -\trelopt_enum *newoption;\n> > > -\n> > > -\tnewoption = (relopt_enum *) allocate_reloption(kinds, \n> RELOPT_TYPE_ENUM,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc, lockmode);\n> > > -\tnewoption->members = members;\n> > > -\tnewoption->default_val = default_val;\n> > > -\tnewoption->detailmsg = detailmsg;\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -\n> > > -/*\n> > > - * add_enum_reloption\n> > > - *\t\tAdd a new enum reloption\n> > > - *\n> > > - * The members array must have a terminating NULL entry.\n> > > - *\n> > > - * The detailmsg is shown when unsupported values are passed, and has\n> > > this\n> > > - * form: \"Valid values are \\\"foo\\\", \\\"bar\\\", and \\\"bar\\\".\"\n> > > - *\n> > > - * The members array and detailmsg are not copied -- caller must ensure\n> > > that - * they are valid throughout the life of the process.\n> > > - */\n> > > -void\n> > > -add_enum_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t relopt_enum_elt_def *members, int \n> default_val,\n> > > -\t\t\t\t const char *detailmsg, LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_enum *newoption = init_enum_reloption(kinds, name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t members, default_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t detailmsg, lockmode);\n> > > +\toptionsSpecSetAddReal(relopts->spec_set, name, desc, NoLock, 0, \n> offset,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\tdefault_val, min_val, max_val);\n> > > \n> > > -\tadd_reloption((relopt_gen *) newoption);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -1027,77 +206,11 @@ add_enum_reloption(bits32 kinds, const char *name,\n> > > const char *desc,> \n> > > */\n> > > \n> > > void\n> > > add_local_enum_reloption(local_relopts *relopts, const char *name,\n> > > \n> > > -\t\t\t\t\t\t const char *desc, \n> relopt_enum_elt_def *members,\n> > > +\t\t\t\t\t\t const char 
*desc, \n> opt_enum_elt_def *members,\n> > > \n> > > \t\t\t\t\t\t int default_val, const char \n> *detailmsg, int offset)\n> > > \n> > > {\n> > > \n> > > -\trelopt_enum *newoption = init_enum_reloption(RELOPT_KIND_LOCAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t members, default_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t detailmsg, 0);\n> > > -\n> > > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > > -}\n> > > -\n> > > -/*\n> > > - * init_string_reloption\n> > > - *\t\tAllocate and initialize a new string reloption\n> > > - */\n> > > -static relopt_string *\n> > > -init_string_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t\t const char *default_val,\n> > > -\t\t\t\t\t validate_string_relopt validator,\n> > > -\t\t\t\t\t fill_string_relopt filler,\n> > > -\t\t\t\t\t LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_string *newoption;\n> > > -\n> > > -\t/* make sure the validator/default combination is sane */\n> > > -\tif (validator)\n> > > -\t\t(validator) (default_val);\n> > > -\n> > > -\tnewoption = (relopt_string *) allocate_reloption(kinds,\n> > > RELOPT_TYPE_STRING, -\t\t\t\t\t\t\t\t\n> \t\t\t\t\t name, desc, lockmode);\n> > > -\tnewoption->validate_cb = validator;\n> > > -\tnewoption->fill_cb = filler;\n> > > -\tif (default_val)\n> > > -\t{\n> > > -\t\tif (kinds == RELOPT_KIND_LOCAL)\n> > > -\t\t\tnewoption->default_val = strdup(default_val);\n> > > -\t\telse\n> > > -\t\t\tnewoption->default_val = \n> MemoryContextStrdup(TopMemoryContext,\n> > > default_val); -\t\tnewoption->default_len = strlen(default_val);\n> > > -\t\tnewoption->default_isnull = false;\n> > > -\t}\n> > > -\telse\n> > > -\t{\n> > > -\t\tnewoption->default_val = \"\";\n> > > -\t\tnewoption->default_len = 0;\n> > > -\t\tnewoption->default_isnull = true;\n> > > -\t}\n> > > -\n> > > -\treturn newoption;\n> > > -}\n> > > -\n> > > -/*\n> > > - * add_string_reloption\n> > > - *\t\tAdd a new string 
reloption\n> > > - *\n> > > - * \"validator\" is an optional function pointer that can be used to test\n> > > the - * validity of the values. It must elog(ERROR) when the argument\n> > > string is - * not acceptable for the variable. Note that the default\n> > > value must pass - * the validation.\n> > > - */\n> > > -void\n> > > -add_string_reloption(bits32 kinds, const char *name, const char *desc,\n> > > -\t\t\t\t\t const char *default_val, \n> validate_string_relopt validator,\n> > > -\t\t\t\t\t LOCKMODE lockmode)\n> > > -{\n> > > -\trelopt_string *newoption = init_string_reloption(kinds, name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t default_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t validator, NULL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t lockmode);\n> > > -\n> > > -\tadd_reloption((relopt_gen *) newoption);\n> > > +\toptionsSpecSetAddEnum(relopts->spec_set, name, desc, NoLock, 0, \n> offset,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \tmembers, default_val, detailmsg);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -1113,249 +226,9 @@ add_local_string_reloption(local_relopts *relopts,\n> > > const char *name,> \n> > > \t\t\t\t\t\t validate_string_relopt \n> validator,\n> > > \t\t\t\t\t\t fill_string_relopt filler, \n> int offset)\n> > > \n> > > {\n> > > \n> > > -\trelopt_string *newoption = init_string_reloption(RELOPT_KIND_LOCAL,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t name, desc,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t default_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t validator, filler,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t 0);\n> > > -\n> > > -\tadd_local_reloption(relopts, (relopt_gen *) newoption, offset);\n> > > -}\n> > > -\n> > > -/*\n> > > - * Transform a relation options list (list of DefElem) into the text\n> > > array\n> > > - * format that is kept in pg_class.reloptions, including only those\n> > > options - * that are in the passed namespace. 
The output values do not\n> > > include the - * namespace.\n> > > - *\n> > > - * This is used for three cases: CREATE TABLE/INDEX, ALTER TABLE SET, and\n> > > - * ALTER TABLE RESET. In the ALTER cases, oldOptions is the existing\n> > > - * reloptions value (possibly NULL), and we replace or remove entries\n> > > - * as needed.\n> > > - *\n> > > - * If acceptOidsOff is true, then we allow oids = false, but throw error\n> > > when - * on. This is solely needed for backwards compatibility.\n> > > - *\n> > > - * Note that this is not responsible for determining whether the options\n> > > - * are valid, but it does check that namespaces for all the options given\n> > > are - * listed in validnsps. The NULL namespace is always valid and need\n> > > not be - * explicitly listed. Passing a NULL pointer means that only the\n> > > NULL - * namespace is valid.\n> > > - *\n> > > - * Both oldOptions and the result are text arrays (or NULL for\n> > > \"default\"),\n> > > - * but we declare them as Datums to avoid including array.h in\n> > > reloptions.h. 
- */\n> > > -Datum\n> > > -transformRelOptions(Datum oldOptions, List *defList, const char\n> > > *namspace,\n> > > -\t\t\t\t\tchar *validnsps[], bool \n> acceptOidsOff, bool isReset)\n> > > -{\n> > > -\tDatum\t\tresult;\n> > > -\tArrayBuildState *astate;\n> > > -\tListCell *cell;\n> > > -\n> > > -\t/* no change if empty list */\n> > > -\tif (defList == NIL)\n> > > -\t\treturn oldOptions;\n> > > -\n> > > -\t/* We build new array using accumArrayResult */\n> > > -\tastate = NULL;\n> > > -\n> > > -\t/* Copy any oldOptions that aren't to be replaced */\n> > > -\tif (PointerIsValid(DatumGetPointer(oldOptions)))\n> > > -\t{\n> > > -\t\tArrayType *array = DatumGetArrayTypeP(oldOptions);\n> > > -\t\tDatum\t *oldoptions;\n> > > -\t\tint\t\t\tnoldoptions;\n> > > -\t\tint\t\t\ti;\n> > > -\n> > > -\t\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> > > -\t\t\t\t\t\t &oldoptions, NULL, \n> &noldoptions);\n> > > -\n> > > -\t\tfor (i = 0; i < noldoptions; i++)\n> > > -\t\t{\n> > > -\t\t\tchar\t *text_str = VARDATA(oldoptions[i]);\n> > > -\t\t\tint\t\t\ttext_len = \n> VARSIZE(oldoptions[i]) - VARHDRSZ;\n> > > -\n> > > -\t\t\t/* Search for a match in defList */\n> > > -\t\t\tforeach(cell, defList)\n> > > -\t\t\t{\n> > > -\t\t\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > -\t\t\t\tint\t\t\tkw_len;\n> > > -\n> > > -\t\t\t\t/* ignore if not in the same namespace */\n> > > -\t\t\t\tif (namspace == NULL)\n> > > -\t\t\t\t{\n> > > -\t\t\t\t\tif (def->defnamespace != NULL)\n> > > -\t\t\t\t\t\tcontinue;\n> > > -\t\t\t\t}\n> > > -\t\t\t\telse if (def->defnamespace == NULL)\n> > > -\t\t\t\t\tcontinue;\n> > > -\t\t\t\telse if (strcmp(def->defnamespace, namspace) \n> != 0)\n> > > -\t\t\t\t\tcontinue;\n> > > -\n> > > -\t\t\t\tkw_len = strlen(def->defname);\n> > > -\t\t\t\tif (text_len > kw_len && text_str[kw_len] == \n> '=' &&\n> > > -\t\t\t\t\tstrncmp(text_str, def->defname, \n> kw_len) == 0)\n> > > -\t\t\t\t\tbreak;\n> > > -\t\t\t}\n> > > -\t\t\tif (!cell)\n> > > 
-\t\t\t{\n> > > -\t\t\t\t/* No match, so keep old option */\n> > > -\t\t\t\tastate = accumArrayResult(astate, \n> oldoptions[i],\n> > > -\t\t\t\t\t\t\t\t\t\t\n> false, TEXTOID,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> CurrentMemoryContext);\n> > > -\t\t\t}\n> > > -\t\t}\n> > > -\t}\n> > > -\n> > > -\t/*\n> > > -\t * If CREATE/SET, add new options to array; if RESET, just check \n> that\n> > > the\n> > > -\t * user didn't say RESET (option=val). (Must do this because the\n> > > grammar\n> > > -\t * doesn't enforce it.)\n> > > -\t */\n> > > -\tforeach(cell, defList)\n> > > -\t{\n> > > -\t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > -\n> > > -\t\tif (isReset)\n> > > -\t\t{\n> > > -\t\t\tif (def->arg != NULL)\n> > > -\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> > > -\t\t\t\t\t\t errmsg(\"RESET must not \n> include values for parameters\")));\n> > > -\t\t}\n> > > -\t\telse\n> > > -\t\t{\n> > > -\t\t\ttext\t *t;\n> > > -\t\t\tconst char *value;\n> > > -\t\t\tSize\t\tlen;\n> > > -\n> > > -\t\t\t/*\n> > > -\t\t\t * Error out if the namespace is not valid. 
A NULL \n> namespace is\n> > > -\t\t\t * always valid.\n> > > -\t\t\t */\n> > > -\t\t\tif (def->defnamespace != NULL)\n> > > -\t\t\t{\n> > > -\t\t\t\tbool\t\tvalid = false;\n> > > -\t\t\t\tint\t\t\ti;\n> > > -\n> > > -\t\t\t\tif (validnsps)\n> > > -\t\t\t\t{\n> > > -\t\t\t\t\tfor (i = 0; validnsps[i]; i++)\n> > > -\t\t\t\t\t{\n> > > -\t\t\t\t\t\tif (strcmp(def-\n> >defnamespace, validnsps[i]) == 0)\n> > > -\t\t\t\t\t\t{\n> > > -\t\t\t\t\t\t\tvalid = true;\n> > > -\t\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\t\t}\n> > > -\t\t\t\t\t}\n> > > -\t\t\t\t}\n> > > -\n> > > -\t\t\t\tif (!valid)\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t \n> errmsg(\"unrecognized parameter namespace \\\"%s\\\"\",\n> > > -\t\t\t\t\t\t\t\t\tdef-\n> >defnamespace)));\n> > > -\t\t\t}\n> > > -\n> > > -\t\t\t/* ignore if not in the same namespace */\n> > > -\t\t\tif (namspace == NULL)\n> > > -\t\t\t{\n> > > -\t\t\t\tif (def->defnamespace != NULL)\n> > > -\t\t\t\t\tcontinue;\n> > > -\t\t\t}\n> > > -\t\t\telse if (def->defnamespace == NULL)\n> > > -\t\t\t\tcontinue;\n> > > -\t\t\telse if (strcmp(def->defnamespace, namspace) != 0)\n> > > -\t\t\t\tcontinue;\n> > > -\n> > > -\t\t\t/*\n> > > -\t\t\t * Flatten the DefElem into a text string like \n> \"name=arg\". If we\n> > > -\t\t\t * have just \"name\", assume \"name=true\" is meant. \n> Note: the\n> > > -\t\t\t * namespace is not output.\n> > > -\t\t\t */\n> > > -\t\t\tif (def->arg != NULL)\n> > > -\t\t\t\tvalue = defGetString(def);\n> > > -\t\t\telse\n> > > -\t\t\t\tvalue = \"true\";\n> > > -\n> > > -\t\t\t/*\n> > > -\t\t\t * This is not a great place for this test, but \n> there's no other\n> > > -\t\t\t * convenient place to filter the option out. 
As WITH \n> (oids =\n> > > -\t\t\t * false) will be removed someday, this seems like \n> an acceptable\n> > > -\t\t\t * amount of ugly.\n> > > -\t\t\t */\n> > > -\t\t\tif (acceptOidsOff && def->defnamespace == NULL &&\n> > > -\t\t\t\tstrcmp(def->defname, \"oids\") == 0)\n> > > -\t\t\t{\n> > > -\t\t\t\tif (defGetBoolean(def))\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > > -\t\t\t\t\t\t\t errmsg(\"tables \n> declared WITH OIDS are not supported\")));\n> > > -\t\t\t\t/* skip over option, reloptions machinery \n> doesn't know it */\n> > > -\t\t\t\tcontinue;\n> > > -\t\t\t}\n> > > -\n> > > -\t\t\tlen = VARHDRSZ + strlen(def->defname) + 1 + \n> strlen(value);\n> > > -\t\t\t/* +1 leaves room for sprintf's trailing null */\n> > > -\t\t\tt = (text *) palloc(len + 1);\n> > > -\t\t\tSET_VARSIZE(t, len);\n> > > -\t\t\tsprintf(VARDATA(t), \"%s=%s\", def->defname, value);\n> > > -\n> > > -\t\t\tastate = accumArrayResult(astate, \n> PointerGetDatum(t),\n> > > -\t\t\t\t\t\t\t\t\t \n> false, TEXTOID,\n> > > -\t\t\t\t\t\t\t\t\t \n> CurrentMemoryContext);\n> > > -\t\t}\n> > > -\t}\n> > > -\n> > > -\tif (astate)\n> > > -\t\tresult = makeArrayResult(astate, CurrentMemoryContext);\n> > > -\telse\n> > > -\t\tresult = (Datum) 0;\n> > > -\n> > > -\treturn result;\n> > > -}\n> > > -\n> > > -\n> > > -/*\n> > > - * Convert the text-array format of reloptions into a List of DefElem.\n> > > - * This is the inverse of transformRelOptions().\n> > > - */\n> > > -List *\n> > > -untransformRelOptions(Datum options)\n> > > -{\n> > > -\tList\t *result = NIL;\n> > > -\tArrayType *array;\n> > > -\tDatum\t *optiondatums;\n> > > -\tint\t\t\tnoptions;\n> > > -\tint\t\t\ti;\n> > > -\n> > > -\t/* Nothing to do if no options */\n> > > -\tif (!PointerIsValid(DatumGetPointer(options)))\n> > > -\t\treturn result;\n> > > -\n> > > -\tarray = DatumGetArrayTypeP(options);\n> > > -\n> > > -\tdeconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,\n> > > 
-					 &optiondatums, NULL, &noptions);
> > > -
> > > -	for (i = 0; i < noptions; i++)
> > > -	{
> > > -		char	 *s;
> > > -		char	 *p;
> > > -		Node	 *val = NULL;
> > > -
> > > -		s = TextDatumGetCString(optiondatums[i]);
> > > -		p = strchr(s, '=');
> > > -		if (p)
> > > -		{
> > > -			*p++ = '\0';
> > > -			val = (Node *) makeString(pstrdup(p));
> > > -		}
> > > -		result = lappend(result, makeDefElem(pstrdup(s), val, -1));
> > > -	}
> > > -
> > > -	return result;
> > > +	optionsSpecSetAddString(relopts->spec_set, name, desc, NoLock, 0, offset,
> > > +											default_val, validator);
> > > +/* FIXME solve mystery with filler option! */
> > > 
> > > }
> > > 
> > > /*
> > > 
> > > @@ -1372,12 +245,13 @@ untransformRelOptions(Datum options)
> > > 
> > > */
> > > 
> > > bytea *
> > > extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
> > > 
> > > -				 amoptions_function amoptions)
> > > +				 amreloptspecset_function amoptionsspecsetfn)
> > > 
> > > {
> > > 
> > > 	bytea	 *options;
> > > 	bool		isnull;
> > > 	Datum		datum;
> > > 	Form_pg_class classForm;
> > > 
> > > +	options_spec_set *spec_set;
> > > 
> > > 	datum = fastgetattr(tuple,
> > > 						Anum_pg_class_reloptions,
> > > 
> > > @@ -1394,702 +268,341 @@ extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
> > > 
> > > 		case RELKIND_RELATION:
> > > 		case RELKIND_TOASTVALUE:
> > > 
> > > 		case RELKIND_MATVIEW:
> > > -			options = heap_reloptions(classForm->relkind, datum, false);
> > > +			spec_set = get_heap_relopt_spec_set();
> > > 
> > > 			break;
> > > 
> > > 		case RELKIND_PARTITIONED_TABLE:
> > > -			options = partitioned_table_reloptions(datum, false);
> > > +			spec_set = get_partitioned_relopt_spec_set();
> > > 
> > > 			break;
> > > 
> > > 		case RELKIND_VIEW:
> > > -			options = view_reloptions(datum, false);
> > > +			spec_set = get_view_relopt_spec_set();
> > > 
> > > 			break;
> > > 
> > > 		case RELKIND_INDEX:
> > > 
> > > 		case RELKIND_PARTITIONED_INDEX:
> > > -			options = index_reloptions(amoptions, datum, false);
> > > +			if (amoptionsspecsetfn)
> > > +				spec_set = amoptionsspecsetfn();
> > > +			else
> > > +				spec_set = NULL;
> > > 
> > > 			break;
> > > 
> > > 		case RELKIND_FOREIGN_TABLE:
> > > -			options = NULL;
> > > +			spec_set = NULL;
> > > 
> > > 			break;
> > > 
> > > 		default:
> > > 			Assert(false);		/* can't get here */
> > > 
> > > -			options = NULL;		/* keep compiler quiet */
> > > +			spec_set = NULL;		/* keep compiler quiet */
> > > 
> > > 			break;
> > > 	}
> > > 
> > > +	if (spec_set)
> > > +		options = optionsTextArrayToBytea(spec_set, datum, 0);
> > > +	else
> > > +		options = NULL;
> > > 
> > > 	return options;
> > > 
> > > }
> > > 
> > > -static void
> > > -parseRelOptionsInternal(Datum options, bool validate,
> > > -						relopt_value *reloptions, int numoptions)
> > > -{
> > > -	ArrayType *array = DatumGetArrayTypeP(options);
> > > -	Datum	 *optiondatums;
> > > -	int			noptions;
> > > -	int			i;
> > > -
> > > -	deconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,
> > > -					 &optiondatums, NULL, &noptions);
> > > +options_spec_set *
> > > +get_stdrd_relopt_spec_set(relopt_kind kind)
> > > +{
> > > +	bool is_for_toast = (kind == RELOPT_KIND_TOAST);
> > > +
> > > +	options_spec_set * stdrd_relopt_spec_set = allocateOptionsSpecSet(
> > > +					is_for_toast ? "toast" : NULL, sizeof(StdRdOptions), 0); //FIXME change 0 to actual value (maybe)
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "fillfactor",
> > > +								 "Packs table pages only to this percentage",
> > > +								 ShareUpdateExclusiveLock,		/* since it applies only
> > > +														 * to later inserts */
> > > +								is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> > > +								offsetof(StdRdOptions, fillfactor),
> > > +						 HEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100);
> > > +	optionsSpecSetAddBool(stdrd_relopt_spec_set, "autovacuum_enabled",
> > > +							 "Enables autovacuum in this relation",
> > > +							 ShareUpdateExclusiveLock, 0,
> > > +			offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled),
> > > +							 true);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_vacuum_threshold",
> > > +				"Minimum number of tuple updates or deletes prior to vacuum",
> > > +							 ShareUpdateExclusiveLock,
> > > +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold),
> > > +							 -1, 0, INT_MAX);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_analyze_threshold",
> > > +				"Minimum number of tuple updates or deletes prior to vacuum",
> > > +							 ShareUpdateExclusiveLock,
> > > +							 is_for_toast ? OPTION_DEFINITION_FLAG_REJECT : 0,
> > > +					 offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_threshold),
> > > +							 -1, 0, INT_MAX);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_vacuum_cost_limit",
> > > +			 "Vacuum cost amount available before napping, for autovacuum",
> > > +							 ShareUpdateExclusiveLock,
> > > +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_limit),
> > > +							 -1, 0, 10000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_min_age",
> > > +	 "Minimum age at which VACUUM should freeze a table row, for autovacuum",
> > > +							 ShareUpdateExclusiveLock,
> > > +					 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_min_age),
> > > +							 -1, 0, 1000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_max_age",
> > > +	"Age at which to autovacuum a table to prevent transaction ID wraparound",
> > > +							 ShareUpdateExclusiveLock,
> > > +					 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_max_age),
> > > +							 -1, 100000, 2000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_freeze_table_age",
> > > +							"Age at which VACUUM should perform a full table sweep to freeze row versions",
> > > +							 ShareUpdateExclusiveLock,
> > > +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_table_age),
> > > +							 -1, 0, 2000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_min_age",
> > > +						 "Minimum multixact age at which VACUUM should freeze a row multixact's, for autovacuum",
> > > +						 ShareUpdateExclusiveLock,
> > > +			0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_min_age),
> > > +							-1, 0, 1000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_max_age",
> > > +						 "Multixact age at which to autovacuum a table to prevent multixact wraparound",
> > > +						 ShareUpdateExclusiveLock,
> > > +			0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_max_age),
> > > +							-1, 10000, 2000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "autovacuum_multixact_freeze_table_age",
> > > +						 "Age of multixact at which VACUUM should perform a full table sweep to freeze row versions",
> > > +							 ShareUpdateExclusiveLock,
> > > +		 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_table_age),
> > > +						 -1, 0, 2000000000);
> > > +	optionsSpecSetAddInt(stdrd_relopt_spec_set, "log_autovacuum_min_duration",
> > > +							 "Sets the minimum execution time above which autovacuum actions will be logged",
> > > +							 ShareUpdateExclusiveLock,
> > > +					0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_min_duration),
> > > +							 -1, -1, INT_MAX);
> > > +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_vacuum_cost_delay",
> > > +						"Vacuum cost delay in milliseconds, for autovacuum",
> > > +							 ShareUpdateExclusiveLock,
> > > +				 0, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_delay),
> > > +							 -1, 0.0, 100.0);
> > > +	optionsSpecSetAddReal(stdrd_relopt_spec_set, "autovacuum_vacuum_scale_factor",
> > > +							 "Number of tuple updates or deletes prior to vacuum as a 
fraction of reltuples\", +\t\t\t\t\n> \t\t\t \n> > > ShareUpdateExclusiveLock,\n> > > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \n> offsetof(AutoVacOpts,\n> > > vacuum_scale_factor), +\t\t\t\t\t\t\t \n> -1, 0.0, 100.0);\n> > > +\n> > > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > > \"autovacuum_vacuum_insert_scale_factor\", +\t\t\t\t\t\n> \t\t \"Number of tuple\n> > > inserts prior to vacuum as a fraction of reltuples\", +\t\t\t\t\n> \t\t\t \n> > > ShareUpdateExclusiveLock,\n> > > +\t\t\t\t 0, offsetof(StdRdOptions, autovacuum) + \n> offsetof(AutoVacOpts,\n> > > vacuum_ins_scale_factor), +\t\t\t\t\t\t\t\n> -1, 0.0, 100.0);\n> > > +\n> > > +\toptionsSpecSetAddReal(stdrd_relopt_spec_set,\n> > > \"autovacuum_analyze_scale_factor\", +\t\t\t\t\t\n> \t\t \"Number of tuple inserts,\n> > > updates or deletes prior to analyze as a fraction of reltuples\", +\t\t\n> \t\t\t\t\t\n> > > ShareUpdateExclusiveLock,\n> > > +\t\t\t\t\t\t\t is_for_toast ? \n> OPTION_DEFINITION_FLAG_REJECT : 0,\n> > > +\t\t\t\t offsetof(StdRdOptions, autovacuum) + \n> offsetof(AutoVacOpts,\n> > > analyze_scale_factor), +\t\t\t\t\t\t\t\n> -1, 0.0, 100.0);\n> > > +\n> > > +\n> > > +\n> > > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \"toast_tuple_target\",\n> > > +\t\t\t\t\t\t\t\t \"Sets the \n> target tuple length at which external columns will be\n> > > toasted\", +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock,\n> > > +\t\t\t\t\t\t\t\tis_for_toast \n> ? OPTION_DEFINITION_FLAG_REJECT : 0,\n> > > +\t\t\t\t\t\t\t\t\n> offsetof(StdRdOptions, toast_tuple_target),\n> > > +\t\t\t\t\t\t TOAST_TUPLE_TARGET, 128, \n> TOAST_TUPLE_TARGET_MAIN);\n> > > +\n> > > +\toptionsSpecSetAddBool(stdrd_relopt_spec_set, \"user_catalog_table\",\n> > > +\t\t\t\t\t\t\t\t \"Declare a \n> table as an additional catalog table, e.g. for the\n> > > purpose of logical replication\", +\t\t\t\t\t\t\n> \t\t AccessExclusiveLock,\n> > > +\t\t\t\t\t\t\t\tis_for_toast \n> ? 
OPTION_DEFINITION_FLAG_REJECT : 0,\n> > > +\t\t\t\t\t\t\t\t \n> offsetof(StdRdOptions, user_catalog_table),\n> > > +\t\t\t\t\t\t\t\t false);\n> > > +\n> > > +\toptionsSpecSetAddInt(stdrd_relopt_spec_set, \"parallel_workers\",\n> > > +\t\t\t\t\t\t\t\t\"Number of \n> parallel processes that can be used per executor node\n> > > for this relation.\", +\t\t\t\t\t\t\t\n> \tShareUpdateExclusiveLock,\n> > > +\t\t\t\t\t\t\t\tis_for_toast \n> ? OPTION_DEFINITION_FLAG_REJECT : 0,\n> > > +\t\t\t\t\t\t\t\t\n> offsetof(StdRdOptions, parallel_workers),\n> > > +\t\t\t\t\t\t\t\t-1, 0, 1024);\n> > > +\n> > > +\toptionsSpecSetAddEnum(stdrd_relopt_spec_set, \"vacuum_index_cleanup\",\n> > > +\t\t\t\t\t\t\t\t\"Controls \n> index vacuuming and index cleanup\",\n> > > +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock, 0,\n> > > +\t\t\t\t\t\t\t\t\n> offsetof(StdRdOptions, vacuum_index_cleanup),\n> > > +\t\t\t\t\t\t\t\t\n> StdRdOptIndexCleanupValues,\n> > > +\t\t\t\t\t\t\t\t\n> STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,\n> > > +\t\t\t\t\t\t\t\t\n> gettext_noop(\"Valid values are \\\"on\\\", \\\"off\\\", and \\\"auto\\\".\"));\n> > > +\n> > > +\toptionsSpecSetAddBool(stdrd_relopt_spec_set, \"vacuum_truncate\",\n> > > +\t\t\t\t\t\t\t\t\"Enables \n> vacuum to truncate empty pages at the end of this\n> > > table\",\n> > > +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock, 0,\n> > > +\t\t\t\t\t\t\t\t\n> offsetof(StdRdOptions, vacuum_truncate),\n> > > +\t\t\t\t\t\t\t\ttrue);\n> > > +\n> > > +// FIXME Do something with OIDS\n> > > +\n> > > +\treturn stdrd_relopt_spec_set;\n> > > +}\n> > > +\n> > > +\n> > > +static options_spec_set *heap_relopt_spec_set = NULL;\n> > > +\n> > > +options_spec_set *\n> > > +get_heap_relopt_spec_set(void)\n> > > +{\n> > > +\tif (heap_relopt_spec_set)\n> > > +\t\treturn heap_relopt_spec_set;\n> > > +\theap_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_HEAP);\n> > > +\treturn heap_relopt_spec_set;\n> > > +}\n> > > +\n> > > +static options_spec_set *toast_relopt_spec_set = 
> > > NULL;
> > > +
> > > +options_spec_set *
> > > +get_toast_relopt_spec_set(void)
> > > +{
> > > +	if (toast_relopt_spec_set)
> > > +		return toast_relopt_spec_set;
> > > +	toast_relopt_spec_set = get_stdrd_relopt_spec_set(RELOPT_KIND_TOAST);
> > > +	return toast_relopt_spec_set;
> > > +}
> > > +
> > > +static options_spec_set *partitioned_relopt_spec_set = NULL;
> > > 
> > > -	for (i = 0; i < noptions; i++)
> > > -	{
> > > -		char	   *text_str = VARDATA(optiondatums[i]);
> > > -		int			text_len = VARSIZE(optiondatums[i]) - VARHDRSZ;
> > > -		int			j;
> > > -
> > > -		/* Search for a match in reloptions */
> > > -		for (j = 0; j < numoptions; j++)
> > > -		{
> > > -			int			kw_len = reloptions[j].gen->namelen;
> > > -
> > > -			if (text_len > kw_len && text_str[kw_len] == '=' &&
> > > -				strncmp(text_str, reloptions[j].gen->name, kw_len) == 0)
> > > -			{
> > > -				parse_one_reloption(&reloptions[j], text_str, text_len,
> > > -									validate);
> > > -				break;
> > > -			}
> > > -		}
> > > -
> > > -		if (j >= numoptions && validate)
> > > -		{
> > > -			char	   *s;
> > > -			char	   *p;
> > > -
> > > -			s = TextDatumGetCString(optiondatums[i]);
> > > -			p = strchr(s, '=');
> > > -			if (p)
> > > -				*p = '\0';
> > > -			ereport(ERROR,
> > > -					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> > > -					 errmsg("unrecognized parameter \"%s\"", s)));
> > > -		}
> > > -	}
> > > -
> > > -	/* It's worth avoiding memory leaks in this function */
> > > -	pfree(optiondatums);
> > > +options_spec_set *
> > > +get_partitioned_relopt_spec_set(void)
> > > +{
> > > +	if (partitioned_relopt_spec_set)
> > > +		return partitioned_relopt_spec_set;
> > > +	partitioned_relopt_spec_set = allocateOptionsSpecSet(
> > > +					NULL, sizeof(StdRdOptions), 0);
> > > +	/* No options for now, so spec set is empty */
> > > 
> > > -	if (((void *) array) != DatumGetPointer(options))
> > > -		pfree(array);
> > > +	return partitioned_relopt_spec_set;
> > > 
> > > }
> > > 
> > > /*
> > > 
> > > - * Interpret reloptions that are given in text-array format.
> > > - *
> > > - * options is a reloption text array as constructed by transformRelOptions.
> > > - * kind specifies the family of options to be processed.
> > > - *
> > > - * The return value is a relopt_value * array on which the options actually
> > > - * set in the options array are marked with isset=true. The length of this
> > > - * array is returned in *numrelopts. Options not set are also present in the
> > > - * array; this is so that the caller can easily locate the default values.
> > > - *
> > > - * If there are no options of the given kind, numrelopts is set to 0 and NULL
> > > - * is returned (unless options are illegally supplied despite none being
> > > - * defined, in which case an error occurs).
> > > - *
> > > - * Note: values of type int, bool and real are allocated as part of the
> > > - * returned array. 
Values of type string are allocated separately and\n> > > must - * be freed by the caller.\n> > > + * Parse local options, allocate a bytea struct that's of the specified\n> > > + * 'base_size' plus any extra space that's needed for string variables,\n> > > + * fill its option's fields located at the given offsets and return it.\n> > > \n> > > */\n> > > \n> > > -static relopt_value *\n> > > -parseRelOptions(Datum options, bool validate, relopt_kind kind,\n> > > -\t\t\t\tint *numrelopts)\n> > > -{\n> > > -\trelopt_value *reloptions = NULL;\n> > > -\tint\t\t\tnumoptions = 0;\n> > > -\tint\t\t\ti;\n> > > -\tint\t\t\tj;\n> > > -\n> > > -\tif (need_initialization)\n> > > -\t\tinitialize_reloptions();\n> > > -\n> > > -\t/* Build a list of expected options, based on kind */\n> > > -\n> > > -\tfor (i = 0; relOpts[i]; i++)\n> > > -\t\tif (relOpts[i]->kinds & kind)\n> > > -\t\t\tnumoptions++;\n> > > -\n> > > -\tif (numoptions > 0)\n> > > -\t{\n> > > -\t\treloptions = palloc(numoptions * sizeof(relopt_value));\n> > > -\n> > > -\t\tfor (i = 0, j = 0; relOpts[i]; i++)\n> > > -\t\t{\n> > > -\t\t\tif (relOpts[i]->kinds & kind)\n> > > -\t\t\t{\n> > > -\t\t\t\treloptions[j].gen = relOpts[i];\n> > > -\t\t\t\treloptions[j].isset = false;\n> > > -\t\t\t\tj++;\n> > > -\t\t\t}\n> > > -\t\t}\n> > > -\t}\n> > > -\n> > > -\t/* Done if no options */\n> > > -\tif (PointerIsValid(DatumGetPointer(options)))\n> > > -\t\tparseRelOptionsInternal(options, validate, reloptions, \n> numoptions);\n> > > -\n> > > -\t*numrelopts = numoptions;\n> > > -\treturn reloptions;\n> > > -}\n> > > -\n> > > -/* Parse local unregistered options. 
*/\n> > > -static relopt_value *\n> > > -parseLocalRelOptions(local_relopts *relopts, Datum options, bool\n> > > validate)\n> > > +void *\n> > > +build_local_reloptions(local_relopts *relopts, Datum options, bool\n> > > validate)> \n> > > {\n> > > \n> > > -\tint\t\t\tnopts = list_length(relopts->options);\n> > > -\trelopt_value *values = palloc(sizeof(*values) * nopts);\n> > > +\tvoid\t *opts;\n> > > \n> > > \tListCell *lc;\n> > > \n> > > -\tint\t\t\ti = 0;\n> > > -\n> > > -\tforeach(lc, relopts->options)\n> > > -\t{\n> > > -\t\tlocal_relopt *opt = lfirst(lc);\n> > > -\n> > > -\t\tvalues[i].gen = opt->option;\n> > > -\t\tvalues[i].isset = false;\n> > > -\n> > > -\t\ti++;\n> > > -\t}\n> > > -\n> > > -\tif (options != (Datum) 0)\n> > > -\t\tparseRelOptionsInternal(options, validate, values, nopts);\n> > > +\topts = (void *) optionsTextArrayToBytea(relopts->spec_set, options,\n> > > validate);\n> > > \n> > > -\treturn values;\n> > > -}\n> > > -\n> > > -/*\n> > > - * Subroutine for parseRelOptions, to parse and validate a single\n> > > option's\n> > > - * value\n> > > - */\n> > > -static void\n> > > -parse_one_reloption(relopt_value *option, char *text_str, int text_len,\n> > > -\t\t\t\t\tbool validate)\n> > > -{\n> > > -\tchar\t *value;\n> > > -\tint\t\t\tvalue_len;\n> > > -\tbool\t\tparsed;\n> > > -\tbool\t\tnofree = false;\n> > > -\n> > > -\tif (option->isset && validate)\n> > > -\t\tereport(ERROR,\n> > > -\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t errmsg(\"parameter \\\"%s\\\" specified more than \n> once\",\n> > > -\t\t\t\t\t\toption->gen->name)));\n> > > -\n> > > -\tvalue_len = text_len - option->gen->namelen - 1;\n> > > -\tvalue = (char *) palloc(value_len + 1);\n> > > -\tmemcpy(value, text_str + option->gen->namelen + 1, value_len);\n> > > -\tvalue[value_len] = '\\0';\n> > > -\n> > > -\tswitch (option->gen->type)\n> > > -\t{\n> > > -\t\tcase RELOPT_TYPE_BOOL:\n> > > -\t\t\t{\n> > > -\t\t\t\tparsed = parse_bool(value, &option-\n> 
>values.bool_val);\n> > > -\t\t\t\tif (validate && !parsed)\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"invalid \n> value for boolean option \\\"%s\\\": %s\",\n> > > -\t\t\t\t\t\t\t\t\t\n> option->gen->name, value)));\n> > > -\t\t\t}\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_INT:\n> > > -\t\t\t{\n> > > -\t\t\t\trelopt_int *optint = (relopt_int *) option-\n> >gen;\n> > > -\n> > > -\t\t\t\tparsed = parse_int(value, &option-\n> >values.int_val, 0, NULL);\n> > > -\t\t\t\tif (validate && !parsed)\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"invalid \n> value for integer option \\\"%s\\\": %s\",\n> > > -\t\t\t\t\t\t\t\t\t\n> option->gen->name, value)));\n> > > -\t\t\t\tif (validate && (option->values.int_val < \n> optint->min ||\n> > > -\t\t\t\t\t\t\t\t option-\n> >values.int_val > optint->max))\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"value %s \n> out of bounds for option \\\"%s\\\"\",\n> > > -\t\t\t\t\t\t\t\t\t\n> value, option->gen->name),\n> > > -\t\t\t\t\t\t\t errdetail(\"Valid \n> values are between \\\"%d\\\" and \\\"%d\\\".\",\n> > > -\t\t\t\t\t\t\t\t\t \n> optint->min, optint->max)));\n> > > -\t\t\t}\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_REAL:\n> > > -\t\t\t{\n> > > -\t\t\t\trelopt_real *optreal = (relopt_real *) \n> option->gen;\n> > > -\n> > > -\t\t\t\tparsed = parse_real(value, &option-\n> >values.real_val, 0, NULL);\n> > > -\t\t\t\tif (validate && !parsed)\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"invalid \n> value for floating point option \\\"%s\\\": %s\",\n> > > -\t\t\t\t\t\t\t\t\t\n> option->gen->name, value)));\n> > > -\t\t\t\tif (validate && (option->values.real_val < \n> 
optreal->min ||\n> > > -\t\t\t\t\t\t\t\t option-\n> >values.real_val > optreal->max))\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"value %s \n> out of bounds for option \\\"%s\\\"\",\n> > > -\t\t\t\t\t\t\t\t\t\n> value, option->gen->name),\n> > > -\t\t\t\t\t\t\t errdetail(\"Valid \n> values are between \\\"%f\\\" and \\\"%f\\\".\",\n> > > -\t\t\t\t\t\t\t\t\t \n> optreal->min, optreal->max)));\n> > > -\t\t\t}\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_ENUM:\n> > > -\t\t\t{\n> > > -\t\t\t\trelopt_enum *optenum = (relopt_enum *) \n> option->gen;\n> > > -\t\t\t\trelopt_enum_elt_def *elt;\n> > > -\n> > > -\t\t\t\tparsed = false;\n> > > -\t\t\t\tfor (elt = optenum->members; elt-\n> >string_val; elt++)\n> > > -\t\t\t\t{\n> > > -\t\t\t\t\tif (pg_strcasecmp(value, elt-\n> >string_val) == 0)\n> > > -\t\t\t\t\t{\n> > > -\t\t\t\t\t\toption->values.enum_val = \n> elt->symbol_val;\n> > > -\t\t\t\t\t\tparsed = true;\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\t}\n> > > -\t\t\t\t}\n> > > -\t\t\t\tif (validate && !parsed)\n> > > -\t\t\t\t\tereport(ERROR,\n> > > -\t\t\t\t\t\t\t\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > -\t\t\t\t\t\t\t errmsg(\"invalid \n> value for enum option \\\"%s\\\": %s\",\n> > > -\t\t\t\t\t\t\t\t\t\n> option->gen->name, value),\n> > > -\t\t\t\t\t\t\t optenum->detailmsg \n> ?\n> > > -\t\t\t\t\t\t\t \n> errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n> > > -\n> > > -\t\t\t\t/*\n> > > -\t\t\t\t * If value is not among the allowed string \n> values, but we are\n> > > -\t\t\t\t * not asked to validate, just use the \n> default numeric value.\n> > > -\t\t\t\t */\n> > > -\t\t\t\tif (!parsed)\n> > > -\t\t\t\t\toption->values.enum_val = optenum-\n> >default_val;\n> > > -\t\t\t}\n> > > -\t\t\tbreak;\n> > > -\t\tcase RELOPT_TYPE_STRING:\n> > > -\t\t\t{\n> > > -\t\t\t\trelopt_string *optstring = (relopt_string *) \n> option->gen;\n> > > -\n> > > 
-\t\t\t\toption->values.string_val = value;\n> > > -\t\t\t\tnofree = true;\n> > > -\t\t\t\tif (validate && optstring->validate_cb)\n> > > -\t\t\t\t\t(optstring->validate_cb) (value);\n> > > -\t\t\t\tparsed = true;\n> > > -\t\t\t}\n> > > -\t\t\tbreak;\n> > > -\t\tdefault:\n> > > -\t\t\telog(ERROR, \"unsupported reloption type %d\", option-\n> >gen->type);\n> > > -\t\t\tparsed = true;\t\t/* quiet compiler */\n> > > -\t\t\tbreak;\n> > > -\t}\n> > > +\tforeach(lc, relopts->validators)\n> > > +\t\t((relopts_validator) lfirst(lc)) (opts, NULL, 0);\n> > > +//\t\t((relopts_validator) lfirst(lc)) (opts, vals, noptions);\n> > > +// FIXME solve problem with validation of separate option values;\n> > > +\treturn opts;\n> > > \n> > > -\tif (parsed)\n> > > -\t\toption->isset = true;\n> > > -\tif (!nofree)\n> > > -\t\tpfree(value);\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * Given the result from parseRelOptions, allocate a struct that's of the\n> > > - * specified base size plus any extra space that's needed for string\n> > > variables. - *\n> > > - * \"base\" should be sizeof(struct) of the reloptions struct (StdRdOptions\n> > > or - * equivalent).\n> > > + * get_view_relopt_spec_set\n> > > + *\t\tReturns an options catalog for view relation.\n> > > \n> > > */\n> > > \n> > > -static void *\n> > > -allocateReloptStruct(Size base, relopt_value *options, int numoptions)\n> > > -{\n> > > -\tSize\t\tsize = base;\n> > > -\tint\t\t\ti;\n> > > -\n> > > -\tfor (i = 0; i < numoptions; i++)\n> > > -\t{\n> > > -\t\trelopt_value *optval = &options[i];\n> > > -\n> > > -\t\tif (optval->gen->type == RELOPT_TYPE_STRING)\n> > > -\t\t{\n> > > -\t\t\trelopt_string *optstr = (relopt_string *) optval-\n> >gen;\n> > > -\n> > > -\t\t\tif (optstr->fill_cb)\n> > > -\t\t\t{\n> > > -\t\t\t\tconst char *val = optval->isset ? optval-\n> >values.string_val :\n> > > -\t\t\t\toptstr->default_isnull ? 
NULL : optstr-\n> >default_val;\n> > > -\n> > > -\t\t\t\tsize += optstr->fill_cb(val, NULL);\n> > > -\t\t\t}\n> > > -\t\t\telse\n> > > -\t\t\t\tsize += GET_STRING_RELOPTION_LEN(*optval) + \n> 1;\n> > > -\t\t}\n> > > -\t}\n> > > -\n> > > -\treturn palloc0(size);\n> > > -}\n> > > +static options_spec_set *view_relopt_spec_set = NULL;\n> > > \n> > > -/*\n> > > - * Given the result of parseRelOptions and a parsing table, fill in the\n> > > - * struct (previously allocated with allocateReloptStruct) with the\n> > > parsed\n> > > - * values.\n> > > - *\n> > > - * rdopts is the pointer to the allocated struct to be filled.\n> > > - * basesize is the sizeof(struct) that was passed to\n> > > allocateReloptStruct.\n> > > - * options, of length numoptions, is parseRelOptions' output.\n> > > - * elems, of length numelems, is the table describing the allowed\n> > > options.\n> > > - * When validate is true, it is expected that all options appear in\n> > > elems.\n> > > - */\n> > > -static void\n> > > -fillRelOptions(void *rdopts, Size basesize,\n> > > -\t\t\t relopt_value *options, int numoptions,\n> > > -\t\t\t bool validate,\n> > > -\t\t\t const relopt_parse_elt *elems, int numelems)\n> > > +options_spec_set *\n> > > +get_view_relopt_spec_set(void)\n> > > \n> > > {\n> > > \n> > > -\tint\t\t\ti;\n> > > -\tint\t\t\toffset = basesize;\n> > > +\tif (view_relopt_spec_set)\n> > > +\t\treturn view_relopt_spec_set;\n> > > \n> > > -\tfor (i = 0; i < numoptions; i++)\n> > > -\t{\n> > > -\t\tint\t\t\tj;\n> > > -\t\tbool\t\tfound = false;\n> > > +\tview_relopt_spec_set = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t sizeof(ViewOptions), 2);\n> > > \n> > > -\t\tfor (j = 0; j < numelems; j++)\n> > > -\t\t{\n> > > -\t\t\tif (strcmp(options[i].gen->name, elems[j].optname) \n> == 0)\n> > > -\t\t\t{\n> > > -\t\t\t\trelopt_string *optstring;\n> > > -\t\t\t\tchar\t *itempos = ((char *) rdopts) + \n> elems[j].offset;\n> > > -\t\t\t\tchar\t *string_val;\n> > > -\n> > > 
-\t\t\t\tswitch (options[i].gen->type)\n> > > -\t\t\t\t{\n> > > -\t\t\t\t\tcase RELOPT_TYPE_BOOL:\n> > > -\t\t\t\t\t\t*(bool *) itempos = \n> options[i].isset ?\n> > > -\t\t\t\t\t\t\t\n> options[i].values.bool_val :\n> > > -\t\t\t\t\t\t\t((relopt_bool *) \n> options[i].gen)->default_val;\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\tcase RELOPT_TYPE_INT:\n> > > -\t\t\t\t\t\t*(int *) itempos = \n> options[i].isset ?\n> > > -\t\t\t\t\t\t\t\n> options[i].values.int_val :\n> > > -\t\t\t\t\t\t\t((relopt_int *) \n> options[i].gen)->default_val;\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\tcase RELOPT_TYPE_REAL:\n> > > -\t\t\t\t\t\t*(double *) itempos = \n> options[i].isset ?\n> > > -\t\t\t\t\t\t\t\n> options[i].values.real_val :\n> > > -\t\t\t\t\t\t\t((relopt_real *) \n> options[i].gen)->default_val;\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\tcase RELOPT_TYPE_ENUM:\n> > > -\t\t\t\t\t\t*(int *) itempos = \n> options[i].isset ?\n> > > -\t\t\t\t\t\t\t\n> options[i].values.enum_val :\n> > > -\t\t\t\t\t\t\t((relopt_enum *) \n> options[i].gen)->default_val;\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\tcase RELOPT_TYPE_STRING:\n> > > -\t\t\t\t\t\toptstring = (relopt_string \n> *) options[i].gen;\n> > > -\t\t\t\t\t\tif (options[i].isset)\n> > > -\t\t\t\t\t\t\tstring_val = \n> options[i].values.string_val;\n> > > -\t\t\t\t\t\telse if (!optstring-\n> >default_isnull)\n> > > -\t\t\t\t\t\t\tstring_val = \n> optstring->default_val;\n> > > -\t\t\t\t\t\telse\n> > > -\t\t\t\t\t\t\tstring_val = NULL;\n> > > -\n> > > -\t\t\t\t\t\tif (optstring->fill_cb)\n> > > -\t\t\t\t\t\t{\n> > > -\t\t\t\t\t\t\tSize\t\t\n> size =\n> > > -\t\t\t\t\t\t\toptstring-\n> >fill_cb(string_val,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t (char *) rdopts + offset);\n> > > -\n> > > -\t\t\t\t\t\t\tif (size)\n> > > -\t\t\t\t\t\t\t{\n> > > -\t\t\t\t\t\t\t\t*(int *) \n> itempos = offset;\n> > > -\t\t\t\t\t\t\t\toffset += \n> size;\n> > > -\t\t\t\t\t\t\t}\n> > > -\t\t\t\t\t\t\telse\n> > > -\t\t\t\t\t\t\t\t*(int *) \n> itempos 
= 0;\n> > > -\t\t\t\t\t\t}\n> > > -\t\t\t\t\t\telse if (string_val == NULL)\n> > > -\t\t\t\t\t\t\t*(int *) itempos = \n> 0;\n> > > -\t\t\t\t\t\telse\n> > > -\t\t\t\t\t\t{\n> > > -\t\t\t\t\t\t\tstrcpy((char *) \n> rdopts + offset, string_val);\n> > > -\t\t\t\t\t\t\t*(int *) itempos = \n> offset;\n> > > -\t\t\t\t\t\t\toffset += \n> strlen(string_val) + 1;\n> > > -\t\t\t\t\t\t}\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t\tdefault:\n> > > -\t\t\t\t\t\telog(ERROR, \"unsupported \n> reloption type %d\",\n> > > -\t\t\t\t\t\t\t options[i].gen-\n> >type);\n> > > -\t\t\t\t\t\tbreak;\n> > > -\t\t\t\t}\n> > > -\t\t\t\tfound = true;\n> > > -\t\t\t\tbreak;\n> > > -\t\t\t}\n> > > -\t\t}\n> > > -\t\tif (validate && !found)\n> > > -\t\t\telog(ERROR, \"reloption \\\"%s\\\" not found in parse \n> table\",\n> > > -\t\t\t\t options[i].gen->name);\n> > > -\t}\n> > > -\tSET_VARSIZE(rdopts, offset);\n> > > -}\n> > > +\toptionsSpecSetAddBool(view_relopt_spec_set, \"security_barrier\",\n> > > +\t\t\t\t\t\t\t \"View acts as a row \n> security barrier\",\n> > > +\t\t\t\t\t\t\t \n> AccessExclusiveLock,\n> > > +\t\t\t\t\t 0, offsetof(ViewOptions, \n> security_barrier), false);\n> > > \n> > > +\toptionsSpecSetAddEnum(view_relopt_spec_set, \"check_option\",\n> > > +\t\t\t\t\t\t \"View has WITH CHECK \n> OPTION defined (local or cascaded)\",\n> > > +\t\t\t\t\t\t\t \n> AccessExclusiveLock, 0,\n> > > +\t\t\t\t\t\t\t \n> offsetof(ViewOptions, check_option),\n> > > +\t\t\t\t\t\t\t viewCheckOptValues,\n> > > +\t\t\t\t\t\t\t \n> VIEW_OPTION_CHECK_OPTION_NOT_SET,\n> > > +\t\t\t\t\t\t\t gettext_noop(\"Valid \n> values are \\\"local\\\" and \\\"cascaded\\\".\"));\n> > > \n> > > -/*\n> > > - * Option parser for anything that uses StdRdOptions.\n> > > - */\n> > > -bytea *\n> > > -default_reloptions(Datum reloptions, bool validate, relopt_kind kind)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(StdRdOptions, \n> fillfactor)},\n> > > 
-\t\t{\"autovacuum_enabled\", RELOPT_TYPE_BOOL,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, \n> enabled)},\n> > > -\t\t{\"autovacuum_vacuum_threshold\", RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_threshold)}, -\t\t{\"autovacuum_vacuum_insert_threshold\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_ins_threshold)}, -\t\t{\"autovacuum_analyze_threshold\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > analyze_threshold)}, -\t\t{\"autovacuum_vacuum_cost_limit\", \n> RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_cost_limit)}, -\t\t{\"autovacuum_freeze_min_age\", \n> RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > freeze_min_age)}, -\t\t{\"autovacuum_freeze_max_age\", \n> RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > freeze_max_age)}, -\t\t{\"autovacuum_freeze_table_age\", \n> RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > freeze_table_age)}, -\t\t{\"autovacuum_multixact_freeze_min_age\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > multixact_freeze_min_age)}, -\t\t\n> {\"autovacuum_multixact_freeze_max_age\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > multixact_freeze_max_age)}, -\t\t\n> {\"autovacuum_multixact_freeze_table_age\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > multixact_freeze_table_age)}, -\t\t\n> {\"log_autovacuum_min_duration\",\n> > > RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > log_min_duration)}, -\t\t{\"toast_tuple_target\", 
RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, toast_tuple_target)},\n> > > -\t\t{\"autovacuum_vacuum_cost_delay\", RELOPT_TYPE_REAL,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_cost_delay)}, -\t\t{\"autovacuum_vacuum_scale_factor\",\n> > > RELOPT_TYPE_REAL,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_scale_factor)}, -\t\t\n> {\"autovacuum_vacuum_insert_scale_factor\",\n> > > RELOPT_TYPE_REAL,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > vacuum_ins_scale_factor)}, -\t\t\n> {\"autovacuum_analyze_scale_factor\",\n> > > RELOPT_TYPE_REAL,\n> > > -\t\toffsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts,\n> > > analyze_scale_factor)}, -\t\t{\"user_catalog_table\", \n> RELOPT_TYPE_BOOL,\n> > > -\t\toffsetof(StdRdOptions, user_catalog_table)},\n> > > -\t\t{\"parallel_workers\", RELOPT_TYPE_INT,\n> > > -\t\toffsetof(StdRdOptions, parallel_workers)},\n> > > -\t\t{\"vacuum_index_cleanup\", RELOPT_TYPE_ENUM,\n> > > -\t\toffsetof(StdRdOptions, vacuum_index_cleanup)},\n> > > -\t\t{\"vacuum_truncate\", RELOPT_TYPE_BOOL,\n> > > -\t\toffsetof(StdRdOptions, vacuum_truncate)}\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate, kind,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(StdRdOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > +\treturn view_relopt_spec_set;\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * build_reloptions\n> > > - *\n> > > - * Parses \"reloptions\" provided by the caller, returning them in a\n> > > - * structure containing the parsed options. 
> > > The parsing is done with
> > > - * the help of a parsing table describing the allowed options, defined
> > > - * by "relopt_elems" of length "num_relopt_elems".
> > > - *
> > > - * "validate" must be true if reloptions value is freshly built by
> > > - * transformRelOptions(), as opposed to being read from the catalog, in which
> > > - * case the values contained in it must already be valid.
> > > - *
> > > - * NULL is returned if the passed-in options did not match any of the options
> > > - * in the parsing table, unless validate is true in which case an error would
> > > - * be reported.
> > > + * get_attribute_options_spec_set
> > > + *		Returns an options spec set for heap attributes
> > > 
> > > */
> > > 
> > > -void *
> > > -build_reloptions(Datum reloptions, bool validate,
> > > -				 relopt_kind kind,
> > > -				 Size relopt_struct_size,
> > > -				 const relopt_parse_elt *relopt_elems,
> > > -				 int num_relopt_elems)
> > > -{
> > > -	int			numoptions;
> > > -	relopt_value *options;
> > > -	void	   *rdopts;
> > > -
> > > -	/* parse options specific to given relation option kind */
> > > -	options = parseRelOptions(reloptions, validate, kind, &numoptions);
> > > -	Assert(numoptions <= num_relopt_elems);
> > > -
> > > -	/* if none set, we're done */
> > > -	if (numoptions == 0)
> > > -	{
> > > -		Assert(options == NULL);
> > > -		return NULL;
> > > -	}
> > > -
> > > -	/* allocate and fill the structure */
> > > -	rdopts = allocateReloptStruct(relopt_struct_size, options, numoptions);
> > > -	fillRelOptions(rdopts, relopt_struct_size, options, numoptions,
> > > -				   validate, relopt_elems, num_relopt_elems);
> > > +static options_spec_set *attribute_options_spec_set = NULL;
> > > 
> > > -	pfree(options);
> > > -
> > > -	return rdopts;
> > > -}
> > > -
> > > -/*
> > > - * Parse local options, allocate a bytea struct that's of the specified
> > > - * 
'base_size' plus any extra space that's needed for string variables,\n> > > - * fill its option's fields located at the given offsets and return it.\n> > > - */\n> > > -void *\n> > > -build_local_reloptions(local_relopts *relopts, Datum options, bool\n> > > validate) +options_spec_set *\n> > > +get_attribute_options_spec_set(void)\n> > > \n> > > {\n> > > \n> > > -\tint\t\t\tnoptions = list_length(relopts->options);\n> > > -\trelopt_parse_elt *elems = palloc(sizeof(*elems) * noptions);\n> > > -\trelopt_value *vals;\n> > > -\tvoid\t *opts;\n> > > -\tint\t\t\ti = 0;\n> > > -\tListCell *lc;\n> > > +\tif (attribute_options_spec_set)\n> > > +\t\t\treturn attribute_options_spec_set;\n> > > \n> > > -\tforeach(lc, relopts->options)\n> > > -\t{\n> > > -\t\tlocal_relopt *opt = lfirst(lc);\n> > > -\n> > > -\t\telems[i].optname = opt->option->name;\n> > > -\t\telems[i].opttype = opt->option->type;\n> > > -\t\telems[i].offset = opt->offset;\n> > > -\n> > > -\t\ti++;\n> > > -\t}\n> > > +\tattribute_options_spec_set = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t sizeof(AttributeOpts), 2);\n> > > \n> > > -\tvals = parseLocalRelOptions(relopts, options, validate);\n> > > -\topts = allocateReloptStruct(relopts->relopt_struct_size, vals,\n> > > noptions);\n> > > -\tfillRelOptions(opts, relopts->relopt_struct_size, vals, noptions,\n> > > validate, -\t\t\t\t elems, noptions);\n> > > +\toptionsSpecSetAddReal(attribute_options_spec_set, \"n_distinct\",\n> > > +\t\t\t\t\t\t \"Sets the planner's \n> estimate of the number of distinct values\n> > > appearing in a column (excluding child relations).\", +\t\t\t\t\n> \t\t \n> > > ShareUpdateExclusiveLock,\n> > > +\t\t\t 0, offsetof(AttributeOpts, n_distinct), 0, -1.0, \n> DBL_MAX);\n> > > \n> > > -\tforeach(lc, relopts->validators)\n> > > -\t\t((relopts_validator) lfirst(lc)) (opts, vals, noptions);\n> > > -\n> > > -\tif (elems)\n> > > -\t\tpfree(elems);\n> > > +\toptionsSpecSetAddReal(attribute_options_spec_set,\n> > > 
+\t\t\t\t\t\t \"n_distinct_inherited\",\n> > > +\t\t\t\t\t\t \"Sets the planner's \n> estimate of the number of distinct values\n> > > appearing in a column (including child relations).\", +\t\t\t\t\n> \t\t \n> > > ShareUpdateExclusiveLock,\n> > > +\t 0, offsetof(AttributeOpts, n_distinct_inherited), 0, -1.0, DBL_MAX);\n> > > \n> > > -\treturn opts;\n> > > +\treturn attribute_options_spec_set;\n> > > \n> > > }\n> > > \n> > > -/*\n> > > - * Option parser for partitioned tables\n> > > - */\n> > > -bytea *\n> > > -partitioned_table_reloptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\t/*\n> > > -\t * There are no options for partitioned tables yet, but this is able \n> to\n> > > do -\t * some validation.\n> > > -\t */\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_PARTITIONED,\n> > > -\t\t\t\t\t\t\t\t\t 0, \n> NULL, 0);\n> > > -}\n> > > \n> > > /*\n> > > \n> > > - * Option parser for views\n> > > - */\n> > > -bytea *\n> > > -view_reloptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"security_barrier\", RELOPT_TYPE_BOOL,\n> > > -\t\toffsetof(ViewOptions, security_barrier)},\n> > > -\t\t{\"check_option\", RELOPT_TYPE_ENUM,\n> > > -\t\toffsetof(ViewOptions, check_option)}\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_VIEW,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(ViewOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -}\n> > > + * get_tablespace_options_spec_set\n> > > + *\t\tReturns an options spec set for tablespaces\n> > > +*/\n> > > +static options_spec_set *tablespace_options_spec_set = NULL;\n> > > \n> > > -/*\n> > > - * Parse options for heaps, views and toast tables.\n> > > - */\n> > > -bytea *\n> > > -heap_reloptions(char relkind, Datum reloptions, bool validate)\n> > > +options_spec_set *\n> > > 
+get_tablespace_options_spec_set(void)\n> > > \n> > > {\n> > > \n> > > -\tStdRdOptions *rdopts;\n> > > -\n> > > -\tswitch (relkind)\n> > > +\tif (!tablespace_options_spec_set)\n> > > \n> > > \t{\n> > > \n> > > -\t\tcase RELKIND_TOASTVALUE:\n> > > -\t\t\trdopts = (StdRdOptions *)\n> > > -\t\t\t\tdefault_reloptions(reloptions, validate, \n> RELOPT_KIND_TOAST);\n> > > -\t\t\tif (rdopts != NULL)\n> > > -\t\t\t{\n> > > -\t\t\t\t/* adjust default-only parameters for TOAST \n> relations */\n> > > -\t\t\t\trdopts->fillfactor = 100;\n> > > -\t\t\t\trdopts->autovacuum.analyze_threshold = -1;\n> > > -\t\t\t\trdopts->autovacuum.analyze_scale_factor = \n> -1;\n> > > -\t\t\t}\n> > > -\t\t\treturn (bytea *) rdopts;\n> > > -\t\tcase RELKIND_RELATION:\n> > > -\t\tcase RELKIND_MATVIEW:\n> > > -\t\t\treturn default_reloptions(reloptions, validate, \n> RELOPT_KIND_HEAP);\n> > > -\t\tdefault:\n> > > -\t\t\t/* other relkinds are not supported */\n> > > -\t\t\treturn NULL;\n> > > -\t}\n> > > -}\n> > > -\n> > > -\n> > > -/*\n> > > - * Parse options for indexes.\n> > > - *\n> > > - *\tamoptions\tindex AM's option parser function\n> > > - *\treloptions\toptions as text[] datum\n> > > - *\tvalidate\terror flag\n> > > - */\n> > > -bytea *\n> > > -index_reloptions(amoptions_function amoptions, Datum reloptions, bool\n> > > validate) -{\n> > > -\tAssert(amoptions != NULL);\n> > > +\t\ttablespace_options_spec_set = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t sizeof(TableSpaceOpts), 4);\n> > > \n> > > -\t/* Assume function is strict */\n> > > -\tif (!PointerIsValid(DatumGetPointer(reloptions)))\n> > > -\t\treturn NULL;\n> > > +\t\toptionsSpecSetAddReal(tablespace_options_spec_set,\n> > > +\t\t\t\t\t\t\t\t \n> \"random_page_cost\",\n> > > +\t\t\t\t\t\t\t\t \"Sets the \n> planner's estimate of the cost of a nonsequentially\n> > > fetched disk page\", +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock,\n> > > +\t\t\t0, offsetof(TableSpaceOpts, random_page_cost), -1, \n> 0.0, 
DBL_MAX);\n> > > \n> > > -\treturn amoptions(reloptions, validate);\n> > > -}\n> > > +\t\toptionsSpecSetAddReal(tablespace_options_spec_set, \n> \"seq_page_cost\",\n> > > +\t\t\t\t\t\t\t\t \"Sets the \n> planner's estimate of the cost of a sequentially\n> > > fetched disk page\", +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock,\n> > > +\t\t\t 0, offsetof(TableSpaceOpts, seq_page_cost), -1, \n> 0.0, DBL_MAX);\n> > > \n> > > -/*\n> > > - * Option parser for attribute reloptions\n> > > - */\n> > > -bytea *\n> > > -attribute_reloptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"n_distinct\", RELOPT_TYPE_REAL, offsetof(AttributeOpts, \n> n_distinct)},\n> > > -\t\t{\"n_distinct_inherited\", RELOPT_TYPE_REAL, \n> offsetof(AttributeOpts,\n> > > n_distinct_inherited)} -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_ATTRIBUTE,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(AttributeOpts),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -}\n> > > +\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> > > +\t\t\t\t\t\t\t\t \n> \"effective_io_concurrency\",\n> > > +\t\t\t\t\t\t\t\t \"Number of \n> simultaneous requests that can be handled efficiently\n> > > by the disk subsystem\", +\t\t\t\t\t\t\t\t\n> ShareUpdateExclusiveLock,\n> > > +\t\t\t\t\t 0, offsetof(TableSpaceOpts, \n> effective_io_concurrency),\n> > > +#ifdef USE_PREFETCH\n> > > +\t\t\t\t\t\t\t\t -1, 0, \n> MAX_IO_CONCURRENCY\n> > > +#else\n> > > +\t\t\t\t\t\t\t\t 0, 0, 0\n> > > +#endif\n> > > +\t\t\t);\n> > > \n> > > -/*\n> > > - * Option parser for tablespace reloptions\n> > > - */\n> > > -bytea *\n> > > -tablespace_reloptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"random_page_cost\", RELOPT_TYPE_REAL, \n> offsetof(TableSpaceOpts,\n> > > random_page_cost)}, -\t\t{\"seq_page_cost\", RELOPT_TYPE_REAL,\n> > > 
offsetof(TableSpaceOpts, seq_page_cost)}, -\t\t\n> {\"effective_io_concurrency\",\n> > > RELOPT_TYPE_INT, offsetof(TableSpaceOpts, effective_io_concurrency)},\n> > > -\t\t{\"maintenance_io_concurrency\", RELOPT_TYPE_INT,\n> > > offsetof(TableSpaceOpts, maintenance_io_concurrency)} -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_TABLESPACE,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(TableSpaceOpts),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > +\t\toptionsSpecSetAddInt(tablespace_options_spec_set,\n> > > +\t\t\t\t\t\t\t\t \n> \"maintenance_io_concurrency\",\n> > > +\t\t\t\t\t\t\t\t \"Number of \n> simultaneous requests that can be handled efficiently\n> > > by the disk subsystem for maintenance work.\", +\t\t\t\t\n> \t\t\t\t\n> > > ShareUpdateExclusiveLock,\n> > > +\t\t\t\t\t 0, offsetof(TableSpaceOpts, \n> maintenance_io_concurrency),\n> > > +#ifdef USE_PREFETCH\n> > > +\t\t\t\t\t\t\t\t -1, 0, \n> MAX_IO_CONCURRENCY\n> > > +#else\n> > > +\t\t\t\t\t\t\t\t 0, 0, 0\n> > > +#endif\n> > > +\t\t\t);\n> > > +\t}\n> > > +\treturn tablespace_options_spec_set;\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > @@ -2099,33 +612,55 @@ tablespace_reloptions(Datum reloptions, bool\n> > > validate)> \n> > > * for a longer explanation of how this works.\n> > > */\n> > > \n> > > LOCKMODE\n> > > \n> > > -AlterTableGetRelOptionsLockLevel(List *defList)\n> > > +AlterTableGetRelOptionsLockLevel(Relation rel, List *defList)\n> > > \n> > > {\n> > > \n> > > \tLOCKMODE\tlockmode = NoLock;\n> > > \tListCell *cell;\n> > > \n> > > +\toptions_spec_set *spec_set = NULL;\n> > > \n> > > \tif (defList == NIL)\n> > > \t\n> > > \t\treturn AccessExclusiveLock;\n> > > \n> > > -\tif (need_initialization)\n> > > -\t\tinitialize_reloptions();\n> > > +\tswitch (rel->rd_rel->relkind)\n> > > +\t{\n> > > +\t\tcase RELKIND_TOASTVALUE:\n> > > +\t\t\tspec_set = get_toast_relopt_spec_set();\n> > > +\t\t\tbreak;\n> > > +\t\tcase 
RELKIND_RELATION:\n> > > +\t\tcase RELKIND_MATVIEW:\n> > > +\t\t\tspec_set = get_heap_relopt_spec_set();\n> > > +\t\t\tbreak;\n> > > +\t\tcase RELKIND_INDEX:\n> > > +\t\t\tspec_set = rel->rd_indam->amreloptspecset();\n> > > +\t\t\tbreak;\n> > > +\t\tcase RELKIND_VIEW:\n> > > +\t\t\tspec_set = get_view_relopt_spec_set();\n> > > +\t\t\tbreak;\n> > > +\t\tcase RELKIND_PARTITIONED_TABLE:\n> > > +\t\t\tspec_set = get_partitioned_relopt_spec_set();\n> > > +\t\t\tbreak;\n> > > +\t\tdefault:\n> > > +\t\t\tAssert(false);\t\t/* can't get here */\n> > > +\t\t\tbreak;\n> > > +\t}\n> > > +\tAssert(spec_set);\t\t\t/* No spec set - no reloption \n> change. Should\n> > > +\t\t\t\t\t\t\t\t * never get \n> here */\n> > > \n> > > \tforeach(cell, defList)\n> > > \t{\n> > > \t\n> > > \t\tDefElem *def = (DefElem *) lfirst(cell);\n> > > \n> > > +\n> > > \n> > > \t\tint\t\t\ti;\n> > > \n> > > -\t\tfor (i = 0; relOpts[i]; i++)\n> > > +\t\tfor (i = 0; i < spec_set->num; i++)\n> > > \n> > > \t\t{\n> > > \n> > > -\t\t\tif (strncmp(relOpts[i]->name,\n> > > -\t\t\t\t\t\tdef->defname,\n> > > -\t\t\t\t\t\trelOpts[i]->namelen + 1) == \n> 0)\n> > > -\t\t\t{\n> > > -\t\t\t\tif (lockmode < relOpts[i]->lockmode)\n> > > -\t\t\t\t\tlockmode = relOpts[i]->lockmode;\n> > > -\t\t\t}\n> > > +\t\t\toption_spec_basic *gen = spec_set->definitions[i];\n> > > +\n> > > +\t\t\tif (pg_strcasecmp(gen->name,\n> > > +\t\t\t\t\t\t\t def->defname) == 0)\n> > > +\t\t\t\tif (lockmode < gen->lockmode)\n> > > +\t\t\t\t\tlockmode = gen->lockmode;\n> > > \n> > > \t\t}\n> > > \t\n> > > \t}\n> > > \n> > > -\n> > > \n> > > \treturn lockmode;\n> > > \n> > > -}\n> > > +}\n> > > \\ No newline at end of file\n> > > diff --git a/src/backend/access/gin/gininsert.c\n> > > b/src/backend/access/gin/gininsert.c index 0e8672c..0cbffad 100644\n> > > --- a/src/backend/access/gin/gininsert.c\n> > > +++ b/src/backend/access/gin/gininsert.c\n> > > @@ -512,6 +512,8 @@ gininsert(Relation index, Datum *values, bool *isnull,\n> > > \n> > > \toldCtx 
= MemoryContextSwitchTo(insertCtx);\n> > > \n> > > +// elog(WARNING, \"GinGetUseFastUpdate = %i\", GinGetUseFastUpdate(index));\n> > > +\n> > > \n> > > \tif (GinGetUseFastUpdate(index))\n> > > \t{\n> > > \t\n> > > \t\tGinTupleCollector collector;\n> > > \n> > > diff --git a/src/backend/access/gin/ginutil.c\n> > > b/src/backend/access/gin/ginutil.c index 6d2d71b..d1fa3a0 100644\n> > > --- a/src/backend/access/gin/ginutil.c\n> > > +++ b/src/backend/access/gin/ginutil.c\n> > > @@ -16,7 +16,7 @@\n> > > \n> > > #include \"access/gin_private.h\"\n> > > #include \"access/ginxlog.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/xloginsert.h\"\n> > > #include \"catalog/pg_collation.h\"\n> > > #include \"catalog/pg_type.h\"\n> > > \n> > > @@ -28,6 +28,7 @@\n> > > \n> > > #include \"utils/builtins.h\"\n> > > #include \"utils/index_selfuncs.h\"\n> > > #include \"utils/typcache.h\"\n> > > \n> > > +#include \"utils/guc.h\"\n> > > \n> > > /*\n> > > \n> > > @@ -67,7 +68,6 @@ ginhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = ginvacuumcleanup;\n> > > \tamroutine->amcanreturn = NULL;\n> > > \tamroutine->amcostestimate = gincostestimate;\n> > > \n> > > -\tamroutine->amoptions = ginoptions;\n> > > \n> > > \tamroutine->amproperty = NULL;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = ginvalidate;\n> > > \n> > > @@ -82,6 +82,7 @@ ginhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = gingetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > @@ -604,6 +605,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber\n> > > attnum,> \n> > > \treturn entries;\n> > > \n> > > }\n> > > \n> > > +/*\n> > > \n> > > bytea *\n> > > ginoptions(Datum reloptions, bool 
validate)\n> > > {\n> > > \n> > > @@ -618,6 +620,7 @@ ginoptions(Datum reloptions, bool validate)\n> > > \n> > > \t\t\t\t\t\t\t\t\t \n> sizeof(GinOptions),\n> > > \t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > \n> > > }\n> > > \n> > > +*/\n> > > \n> > > /*\n> > > \n> > > * Fetch index's statistical data into *stats\n> > > \n> > > @@ -705,3 +708,31 @@ ginUpdateStats(Relation index, const GinStatsData\n> > > *stats, bool is_build)> \n> > > \tEND_CRIT_SECTION();\n> > > \n> > > }\n> > > \n> > > +\n> > > +static options_spec_set *gin_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +gingetreloptspecset(void)\n> > > +{\n> > > +\tif (gin_relopt_specset)\n> > > +\t\treturn gin_relopt_specset;\n> > > +\n> > > +\tgin_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\tsizeof(GinOptions), 2);\n> > > +\n> > > +\toptionsSpecSetAddBool(gin_relopt_specset, \"fastupdate\",\n> > > +\t\t\t\t\t\t\"Enables \\\"fast update\\\" \n> feature for this GIN index\",\n> > > +\t\t\t\t\t\t\t \n> AccessExclusiveLock,\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t offsetof(GinOptions, \n> useFastUpdate),\n> > > +\t\t\t\t\t\t\t \n> GIN_DEFAULT_USE_FASTUPDATE);\n> > > +\n> > > +\toptionsSpecSetAddInt(gin_relopt_specset, \"gin_pending_list_limit\",\n> > > +\t\t \"Maximum size of the pending list for this GIN index, in \n> kilobytes\",\n> > > +\t\t\t\t\t\t\t AccessExclusiveLock,\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t offsetof(GinOptions, \n> pendingListCleanupSize),\n> > > +\t\t\t\t\t\t\t -1, 64, \n> MAX_KILOBYTES);\n> > > +\n> > > +\treturn gin_relopt_specset;\n> > > +}\n> > > diff --git a/src/backend/access/gist/gist.c\n> > > b/src/backend/access/gist/gist.c index 0683f42..cbbc6a5 100644\n> > > --- a/src/backend/access/gist/gist.c\n> > > +++ b/src/backend/access/gist/gist.c\n> > > @@ -88,7 +88,6 @@ gisthandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = gistvacuumcleanup;\n> > > \tamroutine->amcanreturn = gistcanreturn;\n> > > 
\tamroutine->amcostestimate = gistcostestimate;\n> > > \n> > > -\tamroutine->amoptions = gistoptions;\n> > > \n> > > \tamroutine->amproperty = gistproperty;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = gistvalidate;\n> > > \n> > > @@ -103,6 +102,7 @@ gisthandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = gistgetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > diff --git a/src/backend/access/gist/gistbuild.c\n> > > b/src/backend/access/gist/gistbuild.c index baad28c..931d249 100644\n> > > --- a/src/backend/access/gist/gistbuild.c\n> > > +++ b/src/backend/access/gist/gistbuild.c\n> > > @@ -215,6 +215,7 @@ gistbuild(Relation heap, Relation index, IndexInfo\n> > > *indexInfo)> \n> > > \t\t\tbuildstate.buildMode = GIST_BUFFERING_DISABLED;\n> > > \t\t\n> > > \t\telse\t\t\t\t\t/* must be \"auto\" \n> */\n> > > \t\t\n> > > \t\t\tbuildstate.buildMode = GIST_BUFFERING_AUTO;\n> > > \n> > > +//elog(WARNING, \"biffering_mode = %i\", options->buffering_mode);\n> > > \n> > > \t}\n> > > \telse\n> > > \t{\n> > > \n> > > diff --git a/src/backend/access/gist/gistutil.c\n> > > b/src/backend/access/gist/gistutil.c index 43ba03b..0391915 100644\n> > > --- a/src/backend/access/gist/gistutil.c\n> > > +++ b/src/backend/access/gist/gistutil.c\n> > > @@ -17,7 +17,7 @@\n> > > \n> > > #include \"access/gist_private.h\"\n> > > #include \"access/htup_details.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"catalog/pg_opclass.h\"\n> > > #include \"storage/indexfsm.h\"\n> > > #include \"storage/lmgr.h\"\n> > > \n> > > @@ -916,20 +916,6 @@ gistPageRecyclable(Page page)\n> > > \n> > > \treturn false;\n> > > \n> > > }\n> > > \n> > > -bytea *\n> > > -gistoptions(Datum reloptions, bool 
validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(GiSTOptions, \n> fillfactor)},\n> > > -\t\t{\"buffering\", RELOPT_TYPE_ENUM, offsetof(GiSTOptions, \n> buffering_mode)}\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_GIST,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(GiSTOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -}\n> > > -\n> > > \n> > > /*\n> > > \n> > > *\tgistproperty() -- Check boolean properties of indexes.\n> > > *\n> > > \n> > > @@ -1064,3 +1050,42 @@ gistGetFakeLSN(Relation rel)\n> > > \n> > > \t\treturn GetFakeLSNForUnloggedRel();\n> > > \t\n> > > \t}\n> > > \n> > > }\n> > > \n> > > +\n> > > +/* values from GistOptBufferingMode */\n> > > +opt_enum_elt_def gistBufferingOptValues[] =\n> > > +{\n> > > +\t{\"auto\", GIST_OPTION_BUFFERING_AUTO},\n> > > +\t{\"on\", GIST_OPTION_BUFFERING_ON},\n> > > +\t{\"off\", GIST_OPTION_BUFFERING_OFF},\n> > > +\t{(const char *) NULL}\t\t/* list terminator */\n> > > +};\n> > > +\n> > > +static options_spec_set *gist_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +gistgetreloptspecset(void)\n> > > +{\n> > > +\tif (gist_relopt_specset)\n> > > +\t\treturn gist_relopt_specset;\n> > > +\n> > > +\tgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t sizeof(GiSTOptions), 2);\n> > > +\n> > > +\toptionsSpecSetAddInt(gist_relopt_specset, \"fillfactor\",\n> > > +\t\t\t\t\t\t\"Packs gist index pages only \n> to this percentage\",\n> > > +\t\t\t\t\t\t\t NoLock,\t\t/* \n> No ALTER, no lock */\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t offsetof(GiSTOptions, \n> fillfactor),\n> > > +\t\t\t\t\t\t\t \n> GIST_DEFAULT_FILLFACTOR,\n> > > +\t\t\t\t\t\t\t GIST_MIN_FILLFACTOR, \n> 100);\n> > > +\n> > > +\toptionsSpecSetAddEnum(gist_relopt_specset, \"buffering\",\n> > > +\t\t\t\t\t\t \"Enables buffering build \n> for this GiST 
index\",\n> > > +\t\t\t\t\t\t\t NoLock,\t\t/* \n> No ALTER, no lock */\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t \n> offsetof(GiSTOptions, buffering_mode),\n> > > +\t\t\t\t\t\t\t \n> gistBufferingOptValues,\n> > > +\t\t\t\t\t\t\t \n> GIST_OPTION_BUFFERING_AUTO,\n> > > +\t\t\t\t\t\t\t gettext_noop(\"Valid \n> values are \\\"on\\\", \\\"off\\\", and\n> > > \\\"auto\\\".\"));\n> > > +\treturn gist_relopt_specset;\n> > > +}\n> > > diff --git a/src/backend/access/hash/hash.c\n> > > b/src/backend/access/hash/hash.c index eb38104..8dc4ca7 100644\n> > > --- a/src/backend/access/hash/hash.c\n> > > +++ b/src/backend/access/hash/hash.c\n> > > @@ -85,7 +85,6 @@ hashhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = hashvacuumcleanup;\n> > > \tamroutine->amcanreturn = NULL;\n> > > \tamroutine->amcostestimate = hashcostestimate;\n> > > \n> > > -\tamroutine->amoptions = hashoptions;\n> > > \n> > > \tamroutine->amproperty = NULL;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = hashvalidate;\n> > > \n> > > @@ -100,6 +99,7 @@ hashhandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = hashgetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > diff --git a/src/backend/access/hash/hashpage.c\n> > > b/src/backend/access/hash/hashpage.c index 159646c..38f64ef 100644\n> > > --- a/src/backend/access/hash/hashpage.c\n> > > +++ b/src/backend/access/hash/hashpage.c\n> > > @@ -359,6 +359,8 @@ _hash_init(Relation rel, double num_tuples, ForkNumber\n> > > forkNum)> \n> > > \tdata_width = sizeof(uint32);\n> > > \titem_width = MAXALIGN(sizeof(IndexTupleData)) + MAXALIGN(data_width) \n> +\n> > > \t\n> > > \t\tsizeof(ItemIdData);\t\t/* include the line pointer */\n> > > \n> > > +//elog(WARNING, \"fillfactor = %i\", 
HashGetFillFactor(rel));\n> > > +\n> > > \n> > > \tffactor = HashGetTargetPageUsage(rel) / item_width;\n> > > \t/* keep to a sane range */\n> > > \tif (ffactor < 10)\n> > > \n> > > diff --git a/src/backend/access/hash/hashutil.c\n> > > b/src/backend/access/hash/hashutil.c index 5198728..826beab 100644\n> > > --- a/src/backend/access/hash/hashutil.c\n> > > +++ b/src/backend/access/hash/hashutil.c\n> > > @@ -15,7 +15,7 @@\n> > > \n> > > #include \"postgres.h\"\n> > > \n> > > #include \"access/hash.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/relscan.h\"\n> > > #include \"port/pg_bitutils.h\"\n> > > #include \"storage/buf_internals.h\"\n> > > \n> > > @@ -272,19 +272,6 @@ _hash_checkpage(Relation rel, Buffer buf, int flags)\n> > > \n> > > \t}\n> > > \n> > > }\n> > > \n> > > -bytea *\n> > > -hashoptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(HashOptions, \n> fillfactor)},\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_HASH,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(HashOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -}\n> > > -\n> > > \n> > > /*\n> > > \n> > > * _hash_get_indextuple_hashkey - get the hash index tuple's hash key\n> > > value\n> > > */\n> > > \n> > > @@ -620,3 +607,24 @@ _hash_kill_items(IndexScanDesc scan)\n> > > \n> > > \telse\n> > > \t\n> > > \t\t_hash_relbuf(rel, buf);\n> > > \n> > > }\n> > > \n> > > +\n> > > +static options_spec_set *hash_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +hashgetreloptspecset(void)\n> > > +{\n> > > +\tif (hash_relopt_specset)\n> > > +\t\treturn hash_relopt_specset;\n> > > +\n> > > +\thash_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t sizeof(HashOptions), 1);\n> > > 
+\toptionsSpecSetAddInt(hash_relopt_specset, \"fillfactor\",\n> > > +\t\t\t\t\t\t\"Packs hash index pages only \n> to this percentage\",\n> > > +\t\t\t\t\t\t\t NoLock,\t\t/* \n> No ALTER -- no lock */\n> > > +\t\t\t\t\t\t\t 0,\n> > > +\t\t\t\t\t\t\t offsetof(HashOptions, \n> fillfactor),\n> > > +\t\t\t\t\t\t\t \n> HASH_DEFAULT_FILLFACTOR,\n> > > +\t\t\t\t\t\t\t HASH_MIN_FILLFACTOR, \n> 100);\n> > > +\n> > > +\treturn hash_relopt_specset;\n> > > +}\n> > > diff --git a/src/backend/access/nbtree/nbtinsert.c\n> > > b/src/backend/access/nbtree/nbtinsert.c index 7355e1d..f7b117e 100644\n> > > --- a/src/backend/access/nbtree/nbtinsert.c\n> > > +++ b/src/backend/access/nbtree/nbtinsert.c\n> > > @@ -2745,6 +2745,8 @@ _bt_delete_or_dedup_one_page(Relation rel, Relation\n> > > heapRel,> \n> > > \t\t_bt_bottomupdel_pass(rel, buffer, heapRel, insertstate-\n> >itemsz))\n> > > \t\treturn;\n> > > \n> > > +// elog(WARNING, \"Deduplicate_items = %i\", BTGetDeduplicateItems(rel));\n> > > +\n> > > \n> > > \t/* Perform deduplication pass (when enabled and index-is-\n> allequalimage)\n> > > \t*/\n> > > \tif (BTGetDeduplicateItems(rel) && itup_key->allequalimage)\n> > > \t\n> > > \t\t_bt_dedup_pass(rel, buffer, heapRel, insertstate->itup,\n> > > \n> > > diff --git a/src/backend/access/nbtree/nbtree.c\n> > > b/src/backend/access/nbtree/nbtree.c index 40ad095..f171c54 100644\n> > > --- a/src/backend/access/nbtree/nbtree.c\n> > > +++ b/src/backend/access/nbtree/nbtree.c\n> > > @@ -22,6 +22,7 @@\n> > > \n> > > #include \"access/nbtxlog.h\"\n> > > #include \"access/relscan.h\"\n> > > #include \"access/xlog.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"commands/progress.h\"\n> > > #include \"commands/vacuum.h\"\n> > > #include \"miscadmin.h\"\n> > > \n> > > @@ -124,7 +125,6 @@ bthandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = btvacuumcleanup;\n> > > \tamroutine->amcanreturn = btcanreturn;\n> > > \tamroutine->amcostestimate = 
btcostestimate;\n> > > \n> > > -\tamroutine->amoptions = btoptions;\n> > > \n> > > \tamroutine->amproperty = btproperty;\n> > > \tamroutine->ambuildphasename = btbuildphasename;\n> > > \tamroutine->amvalidate = btvalidate;\n> > > \n> > > @@ -139,6 +139,7 @@ bthandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = btestimateparallelscan;\n> > > \tamroutine->aminitparallelscan = btinitparallelscan;\n> > > \tamroutine->amparallelrescan = btparallelrescan;\n> > > \n> > > +\tamroutine->amreloptspecset = btgetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > @@ -1418,3 +1419,37 @@ btcanreturn(Relation index, int attno)\n> > > \n> > > {\n> > > \n> > > \treturn true;\n> > > \n> > > }\n> > > \n> > > +\n> > > +static options_spec_set *bt_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +btgetreloptspecset(void)\n> > > +{\n> > > +\tif (bt_relopt_specset)\n> > > +\t\treturn bt_relopt_specset;\n> > > +\n> > > +\tbt_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t sizeof(BTOptions), 3);\n> > > +\n> > > +\toptionsSpecSetAddInt(\n> > > +\t\tbt_relopt_specset, \"fillfactor\",\n> > > +\t\t\"Packs btree index pages only to this percentage\",\n> > > +\t\tShareUpdateExclusiveLock, /* since it applies only to later \n> inserts */\n> > > +\t\t0, offsetof(BTOptions, fillfactor),\n> > > +\t\tBTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100\n> > > +\t);\n> > > +\toptionsSpecSetAddReal(\n> > > +\t\tbt_relopt_specset, \"vacuum_cleanup_index_scale_factor\",\n> > > +\t\t\"Number of tuple inserts prior to index cleanup as a fraction \n> of\n> > > reltuples\", +\t\tShareUpdateExclusiveLock,\n> > > +\t\t0, offsetof(BTOptions,vacuum_cleanup_index_scale_factor),\n> > > +\t\t-1, 0.0, 1e10\n> > > +\t);\n> > > +\toptionsSpecSetAddBool(\n> > > +\t\tbt_relopt_specset, \"deduplicate_items\",\n> > > +\t\t\"Enables \\\"deduplicate items\\\" feature for this btree index\",\n> > > 
+\t\tShareUpdateExclusiveLock, /* since it applies only to later \n> inserts */\n> > > +\t\t0, offsetof(BTOptions,deduplicate_items), true\n> > > +\t);\n> > > +\treturn bt_relopt_specset;\n> > > +}\n> > > diff --git a/src/backend/access/nbtree/nbtutils.c\n> > > b/src/backend/access/nbtree/nbtutils.c index c72b456..2588a30 100644\n> > > --- a/src/backend/access/nbtree/nbtutils.c\n> > > +++ b/src/backend/access/nbtree/nbtutils.c\n> > > @@ -18,7 +18,7 @@\n> > > \n> > > #include <time.h>\n> > > \n> > > #include \"access/nbtree.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"storage/lock.h\"\n> > > \n> > > #include \"access/relscan.h\"\n> > > #include \"catalog/catalog.h\"\n> > > #include \"commands/progress.h\"\n> > > \n> > > @@ -2100,25 +2100,6 @@ BTreeShmemInit(void)\n> > > \n> > > \t\tAssert(found);\n> > > \n> > > }\n> > > \n> > > -bytea *\n> > > -btoptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(BTOptions, \n> fillfactor)},\n> > > -\t\t{\"vacuum_cleanup_index_scale_factor\", RELOPT_TYPE_REAL,\n> > > -\t\toffsetof(BTOptions, vacuum_cleanup_index_scale_factor)},\n> > > -\t\t{\"deduplicate_items\", RELOPT_TYPE_BOOL,\n> > > -\t\toffsetof(BTOptions, deduplicate_items)}\n> > > -\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_BTREE,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(BTOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -\n> > > -}\n> > > -\n> > > \n> > > /*\n> > > \n> > > *\tbtproperty() -- Check boolean properties of indexes.\n> > > *\n> > > \n> > > diff --git a/src/backend/access/spgist/spgutils.c\n> > > b/src/backend/access/spgist/spgutils.c index 03a9cd3..14429ad 100644\n> > > --- a/src/backend/access/spgist/spgutils.c\n> > > +++ b/src/backend/access/spgist/spgutils.c\n> > > @@ -17,7 +17,7 @@\n> > > \n> > > #include 
\"access/amvalidate.h\"\n> > > #include \"access/htup_details.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/spgist_private.h\"\n> > > #include \"access/toast_compression.h\"\n> > > #include \"access/transam.h\"\n> > > \n> > > @@ -72,7 +72,6 @@ spghandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = spgvacuumcleanup;\n> > > \tamroutine->amcanreturn = spgcanreturn;\n> > > \tamroutine->amcostestimate = spgcostestimate;\n> > > \n> > > -\tamroutine->amoptions = spgoptions;\n> > > \n> > > \tamroutine->amproperty = spgproperty;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = spgvalidate;\n> > > \n> > > @@ -87,6 +86,7 @@ spghandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = spggetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > @@ -550,6 +550,7 @@ SpGistGetBuffer(Relation index, int flags, int\n> > > needSpace, bool *isNew)> \n> > > \t * related to the ones already on it. 
But fillfactor mustn't cause \n> an\n> > > \t * error for requests that would otherwise be legal.\n> > > \t */\n> > > \n> > > +//elog(WARNING, \"fillfactor = %i\", SpGistGetFillFactor(index));\n> > > \n> > > \tneedSpace += SpGistGetTargetPageFreeSpace(index);\n> > > \tneedSpace = Min(needSpace, SPGIST_PAGE_CAPACITY);\n> > > \n> > > @@ -721,23 +722,6 @@ SpGistInitMetapage(Page page)\n> > > \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * reloptions processing for SPGiST\n> > > - */\n> > > -bytea *\n> > > -spgoptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\tstatic const relopt_parse_elt tab[] = {\n> > > -\t\t{\"fillfactor\", RELOPT_TYPE_INT, offsetof(SpGistOptions, \n> fillfactor)},\n> > > -\t};\n> > > -\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> RELOPT_KIND_SPGIST,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(SpGistOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> tab, lengthof(tab));\n> > > -\n> > > -}\n> > > -\n> > > -/*\n> > > \n> > > * Get the space needed to store a non-null datum of the indicated type\n> > > * in an inner tuple (that is, as a prefix or node label).\n> > > * Note the result is already rounded up to a MAXALIGN boundary.\n> > > \n> > > @@ -1336,3 +1320,25 @@ spgproperty(Oid index_oid, int attno,\n> > > \n> > > \treturn true;\n> > > \n> > > }\n> > > \n> > > +\n> > > +static options_spec_set *spgist_relopt_specset = NULL;\n> > > +\n> > > +void *\n> > > +spggetreloptspecset(void)\n> > > +{\n> > > +\tif (!spgist_relopt_specset)\n> > > +\t{\n> > > +\t\tspgist_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\tsizeof(SpGistOptions), 1);\n> > > +\n> > > +\t\toptionsSpecSetAddInt(spgist_relopt_specset, \"fillfactor\",\n> > > +\t\t\t\t\t\t \"Packs spgist index pages \n> only to this percentage\",\n> > > +\t\t\t\t\t\t\t\t \n> ShareUpdateExclusiveLock,\t\t/* since it applies only\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t * to later inserts */\n> > > +\t\t\t\t\t\t\t\t 0,\n> > > 
+\t\t\t\t\t\t\t\t \n> offsetof(SpGistOptions, fillfactor),\n> > > +\t\t\t\t\t\t\t\t \n> SPGIST_DEFAULT_FILLFACTOR,\n> > > +\t\t\t\t\t\t\t\t \n> SPGIST_MIN_FILLFACTOR, 100);\n> > > +\t}\n> > > +\treturn spgist_relopt_specset;\n> > > +}\n> > > diff --git a/src/backend/commands/createas.c\n> > > b/src/backend/commands/createas.c index 0982851..4f3dbb8 100644\n> > > --- a/src/backend/commands/createas.c\n> > > +++ b/src/backend/commands/createas.c\n> > > @@ -90,6 +90,7 @@ create_ctas_internal(List *attrList, IntoClause *into)\n> > > \n> > > \tDatum\t\ttoast_options;\n> > > \tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > > \tObjectAddress intoRelationAddr;\n> > > \n> > > +\tList\t *toastDefList;\n> > > \n> > > \t/* This code supports both CREATE TABLE AS and CREATE MATERIALIZED \n> VIEW\n> > > \t*/\n> > > \tis_matview = (into->viewQuery != NULL);\n> > > \n> > > @@ -124,14 +125,12 @@ create_ctas_internal(List *attrList, IntoClause\n> > > *into)> \n> > > \tCommandCounterIncrement();\n> > > \t\n> > > \t/* parse and validate reloptions for the toast table */\n> > > \n> > > -\ttoast_options = transformRelOptions((Datum) 0,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> create->options,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \"toast\",\n> > > -\t\t\t\t\t\t\t\t\t\t\n> validnsps,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> true, false);\n> > > \n> > > -\t(void) heap_reloptions(RELKIND_TOASTVALUE, toast_options, true);\n> > > +\toptionsDefListValdateNamespaces(create->options, validnsps);\n> > > +\ttoastDefList = optionsDefListFilterNamespaces(create->options, \n> \"toast\");\n> > > \n> > > +\ttoast_options = transformOptions(get_toast_relopt_spec_set(), (Datum) \n> 0,\n> > > +\t\t\t\t\t\t\t\t\t \n> toastDefList, 0);\n> > > \n> > > \tNewRelationCreateToastTable(intoRelationAddr.objectId, \n> toast_options);\n> > > \t\n> > > \t/* Create the \"view\" part of a materialized view. 
*/\n> > > \n> > > diff --git a/src/backend/commands/foreigncmds.c\n> > > b/src/backend/commands/foreigncmds.c index 146fa57..758ca34 100644\n> > > --- a/src/backend/commands/foreigncmds.c\n> > > +++ b/src/backend/commands/foreigncmds.c\n> > > @@ -112,7 +112,7 @@ transformGenericOptions(Oid catalogId,\n> > > \n> > > \t\t\t\t\t\tList *options,\n> > > \t\t\t\t\t\tOid fdwvalidator)\n> > > \n> > > {\n> > > \n> > > -\tList\t *resultOptions = untransformRelOptions(oldOptions);\n> > > +\tList\t *resultOptions = optionsTextArrayToDefList(oldOptions);\n> > > \n> > > \tListCell *optcell;\n> > > \tDatum\t\tresult;\n> > > \n> > > diff --git a/src/backend/commands/indexcmds.c\n> > > b/src/backend/commands/indexcmds.c index c14ca27..96d465a 100644\n> > > --- a/src/backend/commands/indexcmds.c\n> > > +++ b/src/backend/commands/indexcmds.c\n> > > @@ -19,6 +19,7 @@\n> > > \n> > > #include \"access/heapam.h\"\n> > > #include \"access/htup_details.h\"\n> > > #include \"access/reloptions.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/sysattr.h\"\n> > > #include \"access/tableam.h\"\n> > > #include \"access/xact.h\"\n> > > \n> > > @@ -531,7 +532,7 @@ DefineIndex(Oid relationId,\n> > > \n> > > \tForm_pg_am\taccessMethodForm;\n> > > \tIndexAmRoutine *amRoutine;\n> > > \tbool\t\tamcanorder;\n> > > \n> > > -\tamoptions_function amoptions;\n> > > +\tamreloptspecset_function amreloptspecsetfn;\n> > > \n> > > \tbool\t\tpartitioned;\n> > > \tbool\t\tsafe_index;\n> > > \tDatum\t\treloptions;\n> > > \n> > > @@ -837,7 +838,7 @@ DefineIndex(Oid relationId,\n> > > \n> > > \t\t\t\t\t\taccessMethodName)));\n> > > \t\n> > > \tamcanorder = amRoutine->amcanorder;\n> > > \n> > > -\tamoptions = amRoutine->amoptions;\n> > > +\tamreloptspecsetfn = amRoutine->amreloptspecset;\n> > > \n> > > \tpfree(amRoutine);\n> > > \tReleaseSysCache(tuple);\n> > > \n> > > @@ -851,10 +852,19 @@ DefineIndex(Oid relationId,\n> > > \n> > > \t/*\n> > > \t\n> > > \t * Parse AM-specific options, 
convert to text array form, validate.\n> > > \t */\n> > > \n> > > -\treloptions = transformRelOptions((Datum) 0, stmt->options,\n> > > -\t\t\t\t\t\t\t\t\t \n> NULL, NULL, false, false);\n> > > \n> > > -\t(void) index_reloptions(amoptions, reloptions, true);\n> > > +\tif (amreloptspecsetfn)\n> > > +\t{\n> > > +\t\treloptions = transformOptions(amreloptspecsetfn(),\n> > > +\t\t\t\t\t\t\t\t\t \n> (Datum) 0, stmt->options, 0);\n> > > +\t}\n> > > +\telse\n> > > +\t{\n> > > +\t\tereport(ERROR,\n> > > +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > > +\t\t\t\t errmsg(\"access method %s does not support \n> options\",\n> > > +\t\t\t\t\t\taccessMethodName)));\n> > > +\t}\n> > > \n> > > \t/*\n> > > \t\n> > > \t * Prepare arguments for index_create, primarily an IndexInfo \n> structure.\n> > > \n> > > @@ -1986,8 +1996,7 @@ ComputeIndexAttrs(IndexInfo *indexInfo,\n> > > \n> > > \t\t\t\t\tpalloc0(sizeof(Datum) * indexInfo-\n> >ii_NumIndexAttrs);\n> > > \t\t\t\n> > > \t\t\tindexInfo->ii_OpclassOptions[attn] =\n> > > \n> > > -\t\t\t\ttransformRelOptions((Datum) 0, attribute-\n> >opclassopts,\n> > > -\t\t\t\t\t\t\t\t\t\n> NULL, NULL, false, false);\n> > > +\t\t\t\toptionsDefListToTextArray(attribute-\n> >opclassopts);\n> > > \n> > > \t\t}\n> > > \t\t\n> > > \t\tattn++;\n> > > \n> > > diff --git a/src/backend/commands/tablecmds.c\n> > > b/src/backend/commands/tablecmds.c index 1c2ebe1..7f3004f 100644\n> > > --- a/src/backend/commands/tablecmds.c\n> > > +++ b/src/backend/commands/tablecmds.c\n> > > @@ -20,6 +20,7 @@\n> > > \n> > > #include \"access/heapam_xlog.h\"\n> > > #include \"access/multixact.h\"\n> > > #include \"access/reloptions.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/relscan.h\"\n> > > #include \"access/sysattr.h\"\n> > > #include \"access/tableam.h\"\n> > > \n> > > @@ -641,7 +642,6 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\n> > > ownerId,> \n> > > \tListCell *listptr;\n> > > \tAttrNumber\tattnum;\n> > > 
\tbool\t\tpartitioned;\n> > > \n> > > -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > > \n> > > \tOid\t\t\tofTypeId;\n> > > \tObjectAddress address;\n> > > \tLOCKMODE\tparentLockmode;\n> > > \n> > > @@ -789,19 +789,37 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\n> > > ownerId,> \n> > > \t/*\n> > > \t\n> > > \t * Parse and validate reloptions, if any.\n> > > \t */\n> > > \n> > > -\treloptions = transformRelOptions((Datum) 0, stmt->options, NULL,\n> > > validnsps, -\t\t\t\t\t\t\t\t\t\n> true, false);\n> > > \n> > > \tswitch (relkind)\n> > > \t{\n> > > \t\n> > > \t\tcase RELKIND_VIEW:\n> > > -\t\t\t(void) view_reloptions(reloptions, true);\n> > > +\t\t\treloptions = transformOptions(\n> > > +\t\t\t\t\t\t\t\t\t \n> get_view_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t \n> (Datum) 0, stmt->options, 0);\n> > > \n> > > \t\t\tbreak;\n> > > \t\t\n> > > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > > -\t\t\t(void) partitioned_table_reloptions(reloptions, \n> true);\n> > > +\t\t{\n> > > +\t\t\t/* If it is not listed above, then it is heap */\n> > > +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> > > +\t\t\tList\t *heapDefList;\n> > > +\n> > > +\t\t\toptionsDefListValdateNamespaces(stmt->options, \n> namespaces);\n> > > +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt-\n> >options, NULL);\n> > > +\t\t\treloptions = \n> transformOptions(get_partitioned_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t \n> (Datum) 0, heapDefList, 0);\n> > > \n> > > \t\t\tbreak;\n> > > \n> > > +\t\t}\n> > > \n> > > \t\tdefault:\n> > > -\t\t\t(void) heap_reloptions(relkind, reloptions, true);\n> > > +\t\t{\n> > > +\t\t\t/* If it is not listed above, then it is heap */\n> > > +\t\t\tchar\t *namespaces[] = HEAP_RELOPT_NAMESPACES;\n> > > +\t\t\tList\t *heapDefList;\n> > > +\n> > > +\t\t\toptionsDefListValdateNamespaces(stmt->options, \n> namespaces);\n> > > +\t\t\theapDefList = optionsDefListFilterNamespaces(stmt-\n> >options, NULL);\n> > > +\t\t\treloptions = 
\n> transformOptions(get_heap_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t \n> (Datum) 0, heapDefList, 0);\n> > > +\t\t}\n> > > \n> > > \t}\n> > > \t\n> > > \tif (stmt->ofTypename)\n> > > \n> > > @@ -4022,7 +4040,7 @@ void\n> > > \n> > > AlterTableInternal(Oid relid, List *cmds, bool recurse)\n> > > {\n> > > \n> > > \tRelation\trel;\n> > > \n> > > -\tLOCKMODE\tlockmode = AlterTableGetLockLevel(cmds);\n> > > +\tLOCKMODE\tlockmode = AlterTableGetLockLevel(relid, cmds);\n> > > \n> > > \trel = relation_open(relid, lockmode);\n> > > \n> > > @@ -4064,7 +4082,7 @@ AlterTableInternal(Oid relid, List *cmds, bool\n> > > recurse)> \n> > > * otherwise we might end up with an inconsistent dump that can't\n> > > restore.\n> > > */\n> > > \n> > > LOCKMODE\n> > > \n> > > -AlterTableGetLockLevel(List *cmds)\n> > > +AlterTableGetLockLevel(Oid relid, List *cmds)\n> > > \n> > > {\n> > > \n> > > \t/*\n> > > \t\n> > > \t * This only works if we read catalog tables using MVCC snapshots.\n> > > \n> > > @@ -4285,9 +4303,13 @@ AlterTableGetLockLevel(List *cmds)\n> > > \n> > > \t\t\t\t\t\t\t\t\t * \n> getTables() */\n> > > \t\t\t\n> > > \t\t\tcase AT_ResetRelOptions:\t/* Uses MVCC in \n> getIndexes() and\n> > > \t\t\t\n> > > \t\t\t\t\t\t\t\t\t\t\n> * getTables() */\n> > > \n> > > -\t\t\t\tcmd_lockmode = \n> AlterTableGetRelOptionsLockLevel((List *) cmd->def);\n> > > -\t\t\t\tbreak;\n> > > -\n> > > +\t\t\t\t{\n> > > +\t\t\t\t\tRelation rel = relation_open(relid, \n> NoLock); // FIXME I am not sure\n> > > how wise it is +\t\t\t\t\tcmd_lockmode = \n> AlterTableGetRelOptionsLockLevel(rel,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\tcastNode(List, cmd->def));\n> > > +\t\t\t\t\trelation_close(rel,NoLock);\n> > > +\t\t\t\t\tbreak;\n> > > +\t\t\t\t}\n> > > \n> > > \t\t\tcase AT_AttachPartition:\n> > > \t\t\t\tcmd_lockmode = ShareUpdateExclusiveLock;\n> > > \t\t\t\tbreak;\n> > > \n> > > @@ -8062,11 +8084,11 @@ ATExecSetOptions(Relation rel, const char\n> > > *colName, Node *options,> \n> > > \t/* Generate 
new proposed attoptions (text array) */\n> > > \tdatum = SysCacheGetAttr(ATTNAME, tuple, \n> Anum_pg_attribute_attoptions,\n> > > \t\n> > > \t\t\t\t\t\t\t&isnull);\n> > > \n> > > -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> > > -\t\t\t\t\t\t\t\t\t \n> castNode(List, options), NULL, NULL,\n> > > -\t\t\t\t\t\t\t\t\t \n> false, isReset);\n> > > -\t/* Validate new options */\n> > > -\t(void) attribute_reloptions(newOptions, true);\n> > > +\n> > > +\tnewOptions = transformOptions(get_attribute_options_spec_set(),\n> > > +\t\t\t\t\t\t\t\t isnull ? \n> (Datum) 0 : datum,\n> > > +\t\t\t\t\t castNode(List, options), \n> OPTIONS_PARSE_MODE_FOR_ALTER |\n> > > +\t\t\t\t\t\t\t (isReset ? \n> OPTIONS_PARSE_MODE_FOR_RESET : 0));\n> > > \n> > > \t/* Build new tuple. */\n> > > \tmemset(repl_null, false, sizeof(repl_null));\n> > > \n> > > @@ -13704,7 +13726,8 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > > AlterTableType operation,> \n> > > \tDatum\t\trepl_val[Natts_pg_class];\n> > > \tbool\t\trepl_null[Natts_pg_class];\n> > > \tbool\t\trepl_repl[Natts_pg_class];\n> > > \n> > > -\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > > +\tList\t *toastDefList;\n> > > +\toptions_parse_mode parse_mode;\n> > > \n> > > \tif (defList == NIL && operation != AT_ReplaceRelOptions)\n> > > \t\n> > > \t\treturn;\t\t\t\t\t/* nothing to do \n> */\n> > > \n> > > @@ -13734,27 +13757,68 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > > AlterTableType operation,> \n> > > \t}\n> > > \t\n> > > \t/* Generate new proposed reloptions (text array) */\n> > > \n> > > -\tnewOptions = transformRelOptions(isnull ? 
(Datum) 0 : datum,\n> > > -\t\t\t\t\t\t\t\t\t \n> defList, NULL, validnsps, false,\n> > > -\t\t\t\t\t\t\t\t\t \n> operation == AT_ResetRelOptions);\n> > > \n> > > \t/* Validate */\n> > > \n> > > +\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > > +\tif (operation == AT_ResetRelOptions)\n> > > +\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > > +\n> > > \n> > > \tswitch (rel->rd_rel->relkind)\n> > > \t{\n> > > \t\n> > > \t\tcase RELKIND_RELATION:\n> > > -\t\tcase RELKIND_TOASTVALUE:\n> > > +\t\tcase RELKIND_TOASTVALUE: // FIXME why it is here???\n> > > \n> > > \t\tcase RELKIND_MATVIEW:\n> > > -\t\t\t(void) heap_reloptions(rel->rd_rel->relkind, \n> newOptions, true);\n> > > +\t\t\t{\n> > > +\t\t\t\tchar\t *namespaces[] = \n> HEAP_RELOPT_NAMESPACES;\n> > > +\t\t\t\tList\t *heapDefList;\n> > > +\n> > > +\t\t\t\toptionsDefListValdateNamespaces(defList, \n> namespaces);\n> > > +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t defList, NULL);\n> > > +\t\t\t\tnewOptions = \n> transformOptions(get_heap_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t isnull ? (Datum) 0 : datum,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t heapDefList, parse_mode);\n> > > +\t\t\t}\n> > > \n> > > \t\t\tbreak;\n> > > \n> > > +\n> > > \n> > > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > > -\t\t\t(void) partitioned_table_reloptions(newOptions, \n> true);\n> > > -\t\t\tbreak;\n> > > +\t\t\t{\n> > > +\t\t\t\tchar\t *namespaces[] = \n> HEAP_RELOPT_NAMESPACES;\n> > > +\t\t\t\tList\t *heapDefList;\n> > > +\n> > > +\t\t\t\toptionsDefListValdateNamespaces(defList, \n> namespaces);\n> > > +\t\t\t\theapDefList = optionsDefListFilterNamespaces(\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t defList, NULL);\n> > > +\t\t\t\tnewOptions = \n> transformOptions(get_partitioned_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t isnull ? 
(Datum) 0 : datum,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t heapDefList, parse_mode);\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > \n> > > \t\tcase RELKIND_VIEW:\n> > > -\t\t\t(void) view_reloptions(newOptions, true);\n> > > -\t\t\tbreak;\n> > > +\t\t\t{\n> > > +\n> > > +\t\t\t\tnewOptions = transformOptions(\n> > > +\t\t\t\t\t\t\t\t\t \n> get_view_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t \n> datum, defList, parse_mode);\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > \n> > > \t\tcase RELKIND_INDEX:\n> > > \n> > > \t\tcase RELKIND_PARTITIONED_INDEX:\n> > > -\t\t\t(void) index_reloptions(rel->rd_indam->amoptions, \n> newOptions, true);\n> > > +\t\t\tif (! rel->rd_indam->amreloptspecset)\n> > > +\t\t\t{\n> > > +\t\t\t\tereport(ERROR,\n> > > +\t\t\t\t\t\t\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > > +\t\t\t\t\t\t errmsg(\"index %s does not \n> support options\",\n> > > +\t\t\t\t\t\t\t\t\n> RelationGetRelationName(rel))));\n> > > +\t\t\t\tbreak;\n> > > +\t\t\t}\n> > > +\t\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > > +\t\t\tif (operation == AT_ResetRelOptions)\n> > > +\t\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > > +\t\t\tnewOptions = transformOptions(\n> > > +\t\t\t\t\t\t\t\t\trel-\n> >rd_indam->amreloptspecset(),\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \tisnull ? 
(Datum) 0 : datum,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \tdefList, parse_mode);\n> > > \n> > > \t\t\tbreak;\n> > > \t\t\n> > > \t\tdefault:\n> > > \t\t\tereport(ERROR,\n> > > \n> > > @@ -13769,7 +13833,7 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > > AlterTableType operation,> \n> > > \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> > > \t{\n> > > \t\n> > > \t\tQuery\t *view_query = get_view_query(rel);\n> > > \n> > > -\t\tList\t *view_options = \n> untransformRelOptions(newOptions);\n> > > +\t\tList\t *view_options = \n> optionsTextArrayToDefList(newOptions);\n> > > \n> > > \t\tListCell *cell;\n> > > \t\tbool\t\tcheck_option = false;\n> > > \n> > > @@ -13853,11 +13917,15 @@ ATExecSetRelOptions(Relation rel, List *defList,\n> > > AlterTableType operation,> \n> > > \t\t\t\t\t\t\t\t\t\n> &isnull);\n> > > \t\t\n> > > \t\t}\n> > > \n> > > -\t\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> defList, \"toast\", validnsps, false,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> operation == AT_ResetRelOptions);\n> > > +\t\tparse_mode = OPTIONS_PARSE_MODE_FOR_ALTER;\n> > > +\t\tif (operation == AT_ResetRelOptions)\n> > > +\t\t\tparse_mode |= OPTIONS_PARSE_MODE_FOR_RESET;\n> > > +\n> > > +\t\ttoastDefList = optionsDefListFilterNamespaces(defList, \n> \"toast\");\n> > > \n> > > -\t\t(void) heap_reloptions(RELKIND_TOASTVALUE, newOptions, \n> true);\n> > > +\t\tnewOptions = transformOptions(get_toast_relopt_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t \n> isnull ? 
(Datum) 0 : datum,\n> > > +\t\t\t\t\t\t\t\t\t \n> toastDefList, parse_mode);\n> > > \n> > > \t\tmemset(repl_val, 0, sizeof(repl_val));\n> > > \t\tmemset(repl_null, false, sizeof(repl_null));\n> > > \n> > > diff --git a/src/backend/commands/tablespace.c\n> > > b/src/backend/commands/tablespace.c index 4b96eec..912699b 100644\n> > > --- a/src/backend/commands/tablespace.c\n> > > +++ b/src/backend/commands/tablespace.c\n> > > @@ -345,10 +345,9 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)\n> > > \n> > > \tnulls[Anum_pg_tablespace_spcacl - 1] = true;\n> > > \t\n> > > \t/* Generate new proposed spcoptions (text array) */\n> > > \n> > > -\tnewOptions = transformRelOptions((Datum) 0,\n> > > -\t\t\t\t\t\t\t\t\t \n> stmt->options,\n> > > -\t\t\t\t\t\t\t\t\t \n> NULL, NULL, false, false);\n> > > -\t(void) tablespace_reloptions(newOptions, true);\n> > > +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t(Datum) 0, stmt->options, 0);\n> > > +\n> > > \n> > > \tif (newOptions != (Datum) 0)\n> > > \t\n> > > \t\tvalues[Anum_pg_tablespace_spcoptions - 1] = newOptions;\n> > > \t\n> > > \telse\n> > > \n> > > @@ -1053,10 +1052,11 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt\n> > > *stmt)> \n> > > \t/* Generate new proposed spcoptions (text array) */\n> > > \tdatum = heap_getattr(tup, Anum_pg_tablespace_spcoptions,\n> > > \t\n> > > \t\t\t\t\t\t RelationGetDescr(rel), \n> &isnull);\n> > > \n> > > -\tnewOptions = transformRelOptions(isnull ? (Datum) 0 : datum,\n> > > -\t\t\t\t\t\t\t\t\t \n> stmt->options, NULL, NULL, false,\n> > > -\t\t\t\t\t\t\t\t\t \n> stmt->isReset);\n> > > -\t(void) tablespace_reloptions(newOptions, true);\n> > > +\tnewOptions = transformOptions(get_tablespace_options_spec_set(),\n> > > +\t\t\t\t\t\t\t\t isnull ? \n> (Datum) 0 : datum,\n> > > +\t\t\t\t\t\t\t\t stmt-\n> >options,\n> > > +\t\t\t\t\t\t\t\t \n> OPTIONS_PARSE_MODE_FOR_ALTER |\n> > > +\t\t\t\t\t\t (stmt->isReset ? 
\n> OPTIONS_PARSE_MODE_FOR_RESET : 0));\n> > > \n> > > \t/* Build new tuple. */\n> > > \tmemset(repl_null, false, sizeof(repl_null));\n> > > \n> > > diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c\n> > > index 5564dc3..0370be7 100644\n> > > --- a/src/backend/foreign/foreign.c\n> > > +++ b/src/backend/foreign/foreign.c\n> > > @@ -78,7 +78,7 @@ GetForeignDataWrapperExtended(Oid fdwid, bits16 flags)\n> > > \n> > > \tif (isnull)\n> > > \t\n> > > \t\tfdw->options = NIL;\n> > > \t\n> > > \telse\n> > > \n> > > -\t\tfdw->options = untransformRelOptions(datum);\n> > > +\t\tfdw->options = optionsTextArrayToDefList(datum);\n> > > \n> > > \tReleaseSysCache(tp);\n> > > \n> > > @@ -165,7 +165,7 @@ GetForeignServerExtended(Oid serverid, bits16 flags)\n> > > \n> > > \tif (isnull)\n> > > \t\n> > > \t\tserver->options = NIL;\n> > > \t\n> > > \telse\n> > > \n> > > -\t\tserver->options = untransformRelOptions(datum);\n> > > +\t\tserver->options = optionsTextArrayToDefList(datum);\n> > > \n> > > \tReleaseSysCache(tp);\n> > > \n> > > @@ -233,7 +233,7 @@ GetUserMapping(Oid userid, Oid serverid)\n> > > \n> > > \tif (isnull)\n> > > \t\n> > > \t\tum->options = NIL;\n> > > \t\n> > > \telse\n> > > \n> > > -\t\tum->options = untransformRelOptions(datum);\n> > > +\t\tum->options = optionsTextArrayToDefList(datum);\n> > > \n> > > \tReleaseSysCache(tp);\n> > > \n> > > @@ -270,7 +270,7 @@ GetForeignTable(Oid relid)\n> > > \n> > > \tif (isnull)\n> > > \t\n> > > \t\tft->options = NIL;\n> > > \t\n> > > \telse\n> > > \n> > > -\t\tft->options = untransformRelOptions(datum);\n> > > +\t\tft->options = optionsTextArrayToDefList(datum);\n> > > \n> > > \tReleaseSysCache(tp);\n> > > \n> > > @@ -303,7 +303,7 @@ GetForeignColumnOptions(Oid relid, AttrNumber attnum)\n> > > \n> > > \tif (isnull)\n> > > \t\n> > > \t\toptions = NIL;\n> > > \t\n> > > \telse\n> > > \n> > > -\t\toptions = untransformRelOptions(datum);\n> > > +\t\toptions = optionsTextArrayToDefList(datum);\n> > > \n> > > 
\tReleaseSysCache(tp);\n> > > \n> > > @@ -572,7 +572,7 @@ pg_options_to_table(PG_FUNCTION_ARGS)\n> > > \n> > > \tDatum\t\tarray = PG_GETARG_DATUM(0);\n> > > \t\n> > > \tdeflist_to_tuplestore((ReturnSetInfo *) fcinfo->resultinfo,\n> > > \n> > > -\t\t\t\t\t\t \n> untransformRelOptions(array));\n> > > +\t\t\t\t\t\t \n> optionsTextArrayToDefList(array));\n> > > \n> > > \treturn (Datum) 0;\n> > > \n> > > }\n> > > \n> > > @@ -643,7 +643,7 @@ is_conninfo_option(const char *option, Oid context)\n> > > \n> > > Datum\n> > > postgresql_fdw_validator(PG_FUNCTION_ARGS)\n> > > {\n> > > \n> > > -\tList\t *options_list = \n> untransformRelOptions(PG_GETARG_DATUM(0));\n> > > +\tList\t *options_list = \n> optionsTextArrayToDefList(PG_GETARG_DATUM(0));\n> > > \n> > > \tOid\t\t\tcatalog = PG_GETARG_OID(1);\n> > > \t\n> > > \tListCell *cell;\n> > > \n> > > diff --git a/src/backend/parser/parse_utilcmd.c\n> > > b/src/backend/parser/parse_utilcmd.c index 313d7b6..1fe41b4 100644\n> > > --- a/src/backend/parser/parse_utilcmd.c\n> > > +++ b/src/backend/parser/parse_utilcmd.c\n> > > @@ -1757,7 +1757,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation\n> > > source_idx,> \n> > > \t\t/* Add the operator class name, if non-default */\n> > > \t\tiparam->opclass = get_opclass(indclass->values[keyno], \n> keycoltype);\n> > > \t\tiparam->opclassopts =\n> > > \n> > > -\t\t\tuntransformRelOptions(get_attoptions(source_relid, \n> keyno + 1));\n> > > +\t\t\t\n> optionsTextArrayToDefList(get_attoptions(source_relid, keyno + 1));\n> > > \n> > > \t\tiparam->ordering = SORTBY_DEFAULT;\n> > > \t\tiparam->nulls_ordering = SORTBY_NULLS_DEFAULT;\n> > > \n> > > @@ -1821,7 +1821,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation\n> > > source_idx,> \n> > > \tdatum = SysCacheGetAttr(RELOID, ht_idxrel,\n> > > \t\n> > > \t\t\t\t\t\t\t\n> Anum_pg_class_reloptions, &isnull);\n> > > \t\n> > > \tif (!isnull)\n> > > \n> > > -\t\tindex->options = untransformRelOptions(datum);\n> > > +\t\tindex->options = 
optionsTextArrayToDefList(datum);\n> > > \n> > > \t/* If it's a partial index, decompile and append the predicate */\n> > > \tdatum = SysCacheGetAttr(INDEXRELID, ht_idx,\n> > > \n> > > diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n> > > index bf085aa..d12ab1a 100644\n> > > --- a/src/backend/tcop/utility.c\n> > > +++ b/src/backend/tcop/utility.c\n> > > @@ -1155,6 +1155,7 @@ ProcessUtilitySlow(ParseState *pstate,\n> > > \n> > > \t\t\t\t\t\t\tCreateStmt *cstmt = \n> (CreateStmt *) stmt;\n> > > \t\t\t\t\t\t\tDatum\t\t\n> toast_options;\n> > > \t\t\t\t\t\t\tstatic char \n> *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> > > \n> > > +\t\t\t\t\t\t\tList\t \n> *toastDefList;\n> > > \n> > > \t\t\t\t\t\t\t/* Remember \n> transformed RangeVar for LIKE */\n> > > \t\t\t\t\t\t\ttable_rv = cstmt-\n> >relation;\n> > > \n> > > @@ -1178,15 +1179,17 @@ ProcessUtilitySlow(ParseState *pstate,\n> > > \n> > > \t\t\t\t\t\t\t * parse and \n> validate reloptions for the toast\n> > > \t\t\t\t\t\t\t * table\n> > > \t\t\t\t\t\t\t */\n> > > \n> > > -\t\t\t\t\t\t\ttoast_options = \n> transformRelOptions((Datum) 0,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\tcstmt->options,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\"toast\",\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\tvalidnsps,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\ttrue,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\tfalse);\n> > > -\t\t\t\t\t\t\t(void) \n> heap_reloptions(RELKIND_TOASTVALUE,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t toast_options,\n> > > -\t\t\t\t\t\t\t\t\t\t\n> \t\t true);\n> > > +\n> > > +\t\t\t\t\t\t\t\n> optionsDefListValdateNamespaces(\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t ((CreateStmt *) stmt)->options,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\tvalidnsps);\n> > > +\n> > > +\t\t\t\t\t\t\ttoastDefList = \n> optionsDefListFilterNamespaces(\n> > > +\t\t\t\t\t\t\t\t\t\n> ((CreateStmt *) stmt)->options, \"toast\");\n> > > +\n> > > +\t\t\t\t\t\t\ttoast_options = \n> transformOptions(\n> > > +\t\t\t\t\t\t\t\t\t 
\n> get_toast_relopt_spec_set(), (Datum) 0,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t toastDefList, 0);\n> > > \n> > > \t\t\t\t\t\t\t\n> NewRelationCreateToastTable(address.objectId,\n> > > \t\t\t\t\t\t\t\n> > > \t\t\t\t\t\t\t\t\t\t\n> \t\t\t\ttoast_options);\n> > > \n> > > @@ -1295,9 +1298,12 @@ ProcessUtilitySlow(ParseState *pstate,\n> > > \n> > > \t\t\t\t\t * lock on (for example) a relation \n> on which we have no\n> > > \t\t\t\t\t * permissions.\n> > > \t\t\t\t\t */\n> > > \n> > > -\t\t\t\t\tlockmode = \n> AlterTableGetLockLevel(atstmt->cmds);\n> > > -\t\t\t\t\trelid = \n> AlterTableLookupRelation(atstmt, lockmode);\n> > > -\n> > > +\t\t\t\t\trelid = \n> AlterTableLookupRelation(atstmt, NoLock); // FIXME!\n> > > +\t\t\t\t\tif (OidIsValid(relid))\n> > > +\t\t\t\t\t{\n> > > +\t\t\t\t\t\tlockmode = \n> AlterTableGetLockLevel(relid, atstmt->cmds);\n> > > +\t\t\t\t\t\trelid = \n> AlterTableLookupRelation(atstmt, lockmode);\n> > > +\t\t\t\t\t}\n> > > \n> > > \t\t\t\t\tif (OidIsValid(relid))\n> > > \t\t\t\t\t{\n> > > \t\t\t\t\t\n> > > \t\t\t\t\t\tAlterTableUtilityContext \n> atcontext;\n> > > \n> > > diff --git a/src/backend/utils/cache/attoptcache.c\n> > > b/src/backend/utils/cache/attoptcache.c index 72d89cb..f651129 100644\n> > > --- a/src/backend/utils/cache/attoptcache.c\n> > > +++ b/src/backend/utils/cache/attoptcache.c\n> > > @@ -16,6 +16,7 @@\n> > > \n> > > */\n> > > \n> > > #include \"postgres.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"access/reloptions.h\"\n> > > #include \"utils/attoptcache.h\"\n> > > #include \"utils/catcache.h\"\n> > > \n> > > @@ -148,7 +149,8 @@ get_attribute_options(Oid attrelid, int attnum)\n> > > \n> > > \t\t\t\topts = NULL;\n> > > \t\t\t\n> > > \t\t\telse\n> > > \t\t\t{\n> > > \n> > > -\t\t\t\tbytea\t *bytea_opts = \n> attribute_reloptions(datum, false);\n> > > +\t\t\t\tbytea *bytea_opts = \n> optionsTextArrayToBytea(\n> > > +\t\t\t\t\t\t\t\t\t\n> get_attribute_options_spec_set(), datum, 0);\n> > > 
\n> > > \t\t\t\topts = \n> MemoryContextAlloc(CacheMemoryContext,\n> > > \t\t\t\t\n> > > \t\t\t\t\t\t\t\t\t\t\n> VARSIZE(bytea_opts));\n> > > \n> > > diff --git a/src/backend/utils/cache/relcache.c\n> > > b/src/backend/utils/cache/relcache.c index 13d9994..f22c2d9 100644\n> > > --- a/src/backend/utils/cache/relcache.c\n> > > +++ b/src/backend/utils/cache/relcache.c\n> > > @@ -441,7 +441,7 @@ static void\n> > > \n> > > RelationParseRelOptions(Relation relation, HeapTuple tuple)\n> > > {\n> > > \n> > > \tbytea\t *options;\n> > > \n> > > -\tamoptions_function amoptsfn;\n> > > +\tamreloptspecset_function amoptspecsetfn;\n> > > \n> > > \trelation->rd_options = NULL;\n> > > \n> > > @@ -456,11 +456,11 @@ RelationParseRelOptions(Relation relation, HeapTuple\n> > > tuple)> \n> > > \t\tcase RELKIND_VIEW:\n> > > \t\tcase RELKIND_MATVIEW:\n> > > \n> > > \t\tcase RELKIND_PARTITIONED_TABLE:\n> > > -\t\t\tamoptsfn = NULL;\n> > > +\t\t\tamoptspecsetfn = NULL;\n> > > \n> > > \t\t\tbreak;\n> > > \t\t\n> > > \t\tcase RELKIND_INDEX:\n> > > \n> > > \t\tcase RELKIND_PARTITIONED_INDEX:\n> > > -\t\t\tamoptsfn = relation->rd_indam->amoptions;\n> > > +\t\t\tamoptspecsetfn = relation->rd_indam->amreloptspecset;\n> > > \n> > > \t\t\tbreak;\n> > > \t\t\n> > > \t\tdefault:\n> > > \t\t\treturn;\n> > > \n> > > @@ -471,7 +471,7 @@ RelationParseRelOptions(Relation relation, HeapTuple\n> > > tuple)> \n> > > \t * we might not have any other for pg_class yet (consider executing \n> this\n> > > \t * code for pg_class itself)\n> > > \t */\n> > > \n> > > -\toptions = extractRelOptions(tuple, GetPgClassDescriptor(), \n> amoptsfn);\n> > > +\toptions = extractRelOptions(tuple, GetPgClassDescriptor(),\n> > > amoptspecsetfn);> \n> > > \t/*\n> > > \t\n> > > \t * Copy parsed data into CacheMemoryContext. 
To guard against the\n> > > \n> > > diff --git a/src/backend/utils/cache/spccache.c\n> > > b/src/backend/utils/cache/spccache.c index 5870f43..87f2fa5 100644\n> > > --- a/src/backend/utils/cache/spccache.c\n> > > +++ b/src/backend/utils/cache/spccache.c\n> > > @@ -148,7 +148,8 @@ get_tablespace(Oid spcid)\n> > > \n> > > \t\t\topts = NULL;\n> > > \t\t\n> > > \t\telse\n> > > \t\t{\n> > > \n> > > -\t\t\tbytea\t *bytea_opts = \n> tablespace_reloptions(datum, false);\n> > > +\t\t\tbytea *bytea_opts = optionsTextArrayToBytea(\n> > > +\t\t\t\t\t\t\t\t\n> get_tablespace_options_spec_set(), datum, 0);\n> > > \n> > > \t\t\topts = MemoryContextAlloc(CacheMemoryContext, \n> VARSIZE(bytea_opts));\n> > > \t\t\tmemcpy(opts, bytea_opts, VARSIZE(bytea_opts));\n> > > \n> > > diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h\n> > > index d357ebb..b8fb6b9 100644\n> > > --- a/src/include/access/amapi.h\n> > > +++ b/src/include/access/amapi.h\n> > > @@ -136,10 +136,6 @@ typedef void (*amcostestimate_function) (struct\n> > > PlannerInfo *root,> \n> > > \t\t\t\t\t\t\t\t\t\t\n> double *indexCorrelation,\n> > > \t\t\t\t\t\t\t\t\t\t\n> double *indexPages);\n> > > \n> > > -/* parse index reloptions */\n> > > -typedef bytea *(*amoptions_function) (Datum reloptions,\n> > > -\t\t\t\t\t\t\t\t\t \n> bool validate);\n> > > -\n> > > \n> > > /* report AM, index, or index column property */\n> > > typedef bool (*amproperty_function) (Oid index_oid, int attno,\n> > > \n> > > \t\t\t\t\t\t\t\t\t \n> IndexAMProperty prop, const char *propname,\n> > > \n> > > @@ -186,6 +182,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc\n> > > scan);> \n> > > /* restore marked scan position */\n> > > typedef void (*amrestrpos_function) (IndexScanDesc scan);\n> > > \n> > > +/* get catalog of reloptions definitions */\n> > > +typedef void *(*amreloptspecset_function) ();\n> > > +\n> > > \n> > > /*\n> > > \n> > > * Callback function signatures - for parallel index scans.\n> > > */\n> > > \n> > > @@ 
-263,7 +262,6 @@ typedef struct IndexAmRoutine\n> > > \n> > > \tamvacuumcleanup_function amvacuumcleanup;\n> > > \tamcanreturn_function amcanreturn;\t/* can be NULL */\n> > > \tamcostestimate_function amcostestimate;\n> > > \n> > > -\tamoptions_function amoptions;\n> > > \n> > > \tamproperty_function amproperty; /* can be NULL */\n> > > \tambuildphasename_function ambuildphasename; /* can be NULL */\n> > > \tamvalidate_function amvalidate;\n> > > \n> > > @@ -275,6 +273,7 @@ typedef struct IndexAmRoutine\n> > > \n> > > \tamendscan_function amendscan;\n> > > \tammarkpos_function ammarkpos;\t/* can be NULL */\n> > > \tamrestrpos_function amrestrpos; /* can be NULL */\n> > > \n> > > +\tamreloptspecset_function amreloptspecset; /* can be NULL */\n> > > \n> > > \t/* interface functions to support parallel index scans */\n> > > \tamestimateparallelscan_function amestimateparallelscan; /* can be \n> NULL\n> > > \t*/\n> > > \n> > > diff --git a/src/include/access/brin.h b/src/include/access/brin.h\n> > > index 4e2be13..25b3456 100644\n> > > --- a/src/include/access/brin.h\n> > > +++ b/src/include/access/brin.h\n> > > @@ -36,6 +36,8 @@ typedef struct BrinStatsData\n> > > \n> > > #define BRIN_DEFAULT_PAGES_PER_RANGE\t128\n> > > \n> > > +#define BRIN_MIN_PAGES_PER_RANGE\t\t1\n> > > +#define BRIN_MAX_PAGES_PER_RANGE\t\t131072\n> > > \n> > > #define BrinGetPagesPerRange(relation) \\\n> > > \n> > > \t(AssertMacro(relation->rd_rel->relkind == RELKIND_INDEX && \\\n> > > \t\n> > > \t\t\t\t relation->rd_rel->relam == BRIN_AM_OID), \\\n> > > \n> > > diff --git a/src/include/access/brin_internal.h\n> > > b/src/include/access/brin_internal.h index 79440eb..a798a96 100644\n> > > --- a/src/include/access/brin_internal.h\n> > > +++ b/src/include/access/brin_internal.h\n> > > @@ -14,6 +14,7 @@\n> > > \n> > > #include \"access/amapi.h\"\n> > > #include \"storage/bufpage.h\"\n> > > #include \"utils/typcache.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > /*\n> > > \n> > > @@ 
-108,6 +109,7 @@ extern IndexBulkDeleteResult\n> > > *brinbulkdelete(IndexVacuumInfo *info,> \n> > > extern IndexBulkDeleteResult *brinvacuumcleanup(IndexVacuumInfo *info,\n> > > \n> > > \t\t\t\t\t\t\t\t\t\t\n> \t\tIndexBulkDeleteResult *stats);\n> > > \n> > > extern bytea *brinoptions(Datum reloptions, bool validate);\n> > > \n> > > +extern void * bringetreloptspecset (void);\n> > > \n> > > /* brin_validate.c */\n> > > extern bool brinvalidate(Oid opclassoid);\n> > > \n> > > diff --git a/src/include/access/gin_private.h\n> > > b/src/include/access/gin_private.h index 670a40b..2b7c25c 100644\n> > > --- a/src/include/access/gin_private.h\n> > > +++ b/src/include/access/gin_private.h\n> > > @@ -108,6 +108,7 @@ extern Datum *ginExtractEntries(GinState *ginstate,\n> > > OffsetNumber attnum,> \n> > > extern OffsetNumber gintuple_get_attrnum(GinState *ginstate, IndexTuple\n> > > tuple); extern Datum gintuple_get_key(GinState *ginstate, IndexTuple\n> > > tuple,> \n> > > \t\t\t\t\t\t\t GinNullCategory \n> *category);\n> > > \n> > > +extern void *gingetreloptspecset(void);\n> > > \n> > > /* gininsert.c */\n> > > extern IndexBuildResult *ginbuild(Relation heap, Relation index,\n> > > \n> > > diff --git a/src/include/access/gist_private.h\n> > > b/src/include/access/gist_private.h index 553d364..015b75a 100644\n> > > --- a/src/include/access/gist_private.h\n> > > +++ b/src/include/access/gist_private.h\n> > > @@ -22,6 +22,7 @@\n> > > \n> > > #include \"storage/buffile.h\"\n> > > #include \"utils/hsearch.h\"\n> > > #include \"access/genam.h\"\n> > > \n> > > +#include \"access/reloptions.h\" //FIXME! 
should be replaced with options.h\n> > > finally> \n> > > /*\n> > > \n> > > * Maximum number of \"halves\" a page can be split into in one operation.\n> > > \n> > > @@ -388,6 +389,7 @@ typedef enum GistOptBufferingMode\n> > > \n> > > \tGIST_OPTION_BUFFERING_OFF\n> > > \n> > > } GistOptBufferingMode;\n> > > \n> > > +\n> > > \n> > > /*\n> > > \n> > > * Storage type for GiST's reloptions\n> > > */\n> > > \n> > > @@ -478,7 +480,7 @@ extern void gistadjustmembers(Oid opfamilyoid,\n> > > \n> > > #define GIST_MIN_FILLFACTOR\t\t\t10\n> > > #define GIST_DEFAULT_FILLFACTOR\t\t90\n> > > \n> > > -extern bytea *gistoptions(Datum reloptions, bool validate);\n> > > +extern void *gistgetreloptspecset(void);\n> > > \n> > > extern bool gistproperty(Oid index_oid, int attno,\n> > > \n> > > \t\t\t\t\t\t IndexAMProperty prop, const \n> char *propname,\n> > > \t\t\t\t\t\t bool *res, bool *isnull);\n> > > \n> > > diff --git a/src/include/access/hash.h b/src/include/access/hash.h\n> > > index 1cce865..91922ef 100644\n> > > --- a/src/include/access/hash.h\n> > > +++ b/src/include/access/hash.h\n> > > @@ -378,7 +378,6 @@ extern IndexBulkDeleteResult\n> > > *hashbulkdelete(IndexVacuumInfo *info,> \n> > > \t\t\t\t\t\t\t\t\t\t\n> \t void *callback_state);\n> > > \n> > > extern IndexBulkDeleteResult *hashvacuumcleanup(IndexVacuumInfo *info,\n> > > \n> > > \t\t\t\t\t\t\t\t\t\t\n> \t\tIndexBulkDeleteResult *stats);\n> > > \n> > > -extern bytea *hashoptions(Datum reloptions, bool validate);\n> > > \n> > > extern bool hashvalidate(Oid opclassoid);\n> > > extern void hashadjustmembers(Oid opfamilyoid,\n> > > \n> > > \t\t\t\t\t\t\t Oid opclassoid,\n> > > \n> > > @@ -470,6 +469,7 @@ extern BlockNumber\n> > > _hash_get_newblock_from_oldbucket(Relation rel, Bucket old_bu> \n> > > extern Bucket _hash_get_newbucket_from_oldbucket(Relation rel, Bucket\n> > > old_bucket,> \n> > > \t\t\t\t\t\t\t\t\t\t\n> \t\t uint32 lowmask, uint32 maxbucket);\n> > > \n> > > extern void _hash_kill_items(IndexScanDesc 
scan);\n> > > \n> > > +extern void *hashgetreloptspecset(void);\n> > > \n> > > /* hash.c */\n> > > extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,\n> > > \n> > > diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h\n> > > index 30a216e..1fcb5f5 100644\n> > > --- a/src/include/access/nbtree.h\n> > > +++ b/src/include/access/nbtree.h\n> > > @@ -1252,7 +1252,7 @@ extern void _bt_end_vacuum(Relation rel);\n> > > \n> > > extern void _bt_end_vacuum_callback(int code, Datum arg);\n> > > extern Size BTreeShmemSize(void);\n> > > extern void BTreeShmemInit(void);\n> > > \n> > > -extern bytea *btoptions(Datum reloptions, bool validate);\n> > > +extern void * btgetreloptspecset (void);\n> > > \n> > > extern bool btproperty(Oid index_oid, int attno,\n> > > \n> > > \t\t\t\t\t IndexAMProperty prop, const char \n> *propname,\n> > > \t\t\t\t\t bool *res, bool *isnull);\n> > > \n> > > diff --git a/src/include/access/options.h b/src/include/access/options.h\n> > > new file mode 100644\n> > > index 0000000..34e2917\n> > > --- /dev/null\n> > > +++ b/src/include/access/options.h\n> > > @@ -0,0 +1,245 @@\n> > > +/*-----------------------------------------------------------------------\n> > > -- + *\n> > > + * options.h\n> > > + *\t Core support for relation and tablespace options\n> > > (pg_class.reloptions\n> > > + *\t and pg_tablespace.spcoptions)\n> > > + *\n> > > + * Note: the functions dealing with text-array options values declare\n> > > + * them as Datum, not ArrayType *, to avoid needing to include array.h\n> > > + * into a lot of low-level code.\n> > > + *\n> > > + *\n> > > + * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group\n> > > + * Portions Copyright (c) 1994, Regents of the University of California\n> > > + *\n> > > + * src/include/access/options.h\n> > > + *\n> > > +\n> > > *------------------------------------------------------------------------\n> > > - + */\n> > > +#ifndef OPTIONS_H\n> > > +#define OPTIONS_H\n> > > 
+\n> > > +#include \"storage/lock.h\"\n> > > +#include \"nodes/pg_list.h\"\n> > > +\n> > > +\n> > > +/* supported option types */\n> > > +typedef enum option_type\n> > > +{\n> > > +\tOPTION_TYPE_BOOL,\n> > > +\tOPTION_TYPE_INT,\n> > > +\tOPTION_TYPE_REAL,\n> > > +\tOPTION_TYPE_ENUM,\n> > > +\tOPTION_TYPE_STRING\n> > > +}\toption_type;\n> > > +\n> > > +\n> > > +typedef enum option_value_status\n> > > +{\n> > > +\tOPTION_VALUE_STATUS_EMPTY,\t/* Option was just initialized */\n> > > +\tOPTION_VALUE_STATUS_RAW,\t/* Option just came from syntax analyzer in\n> > > +\t\t\t\t\t\t\t\t * has name, \n> and raw (unparsed) value */\n> > > +\tOPTION_VALUE_STATUS_PARSED, /* Option was parsed and has link to \n> catalog\n> > > +\t\t\t\t\t\t\t\t * entry and \n> proper value */\n> > > +\tOPTION_VALUE_STATUS_FOR_RESET\t\t/* This option came from \n> ALTER xxx\n> > > +\t\t\t\t\t\t\t\t\t\t\n> * RESET */\n> > > +}\toption_value_status;\n> > > +\n> > > +/* flags for reloptinon definition */\n> > > +typedef enum option_spec_flags\n> > > +{\n> > > +\tOPTION_DEFINITION_FLAG_FORBID_ALTER = (1 << 0),\t\t/* \n> Altering this option\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * is forbidden */\n> > > +\tOPTION_DEFINITION_FLAG_IGNORE = (1 << 1),\t/* Skip this option while\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t * parsing. 
Used for WITH OIDS\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t * special case */\n> > > +\tOPTION_DEFINITION_FLAG_REJECT = (1 << 2)\t/* Option will be \n> rejected\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t * when comes from syntax\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t * analyzer, but still have\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t * default value and offset */\n> > > +} option_spec_flags;\n> > > +\n> > > +/* flags that tells reloption parser how to parse*/\n> > > +typedef enum options_parse_mode\n> > > +{\n> > > +\tOPTIONS_PARSE_MODE_VALIDATE = (1 << 0),\n> > > +\tOPTIONS_PARSE_MODE_FOR_ALTER = (1 << 1),\n> > > +\tOPTIONS_PARSE_MODE_FOR_RESET = (1 << 2)\n> > > +} options_parse_mode;\n> > > +\n> > > +\n> > > +\n> > > +/*\n> > > + * opt_enum_elt_def -- One member of the array of acceptable values\n> > > + * of an enum reloption.\n> > > + */\n> > > +typedef struct opt_enum_elt_def\n> > > +{\n> > > +\tconst char *string_val;\n> > > +\tint\t\t\tsymbol_val;\n> > > +} opt_enum_elt_def;\n> > > +\n> > > +\n> > > +/* generic structure to store Option Spec information */\n> > > +typedef struct option_spec_basic\n> > > +{\n> > > +\tconst char *name;\t\t\t/* must be first (used as list \n> termination\n> > > +\t\t\t\t\t\t\t\t * marker) */\n> > > +\tconst char *desc;\n> > > +\tLOCKMODE\tlockmode;\n> > > +\toption_spec_flags flags;\n> > > +\toption_type type;\n> > > +\tint\t\t\tstruct_offset;\t/* offset of the value in \n> Bytea representation */\n> > > +}\toption_spec_basic;\n> > > +\n> > > +\n> > > +/* reloptions records for specific variable types */\n> > > +typedef struct option_spec_bool\n> > > +{\n> > > +\toption_spec_basic base;\n> > > +\tbool\t\tdefault_val;\n> > > +}\toption_spec_bool;\n> > > +\n> > > +typedef struct option_spec_int\n> > > +{\n> > > +\toption_spec_basic base;\n> > > +\tint\t\t\tdefault_val;\n> > > +\tint\t\t\tmin;\n> > > +\tint\t\t\tmax;\n> > > +}\toption_spec_int;\n> > > +\n> > > +typedef struct option_spec_real\n> > > +{\n> > > +\toption_spec_basic base;\n> > > 
+\tdouble\t\tdefault_val;\n> > > +\tdouble\t\tmin;\n> > > +\tdouble\t\tmax;\n> > > +}\toption_spec_real;\n> > > +\n> > > +typedef struct option_spec_enum\n> > > +{\n> > > +\toption_spec_basic base;\n> > > +\topt_enum_elt_def *members;/* FIXME rewrite. Null terminated array of\n> > > allowed values for +\t\t\t\t\t\t\t\t\n> * the option */\n> > > +\tint\t\t\tdefault_val;\t/* Number of item of \n> allowed_values array */\n> > > +\tconst char *detailmsg;\n> > > +}\toption_spec_enum;\n> > > +\n> > > +/* validation routines for strings */\n> > > +typedef void (*validate_string_option) (const char *value);\n> > > +\n> > > +/*\n> > > + * When storing sting reloptions, we shoud deal with special case when\n> > > + * option value is not set. For fixed length options, we just copy\n> > > default\n> > > + * option value into the binary structure. For varlen value, there can be\n> > > + * \"not set\" special case, with no default value offered.\n> > > + * In this case we will set offset value to -1, so code that use\n> > > relptions\n> > > + * can deal this case. For better readability it was defined as a\n> > > constant. 
+ */\n> > > +#define OPTION_STRING_VALUE_NOT_SET_OFFSET -1\n> > > +\n> > > +typedef struct option_spec_string\n> > > +{\n> > > +\toption_spec_basic base;\n> > > +\tvalidate_string_option validate_cb;\n> > > +\tchar\t *default_val;\n> > > +}\toption_spec_string;\n> > > +\n> > > +typedef void (*postprocess_bytea_options_function) (void *data, bool\n> > > validate); +\n> > > +typedef struct options_spec_set\n> > > +{\n> > > +\toption_spec_basic **definitions;\n> > > +\tint\t\t\tnum;\t\t\t/* Number of \n> spec_set items in use */\n> > > +\tint\t\t\tnum_allocated;\t/* Number of spec_set \n> items allocated */\n> > > +\tbool\t\tforbid_realloc; /* If number of items of the \n> spec_set were\n> > > +\t\t\t\t\t\t\t\t * strictly \n> set to certain value do no allow\n> > > +\t\t\t\t\t\t\t\t * adding \n> more idems */\n> > > +\tSize\t\tstruct_size;\t/* Size of a structure for \n> options in binary\n> > > +\t\t\t\t\t\t\t\t * \n> representation */\n> > > +\tpostprocess_bytea_options_function postprocess_fun; /* This function \n> is\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * called after options\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * were converted in\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * Bytea represenation.\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * Can be used for extra\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t\t\t\t * validation and so on */\n> > > +\tchar\t *namespace;\t\t/* spec_set is used for options \n> from this\n> > > +\t\t\t\t\t\t\t\t * namespase \n> */\n> > > +}\toptions_spec_set;\n> > > +\n> > > +\n> > > +/* holds an option value parsed or unparsed */\n> > > +typedef struct option_value\n> > > +{\n> > > +\toption_spec_basic *gen;\n> > > +\tchar\t *namespace;\n> > > +\toption_value_status status;\n> > > +\tchar\t *raw_value;\t\t/* allocated separately */\n> > > +\tchar\t *raw_name;\n> > > +\tunion\n> > > +\t{\n> > > +\t\tbool\t\tbool_val;\n> > > +\t\tint\t\t\tint_val;\n> > > +\t\tdouble\t\treal_val;\n> > > +\t\tint\t\t\tenum_val;\n> > > +\t\tchar\t *string_val; /* 
allocated separately */\n> > > +\t}\t\t\tvalues;\n> > > +}\toption_value;\n> > > +\n> > > +\n> > > +\n> > > +\n> > > +/*\n> > > + * Options spec_set related functions\n> > > + */\n> > > +extern options_spec_set *allocateOptionsSpecSet(const char *namespace,\n> > > +\t\t\t\t\t\t\t\t int \n> size_of_bytea, int num_items_expected);\n> > > +extern void optionsSpecSetAddBool(options_spec_set * spec_set, const char\n> > > *name, +\t\t\t\t const char *desc, LOCKMODE \n> lockmode, option_spec_flags\n> > > flags, +\t\t\t\t\t\t\t\t\t\n> int struct_offset, bool default_val);\n> > > +extern void optionsSpecSetAddInt(options_spec_set * spec_set, const char\n> > > *name, +\t\t\t\t\tconst char *desc, LOCKMODE \n> lockmode, option_spec_flags\n> > > flags, +\t\t\t\t\tint struct_offset, int \n> default_val, int min_val, int\n> > > max_val); +extern void optionsSpecSetAddReal(options_spec_set * spec_set,\n> > > const char *name, +\t\t const char *desc, LOCKMODE lockmode,\n> > > option_spec_flags flags, +\t int struct_offset, double default_val,\n> > > double min_val, double max_val); +extern void\n> > > optionsSpecSetAddEnum(options_spec_set * spec_set,\n> > > +\t\t\t\t\t\t const char *name, const \n> char *desc, LOCKMODE lockmode,\n> > > option_spec_flags flags, +\t\t\tint struct_offset, \n> opt_enum_elt_def*\n> > > members, int default_val, const char *detailmsg); +extern void\n> > > optionsSpecSetAddString(options_spec_set * spec_set, const char *name,\n> > > +\t\t const char *desc, LOCKMODE lockmode, option_spec_flags flags, \n> +int\n> > > struct_offset, const char *default_val, validate_string_option\n> > > validator); +\n> > > +\n> > > +/*\n> > > + * This macro allows to get string option value from bytea\n> > > representation.\n> > > + * \"optstruct\" - is a structure that is stored in bytea options\n> > > representation + * \"member\" - member of this structure that has string\n> > > option value + * (actually string values are stored in bytea after the\n> > > structure, and 
+ * and \"member\" will contain an offset to this value.\n> > > This macro do all + * the math\n> > > + */\n> > > +#define GET_STRING_OPTION(optstruct, member) \\\n> > > +\t((optstruct)->member == OPTION_STRING_VALUE_NOT_SET_OFFSET ? NULL : \\\n> > > +\t (char *)(optstruct) + (optstruct)->member)\n> > > +\n> > > +/*\n> > > + * Functions related to option convertation, parsing, manipulation\n> > > + * and validation\n> > > + */\n> > > +extern void optionsDefListValdateNamespaces(List *defList,\n> > > +\t\t\t\t\t\t\t\tchar \n> **allowed_namespaces);\n> > > +extern List *optionsDefListFilterNamespaces(List *defList, const char\n> > > *namespace); +extern List *optionsTextArrayToDefList(Datum options);\n> > > +extern Datum optionsDefListToTextArray(List *defList);\n> > > +/*\n> > > + * Meta functions that uses functions above to get options for relations,\n> > > + * tablespaces, views and so on\n> > > + */\n> > > +\n> > > +extern bytea *optionsTextArrayToBytea(options_spec_set * spec_set, Datum\n> > > data, +\t\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\tbool validate);\n> > > +extern Datum transformOptions(options_spec_set * spec_set, Datum\n> > > oldOptions, +\t\t\t\t List *defList, options_parse_mode \n> parse_mode);\n> > > +\n> > > +#endif /* OPTIONS_H */\n> > > diff --git a/src/include/access/reloptions.h\n> > > b/src/include/access/reloptions.h index 7c5fbeb..21b91df 100644\n> > > --- a/src/include/access/reloptions.h\n> > > +++ b/src/include/access/reloptions.h\n> > > @@ -22,6 +22,7 @@\n> > > \n> > > #include \"access/amapi.h\"\n> > > #include \"access/htup.h\"\n> > > #include \"access/tupdesc.h\"\n> > > \n> > > +#include \"access/options.h\"\n> > > \n> > > #include \"nodes/pg_list.h\"\n> > > #include \"storage/lock.h\"\n> > > \n> > > @@ -110,20 +111,10 @@ typedef struct relopt_real\n> > > \n> > > \tdouble\t\tmax;\n> > > \n> > > } relopt_real;\n> > > \n> > > -/*\n> > > - * relopt_enum_elt_def -- One member of the array of acceptable values\n> > > - * of an enum 
reloption.\n> > > - */\n> > > -typedef struct relopt_enum_elt_def\n> > > -{\n> > > -\tconst char *string_val;\n> > > -\tint\t\t\tsymbol_val;\n> > > -} relopt_enum_elt_def;\n> > > -\n> > > \n> > > typedef struct relopt_enum\n> > > {\n> > > \n> > > \trelopt_gen\tgen;\n> > > \n> > > -\trelopt_enum_elt_def *members;\n> > > +\topt_enum_elt_def *members;\n> > > \n> > > \tint\t\t\tdefault_val;\n> > > \tconst char *detailmsg;\n> > > \t/* null-terminated array of members */\n> > > \n> > > @@ -167,6 +158,7 @@ typedef struct local_relopts\n> > > \n> > > \tList\t *options;\t\t/* list of local_relopt \n> definitions */\n> > > \tList\t *validators;\t\t/* list of relopts_validator \n> callbacks */\n> > > \tSize\t\trelopt_struct_size; /* size of parsed bytea \n> structure */\n> > > \n> > > +\toptions_spec_set * spec_set; /* FIXME */\n> > > \n> > > } local_relopts;\n> > > \n> > > /*\n> > > \n> > > @@ -179,21 +171,6 @@ typedef struct local_relopts\n> > > \n> > > \t((optstruct)->member == 0 ? NULL : \\\n> > > \t\n> > > \t (char *)(optstruct) + (optstruct)->member)\n> > > \n> > > -extern relopt_kind add_reloption_kind(void);\n> > > -extern void add_bool_reloption(bits32 kinds, const char *name, const char\n> > > *desc, -\t\t\t\t\t\t\t bool \n> default_val, LOCKMODE lockmode);\n> > > -extern void add_int_reloption(bits32 kinds, const char *name, const char\n> > > *desc, -\t\t\t\t\t\t\t int \n> default_val, int min_val, int max_val,\n> > > -\t\t\t\t\t\t\t LOCKMODE \n> lockmode);\n> > > -extern void add_real_reloption(bits32 kinds, const char *name, const char\n> > > *desc, -\t\t\t\t\t\t\t double \n> default_val, double min_val, double max_val,\n> > > -\t\t\t\t\t\t\t LOCKMODE \n> lockmode);\n> > > -extern void add_enum_reloption(bits32 kinds, const char *name, const char\n> > > *desc, -\t\t\t\t\t\t\t \n> relopt_enum_elt_def *members, int default_val,\n> > > -\t\t\t\t\t\t\t const char \n> *detailmsg, LOCKMODE lockmode);\n> > > -extern void add_string_reloption(bits32 kinds, const char 
*name, const\n> > > char *desc, -\t\t\t\t\t\t\t\t \n> const char *default_val, validate_string_relopt\n> > > validator, -\t\t\t\t\t\t\t\t \n> LOCKMODE lockmode);\n> > > \n> > > extern void init_local_reloptions(local_relopts *opts, Size\n> > > relopt_struct_size); extern void\n> > > register_reloptions_validator(local_relopts *opts,\n> > > \n> > > @@ -210,7 +187,7 @@ extern void add_local_real_reloption(local_relopts\n> > > *opts, const char *name,> \n> > > \t\t\t\t\t\t\t\t\t int \n> offset);\n> > > \n> > > extern void add_local_enum_reloption(local_relopts *relopts,\n> > > \n> > > \t\t\t\t\t\t\t\t\t \n> const char *name, const char *desc,\n> > > \n> > > -\t\t\t\t\t\t\t\t\t \n> relopt_enum_elt_def *members,\n> > > +\t\t\t\t\t\t\t\t\t \n> opt_enum_elt_def *members,\n> > > \n> > > \t\t\t\t\t\t\t\t\t int \n> default_val, const char *detailmsg,\n> > > \t\t\t\t\t\t\t\t\t int \n> offset);\n> > > \n> > > extern void add_local_string_reloption(local_relopts *opts, const char\n> > > *name,> \n> > > @@ -219,29 +196,17 @@ extern void add_local_string_reloption(local_relopts\n> > > *opts, const char *name,> \n> > > \t\t\t\t\t\t\t\t\t \n> validate_string_relopt validator,\n> > > \t\t\t\t\t\t\t\t\t \n> fill_string_relopt filler, int offset);\n> > > \n> > > -extern Datum transformRelOptions(Datum oldOptions, List *defList,\n> > > -\t\t\t\t\t\t\t\t const char \n> *namspace, char *validnsps[],\n> > > -\t\t\t\t\t\t\t\t bool \n> acceptOidsOff, bool isReset);\n> > > -extern List *untransformRelOptions(Datum options);\n> > > \n> > > extern bytea *extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> > > \n> > > -\t\t\t\t\t\t\t\t\n> amoptions_function amoptions);\n> > > -extern void *build_reloptions(Datum reloptions, bool validate,\n> > > -\t\t\t\t\t\t\t relopt_kind kind,\n> > > -\t\t\t\t\t\t\t Size \n> relopt_struct_size,\n> > > -\t\t\t\t\t\t\t const \n> relopt_parse_elt *relopt_elems,\n> > > -\t\t\t\t\t\t\t int \n> num_relopt_elems);\n> > > +\t\t\t\t\t\t\t\t\n> 
amreloptspecset_function amoptions_def_set);\n> > > \n> > > extern void *build_local_reloptions(local_relopts *relopts, Datum\n> > > options,\n> > > \n> > > \t\t\t\t\t\t\t\t\tbool \n> validate);\n> > > \n> > > -extern bytea *default_reloptions(Datum reloptions, bool validate,\n> > > -\t\t\t\t\t\t\t\t relopt_kind \n> kind);\n> > > -extern bytea *heap_reloptions(char relkind, Datum reloptions, bool\n> > > validate); -extern bytea *view_reloptions(Datum reloptions, bool\n> > > validate);\n> > > -extern bytea *partitioned_table_reloptions(Datum reloptions, bool\n> > > validate); -extern bytea *index_reloptions(amoptions_function amoptions,\n> > > Datum reloptions, -\t\t\t\t\t\t\t \n> bool validate);\n> > > -extern bytea *attribute_reloptions(Datum reloptions, bool validate);\n> > > -extern bytea *tablespace_reloptions(Datum reloptions, bool validate);\n> > > -extern LOCKMODE AlterTableGetRelOptionsLockLevel(List *defList);\n> > > +options_spec_set *get_heap_relopt_spec_set(void);\n> > > +options_spec_set *get_toast_relopt_spec_set(void);\n> > > +options_spec_set *get_partitioned_relopt_spec_set(void);\n> > > +options_spec_set *get_view_relopt_spec_set(void);\n> > > +options_spec_set *get_attribute_options_spec_set(void);\n> > > +options_spec_set *get_tablespace_options_spec_set(void);\n> > > +extern LOCKMODE AlterTableGetRelOptionsLockLevel(Relation rel, List\n> > > *defList);> \n> > > #endif\t\t\t\t\t\t\t/* \n> RELOPTIONS_H */\n> > > \n> > > diff --git a/src/include/access/spgist.h b/src/include/access/spgist.h\n> > > index 2eb2f42..d9a9b2d 100644\n> > > --- a/src/include/access/spgist.h\n> > > +++ b/src/include/access/spgist.h\n> > > @@ -189,9 +189,6 @@ typedef struct spgLeafConsistentOut\n> > > \n> > > } spgLeafConsistentOut;\n> > > \n> > > -/* spgutils.c */\n> > > -extern bytea *spgoptions(Datum reloptions, bool validate);\n> > > -\n> > > \n> > > /* spginsert.c */\n> > > extern IndexBuildResult *spgbuild(Relation heap, Relation index,\n> > > \n> > > 
\t\t\t\t\t\t\t\t struct \n> IndexInfo *indexInfo);\n> > > \n> > > diff --git a/src/include/access/spgist_private.h\n> > > b/src/include/access/spgist_private.h index 40d3b71..dd9a05a 100644\n> > > --- a/src/include/access/spgist_private.h\n> > > +++ b/src/include/access/spgist_private.h\n> > > @@ -529,6 +529,7 @@ extern OffsetNumber SpGistPageAddNewItem(SpGistState\n> > > *state, Page page,> \n> > > extern bool spgproperty(Oid index_oid, int attno,\n> > > \n> > > \t\t\t\t\t\tIndexAMProperty prop, const \n> char *propname,\n> > > \t\t\t\t\t\tbool *res, bool *isnull);\n> > > \n> > > +extern void *spggetreloptspecset(void);\n> > > \n> > > /* spgdoinsert.c */\n> > > extern void spgUpdateNodeLink(SpGistInnerTuple tup, int nodeN,\n> > > \n> > > diff --git a/src/include/commands/tablecmds.h\n> > > b/src/include/commands/tablecmds.h index 336549c..3f87f98 100644\n> > > --- a/src/include/commands/tablecmds.h\n> > > +++ b/src/include/commands/tablecmds.h\n> > > @@ -34,7 +34,7 @@ extern Oid\tAlterTableLookupRelation(AlterTableStmt\n> > > *stmt, LOCKMODE lockmode);> \n> > > extern void AlterTable(AlterTableStmt *stmt, LOCKMODE lockmode,\n> > > \n> > > \t\t\t\t\t struct AlterTableUtilityContext \n> *context);\n> > > \n> > > -extern LOCKMODE AlterTableGetLockLevel(List *cmds);\n> > > +extern LOCKMODE AlterTableGetLockLevel(Oid relid, List *cmds);\n> > > \n> > > extern void ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool\n> > > recursing, LOCKMODE lockmode);> \n> > > diff --git a/src/test/modules/dummy_index_am/dummy_index_am.c\n> > > b/src/test/modules/dummy_index_am/dummy_index_am.c index\n> > > 5365b063..80b39e8 100644\n> > > --- a/src/test/modules/dummy_index_am/dummy_index_am.c\n> > > +++ b/src/test/modules/dummy_index_am/dummy_index_am.c\n> > > @@ -14,7 +14,7 @@\n> > > \n> > > #include \"postgres.h\"\n> > > \n> > > #include \"access/amapi.h\"\n> > > \n> > > -#include \"access/reloptions.h\"\n> > > +#include \"access/options.h\"\n> > > \n> > > #include 
\"catalog/index.h\"\n> > > #include \"commands/vacuum.h\"\n> > > #include \"nodes/pathnodes.h\"\n> > > \n> > > @@ -25,12 +25,6 @@ PG_MODULE_MAGIC;\n> > > \n> > > void\t\t_PG_init(void);\n> > > \n> > > -/* parse table for fillRelOptions */\n> > > -relopt_parse_elt di_relopt_tab[6];\n> > > -\n> > > -/* Kind of relation options for dummy index */\n> > > -relopt_kind di_relopt_kind;\n> > > -\n> > > \n> > > typedef enum DummyAmEnum\n> > > {\n> > > \n> > > \tDUMMY_AM_ENUM_ONE,\n> > > \n> > > @@ -49,7 +43,7 @@ typedef struct DummyIndexOptions\n> > > \n> > > \tint\t\t\toption_string_null_offset;\n> > > \n> > > }\t\t\tDummyIndexOptions;\n> > > \n> > > -relopt_enum_elt_def dummyAmEnumValues[] =\n> > > +opt_enum_elt_def dummyAmEnumValues[] =\n> > > \n> > > {\n> > > \n> > > \t{\"one\", DUMMY_AM_ENUM_ONE},\n> > > \t{\"two\", DUMMY_AM_ENUM_TWO},\n> > > \n> > > @@ -63,77 +57,85 @@ PG_FUNCTION_INFO_V1(dihandler);\n> > > \n> > > * Validation function for string relation options.\n> > > */\n> > > \n> > > static void\n> > > \n> > > -validate_string_option(const char *value)\n> > > +divalidate_string_option(const char *value)\n> > > \n> > > {\n> > > \n> > > \tereport(NOTICE,\n> > > \t\n> > > \t\t\t(errmsg(\"new option value for string parameter %s\",\n> > > \t\t\t\n> > > \t\t\t\t\tvalue ? 
value : \"NULL\")));\n> > > \n> > > }\n> > > \n> > > -/*\n> > > - * This function creates a full set of relation option types,\n> > > - * with various patterns.\n> > > - */\n> > > -static void\n> > > -create_reloptions_table(void)\n> > > +static options_spec_set *di_relopt_specset = NULL;\n> > > +void * digetreloptspecset(void);\n> > > +\n> > > +void *\n> > > +digetreloptspecset(void)\n> > > \n> > > {\n> > > \n> > > -\tdi_relopt_kind = add_reloption_kind();\n> > > -\n> > > -\tadd_int_reloption(di_relopt_kind, \"option_int\",\n> > > -\t\t\t\t\t \"Integer option for \n> dummy_index_am\",\n> > > -\t\t\t\t\t 10, -10, 100, \n> AccessExclusiveLock);\n> > > -\tdi_relopt_tab[0].optname = \"option_int\";\n> > > -\tdi_relopt_tab[0].opttype = RELOPT_TYPE_INT;\n> > > -\tdi_relopt_tab[0].offset = offsetof(DummyIndexOptions, option_int);\n> > > -\n> > > -\tadd_real_reloption(di_relopt_kind, \"option_real\",\n> > > -\t\t\t\t\t \"Real option for dummy_index_am\",\n> > > -\t\t\t\t\t 3.1415, -10, 100, \n> AccessExclusiveLock);\n> > > -\tdi_relopt_tab[1].optname = \"option_real\";\n> > > -\tdi_relopt_tab[1].opttype = RELOPT_TYPE_REAL;\n> > > -\tdi_relopt_tab[1].offset = offsetof(DummyIndexOptions, option_real);\n> > > -\n> > > -\tadd_bool_reloption(di_relopt_kind, \"option_bool\",\n> > > -\t\t\t\t\t \"Boolean option for \n> dummy_index_am\",\n> > > -\t\t\t\t\t true, AccessExclusiveLock);\n> > > -\tdi_relopt_tab[2].optname = \"option_bool\";\n> > > -\tdi_relopt_tab[2].opttype = RELOPT_TYPE_BOOL;\n> > > -\tdi_relopt_tab[2].offset = offsetof(DummyIndexOptions, option_bool);\n> > > -\n> > > -\tadd_enum_reloption(di_relopt_kind, \"option_enum\",\n> > > -\t\t\t\t\t \"Enum option for dummy_index_am\",\n> > > -\t\t\t\t\t dummyAmEnumValues,\n> > > -\t\t\t\t\t DUMMY_AM_ENUM_ONE,\n> > > -\t\t\t\t\t \"Valid values are \\\"one\\\" and \n> \\\"two\\\".\",\n> > > -\t\t\t\t\t AccessExclusiveLock);\n> > > -\tdi_relopt_tab[3].optname = \"option_enum\";\n> > > -\tdi_relopt_tab[3].opttype = 
RELOPT_TYPE_ENUM;\n> > > -\tdi_relopt_tab[3].offset = offsetof(DummyIndexOptions, option_enum);\n> > > -\n> > > -\tadd_string_reloption(di_relopt_kind, \"option_string_val\",\n> > > -\t\t\t\t\t\t \"String option for \n> dummy_index_am with non-NULL default\",\n> > > -\t\t\t\t\t\t \"DefaultValue\", \n> &validate_string_option,\n> > > -\t\t\t\t\t\t AccessExclusiveLock);\n> > > -\tdi_relopt_tab[4].optname = \"option_string_val\";\n> > > -\tdi_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n> > > -\tdi_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n> > > -\t\t\t\t\t\t\t\t\t \n> option_string_val_offset);\n> > > +\tif (di_relopt_specset)\n> > > +\t\treturn di_relopt_specset;\n> > > +\n> > > +\tdi_relopt_specset = allocateOptionsSpecSet(NULL,\n> > > +\t\t\t\t\t\t\t\t\t\t\n> \t sizeof(DummyIndexOptions), 6);\n> > > +\n> > > +\toptionsSpecSetAddInt(\n> > > +\t\tdi_relopt_specset, \"option_int\",\n> > > +\t\t\"Integer option for dummy_index_am\",\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0, offsetof(DummyIndexOptions, option_int),\n> > > +\t\t10, -10, 100\n> > > +\t);\n> > > +\n> > > +\n> > > +\toptionsSpecSetAddReal(\n> > > +\t\tdi_relopt_specset, \"option_real\",\n> > > +\t\t\"Real option for dummy_index_am\",\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0, offsetof(DummyIndexOptions, option_real),\n> > > +\t\t3.1415, -10, 100\n> > > +\t);\n> > > +\n> > > +\toptionsSpecSetAddBool(\n> > > +\t\tdi_relopt_specset, \"option_bool\",\n> > > +\t\t\"Boolean option for dummy_index_am\",\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0, offsetof(DummyIndexOptions, option_bool), true\n> > > +\t);\n> > > +\n> > > +\toptionsSpecSetAddEnum(di_relopt_specset, \"option_enum\",\n> > > +\t\t\"Enum option for dummy_index_am\",\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0,\n> > > +\t\toffsetof(DummyIndexOptions, option_enum),\n> > > +\t\tdummyAmEnumValues,\n> > > +\t\tDUMMY_AM_ENUM_ONE,\n> > > +\t\t\"Valid values are \\\"one\\\" and \\\"two\\\".\"\n> > > +\t);\n> > > +\n> > > 
+\toptionsSpecSetAddString(di_relopt_specset, \"option_string_val\",\n> > > +\t\t\"String option for dummy_index_am with non-NULL default\",\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0,\n> > > +\t\toffsetof(DummyIndexOptions, option_string_val_offset),\n> > > +\t\t\"DefaultValue\", &divalidate_string_option\n> > > +\t);\n> > > \n> > > \t/*\n> > > \t\n> > > \t * String option for dummy_index_am with NULL default, and without\n> > > \t * description.\n> > > \t */\n> > > \n> > > -\tadd_string_reloption(di_relopt_kind, \"option_string_null\",\n> > > -\t\t\t\t\t\t NULL,\t/* description */\n> > > -\t\t\t\t\t\t NULL, \n> &validate_string_option,\n> > > -\t\t\t\t\t\t AccessExclusiveLock);\n> > > -\tdi_relopt_tab[5].optname = \"option_string_null\";\n> > > -\tdi_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n> > > -\tdi_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n> > > -\t\t\t\t\t\t\t\t\t \n> option_string_null_offset);\n> > > +\n> > > +\toptionsSpecSetAddString(di_relopt_specset, \"option_string_null\",\n> > > +\t\tNULL,\t/* description */\n> > > +\t\tAccessExclusiveLock,\n> > > +\t\t0,\n> > > +\t\toffsetof(DummyIndexOptions, option_string_null_offset),\n> > > +\t\tNULL, &divalidate_string_option\n> > > +\t);\n> > > +\n> > > +\treturn di_relopt_specset;\n> > > \n> > > }\n> > > \n> > > +\n> > > \n> > > /*\n> > > \n> > > * Build a new index.\n> > > */\n> > > \n> > > @@ -219,19 +221,6 @@ dicostestimate(PlannerInfo *root, IndexPath *path,\n> > > double loop_count,> \n> > > }\n> > > \n> > > /*\n> > > \n> > > - * Parse relation options for index AM, returning a DummyIndexOptions\n> > > - * structure filled with option values.\n> > > - */\n> > > -static bytea *\n> > > -dioptions(Datum reloptions, bool validate)\n> > > -{\n> > > -\treturn (bytea *) build_reloptions(reloptions, validate,\n> > > -\t\t\t\t\t\t\t\t\t \n> di_relopt_kind,\n> > > -\t\t\t\t\t\t\t\t\t \n> sizeof(DummyIndexOptions),\n> > > -\t\t\t\t\t\t\t\t\t \n> di_relopt_tab, lengthof(di_relopt_tab));\n> > > 
-}\n> > > -\n> > > -/*\n> > > \n> > > * Validator for index AM.\n> > > */\n> > > \n> > > static bool\n> > > \n> > > @@ -308,7 +297,6 @@ dihandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amvacuumcleanup = divacuumcleanup;\n> > > \tamroutine->amcanreturn = NULL;\n> > > \tamroutine->amcostestimate = dicostestimate;\n> > > \n> > > -\tamroutine->amoptions = dioptions;\n> > > \n> > > \tamroutine->amproperty = NULL;\n> > > \tamroutine->ambuildphasename = NULL;\n> > > \tamroutine->amvalidate = divalidate;\n> > > \n> > > @@ -322,12 +310,7 @@ dihandler(PG_FUNCTION_ARGS)\n> > > \n> > > \tamroutine->amestimateparallelscan = NULL;\n> > > \tamroutine->aminitparallelscan = NULL;\n> > > \tamroutine->amparallelrescan = NULL;\n> > > \n> > > +\tamroutine->amreloptspecset = digetreloptspecset;\n> > > \n> > > \tPG_RETURN_POINTER(amroutine);\n> > > \n> > > }\n> > > \n> > > -\n> > > -void\n> > > -_PG_init(void)\n> > > -{\n> > > -\tcreate_reloptions_table();\n> > > -}\n> \n> \n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 13:08:19 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion: Unified options API. Need help from core team" } ]
[ { "msg_contents": "Hi all,\n\n$subject has been noticed on github here:\nhttps://github.com/postgres/postgres/pull/70/commits\n\nLooking at the MSIs of OpenSSL for Win64 and Win32, there are no\nchanges in the deliverable names or paths, meaning that something as\nsimple as the attached patch is enough to make the build pass.\n\nAny opinions?\n--\nMichael", "msg_date": "Tue, 19 Oct 2021 14:27:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Fixing build of MSVC with OpenSSL 3.0.0" }, { "msg_contents": "> On 19 Oct 2021, at 07:27, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Looking at the MSIs of OpenSSL for Win64 and Win32, there are no\n> changes in the deliverable names or paths, meaning that something as\n> simple as the attached patch is enough to make the build pass.\n\nMakes sense.\n\n> Any opinions?\n\nI think we can tighten the check for GetOpenSSLVersion() a bit since we now know\nthe range of versions in the 1.x.x series. For these checks we know we want\n1.1.x or 3.x.x, but never 2.x.x etc.\n\nHow about the (untested) attached which encodes that knowledge, as well as dies\non too old OpenSSL versions?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 19 Oct 2021 10:34:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fixing build of MSVC with OpenSSL 3.0.0" }, { "msg_contents": "On Tue, Oct 19, 2021 at 10:34:10AM +0200, Daniel Gustafsson wrote:\n> I think we can tighten the check for GetOpenSSLVersion() a bit since we now know\n> the range of versions in the 1.x.x series. 
For these checks we know we want\n> 1.1.x or 3.x.x, but never 2.x.x etc.\n> \n> How about the (untested) attached which encodes that knowledge, as well as dies\n> on too old OpenSSL versions?\n\nOne assumption hidden behind the scripts of src/tools/msvc/ is that we\nhave never needed to support OpenSSL <= 1.0.1 these days (see for\nexample HAVE_X509_GET_SIGNATURE_NID always set to 1, introduced in\n1.0.2) because the buildfarm has no need for it and there is no MSI\nfor this version for years (except if compiling from source, but\nnobody would do that for an older version anyway with their right\nmind). If you try, you would already get a compilation failure pretty\nquickly. So I'd rather keep the code as-is and not add the extra\nsudden-death check. Now that's only three extra lines, so..\n--\nMichael", "msg_date": "Tue, 19 Oct 2021 19:52:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixing build of MSVC with OpenSSL 3.0.0" }, { "msg_contents": "> On 19 Oct 2021, at 12:52, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Oct 19, 2021 at 10:34:10AM +0200, Daniel Gustafsson wrote:\n>> I think we can tighten the check for GetOpenSSLVersion() a bit since we now now\n>> the range of version in the 1.x.x series. 
For these checks we know we want\n>> 1.1.x or 3.x.x, but never 2.x.x etc.\n>> \n>> How about the (untested) attached which encodes that knowledge, as well as dies\n>> on too old OpenSSL versions?\n> \n> One assumption hidden behind the scripts of src/tools/msvc/ is that we\n> have never needed to support OpenSSL <= 1.0.1 these days\n\nRight, I was going off the version stated in the documentation which doesn't\nlist per OS requirements.\n\n> ..(see for\n> example HAVE_X509_GET_SIGNATURE_NID always set to 1, introduced in\n> 1.0.2) because the buildfarm has no need for it and there is no MSI\n> for this version for years (except if compiling from source, but\n> nobody would do that for an older version anyway with their right\n> mind). If you try, you would already get a compilation failure pretty\n> quickly. So I'd rather keep the code as-is and not add the extra\n> sudden-death check.\n\nFair enough, there isn't much use in protecting against issues that will never\nhappen.\n\nThe other proposal, making sure that we don't see a version 2.x.x creep in (in\ncase a packager decides to play cute like how has happened in other OS's) seem\nsane to me, but I'm also not very well versed in Windows so you be the judge\nthere.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 13:09:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fixing build of MSVC with OpenSSL 3.0.0" }, { "msg_contents": "On Tue, Oct 19, 2021 at 01:09:28PM +0200, Daniel Gustafsson wrote:\n> The other proposal, making sure that we don't see a version 2.x.x creep in (in\n> case a packager decides to play cute like how has happened in other OS's) seem\n> sane to me, but I'm also not very well versed in Windows so you be the judge\n> there.\n\nIf that happens to become a problem, I'd rather wait and see when/if\nwe reach this point. 
For the MSIs the PG docs point to, the first\npatch is more than enough.\n--\nMichael", "msg_date": "Wed, 20 Oct 2021 12:37:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixing build of MSVC with OpenSSL 3.0.0" } ]
[ { "msg_contents": "Hi hackers,\n\nI couldn’t find a similar report to this one, so starting a new thread. I can reproduce this on v14.0 as well as PostgreSQL 12.5 (not tried below versions).\n\nSteps to reproduce:\n\nCREATE TYPE two_ints as (if1 int, if2 int);\nCREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\nCREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array domain[]);\nINSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\nUPDATE domain_indirection_test SET domain_array[0].if2 = 5;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 3.990 ms\n@:-!>\n\n\nThe backtrace on PG 12.5 (As far as I remember, PG14 looks very similar) :\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)\n * frame #0: 0x00000001036584b7 postgres`pg_detoast_datum(datum=0x0000000000000000) at fmgr.c:1741:6\n frame #1: 0x0000000103439a86 postgres`ExecEvalFieldStoreDeForm(state=<unavailable>, op=0x00007f9212045df8, econtext=<unavailable>) at execExprInterp.c:3025:12\n frame #2: 0x000000010343834e postgres`ExecInterpExpr(state=<unavailable>, econtext=<unavailable>, isnull=0x00007ffeec91fdc7) at execExprInterp.c:1337:4\n frame #3: 0x000000010343742b postgres`ExecInterpExprStillValid(state=0x00007f921181db18, econtext=0x00007f921181d670, isNull=<unavailable>) at execExprInterp.c:1778:9\n frame #4: 0x0000000103444e0d postgres`ExecEvalExprSwitchContext(state=0x00007f921181db18, econtext=0x00007f921181d670, isNull=<unavailable>) at executor.h:310:13\n frame #5: 0x0000000103444cf0 postgres`ExecProject(projInfo=0x00007f921181db10) at executor.h:344:9\n frame #6: 0x0000000103444af6 postgres`ExecScan(node=0x00007f921181d560, accessMtd=(postgres`SeqNext at nodeSeqscan.c:51), recheckMtd=(postgres`SeqRecheck at nodeSeqscan.c:90)) at 
execScan.c:239:12\n frame #7: 0x0000000103461d17 postgres`ExecSeqScan(pstate=<unavailable>) at nodeSeqscan.c:112:9\n frame #8: 0x000000010344375c postgres`ExecProcNodeFirst(node=0x00007f921181d560) at execProcnode.c:445:9\n frame #9: 0x000000010345eefe postgres`ExecProcNode(node=0x00007f921181d560) at executor.h:242:9\n frame #10: 0x000000010345e74f postgres`ExecModifyTable(pstate=0x00007f921181d090) at nodeModifyTable.c:2079:14\n frame #11: 0x000000010344375c postgres`ExecProcNodeFirst(node=0x00007f921181d090) at execProcnode.c:445:9\n frame #12: 0x000000010343f25e postgres`ExecProcNode(node=0x00007f921181d090) at executor.h:242:9\n frame #13: 0x000000010343d80d postgres`ExecutePlan(estate=0x00007f921181cd10, planstate=0x00007f921181d090, use_parallel_mode=<unavailable>, operation=CMD_UPDATE, sendTuples=false, numberTuples=0, direction=ForwardScanDirection, dest=0x00007f9211818c38, execute_once=<unavailable>) at execMain.c:1646:10\n frame #14: 0x000000010343d745 postgres`standard_ExecutorRun(queryDesc=0x00007f921180e310, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:364:3\n frame #15: 0x000000010343d67c postgres`ExecutorRun(queryDesc=0x00007f921180e310, direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at execMain.c:308:3\n frame #16: 0x00000001035784a8 postgres`ProcessQuery(plan=<unavailable>, sourceText=<unavailable>, params=<unavailable>, queryEnv=0x0000000000000000, dest=<unavailable>, completionTag=\"\") at pquery.c:161:2\n frame #17: 0x0000000103577c5e postgres`PortalRunMulti(portal=0x00007f9215024110, isTopLevel=true, setHoldSnapshot=false, dest=0x00007f9211818c38, altdest=0x00007f9211818c38, completionTag=\"\") at pquery.c:0\n frame #18: 0x000000010357763d postgres`PortalRun(portal=0x00007f9215024110, count=9223372036854775807, isTopLevel=<unavailable>, run_once=<unavailable>, dest=0x00007f9211818c38, altdest=0x00007f9211818c38, completionTag=\"\") at pquery.c:796:5\n frame #19: 0x0000000103574f87 
postgres`exec_simple_query(query_string=\"UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\") at postgres.c:1215:10\n frame #20: 0x00000001035746b8 postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>, dbname=<unavailable>, username=<unavailable>) at postgres.c:0\n frame #21: 0x000000010350d712 postgres`BackendRun(port=<unavailable>) at postmaster.c:4494:2\n frame #22: 0x000000010350cffa postgres`BackendStartup(port=<unavailable>) at postmaster.c:4177:3\n frame #23: 0x000000010350c59c postgres`ServerLoop at postmaster.c:1725:7\n frame #24: 0x000000010350ac8d postgres`PostmasterMain(argc=3, argv=0x00007f9210d049c0) at postmaster.c:1398:11\n frame #25: 0x000000010347fbdd postgres`main(argc=<unavailable>, argv=<unavailable>) at main.c:228:3\n frame #26: 0x00007fff204e8f3d libdyld.dylib`start + 1\n\n\nThanks,\nOnder KALACI\nSoftware Engineer at Microsoft &\nDeveloping the Citus database extension for PostgreSQL", "msg_date": "Tue, 19 Oct 2021 07:38:49 +0000", "msg_from": "Onder Kalaci <onderk@microsoft.com>", "msg_from_op": true, "msg_subject": "UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "On Tue, Oct 19, 2021 at 12:39 AM Onder Kalaci <onderk@microsoft.com> wrote:\n\n> Hi hackers,\n>\n>\n>\n> I couldn’t find a similar report to this one, so starting a new thread. I\n> can reproduce this on v14.0 as well as PostgreSQL 12.5 (not tried below\n> versions).\n>\n>\n>\n> Steps to reproduce:\n>\n>\n>\n> CREATE TYPE two_ints as (if1 int, if2 int);\n>\n> CREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\n>\n> CREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array\n> domain[]);\n>\n> INSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\n>\n> UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n>\n> server closed the connection unexpectedly\n>\n> This probably means the server terminated abnormally\n>\n> before or while processing the request.\n>\n> The connection to the server was lost. 
Attempting reset: Failed.\n>\n> Time: 3.990 ms\n>\n> @:-!>\n>\n>\n>\n>\n>\n> The backtrace on PG 12.5 (As far as I remember, PG14 looks very similar) :\n>\n>\n>\n> (lldb) bt\n>\n> * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS\n> (code=1, address=0x0)\n>\n> * frame #0: 0x00000001036584b7\n> postgres`pg_detoast_datum(datum=0x0000000000000000) at fmgr.c:1741:6\n>\n> frame #1: 0x0000000103439a86\n> postgres`ExecEvalFieldStoreDeForm(state=<unavailable>,\n> op=0x00007f9212045df8, econtext=<unavailable>) at execExprInterp.c:3025:12\n>\n> frame #2: 0x000000010343834e\n> postgres`ExecInterpExpr(state=<unavailable>, econtext=<unavailable>,\n> isnull=0x00007ffeec91fdc7) at execExprInterp.c:1337:4\n>\n> frame #3: 0x000000010343742b\n> postgres`ExecInterpExprStillValid(state=0x00007f921181db18,\n> econtext=0x00007f921181d670, isNull=<unavailable>) at\n> execExprInterp.c:1778:9\n>\n> frame #4: 0x0000000103444e0d\n> postgres`ExecEvalExprSwitchContext(state=0x00007f921181db18,\n> econtext=0x00007f921181d670, isNull=<unavailable>) at executor.h:310:13\n>\n> frame #5: 0x0000000103444cf0\n> postgres`ExecProject(projInfo=0x00007f921181db10) at executor.h:344:9\n>\n> frame #6: 0x0000000103444af6\n> postgres`ExecScan(node=0x00007f921181d560, accessMtd=(postgres`SeqNext at\n> nodeSeqscan.c:51), recheckMtd=(postgres`SeqRecheck at nodeSeqscan.c:90)) at\n> execScan.c:239:12\n>\n> frame #7: 0x0000000103461d17\n> postgres`ExecSeqScan(pstate=<unavailable>) at nodeSeqscan.c:112:9\n>\n> frame #8: 0x000000010344375c\n> postgres`ExecProcNodeFirst(node=0x00007f921181d560) at execProcnode.c:445:9\n>\n> frame #9: 0x000000010345eefe\n> postgres`ExecProcNode(node=0x00007f921181d560) at executor.h:242:9\n>\n> frame #10: 0x000000010345e74f\n> postgres`ExecModifyTable(pstate=0x00007f921181d090) at\n> nodeModifyTable.c:2079:14\n>\n> frame #11: 0x000000010344375c\n> postgres`ExecProcNodeFirst(node=0x00007f921181d090) at execProcnode.c:445:9\n>\n> frame #12: 0x000000010343f25e\n> 
postgres`ExecProcNode(node=0x00007f921181d090) at executor.h:242:9\n>\n> frame #13: 0x000000010343d80d\n> postgres`ExecutePlan(estate=0x00007f921181cd10,\n> planstate=0x00007f921181d090, use_parallel_mode=<unavailable>,\n> operation=CMD_UPDATE, sendTuples=false, numberTuples=0,\n> direction=ForwardScanDirection, dest=0x00007f9211818c38,\n> execute_once=<unavailable>) at execMain.c:1646:10\n>\n> frame #14: 0x000000010343d745\n> postgres`standard_ExecutorRun(queryDesc=0x00007f921180e310,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:364:3\n>\n> frame #15: 0x000000010343d67c\n> postgres`ExecutorRun(queryDesc=0x00007f921180e310,\n> direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at\n> execMain.c:308:3\n>\n> frame #16: 0x00000001035784a8\n> postgres`ProcessQuery(plan=<unavailable>, sourceText=<unavailable>,\n> params=<unavailable>, queryEnv=0x0000000000000000, dest=<unavailable>,\n> completionTag=\"\") at pquery.c:161:2\n>\n> frame #17: 0x0000000103577c5e\n> postgres`PortalRunMulti(portal=0x00007f9215024110, isTopLevel=true,\n> setHoldSnapshot=false, dest=0x00007f9211818c38, altdest=0x00007f9211818c38,\n> completionTag=\"\") at pquery.c:0\n>\n> frame #18: 0x000000010357763d\n> postgres`PortalRun(portal=0x00007f9215024110, count=9223372036854775807,\n> isTopLevel=<unavailable>, run_once=<unavailable>, dest=0x00007f9211818c38,\n> altdest=0x00007f9211818c38, completionTag=\"\") at pquery.c:796:5\n>\n> frame #19: 0x0000000103574f87\n> postgres`exec_simple_query(query_string=\"UPDATE domain_indirection_test SET\n> domain_array[0].if2 = 5;\") at postgres.c:1215:10\n>\n> frame #20: 0x00000001035746b8\n> postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>,\n> dbname=<unavailable>, username=<unavailable>) at postgres.c:0\n>\n> frame #21: 0x000000010350d712 postgres`BackendRun(port=<unavailable>)\n> at postmaster.c:4494:2\n>\n> frame #22: 0x000000010350cffa\n> postgres`BackendStartup(port=<unavailable>) at 
postmaster.c:4177:3\n>\n> frame #23: 0x000000010350c59c postgres`ServerLoop at\n> postmaster.c:1725:7\n>\n> frame #24: 0x000000010350ac8d postgres`PostmasterMain(argc=3,\n> argv=0x00007f9210d049c0) at postmaster.c:1398:11\n>\n> frame #25: 0x000000010347fbdd postgres`main(argc=<unavailable>,\n> argv=<unavailable>) at main.c:228:3\n>\n> frame #26: 0x00007fff204e8f3d libdyld.dylib`start + 1\n>\n>\n>\n>\n>\n> Thanks,\n>\n> Onder KALACI\n>\n> Software Engineer at Microsoft &\n>\n> Developing the Citus database extension for PostgreSQL\n>\n\n Hi,\n It seems the following change would fix the crash:\n\ndiff --git a/src/postgres/src/backend/executor/execExprInterp.c\nb/src/postgres/src/backend/executor/execExprInterp.c\nindex 622cab9d4..50cb4f014 100644\n--- a/src/postgres/src/backend/executor/execExprInterp.c\n+++ b/src/postgres/src/backend/executor/execExprInterp.c\n@@ -3038,6 +3038,9 @@ ExecEvalFieldStoreDeForm(ExprState *state,\nExprEvalStep *op, ExprContext *econte\n HeapTupleHeader tuphdr;\n HeapTupleData tmptup;\n\n+ if (DatumGetPointer(tupDatum) == NULL) {\n+ return;\n+ }\n tuphdr = DatumGetHeapTupleHeader(tupDatum);\n tmptup.t_len = HeapTupleHeaderGetDatumLength(tuphdr);\n ItemPointerSetInvalid(&(tmptup.t_self));\n\nHere is the result after the update statement:\n\n# UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\nUPDATE 1\n# select * from domain_indirection_test;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(,5)\"}\n(1 row)\n\nCheers", "msg_date": "Tue, 19 Oct 2021 02:12:46 -0700", "msg_from": 
"Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "\nOn Tue, 19 Oct 2021 at 17:12, Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Tue, Oct 19, 2021 at 12:39 AM Onder Kalaci <onderk@microsoft.com> wrote:\n>\n>> Hi hackers,\n>>\n>>\n>>\n>> I couldn’t find a similar report to this one, so starting a new thread. I\n>> can reproduce this on v14.0 as well as PostgreSQL 12.5 (not tried below\n>> versions).\n>>\n>>\n>>\n>> Steps to reproduce:\n>>\n>>\n>>\n>> CREATE TYPE two_ints as (if1 int, if2 int);\n>>\n>> CREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\n>>\n>> CREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array\n>> domain[]);\n>>\n>> INSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\n>>\n>> UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n>>\n>> server closed the connection unexpectedly\n>>\n>> This probably means the server terminated abnormally\n>>\n>> before or while processing the request.\n>>\n>> The connection to the server was lost. 
Attempting reset: Failed.\n>>\n>> Time: 3.990 ms\n>>\n>\n> Hi,\n> It seems the following change would fix the crash:\n>\n> diff --git a/src/postgres/src/backend/executor/execExprInterp.c\n> b/src/postgres/src/backend/executor/execExprInterp.c\n> index 622cab9d4..50cb4f014 100644\n> --- a/src/postgres/src/backend/executor/execExprInterp.c\n> +++ b/src/postgres/src/backend/executor/execExprInterp.c\n> @@ -3038,6 +3038,9 @@ ExecEvalFieldStoreDeForm(ExprState *state,\n> ExprEvalStep *op, ExprContext *econte\n> HeapTupleHeader tuphdr;\n> HeapTupleData tmptup;\n>\n> + if (DatumGetPointer(tupDatum) == NULL) {\n> + return;\n> + }\n> tuphdr = DatumGetHeapTupleHeader(tupDatum);\n> tmptup.t_len = HeapTupleHeaderGetDatumLength(tuphdr);\n> ItemPointerSetInvalid(&(tmptup.t_self));\n>\n> Here is the result after the update statement:\n>\n> # UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n> UPDATE 1\n> # select * from domain_indirection_test;\n> f1 | f3 | domain_array\n> ----+------+----------------\n> 0 | (1,) | [0:0]={\"(,5)\"}\n> (1 row)\n>\n\nYeah, it fixes the core dump.\n\nHowever, When I test the patch, I find the update will replace all data\nin `domain` if we only update one field.\n\npostgres=# UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\nUPDATE 1\npostgres=# select * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(,5)\"}\n(1 row)\n\npostgres=# UPDATE domain_indirection_test SET domain_array[0].if1 = 10;\nUPDATE 1\npostgres=# select * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+-----------------\n 0 | (1,) | [0:0]={\"(10,)\"}\n(1 row)\n\n\nSo I try to update all field in `domain`, and find only the last one will\nbe stored.\n\npostgres=# UPDATE domain_indirection_test SET domain_array[0].if1 = 10, domain_array[0].if2 = 5;\nUPDATE 1\npostgres=# select * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | 
[0:0]={\"(,5)\"}\n(1 row)\n\npostgres=# UPDATE domain_indirection_test SET domain_array[0].if2 = 10, domain_array[0].if1 = 5;\nUPDATE 1\npostgres=# select * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(5,)\"}\n(1 row)\n\n\nDoes this worked as expected? For me, For me, I think this is a bug.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 19 Oct 2021 18:33:36 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "On Tue, Oct 19, 2021 at 2:12 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, Oct 19, 2021 at 12:39 AM Onder Kalaci <onderk@microsoft.com>\n> wrote:\n>\n>> Hi hackers,\n>>\n>>\n>>\n>> I couldn’t find a similar report to this one, so starting a new thread. I\n>> can reproduce this on v14.0 as well as PostgreSQL 12.5 (not tried below\n>> versions).\n>>\n>>\n>>\n>> Steps to reproduce:\n>>\n>>\n>>\n>> CREATE TYPE two_ints as (if1 int, if2 int);\n>>\n>> CREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\n>>\n>> CREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array\n>> domain[]);\n>>\n>> INSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\n>>\n>> UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n>>\n>> server closed the connection unexpectedly\n>>\n>> This probably means the server terminated abnormally\n>>\n>> before or while processing the request.\n>>\n>> The connection to the server was lost. 
Attempting reset: Failed.\n>>\n>> Time: 3.990 ms\n>>\n>> @:-!>\n>>\n>>\n>>\n>>\n>>\n>> The backtrace on PG 12.5 (As far as I remember, PG14 looks very similar) :\n>>\n>>\n>>\n>> (lldb) bt\n>>\n>> * thread #1, queue = 'com.apple.main-thread', stop reason =\n>> EXC_BAD_ACCESS (code=1, address=0x0)\n>>\n>> * frame #0: 0x00000001036584b7\n>> postgres`pg_detoast_datum(datum=0x0000000000000000) at fmgr.c:1741:6\n>>\n>> frame #1: 0x0000000103439a86\n>> postgres`ExecEvalFieldStoreDeForm(state=<unavailable>,\n>> op=0x00007f9212045df8, econtext=<unavailable>) at execExprInterp.c:3025:12\n>>\n>> frame #2: 0x000000010343834e\n>> postgres`ExecInterpExpr(state=<unavailable>, econtext=<unavailable>,\n>> isnull=0x00007ffeec91fdc7) at execExprInterp.c:1337:4\n>>\n>> frame #3: 0x000000010343742b\n>> postgres`ExecInterpExprStillValid(state=0x00007f921181db18,\n>> econtext=0x00007f921181d670, isNull=<unavailable>) at\n>> execExprInterp.c:1778:9\n>>\n>> frame #4: 0x0000000103444e0d\n>> postgres`ExecEvalExprSwitchContext(state=0x00007f921181db18,\n>> econtext=0x00007f921181d670, isNull=<unavailable>) at executor.h:310:13\n>>\n>> frame #5: 0x0000000103444cf0\n>> postgres`ExecProject(projInfo=0x00007f921181db10) at executor.h:344:9\n>>\n>> frame #6: 0x0000000103444af6\n>> postgres`ExecScan(node=0x00007f921181d560, accessMtd=(postgres`SeqNext at\n>> nodeSeqscan.c:51), recheckMtd=(postgres`SeqRecheck at nodeSeqscan.c:90)) at\n>> execScan.c:239:12\n>>\n>> frame #7: 0x0000000103461d17\n>> postgres`ExecSeqScan(pstate=<unavailable>) at nodeSeqscan.c:112:9\n>>\n>> frame #8: 0x000000010344375c\n>> postgres`ExecProcNodeFirst(node=0x00007f921181d560) at execProcnode.c:445:9\n>>\n>> frame #9: 0x000000010345eefe\n>> postgres`ExecProcNode(node=0x00007f921181d560) at executor.h:242:9\n>>\n>> frame #10: 0x000000010345e74f\n>> postgres`ExecModifyTable(pstate=0x00007f921181d090) at\n>> nodeModifyTable.c:2079:14\n>>\n>> frame #11: 0x000000010344375c\n>> postgres`ExecProcNodeFirst(node=0x00007f921181d090) 
at execProcnode.c:445:9\n>>\n>> frame #12: 0x000000010343f25e\n>> postgres`ExecProcNode(node=0x00007f921181d090) at executor.h:242:9\n>>\n>> frame #13: 0x000000010343d80d\n>> postgres`ExecutePlan(estate=0x00007f921181cd10,\n>> planstate=0x00007f921181d090, use_parallel_mode=<unavailable>,\n>> operation=CMD_UPDATE, sendTuples=false, numberTuples=0,\n>> direction=ForwardScanDirection, dest=0x00007f9211818c38,\n>> execute_once=<unavailable>) at execMain.c:1646:10\n>>\n>> frame #14: 0x000000010343d745\n>> postgres`standard_ExecutorRun(queryDesc=0x00007f921180e310,\n>> direction=ForwardScanDirection, count=0, execute_once=true) at\n>> execMain.c:364:3\n>>\n>> frame #15: 0x000000010343d67c\n>> postgres`ExecutorRun(queryDesc=0x00007f921180e310,\n>> direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at\n>> execMain.c:308:3\n>>\n>> frame #16: 0x00000001035784a8\n>> postgres`ProcessQuery(plan=<unavailable>, sourceText=<unavailable>,\n>> params=<unavailable>, queryEnv=0x0000000000000000, dest=<unavailable>,\n>> completionTag=\"\") at pquery.c:161:2\n>>\n>> frame #17: 0x0000000103577c5e\n>> postgres`PortalRunMulti(portal=0x00007f9215024110, isTopLevel=true,\n>> setHoldSnapshot=false, dest=0x00007f9211818c38, altdest=0x00007f9211818c38,\n>> completionTag=\"\") at pquery.c:0\n>>\n>> frame #18: 0x000000010357763d\n>> postgres`PortalRun(portal=0x00007f9215024110, count=9223372036854775807,\n>> isTopLevel=<unavailable>, run_once=<unavailable>, dest=0x00007f9211818c38,\n>> altdest=0x00007f9211818c38, completionTag=\"\") at pquery.c:796:5\n>>\n>> frame #19: 0x0000000103574f87\n>> postgres`exec_simple_query(query_string=\"UPDATE domain_indirection_test SET\n>> domain_array[0].if2 = 5;\") at postgres.c:1215:10\n>>\n>> frame #20: 0x00000001035746b8\n>> postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>,\n>> dbname=<unavailable>, username=<unavailable>) at postgres.c:0\n>>\n>> frame #21: 0x000000010350d712 postgres`BackendRun(port=<unavailable>)\n>> at 
postmaster.c:4494:2\n>>\n>> frame #22: 0x000000010350cffa\n>> postgres`BackendStartup(port=<unavailable>) at postmaster.c:4177:3\n>>\n>> frame #23: 0x000000010350c59c postgres`ServerLoop at\n>> postmaster.c:1725:7\n>>\n>> frame #24: 0x000000010350ac8d postgres`PostmasterMain(argc=3,\n>> argv=0x00007f9210d049c0) at postmaster.c:1398:11\n>>\n>> frame #25: 0x000000010347fbdd postgres`main(argc=<unavailable>,\n>> argv=<unavailable>) at main.c:228:3\n>>\n>> frame #26: 0x00007fff204e8f3d libdyld.dylib`start + 1\n>>\n>>\n>>\n>>\n>>\n>> Thanks,\n>>\n>> Onder KALACI\n>>\n>> Software Engineer at Microsoft &\n>>\n>> Developing the Citus database extension for PostgreSQL\n>>\n>\n> Hi,\n> It seems the following change would fix the crash:\n>\n> diff --git a/src/postgres/src/backend/executor/execExprInterp.c\n> b/src/postgres/src/backend/executor/execExprInterp.c\n> index 622cab9d4..50cb4f014 100644\n> --- a/src/postgres/src/backend/executor/execExprInterp.c\n> +++ b/src/postgres/src/backend/executor/execExprInterp.c\n> @@ -3038,6 +3038,9 @@ ExecEvalFieldStoreDeForm(ExprState *state,\n> ExprEvalStep *op, ExprContext *econte\n> HeapTupleHeader tuphdr;\n> HeapTupleData tmptup;\n>\n> + if (DatumGetPointer(tupDatum) == NULL) {\n> + return;\n> + }\n> tuphdr = DatumGetHeapTupleHeader(tupDatum);\n> tmptup.t_len = HeapTupleHeaderGetDatumLength(tuphdr);\n> ItemPointerSetInvalid(&(tmptup.t_self));\n>\n> Here is the result after the update statement:\n>\n> # UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n> UPDATE 1\n> # select * from domain_indirection_test;\n> f1 | f3 | domain_array\n> ----+------+----------------\n> 0 | (1,) | [0:0]={\"(,5)\"}\n> (1 row)\n>\n> Cheers\n>\nHi,\nHere is the patch.\nIf the new test should be placed in a different .sql file, please let me\nknow.\n\nThe update issue Japin mentioned seems to be orthogonal to the crash.\n\nCheers", "msg_date": "Tue, 19 Oct 2021 08:17:34 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, 
"msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "\nOn Tue, 19 Oct 2021 at 23:17, Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Tue, Oct 19, 2021 at 2:12 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Tue, Oct 19, 2021 at 12:39 AM Onder Kalaci <onderk@microsoft.com>\n>> wrote:\n>>\n>>> Hi hackers,\n>>>\n>>>\n>>>\n>>> I couldn’t find a similar report to this one, so starting a new thread. I\n>>> can reproduce this on v14.0 as well as PostgreSQL 12.5 (not tried below\n>>> versions).\n>>>\n>>>\n>>>\n>>> Steps to reproduce:\n>>>\n>>>\n>>>\n>>> CREATE TYPE two_ints as (if1 int, if2 int);\n>>>\n>>> CREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\n>>>\n>>> CREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array\n>>> domain[]);\n>>>\n>>> INSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\n>>>\n>>> UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n>>>\n>>> server closed the connection unexpectedly\n>>>\n>>> This probably means the server terminated abnormally\n>>>\n>>> before or while processing the request.\n>>>\n>>> The connection to the server was lost. 
Attempting reset: Failed.\n>>>\n>>> Time: 3.990 ms\n>>>\n>>> @:-!>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> The backtrace on PG 12.5 (As far as I remember, PG14 looks very similar) :\n>>>\n>>>\n>>>\n>>> (lldb) bt\n>>>\n>>> * thread #1, queue = 'com.apple.main-thread', stop reason =\n>>> EXC_BAD_ACCESS (code=1, address=0x0)\n>>>\n>>> * frame #0: 0x00000001036584b7\n>>> postgres`pg_detoast_datum(datum=0x0000000000000000) at fmgr.c:1741:6\n>>>\n>>> frame #1: 0x0000000103439a86\n>>> postgres`ExecEvalFieldStoreDeForm(state=<unavailable>,\n>>> op=0x00007f9212045df8, econtext=<unavailable>) at execExprInterp.c:3025:12\n>>>\n>>> frame #2: 0x000000010343834e\n>>> postgres`ExecInterpExpr(state=<unavailable>, econtext=<unavailable>,\n>>> isnull=0x00007ffeec91fdc7) at execExprInterp.c:1337:4\n>>>\n>>> frame #3: 0x000000010343742b\n>>> postgres`ExecInterpExprStillValid(state=0x00007f921181db18,\n>>> econtext=0x00007f921181d670, isNull=<unavailable>) at\n>>> execExprInterp.c:1778:9\n>>>\n>>> frame #4: 0x0000000103444e0d\n>>> postgres`ExecEvalExprSwitchContext(state=0x00007f921181db18,\n>>> econtext=0x00007f921181d670, isNull=<unavailable>) at executor.h:310:13\n>>>\n>>> frame #5: 0x0000000103444cf0\n>>> postgres`ExecProject(projInfo=0x00007f921181db10) at executor.h:344:9\n>>>\n>>> frame #6: 0x0000000103444af6\n>>> postgres`ExecScan(node=0x00007f921181d560, accessMtd=(postgres`SeqNext at\n>>> nodeSeqscan.c:51), recheckMtd=(postgres`SeqRecheck at nodeSeqscan.c:90)) at\n>>> execScan.c:239:12\n>>>\n>>> frame #7: 0x0000000103461d17\n>>> postgres`ExecSeqScan(pstate=<unavailable>) at nodeSeqscan.c:112:9\n>>>\n>>> frame #8: 0x000000010344375c\n>>> postgres`ExecProcNodeFirst(node=0x00007f921181d560) at execProcnode.c:445:9\n>>>\n>>> frame #9: 0x000000010345eefe\n>>> postgres`ExecProcNode(node=0x00007f921181d560) at executor.h:242:9\n>>>\n>>> frame #10: 0x000000010345e74f\n>>> postgres`ExecModifyTable(pstate=0x00007f921181d090) at\n>>> nodeModifyTable.c:2079:14\n>>>\n>>> frame #11: 
0x000000010344375c\n>>> postgres`ExecProcNodeFirst(node=0x00007f921181d090) at execProcnode.c:445:9\n>>>\n>>> frame #12: 0x000000010343f25e\n>>> postgres`ExecProcNode(node=0x00007f921181d090) at executor.h:242:9\n>>>\n>>> frame #13: 0x000000010343d80d\n>>> postgres`ExecutePlan(estate=0x00007f921181cd10,\n>>> planstate=0x00007f921181d090, use_parallel_mode=<unavailable>,\n>>> operation=CMD_UPDATE, sendTuples=false, numberTuples=0,\n>>> direction=ForwardScanDirection, dest=0x00007f9211818c38,\n>>> execute_once=<unavailable>) at execMain.c:1646:10\n>>>\n>>> frame #14: 0x000000010343d745\n>>> postgres`standard_ExecutorRun(queryDesc=0x00007f921180e310,\n>>> direction=ForwardScanDirection, count=0, execute_once=true) at\n>>> execMain.c:364:3\n>>>\n>>> frame #15: 0x000000010343d67c\n>>> postgres`ExecutorRun(queryDesc=0x00007f921180e310,\n>>> direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at\n>>> execMain.c:308:3\n>>>\n>>> frame #16: 0x00000001035784a8\n>>> postgres`ProcessQuery(plan=<unavailable>, sourceText=<unavailable>,\n>>> params=<unavailable>, queryEnv=0x0000000000000000, dest=<unavailable>,\n>>> completionTag=\"\") at pquery.c:161:2\n>>>\n>>> frame #17: 0x0000000103577c5e\n>>> postgres`PortalRunMulti(portal=0x00007f9215024110, isTopLevel=true,\n>>> setHoldSnapshot=false, dest=0x00007f9211818c38, altdest=0x00007f9211818c38,\n>>> completionTag=\"\") at pquery.c:0\n>>>\n>>> frame #18: 0x000000010357763d\n>>> postgres`PortalRun(portal=0x00007f9215024110, count=9223372036854775807,\n>>> isTopLevel=<unavailable>, run_once=<unavailable>, dest=0x00007f9211818c38,\n>>> altdest=0x00007f9211818c38, completionTag=\"\") at pquery.c:796:5\n>>>\n>>> frame #19: 0x0000000103574f87\n>>> postgres`exec_simple_query(query_string=\"UPDATE domain_indirection_test SET\n>>> domain_array[0].if2 = 5;\") at postgres.c:1215:10\n>>>\n>>> frame #20: 0x00000001035746b8\n>>> postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>,\n>>> dbname=<unavailable>, 
username=<unavailable>) at postgres.c:0\n>>>\n>>> frame #21: 0x000000010350d712 postgres`BackendRun(port=<unavailable>)\n>>> at postmaster.c:4494:2\n>>>\n>>> frame #22: 0x000000010350cffa\n>>> postgres`BackendStartup(port=<unavailable>) at postmaster.c:4177:3\n>>>\n>>> frame #23: 0x000000010350c59c postgres`ServerLoop at\n>>> postmaster.c:1725:7\n>>>\n>>> frame #24: 0x000000010350ac8d postgres`PostmasterMain(argc=3,\n>>> argv=0x00007f9210d049c0) at postmaster.c:1398:11\n>>>\n>>> frame #25: 0x000000010347fbdd postgres`main(argc=<unavailable>,\n>>> argv=<unavailable>) at main.c:228:3\n>>>\n>>> frame #26: 0x00007fff204e8f3d libdyld.dylib`start + 1\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> Thanks,\n>>>\n>>> Onder KALACI\n>>>\n>>> Software Engineer at Microsoft &\n>>>\n>>> Developing the Citus database extension for PostgreSQL\n>>>\n>>\n>> Hi,\n>> It seems the following change would fix the crash:\n>>\n>> diff --git a/src/postgres/src/backend/executor/execExprInterp.c\n>> b/src/postgres/src/backend/executor/execExprInterp.c\n>> index 622cab9d4..50cb4f014 100644\n>> --- a/src/postgres/src/backend/executor/execExprInterp.c\n>> +++ b/src/postgres/src/backend/executor/execExprInterp.c\n>> @@ -3038,6 +3038,9 @@ ExecEvalFieldStoreDeForm(ExprState *state,\n>> ExprEvalStep *op, ExprContext *econte\n>> HeapTupleHeader tuphdr;\n>> HeapTupleData tmptup;\n>>\n>> + if (DatumGetPointer(tupDatum) == NULL) {\n>> + return;\n>> + }\n>> tuphdr = DatumGetHeapTupleHeader(tupDatum);\n>> tmptup.t_len = HeapTupleHeaderGetDatumLength(tuphdr);\n>> ItemPointerSetInvalid(&(tmptup.t_self));\n>>\n>> Here is the result after the update statement:\n>>\n>> # UPDATE domain_indirection_test SET domain_array[0].if2 = 5;\n>> UPDATE 1\n>> # select * from domain_indirection_test;\n>> f1 | f3 | domain_array\n>> ----+------+----------------\n>> 0 | (1,) | [0:0]={\"(,5)\"}\n>> (1 row)\n>>\n>> Cheers\n>>\n> Hi,\n> Here is the patch.\n\nThanks for your updated the patch. 
A minor code style point: we can remove the\nbraces when there is only one statement, which is more consistent with the\ncodebase. Others look good to me.\n\n> If the new test should be placed in a different .sql file, please let me\n> know.\n>\n\n> The update issue Japin mentioned seems to be orthogonal to the crash.\n>\n\nI start a new thread to discuss it [1].\n\n[1] https://www.postgresql.org/message-id/MEYP282MB1669BED5CEFE711E00C7421EB6BD9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 20 Oct 2021 00:38:31 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "[ please do not quote the entire thread when replying ]\n\nZhihong Yu <zyu@yugabyte.com> writes:\n> Here is the patch.\n\nThis patch seems quite misguided to me. The proximate cause of\nthe crash is that we're arriving at ExecEvalFieldStoreDeForm with\n*op->resnull and *op->resvalue both zero, which is a completely\ninvalid situation for a pass-by-reference datatype; so something\nupstream of this messed up. 
Even if there were an argument for\nacting as though that were a valid NULL value, this patch fails to\ndo so; that'd require setting all the output fieldstore.nulls[]\nentries to true, which you didn't.\n\nMoreover, experiment quickly shows that the problem only shows up with\nan array of domain over composite, not an array of plain composite.\nThe proposed patch doesn't seem to have anything to do with that\nobservation.\n\nAfter some digging around, I see where the issue actually is:\nthe expression tree we're dealing with looks like\n\n\t {SUBSCRIPTINGREF \n\t :refexpr \n\t {VAR \n\t }\n\t :refassgnexpr \n\t {COERCETODOMAIN \n\t :arg \n\t {FIELDSTORE \n\t :arg \n\t {CASETESTEXPR \n\t }\n\t }\n\t }\n\t }\n\nThe array element we intend to replace has to be passed down to\nthe CaseTestExpr, but that isn't happening. That's because\nisAssignmentIndirectionExpr fails to recognize a tree like\nthis, so ExecInitSubscriptingRef doesn't realize it needs to\narrange for that.\n\nI believe the attached is a correct fix.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Oct 2021 13:04:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" }, { "msg_contents": "On Tue, Oct 19, 2021 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> [ please do not quote the entire thread when replying ]\n>\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > Here is the patch.\n>\n> This patch seems quite misguided to me. The proximate cause of\n> the crash is that we're arriving at ExecEvalFieldStoreDeForm with\n> *op->resnull and *op->resvalue both zero, which is a completely\n> invalid situation for a pass-by-reference datatype; so something\n> upstream of this messed up. 
Even if there were an argument for\n> acting as though that were a valid NULL value, this patch fails to\n> do so; that'd require setting all the output fieldstore.nulls[]\n> entries to true, which you didn't.\n>\n> Moreover, experiment quickly shows that the problem only shows up with\n> an array of domain over composite, not an array of plain composite.\n> The proposed patch doesn't seem to have anything to do with that\n> observation.\n>\n> After some digging around, I see where the issue actually is:\n> the expression tree we're dealing with looks like\n>\n> {SUBSCRIPTINGREF\n> :refexpr\n> {VAR\n> }\n> :refassgnexpr\n> {COERCETODOMAIN\n> :arg\n> {FIELDSTORE\n> :arg\n> {CASETESTEXPR\n> }\n> }\n> }\n> }\n>\n> The array element we intend to replace has to be passed down to\n> the CaseTestExpr, but that isn't happening. That's because\n> isAssignmentIndirectionExpr fails to recognize a tree like\n> this, so ExecInitSubscriptingRef doesn't realize it needs to\n> arrange for that.\n>\n> I believe the attached is a correct fix.\n>\n> regards, tom lane\n>\n> Hi,\nTom's patch fixes the update of individual field inside the domain type as\nwell.\n\nTom's patch looks good to me.", "msg_date": "Tue, 19 Oct 2021 10:29:40 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE on Domain Array that is based on a composite key crashes" } ]
[ { "msg_contents": "Hi,\n\nI can (almost) consistently reproduce $subject by executing the\nattached shell script, which I was using while constructing a test\ncase for another thread.\n\nThe backtrace on the assert failure is this:\n\n(gdb) bt\n#0 0x00007fce0b018387 in raise () from /lib64/libc.so.6\n#1 0x00007fce0b019a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000b0bdfc in ExceptionalCondition (conditionName=0xcc0828\n\"pgstat_is_initialized && !pgstat_is_shutdown\",\n errorType=0xcc01b0 \"FailedAssertion\", fileName=0xcbfe12\n\"pgstat.c\", lineNumber=4852) at assert.c:69\n#3 0x00000000008ac51e in pgstat_assert_is_up () at pgstat.c:4852\n#4 0x00000000008a9623 in pgstat_send (msg=0x7ffd16db3240, len=144) at\npgstat.c:3075\n#5 0x00000000008a7cbf in pgstat_report_replslot_drop (\n slotname=0x7fce02dc6720 \"pg_16399_sync_16389_7020757232905881693\")\nat pgstat.c:1869\n#6 0x00000000008fb06b in ReplicationSlotDropPtr (slot=0x7fce02dc6708)\nat slot.c:696\n#7 0x00000000008fadbc in ReplicationSlotDropAcquired () at slot.c:585\n#8 0x00000000008faa8a in ReplicationSlotRelease () at slot.c:482\n#9 0x00000000009697c2 in ProcKill (code=1, arg=0) at proc.c:852\n#10 0x0000000000940878 in shmem_exit (code=1) at ipc.c:272\n#11 0x00000000009406a5 in proc_exit_prepare (code=1) at ipc.c:194\n#12 0x00000000009405fc in proc_exit (code=1) at ipc.c:107\n#13 0x0000000000b0c796 in errfinish (filename=0xce8525 \"postgres.c\",\nlineno=3193,\n funcname=0xcea370 <__func__.24551> \"ProcessInterrupts\") at elog.c:666\n#14 0x0000000000976ce4 in ProcessInterrupts () at postgres.c:3191\n#15 0x0000000000908023 in WalSndWaitForWal (loc=16785408) at walsender.c:1406\n#16 0x0000000000906f58 in logical_read_xlog_page (state=0x191d9e0,\ntargetPagePtr=16777216, reqLen=8192,\n targetRecPtr=22502048, cur_page=0x1929150 \"\") at walsender.c:821\n#17 0x000000000059f450 in ReadPageInternal (state=0x191d9e0,\npageptr=22495232, reqLen=6840) at xlogreader.c:649\n#18 0x000000000059ec2e in XLogReadRecord 
(state=0x191d9e0,\nerrormsg=0x7ffd16db3e68) at xlogreader.c:337\n#19 0x00000000008d48fe in DecodingContextFindStartpoint\n(ctx=0x191d620) at logical.c:606\n#20 0x000000000090769a in CreateReplicationSlot (cmd=0x191c000) at\nwalsender.c:1038\n#21 0x000000000090851f in exec_replication_command (\n cmd_string=0x185f470 \"CREATE_REPLICATION_SLOT\n\\\"pg_16399_sync_16389_7020757232905881693\\\" LOGICAL pgoutput (SNAPSHOT\n'use')\") at walsender.c:1636\n#22 0x00000000009783d1 in PostgresMain (dbname=0x18896e8 \"postgres\",\nusername=0x18896c8 \"amit\") at postgres.c:4493\n#23 0x00000000008b3e1d in BackendRun (port=0x1880fb0) at postmaster.c:4560\n#24 0x00000000008b37bd in BackendStartup (port=0x1880fb0) at postmaster.c:4288\n#25 0x00000000008afd03 in ServerLoop () at postmaster.c:1801\n#26 0x00000000008af5da in PostmasterMain (argc=5, argv=0x1859ba0) at\npostmaster.c:1473\n#27 0x00000000007b074b in main (argc=5, argv=0x1859ba0) at main.c:198\n\ncc'ing Andres and Horiguchi-san as pgstat_assert_is_up() is added in\nthe recent commit ee3f8d3d3ae, though not sure if the problem is that\ncommit's fault. 
I wonder if it may be the fault of the adjacent commit\nfb2c5028e635.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Oct 2021 22:14:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "pgstat_assert_is_up() can fail in walsender" }, { "msg_contents": "\n\nOn 2021/10/19 22:14, Amit Langote wrote:\n> Hi,\n> \n> I can (almost) consistently reproduce $subject by executing the\n> attached shell script, which I was using while constructing a test\n> case for another thread.\n\nThis seems the same issue that was reported at the thread [1].\n\n[1]\nhttps://www.postgresql.org/message-id/OS0PR01MB571621B206EEB17D8AB171F094B59@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 22 Oct 2021 02:14:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgstat_assert_is_up() can fail in walsender" }, { "msg_contents": "On Fri, Oct 22, 2021 at 2:14 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/19 22:14, Amit Langote wrote:\n> > Hi,\n> >\n> > I can (almost) consistently reproduce $subject by executing the\n> > attached shell script, which I was using while constructing a test\n> > case for another thread.\n>\n> This seems the same issue that was reported at the thread [1].\n>\n> [1]\n> https://www.postgresql.org/message-id/OS0PR01MB571621B206EEB17D8AB171F094B59@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nAh, indeed. 
Thank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Oct 2021 11:35:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pgstat_assert_is_up() can fail in walsender" }, { "msg_contents": "Hi,\n\nOn 2021-10-22 02:14:40 +0900, Fujii Masao wrote:\n> On 2021/10/19 22:14, Amit Langote wrote:\n> > Hi,\n> > \n> > I can (almost) consistently reproduce $subject by executing the\n> > attached shell script, which I was using while constructing a test\n> > case for another thread.\n> \n> This seems the same issue that was reported at the thread [1].\n> \n> [1]\n> https://www.postgresql.org/message-id/OS0PR01MB571621B206EEB17D8AB171F094B59@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nSorry for not working on that sooner, I got distracted. Experimenting with a\nsomewhat more fundamental fix for this. I'll not finish that today, but I'll\ntry to have something out monday.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Oct 2021 08:27:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgstat_assert_is_up() can fail in walsender" } ]
[ { "msg_contents": "\nHi, hackers\n\nWhile reading the patch in [1], I found there is an unexpected behavior when\nupdating a domain array that is based on a composite.\n\nSteps to reproduce:\n\n1)\nCREATE TYPE two_ints as (if1 int, if2 int);\nCREATE DOMAIN domain AS two_ints CHECK ((VALUE).if1 > 0);\nCREATE TABLE domain_indirection_test (f1 int, f3 domain, domain_array domain[]);\nINSERT INTO domain_indirection_test (f1,f3.if1) VALUES (0, 1);\n\n2) The following test is based on the patch in [1].\nUPDATE domain_indirection_test SET domain_array[0].if2 = 5;\nselect * from domain_indirection_test;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(,5)\"}\n\n3)\nUPDATE domain_indirection_test SET domain_array[0].if1 = 10;\nselect * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+-----------------\n 0 | (1,) | [0:0]={\"(10,)\"}\n(1 row)\n\n4)\nUPDATE domain_indirection_test SET domain_array[0].if1 = 10, domain_array[0].if2 = 5;\nselect * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(,5)\"}\n(1 row)\n\n5) \nUPDATE domain_indirection_test SET domain_array[0].if2 = 10, domain_array[0].if1 = 5;\nselect * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+----------------\n 0 | (1,) | [0:0]={\"(5,)\"}\n(1 row)\n\n6) Works as expected.\nUPDATE domain_indirection_test SET domain_array[0] = (10, 5);\nselect * from domain_indirection_test ;\n f1 | f3 | domain_array\n----+------+------------------\n 0 | (1,) | [0:0]={\"(10,5)\"}\n(1 row)\n\n[1] https://www.postgresql.org/message-id/PH0PR21MB132823A46AA36F0685B7A29AD8BD9%40PH0PR21MB1328.namprd21.prod.outlook.com\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 20 Oct 2021 00:34:18 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Unexpected behavior of updating domain array that is based on a\n composite" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI run 'make check-world' a lot, and I typically use parallelism and\r\nredirect stdout to /dev/null as suggested in the docs [0]. This seems\r\nto eliminate all of the test chatter except for this one message:\r\n\r\n NOTICE: database \"regression\" does not exist, skipping\r\n\r\nThis is emitted by the installcheck-parallel run in the pg_upgrade\r\ntest. Sending stderr to stdout clears it up, but presumably we don't\r\nwant to miss other errors, too. We could also just create the\r\ndatabase it is trying to drop to silence the NOTICE. This is what the\r\nattached patch does.\r\n\r\nThis is admittedly just a pet peeve, but maybe it is bothering others,\r\ntoo.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/regress-run.html", "msg_date": "Tue, 19 Oct 2021 17:41:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "pg_upgrade test chatter" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I run 'make check-world' a lot, and I typically use parallelism and\n> redirect stdout to /dev/null as suggested in the docs [0]. This seems\n> to eliminate all of the test chatter except for this one message:\n\n> NOTICE: database \"regression\" does not exist, skipping\n\nYeah, that's bugged me too ever since we got to the point where that\nwas the only output ...\n\n> We could also just create the\n> database it is trying to drop to silence the NOTICE.\n\n... 
but that seems like a mighty expensive way to fix it.\ncreatedb is pretty slow on older/slower buildfarm animals.\n\nMaybe we could run the stderr output through \"grep -v\", or the like?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 14:00:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test chatter" }, { "msg_contents": "I wrote:\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>> I run 'make check-world' a lot, and I typically use parallelism and\n>> redirect stdout to /dev/null as suggested in the docs [0]. This seems\n>> to eliminate all of the test chatter except for this one message:\n>> NOTICE: database \"regression\" does not exist, skipping\n\n> Yeah, that's bugged me too ever since we got to the point where that\n> was the only output ...\n\nActually ... why shouldn't we suppress that by running the command\nwith client_min_messages = warning? This would have to be a change\nto pg_regress, but I'm having a hard time thinking of cases where\nquieting that message would be a problem.\n\nI tried doing this as a one-liner change in pg_regress's\ndrop_database_if_exists(), but the idea fell over pretty\nquickly, because what underlies that is a \"psql -c\" call:\n\n$ psql -c 'set client_min_messages = warning; drop database if exists foo'\nERROR: DROP DATABASE cannot run inside a transaction block\n\nWe could dodge that, with modern versions of psql, by issuing\ntwo -c switches. So after a bit of hacking I have the attached\nPOC patch. It's incomplete because now that we have this\ninfrastructure we should change other parts of pg_regress\nto not launch psql N times where one would do. 
But it's enough\nto get through check-world without any chatter.\n\nAny objections to polishing this up and pushing it?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Oct 2021 15:36:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test chatter" }, { "msg_contents": "On 10/19/21, 12:37 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Actually ... why shouldn't we suppress that by running the command\r\n> with client_min_messages = warning? This would have to be a change\r\n> to pg_regress, but I'm having a hard time thinking of cases where\r\n> quieting that message would be a problem.\r\n\r\nI was just looking into something like this.\r\n\r\n> We could dodge that, with modern versions of psql, by issuing\r\n> two -c switches. So after a bit of hacking I have the attached\r\n> POC patch. It's incomplete because now that we have this\r\n> infrastructure we should change other parts of pg_regress\r\n> to not launch psql N times where one would do. But it's enough\r\n> to get through check-world without any chatter.\r\n> \r\n> Any objections to polishing this up and pushing it?\r\n\r\nNo objections here. 
This seems like an overall improvement, and I\r\nconfirmed that it clears up the NOTICE from the pg_upgrade test.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 20:20:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade test chatter" }, { "msg_contents": "On 2021-Oct-19, Tom Lane wrote:\n\n> I tried doing this as a one-liner change in pg_regress's\n> drop_database_if_exists(), but the idea fell over pretty\n> quickly, because what underlies that is a \"psql -c\" call:\n> \n> $ psql -c 'set client_min_messages = warning; drop database if exists foo'\n> ERROR: DROP DATABASE cannot run inside a transaction block\n> \n> We could dodge that, with modern versions of psql, by issuing\n> two -c switches.\n\nIsn't it easier to pass client_min_messages via PGOPTIONS?\n\nPGOPTIONS=\"-c client_min_messages=warning\" psql -c \"drop database if exists foo\"\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 19 Oct 2021 22:55:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test chatter" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-19, Tom Lane wrote:\n>> We could dodge that, with modern versions of psql, by issuing\n>> two -c switches.\n\n> Isn't it easier to pass client_min_messages via PGOPTIONS?\n\n> PGOPTIONS=\"-c client_min_messages=warning\" psql -c \"drop database if exists foo\"\n\nYeah, my original thought had been to hack this at the test level.\nHowever, I felt like it'd be worth adding this code because we could\napply it elsewhere in pg_regress.c to save several psql sessions\n(and hence backend starts) per regression DB creation. 
That's not a\nhuge win, but it'd add up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 22:08:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test chatter" }, { "msg_contents": "On 2021-Oct-19, Tom Lane wrote:\n\n> Yeah, my original thought had been to hack this at the test level.\n> However, I felt like it'd be worth adding this code because we could\n> apply it elsewhere in pg_regress.c to save several psql sessions\n> (and hence backend starts) per regression DB creation. That's not a\n> huge win, but it'd add up.\n\nAh, yeah, that sounds like it can be significant under valgrind and\nsuch, so +1.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Tue, 19 Oct 2021 23:37:28 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test chatter" } ]
[ { "msg_contents": "Hi,\n\nThis is a followup to\nhttp://postgr.es/m/CA+TgmoZ5A26C6OxKApafyuy_sx0VG6VXdD_Q6aSEzsvrPHDwzw@mail.gmail.com.\nI'm suspicious of the following code in CreateReplicationSlot:\n\n /* setup state for WalSndSegmentOpen */\n sendTimeLineIsHistoric = false;\n sendTimeLine = ThisTimeLineID;\n\nThe first thing that's odd about this is that if this is physical\nreplication, it's apparently dead code, because AFAICT sendTimeLine\nwill not be used for anything in that case. If it's logical\nreplication, there's a case where this will set sendTimeLine to 0,\nwhich seems very strange, since that is not a valid timeline. To\nreproduce, do this:\n\n1. Create a new primary database with tli=1. Create a standby.\n\n2. On the standby, fire up a database-connected replication, something\nlike this:\npsql 'port=5433 replication=database dbname=rhaas'\nDon't execute any commands yet!\n\n3. From some other backend, promote the standby:\nselect pg_promote();\nIt now gets a TLI of 2.\n\n4. Try to create a logical replication slot perhaps using something like this:\nCREATE_REPLICATION_SLOT \"foo\" LOGICAL \"test_decoding\" ( SNAPSHOT 'nothing');\n\nIf the system had been in normal running when you started the session,\nit would be initialized, because InitPostgres() calls\nRecoveryInProgress(). But since that only initializes it during normal\nrunning, and not during recovery, it doesn't help in this scenario.\nAnd if on the other hand you had not promoted the standby as in step\n3, then we'd still set sendTimeLine = 0 here, but then we'd almost\nimmediately call CheckLogicalDecodingRequirements() and error out\nwithout doing anything with the value. Here, however, we continue on.\n\nBut I don't know if it matters. We call CreateInitDecodingContext()\nwith sendTimeLine and ThisTimeLineID still zero; it doesn't call any\ncallbacks. 
Then we call DecodingContextFindStartpoint() with\nsendTimeLine still 0 and the first callback that gets invoked is\nlogical_read_xlog_page. At this point sendTimeLine = 0 and\nThisTimeLineID = 0. That calls XLogReadDetermineTimeline() which\nresets ThisTimeLineID to the correct value of 2, but when we get back\nto logical_read_xlog_page, we still manage to call WALRead with a\ntimeline of 0 because state->seg.ws_tli is still 0. And WALRead\neventually does call WalSndOpen, which unconditionally propagates\nsendTimeLine into the TLI pointer that is passed to it. So now\nstate->seg.ws_tli also ends up being 2. So I guess maybe nothing bad\nhappens? But it sure seems strange that the code would apparently work\njust as well as it does today with the following patch:\n\ndiff --git a/src/backend/replication/walsender.c\nb/src/backend/replication/walsender.c\nindex b811a5c0ef..44fd598519 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -945,7 +945,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)\n\n /* setup state for WalSndSegmentOpen */\n sendTimeLineIsHistoric = false;\n- sendTimeLine = ThisTimeLineID;\n+ sendTimeLine = rand() % 10;\n\n if (cmd->kind == REPLICATION_KIND_PHYSICAL)\n {\n\nAnd in fact, that passes make check-world. 
:-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:13:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "ThisTimeLineID can be used uninitialized" }, { "msg_contents": "Hi,\n\nOn 2021-10-19 15:13:04 -0400, Robert Haas wrote:\n> This is a followup to\n> http://postgr.es/m/CA+TgmoZ5A26C6OxKApafyuy_sx0VG6VXdD_Q6aSEzsvrPHDwzw@mail.gmail.com.\n> I'm suspicious of the following code in CreateReplicationSlot:\n> \n> /* setup state for WalSndSegmentOpen */\n> sendTimeLineIsHistoric = false;\n> sendTimeLine = ThisTimeLineID;\n> \n> The first thing that's odd about this is that if this is physical\n> replication, it's apparently dead code, because AFAICT sendTimeLine\n> will not be used for anything in that case.\n\nIt's quite confusing. It's *really* not helped by physical replication using\nbut not really using an xlogreader to keep state. Which presumably isn't\nactually used during a physical CreateReplicationSlot(), but is referenced by\na comment :/\n\n\n> But I don't know if it matters. We call CreateInitDecodingContext()\n> with sendTimeLine and ThisTimeLineID still zero; it doesn't call any\n> callbacks. Then we call DecodingContextFindStartpoint() with\n> sendTimeLine still 0 and the first callback that gets invoked is\n> logical_read_xlog_page. At this point sendTimeLine = 0 and\n> ThisTimeLineID = 0. That calls XLogReadDetermineTimeline() which\n> resets ThisTimeLineID to the correct value of 2, but when we get back\n> to logical_read_xlog_page, we still manage to call WALRead with a\n> timeline of 0 because state->seg.ws_tli is still 0. And when WALRead\n> eventually does call WalSndOpen, which unconditionally propagates\n> sendTimeLine into the TLI pointer that is passed to it. So now\n> state->seg_ws_tli also ends up being 2. So I guess maybe nothing bad\n> happens? 
But it sure seems strange that the code would apparently work\n> just as well as it does today with the following patch:\n> \n> diff --git a/src/backend/replication/walsender.c\n> b/src/backend/replication/walsender.c\n> index b811a5c0ef..44fd598519 100644\n> --- a/src/backend/replication/walsender.c\n> +++ b/src/backend/replication/walsender.c\n> @@ -945,7 +945,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)\n> \n> /* setup state for WalSndSegmentOpen */\n> sendTimeLineIsHistoric = false;\n> - sendTimeLine = ThisTimeLineID;\n> + sendTimeLine = rand() % 10;\n> \n> if (cmd->kind == REPLICATION_KIND_PHYSICAL)\n> {\n\nIstm we should introduce an InvalidTimeLineID, and explicitly initialize\nsendTimeLine to that, and assert that it's valid / invalid in a bunch of\nplaces?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Oct 2021 13:43:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On 2021-Oct-19, Andres Freund wrote:\n\n> Hi,\n> \n> On 2021-10-19 15:13:04 -0400, Robert Haas wrote:\n> > This is a followup to\n> > http://postgr.es/m/CA+TgmoZ5A26C6OxKApafyuy_sx0VG6VXdD_Q6aSEzsvrPHDwzw@mail.gmail.com.\n> > I'm suspicious of the following code in CreateReplicationSlot:\n> > \n> > /* setup state for WalSndSegmentOpen */\n> > sendTimeLineIsHistoric = false;\n> > sendTimeLine = ThisTimeLineID;\n> > \n> > The first thing that's odd about this is that if this is physical\n> > replication, it's apparently dead code, because AFAICT sendTimeLine\n> > will not be used for anything in that case.\n> \n> It's quite confusing. It's *really* not helped by physical replication using\n> but not really using an xlogreader to keep state. Which presumably isn't\n> actually used during a physical CreateReplicationSlot(), but is referenced by\n> a comment :/\n\nYeah, that's not very nice. 
My preference would be to change physical\nreplication to use xlogreader in the regular way, and avoid confounding\nbackdoors like its current approach.\n\n> > But it sure seems strange that the code would apparently work just\n> > as well as it does today with the following patch:\n> > \n> > diff --git a/src/backend/replication/walsender.c\n> > b/src/backend/replication/walsender.c\n> > index b811a5c0ef..44fd598519 100644\n> > --- a/src/backend/replication/walsender.c\n> > +++ b/src/backend/replication/walsender.c\n> > @@ -945,7 +945,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)\n> > \n> > /* setup state for WalSndSegmentOpen */\n> > sendTimeLineIsHistoric = false;\n> > - sendTimeLine = ThisTimeLineID;\n> > + sendTimeLine = rand() % 10;\n\nHah. Yeah, when you can do things like that and the tests don't break,\nthat indicates a problem in the tests.\n\n> Istm we should introduce an InvalidTimeLineID, and explicitly initialize\n> sendTimeLine to that, and assert that it's valid / invalid in a bunch of\n> places?\n\nThat's not a bad idea; it'll help discover bogus code. Obviously, some\nadditional tests wouldn't harm -- we have a lot more coverage now than\nin embarrasingly recent past, but it can still be improved.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n\n\n", "msg_date": "Tue, 19 Oct 2021 20:30:30 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On Tue, Oct 19, 2021 at 7:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hah. Yeah, when you can do things like that and the tests don't break,\n> that indicates a problem in the tests.\n\nI *think* the problem is actually in the code, not the tests. 
In other\nwords, from what I can tell, we copy the bogus timeline value (0, or a\nrandom number) into several places, but then eventually overwrite all\ncopies of that value with a correct value before using it for\nanything. So in other words I think that the comment saying that this\ncode is initializing values that WalSndSegmentOpen is going to need is\njust wrong. I don't completely understand why it's wrong, but I think\nit IS wrong.\n\n> > Istm we should introduce an InvalidTimeLineID, and explicitly initialize\n> > sendTimeLine to that, and assert that it's valid / invalid in a bunch of\n> > places?\n>\n> That's not a bad idea; it'll help discover bogus code. Obviously, some\n> additional tests wouldn't harm -- we have a lot more coverage now than\n> in embarrasingly recent past, but it can still be improved.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Oct 2021 09:08:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On 2021-Oct-20, Robert Haas wrote:\n\n> On Tue, Oct 19, 2021 at 7:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Hah. Yeah, when you can do things like that and the tests don't break,\n> > that indicates a problem in the tests.\n> \n> I *think* the problem is actually in the code, not the tests. In other\n> words, from what I can tell, we copy the bogus timeline value (0, or a\n> random number) into several places, but then eventually overwrite all\n> copies of that value with a correct value before using it for\n> anything. So in other words I think that the comment saying that this\n> code is initializing values that WalSndSegmentOpen is going to need is\n> just wrong. I don't completely understand why it's wrong, but I think\n> it IS wrong.\n\nOh, I'm not saying that there isn't a problem in the code. 
I'm just\nsaying that there is *also* a problem (an omission) in the tests.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. Bradbury)\n\n\n", "msg_date": "Wed, 20 Oct 2021 10:40:56 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On Wed, Oct 20, 2021 at 09:08:57AM -0400, Robert Haas wrote:\n> On Tue, Oct 19, 2021 at 7:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> Istm we should introduce an InvalidTimeLineID, and explicitly initialize\n>>> sendTimeLine to that, and assert that it's valid / invalid in a bunch of\n>>> places?\n>>\n>> That's not a bad idea; it'll help discover bogus code. Obviously, some\n>> additional tests wouldn't harm -- we have a lot more coverage now than\n>> in embarrasingly recent past, but it can still be improved.\n> \n> +1.\n\nThere is already an assumption in walsender.c where an invalid\ntimeline is 0, by the way? See sendTimeLineNextTLI and sendTimeLine.\nAsserting here and there looks like a good thing to do for code paths\nwhere the timeline should, or should not, be set.\n--\nMichael", "msg_date": "Thu, 21 Oct 2021 14:41:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On 2021-Oct-21, Michael Paquier wrote:\n\n> There is already an assumption in walsender.c where an invalid\n> timeline is 0, by the way? 
See sendTimeLineNextTLI and sendTimeLine.\n> Asserting here and there looks like a good thing to do for code paths\n> where the timeline should, or should not, be set.\n\nSure, but as Robert suggested, let's make that value a known and obvious\nconstant InvalidTimeLineId rather than magic value 0.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n", "msg_date": "Thu, 21 Oct 2021 10:55:26 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On Tue, Oct 19, 2021 at 4:44 PM Andres Freund <andres@anarazel.de> wrote:\n> It's quite confusing. It's *really* not helped by physical replication using\n> but not really using an xlogreader to keep state. Which presumably isn't\n> actually used during a physical CreateReplicationSlot(), but is referenced by\n> a comment :/\n\nI can't figure out what you're referring to here. I can't find\nCreateReplicationSlot() using an xlogreader in the logical replication\ncase, or a comment that refers to it doing so.\n\n> Istm we should introduce an InvalidTimeLineID, and explicitly initialize\n> sendTimeLine to that, and assert that it's valid / invalid in a bunch of\n> places?\n\nI think the correct fix for this particular problem is just to delete\nthe initialization, as in the attached patch. I spent a long time\nstudying this today and eventually convinced myself that there's just\nno way these initializations can ever do anything (details in proposed\ncommit message). 
While it is important that we do not access the\nglobal variable when it's uninitialized, here there is no reason to\naccess it in the first place.\n\nRegarding the more general problem, I think we should consider (1)\nreducing the number of places that access ThisTimeLineID directly,\npreferring to add TimeLineID arguments to functions and pass the\nrelevant timeline value around explicitly and then (2) changing all of\nthe remaining accesses to ThisTimeLineID to function calls instead,\ne.g. by inventing a function GetCurrentTimeLineID(). Once we do that,\nI think this kind of problem just goes away. On the one hand,\nGetCurrentTimeLineID() could assert that the value is valid before\nreturning it, and then we would have centralized checking that we're\nnot using a bogus value. But, there's no reason to stop there. If all\nthe callers are using this function rather than accessing the global\nvariable directly, then the function can just initialize the value\nfrom shared memory as required! Or it can forget about having a local\ncopy stored in a global variable and just always read the current\nvalue from shared memory! With a little thought, I think this approach\ncan avoid this sort of unfortunate coding:\n\n if (!RecoveryInProgress())\n read_upto = GetFlushRecPtr();\n else\n read_upto = GetXLogReplayRecPtr(&ThisTimeLineID);\n tli = ThisTimeLineID;\n\nWhat is going on here? Well, if we're not still in recovery, then the\ncall to RecoveryInProgress() will initialize ThisTimeLineID as a side\neffect, and after that it can't change. If we are still in recovery\nthen GetXLogReplayRecPtr() will update the global variable as a side\neffect on every trip through the function. Either way, read_upto is\nthe end of WAL in the way that's relevant to whichever operating mode\nis current. But imagine that we could code this in a way that didn't\ndepend on global variables getting updated as a side effect. 
For\nexample:\n\n if (!RecoveryInProgress())\n read_upto = GetFlushRecPtr();\n else\n read_upto = GetXLogReplayRecPtr();\n currTLI = GetCurrentTimeLineID();\n\nOr perhaps:\n\n if (!RecoveryInProgress())\n read_upto = GetFlushRecPtr(&currTLI);\n else\n read_upto = GetXLogReplayRecPtr(&currTLI);\n\nMy point here is that the current idiom only makes sense if you\nrealize that RecoveryInProgress() has a side effect of updating\nThisTimeLineID, and on the other hand that the only reason we're using\nThisTimeLineID instead of a local variable here is that that's what\nRecoveryInProgress() updates. It's just two mutually-reinforcing bad\ndecisions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Oct 2021 15:29:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" }, { "msg_contents": "On Thu, Oct 21, 2021 at 3:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think the correct fix for this particular problem is just to delete\n> the initialization, as in the attached patch. I spent a long time\n> studying this today and eventually convinced myself that there's just\n> no way these initializations can ever do anything (details in proposed\n> commit message). While it is important that we do not access the\n> global variable when it's uninitialized, here there is no reason to\n> access it in the first place.\n\nI have committed this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:03:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ThisTimeLineID can be used uninitialized" } ]
[ { "msg_contents": "\nThe problem I'm writing about (h/t Simon Riggs for finding it) is\nillustrated by the following snippet of java:\n\n public static void runtest(Connection conn) throws Exception {\n Statement stmt = conn.createStatement();\n stmt.setFetchSize(10);\n ResultSet rs = stmt.executeQuery(\"select oid, relfileid, relname from pg_class\");\n int count = 100;\n while (rs.next() && count-- > 0) {\n System.out.print(\".\");\n }\n rs.close();\n stmt.commit();\n stmt.close();\n System.out.println(\"\");\n }\n\nWhen called, this prints out a line with 100 dots showing 100 lines were\nfetched, but pg_stat_statements shows this:\n\n query | select oid, relfilenode, relname from pg_class\n calls | 1\n rows  | 10\n\n\nsuggesting only 10 rows were returned. It appears that only the first\n\"EXECUTE 10\" command against the portal is counted. At the very least\nthis is a POLA violation, and it seems to be a bug. Maybe it's\ndocumented somewhere but if so it's not obvious to me.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:24:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "cursor use vs pg_stat_statements" }, { "msg_contents": "On Tue, 2021-10-19 at 15:24 -0400, Andrew Dunstan wrote:\n> \n> The problem I'm writing about (h/t Simon Riggs for finding it) is\n> illustrated by the following snippet of java:\n> \n>       public static void runtest(Connection conn) throws Exception {\n>         Statement stmt = conn.createStatement();\n>         stmt.setFetchSize(10);\n>         ResultSet rs = stmt.executeQuery(\"select oid, relfileid, relname from pg_class\");\n>         int count = 100;\n>         while (rs.next() && count-- > 0) {\n>           System.out.print(\".\");\n>         }\n>         rs.close();\n>         stmt.commit();\n>         stmt.close();\n>         System.out.println(\"\");\n>       }\n> \n> When called, this prints out 
a line with 100 dots showing 100 lines were\n> fetched, but pg_stat_statements shows this:\n> \n>     query | select oid, relfilenode, relname from pg_class\n>     calls | 1\n>     rows  | 10\n> \n> \n> suggesting only 10 rows were returned. It appears that only the first\n> \"EXECUTE 10\" command against the portal is counted. At the very least\n> this is a POLA violation, and it seems to be a bug. Maybe it's\n> documented somewhere but if so it's not obvious to me.\n\nI can't reproduce this on 14.1, after fixing the errors in your code:\n\ntest=# SELECT query, calls, rows FROM pg_stat_statements WHERE queryid = '3485361931104084405' \\gx\n─[ RECORD 1 ]─────────────────────────────────────────\nquery │ select oid, relfilenode, relname from pg_class\ncalls │ 1\nrows │ 424\n\nThe code I used was:\n\npublic class x {\n public static void main(String[] args) throws ClassNotFoundException, java.sql.SQLException {\n Class.forName(\"org.postgresql.Driver\");\n\n java.sql.Connection conn = java.sql.DriverManager.getConnection(\"jdbc:postgresql:test?user=laurenz\");\n\n java.sql.Statement stmt = conn.createStatement();\n stmt.setFetchSize(10);\n java.sql.ResultSet rs = stmt.executeQuery(\"select oid, relfilenode, relname from pg_class\");\n int count = 100;\n while (rs.next() && count-- > 0) {\n System.out.print(\".\");\n }\n rs.close();\n stmt.close();\n System.out.println(\"\");\n conn.close();\n }\n}\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 15:02:16 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: cursor use vs pg_stat_statements" }, { "msg_contents": "On 10/20/21 9:02 AM, Laurenz Albe wrote:\n> On Tue, 2021-10-19 at 15:24 -0400, Andrew Dunstan wrote:\n>> The problem I'm writing about (h/t Simon Riggs for finding it) is\n>> illustrated by the following snippet of java:\n>>\n>>       public static void runtest(Connection conn) throws Exception {\n>>         Statement stmt = 
conn.createStatement();\n>>         stmt.setFetchSize(10);\n>>         ResultSet rs = stmt.executeQuery(\"select oid, relfileid, relname from pg_class\");\n>>         int count = 100;\n>>         while (rs.next() && count-- > 0) {\n>>           System.out.print(\".\");\n>>         }\n>>         rs.close();\n>>         stmt.commit();\n>>         stmt.close();\n>>         System.out.println(\"\");\n>>       }\n>>\n>> When called, this prints out a line with 100 dots showing 100 lines were\n>> fetched, but pg_stat_statements shows this:\n>>\n>>     query | select oid, relfilenode, relname from pg_class\n>>     calls | 1\n>>     rows  | 10\n>>\n>>\n>> suggesting only 10 rows were returned. It appears that only the first\n>> \"EXECUTE 10\" command against the portal is counted. At the very least\n>> this is a POLA violation, and it seems to be a bug. Maybe it's\n>> documented somewhere but if so it's not obvious to me.\n> I can't reproduce this on 14.1, after fixing the errors in your code:\n>\n>\n\nTry again with autocommit turned off. Sorry, I omitted that crucial\ndetail. Exact test code attached (name/password removed)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 20 Oct 2021 09:53:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cursor use vs pg_stat_statements" }, { "msg_contents": "On Wed, Oct 20, 2021 at 09:53:38AM -0400, Andrew Dunstan wrote:\n> Try again with autocommit turned off. Sorry, I omitted that crucial\n> detail. Exact test code attached (name/password removed)\n\nFor the same of the archives, this should be OK now under 1d477a9.\nSee also this thread:\nhttps://www.postgresql.org/message-id/EBE6C507-9EB6-4142-9E4D-38B1673363A7@amazon.com\n--\nMichael", "msg_date": "Fri, 7 Apr 2023 07:33:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cursor use vs pg_stat_statements" } ]
[ { "msg_contents": "Greetings -hackers,\n\nEnclosed is a patch that implements CREATE ROLE IF NOT EXISTS (along with\nthe same support for USER/GROUP). This is a fairly straightforward\napproach in that we do no validation of anything other than existence, with\nthe user needing to ensure that permissions/grants are set up in the proper\nway.\n\nComments?\n\nBest,\n\nDavid", "msg_date": "Tue, 19 Oct 2021 15:12:30 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Tue, 19 Oct 2021 at 16:12, David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> Greetings -hackers,\n>\n> Enclosed is a patch that implements CREATE ROLE IF NOT EXISTS (along with\n> the same support for USER/GROUP). This is a fairly straightforward\n> approach in that we do no validation of anything other than existence, with\n> the user needing to ensure that permissions/grants are set up in the proper\n> way.\n>\n\nOne little tricky aspect that occurs to me is the ALTER ROLE to set the\nrole flag options: it really needs to mention *all* the available options\nif it is to leave the role in a specific state regardless of how it started\nout. For example, if the existing role has BYPASSRLS but you want the\ndefault NOBYPASSRLS you have to say so explicitly.\n\nBecause of this, I think my preference, based just on thinking about\nsetting the flag options, would be for CREATE OR REPLACE.\n\nHowever, I'm wondering about the role name options: IN ROLE, ROLE, ADMIN.\nWith OR REPLACE should they replace the set of memberships or augment it?\nEither seems potentially problematic to me. 
By contrast it’s absolutely\nclear what IF NOT EXISTS should do with these.\n\nSo I’m not sure what I think overall.\n\n", "msg_date": "Tue, 19 Oct 2021 17:29:16 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Tue, Oct 19, 2021 at 4:29 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Tue, 19 Oct 2021 at 16:12, David Christensen <\n> david.christensen@crunchydata.com> wrote:\n>\n>> Greetings -hackers,\n>>\n>> Enclosed is a patch that implements CREATE ROLE IF NOT EXISTS (along with\n>> the same support for USER/GROUP). 
This is a fairly straightforward\n>> approach in that we do no validation of anything other than existence, with\n>> the user needing to ensure that permissions/grants are set up in the proper\n>> way.\n>>\n>\n> One little tricky aspect that occurs to me is the ALTER ROLE to set the\n> role flag options: it really needs to mention *all* the available options\n> if it is to leave the role in a specific state regardless of how it started\n> out. For example, if the existing role has BYPASSRLS but you want the\n> default NOBYPASSRLS you have to say so explicitly.\n>\n> Because of this, I think my preference, based just on thinking about\n> setting the flag options, would be for CREATE OR REPLACE.\n>\n> However, I'm wondering about the role name options: IN ROLE, ROLE, ADMIN.\n> With OR REPLACE should they replace the set of memberships or augment it?\n> Either seems potentially problematic to me. By contrast it’s absolutely\n> clear what IF NOT EXISTS should do with these.\n>\n> So I’m not sure what I think overall.\n>\n\nSure, the ambiguity here for merging options was exactly the reason I went\nwith the IF NOT EXISTS route. Whatever concerns with merging already exist\nwith ALTER ROLE, so nothing new is introduced by this functionality, at\nleast that was my original thought.\n\nDavid\n\n", "msg_date": "Thu, 21 Oct 2021 15:07:17 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "> On 19 Oct 2021, at 22:12, David Christensen <david.christensen@crunchydata.com> wrote:\n> \n> Greetings -hackers,\n> \n> Enclosed is a patch that implements CREATE ROLE IF NOT EXISTS (along with the same support for USER/GROUP). This is a fairly straightforward approach in that we do no validation of anything other than existence, with the user needing to ensure that permissions/grants are set up in the proper way.\n> \n> Comments?\n\nThis fails the roleattributes test in \"make check\", with what seems to be a\ntrivial change in the output. 
Can you please submit a rebased version fixing\nthe test?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 11:51:59 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": ">\n>\n> This fails the roleattributes test in \"make check\", with what seems to be a\n> trivial change in the output. Can you please submit a rebased version\n> fixing\n> the test?\n>\n\nUpdated version attached.\n\nDavid", "msg_date": "Wed, 3 Nov 2021 16:59:32 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "David Christensen <david.christensen@crunchydata.com> writes:\n> Updated version attached.\n\nI'm generally pretty down on IF NOT EXISTS semantics in all cases,\nbut it seems particularly dangerous for something as fundamental\nto privilege checks as a role. It's not hard at all to conjure up\nscenarios in which this permits privilege escalation. That is,\nAlice wants to create role Bob and give it some privileges, but\nshe's lazy and writes a quick-and-dirty script using CREATE ROLE\nIF NOT EXISTS. Meanwhile Charlie sneaks in and creates Bob first,\nand then grants it to himself. Now Alice's script is giving away\nall sorts of privilege to Charlie. 
(Admittedly, Charlie must have\nCREATEROLE privilege already, but that doesn't mean he has every\nprivilege that Alice has --- especially not as we continue working\nto slice the superuser salami ever more finely.)\n\nDo we really need this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Nov 2021 18:18:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "> On 3 Nov 2021, at 23:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'm generally pretty down on IF NOT EXISTS semantics in all cases,\n> but it seems particularly dangerous for something as fundamental\n> to privilege checks as a role. It's not hard at all to conjure up\n> scenarios in which this permits privilege escalation. That is,\n> Alice wants to create role Bob and give it some privileges, but\n> she's lazy and writes a quick-and-dirty script using CREATE ROLE\n> IF NOT EXISTS. Meanwhile Charlie sneaks in and creates Bob first,\n> and then grants it to himself. Now Alice's script is giving away\n> all sorts of privilege to Charlie. (Admittedly, Charlie must have\n> CREATEROLE privilege already, but that doesn't mean he has every\n> privilege that Alice has --- especially not as we continue working\n> to slice the superuser salami ever more finely.)\n\nI agree with this take, I don't think the convenience outweighs the risk in\nthis case.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 09:53:10 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 3 Nov 2021, at 23:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm generally pretty down on IF NOT EXISTS semantics in all cases,\n> > but it seems particularly dangerous for something as fundamental\n> > to privilege checks as a role. 
It's not hard at all to conjure up\n> > scenarios in which this permits privilege escalation. That is,\n> > Alice wants to create role Bob and give it some privileges, but\n> > she's lazy and writes a quick-and-dirty script using CREATE ROLE\n> > IF NOT EXISTS. Meanwhile Charlie sneaks in and creates Bob first,\n> > and then grants it to himself. Now Alice's script is giving away\n> > all sorts of privilege to Charlie. (Admittedly, Charlie must have\n> > CREATEROLE privilege already, but that doesn't mean he has every\n> > privilege that Alice has --- especially not as we continue working\n> > to slice the superuser salami ever more finely.)\n> \n> I agree with this take, I don't think the convenience outweighs the risk in\n> this case.\n\nI don't quite follow this. The entire point of Alice writing a script\nthat uses IF NOT EXISTS is to have that command not fail if, indeed,\nthat role already exists, but for the rest of the script to be run.\nThat there's some potential attacker with CREATEROLE running around\ncreating roles that they think someone *else* might create is really\nstretching things to a very questionable level- especially with\nCREATEROLE where Charlie could just CREATE a new role which is a member\nof Bob anyway after the fact and then GRANT that role to themselves.\n\nThe reason this thread was started is that it's a pretty clearly useful\nthing to be able to use IF NOT EXISTS for CREATE ROLE and I don't agree\nwith the justification that we shouldn't allow it because someone might\nuse it carelessly. For one, I really doubt that's actually a risk at\nall, but more importantly there's a lot of very good use-cases where\nit'll be used correctly and not having it means having to do other ugly\nthings like write a pl/pgsql function which checks pg_roles and would\nend up having the exact same risk but be a lot more clunky. And, yes,\npeople are already doing that. 
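For reference, the sort of thing that's already in the wild looks roughly like this (just a sketch; the role name and options are placeholders):

```sql
DO $$
BEGIN
    -- check-then-create by hand; note this has the same window between
    -- the existence check and the CREATE as a built-in IF NOT EXISTS would
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'app_user') THEN
        CREATE ROLE app_user LOGIN NOSUPERUSER;
    END IF;
END
$$;
```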
Let's give them useful tools and\ndocument that they be careful with them, not make them jump through\nhoops.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 13:38:53 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I don't quite follow this. The entire point of Alice writing a script\n> that uses IF NOT EXISTS is to have that command not fail if, indeed,\n> that role already exists, but for the rest of the script to be run.\n> That there's some potential attacker with CREATEROLE running around\n> creating roles that they think someone *else* might create is really\n> stretching things to a very questionable level- especially with\n> CREATEROLE where Charlie could just CREATE a new role which is a member\n> of Bob anyway after the fact and then GRANT that role to themselves.\n\nI agree that as things stand, CREATEROLE is powerful enough that Charlie\ndoesn't need any subterfuge to become a member of the Bob role. However,\nin view of other work that's going on, I think we shouldn't design the\nsystem on the assumption that it'll always be that way. 
As soon as\nthere exist roles that can create roles but cannot make arbitrary\nprivilege grants, this becomes an interesting security question.\nDo you really think that's never going to happen?\n\nMy concern here is basically that the semantics of CINE --- ie, that\nyou don't really know the initial properties of the target object ---\nseem far more dangerous for a role than for any other sort of object.\nThe possibility of unexpected grants on or to that role means\nthat you may be giving away privileges unintentionally.\n\n> The reason this thread was started is that it's a pretty clearly useful\n> thing to be able to use IF NOT EXISTS for CREATE ROLE and I don't agree\n> with the justification that we shouldn't allow it because someone might\n> use it carelessly.\n\nI'm not buying the argument that it's a \"clearly useful thing\".\nI think it's a foot-gun, and I repeat the point that nobody's\nactually provided a concrete use-case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Nov 2021 13:59:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "\n\n> On Nov 8, 2021, at 10:38 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I don't quite follow this. The entire point of Alice writing a script\n> that uses IF NOT EXISTS is to have that command not fail if, indeed,\n> that role already exists, but for the rest of the script to be run.\n> That there's some potential attacker with CREATEROLE running around\n> creating roles that they think someone *else* might create is really\n> stretching things to a very questionable level- especially with\n> CREATEROLE where Charlie could just CREATE a new role which is a member\n> of Bob anyway after the fact and then GRANT that role to themselves.\n\nI don't see why this is \"stretching things to a very questionable level\". 
It might help this discussion if you could provide pseudo-code or similar for adding roles which is well-written and secure, and which benefits from this syntax. I would expect the amount of locking and checking for pre-existing roles that such logic would require would make the IF NOT EXIST option useless. Perhaps I'm wrong?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 8 Nov 2021 11:22:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Mon, Nov 8, 2021 at 1:22 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Nov 8, 2021, at 10:38 AM, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> > I don't quite follow this. The entire point of Alice writing a script\n> > that uses IF NOT EXISTS is to have that command not fail if, indeed,\n> > that role already exists, but for the rest of the script to be run.\n> > That there's some potential attacker with CREATEROLE running around\n> > creating roles that they think someone *else* might create is really\n> > stretching things to a very questionable level- especially with\n> > CREATEROLE where Charlie could just CREATE a new role which is a member\n> > of Bob anyway after the fact and then GRANT that role to themselves.\n>\n> I don't see why this is \"stretching things to a very questionable level\".\n> It might help this discussion if you could provide pseudo-code or similar\n> for adding roles which is well-written and secure, and which benefits from\n> this syntax. I would expect the amount of locking and checking for\n> pre-existing roles that such logic would require would make the IF NOT\n> EXIST option useless. 
Perhaps I'm wrong?\n>\n\nThe main motivator for me writing this was trying to increase idempotency\nfor things like scripting, where you want to be able to minimize the effort\nrequired to get things into a particular state. I agree with Stephen that\nwhether or not this is a best practices approach, this is something that is\nbeing done in the wild via DO blocks or similar, so providing a tool to\nhandle this better seems useful on its own.\n\nThis originally came from me looking into the failures to load certain\n`pg_dump` or `pg_dumpall` output when generated with the `--clean` flag,\nwhich necessarily cannot work, as it fails with the error `current user\ncannot be dropped`. Not that I am promoting the use of `pg_dumpall\n--clean`, as there are clearly better solutions here, but something which\ngenerates unusable output does not seem that useful. Instead, you could\ngenerate `CREATE ROLE IF NOT EXISTS username` statements and emit `ALTER\nROLE ...`, which is what it is already doing (modulo `IF NOT EXISTS`).\n\nThis seems to introduce no further security vectors compared to field work\nand increases utility in some cases, so seems generally useful to me.\n\nIf CINE semantics are at issue, what about the CREATE OR REPLACE semantics\nwith some sort of merge into the existing role? I don't care strongly\nabout which approach is taken, just think the overall \"make this role exist\nin this form\" without an error is useful in my own work, and CINE was\neasier to implement as a first pass.\n\nBest,\n\nDavid\n\nOn Mon, Nov 8, 2021 at 1:22 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:> On Nov 8, 2021, at 10:38 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> I don't quite follow this.  
The entire point of Alice writing a script\n> that uses IF NOT EXISTS is to have that command not fail if, indeed,\n> that role already exists, but for the rest of the script to be run.\n> That there's some potential attacker with CREATEROLE running around\n> creating roles that they think someone *else* might create is really\n> stretching things to a very questionable level- especially with\n> CREATEROLE where Charlie could just CREATE a new role which is a member\n> of Bob anyway after the fact and then GRANT that role to themselves.\n\nI don't see why this is \"stretching things to a very questionable level\".  It might help this discussion if you could provide pseudo-code or similar for adding roles which is well-written and secure, and which benefits from this syntax.  I would expect the amount of locking and checking for pre-existing roles that such logic would require would make the IF NOT EXIST option useless.  Perhaps I'm wrong? The main motivator for me writing this was trying to increase idempotency for things like scripting, where you want to be able to minimize the effort required to get things into a particular state.  I agree with Stephen that whether or not this is a best practices approach, this is something that is being done in the wild via DO blocks or similar, so providing a tool to handle this better seems useful on its own.This originally came from me looking into the failures to load certain `pg_dump` or `pg_dumpall` output when generated with the `--clean` flag, which necessarily cannot work, as it fails with the error `current user cannot be dropped`.  Not that I am promoting the use of `pg_dumpall --clean`, as there are clearly better solutions here, but something which generates unusable output does not seem that useful.  
Instead, you could generate `CREATE ROLE IF NOT EXISTS username` statements and emit `ALTER ROLE ...`, which is what it is already doing (modulo `IF NOT EXISTS`).This seems to introduce no further security vectors compared to field work and increases utility in some cases, so seems generally useful to me.If CINE semantics are at issue, what about the CREATE OR REPLACE semantics with some sort of merge into the existing role?  I don't care strongly about which approach is taken, just think the overall \"make this role exist in this form\" without an error is useful in my own work, and CINE was easier to implement as a first pass.Best,David", "msg_date": "Tue, 9 Nov 2021 09:36:14 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "\n\n> On Nov 9, 2021, at 7:36 AM, David Christensen <david.christensen@crunchydata.com> wrote:\n> \n> If CINE semantics are at issue, what about the CREATE OR REPLACE semantics with some sort of merge into the existing role? I don't care strongly about which approach is taken, just think the overall \"make this role exist in this form\" without an error is useful in my own work, and CINE was easier to implement as a first pass.\n\nCREATE OR REPLACE might be a better option, not with the \"merge into the existing role\" part, but rather as drop+create. If a malicious actor has already added other roles to the role, or created a table with a malicious trigger definition, the drop part will fail, which is good from a security viewpoint. 
Of course, the drop portion will also fail under other conditions which don't entail any security concerns, but maybe they could be addressed in a series of follow-on patches?\n\nI understand this idea is not as useful for creating idempotent scripts, but maybe it gets you part of the way there?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 9 Nov 2021 07:55:54 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Greetings,\n\n* David Christensen (david.christensen@crunchydata.com) wrote:\n> On Mon, Nov 8, 2021 at 1:22 PM Mark Dilger <mark.dilger@enterprisedb.com>\n> wrote:\n> \n> > > On Nov 8, 2021, at 10:38 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > > I don't quite follow this. The entire point of Alice writing a script\n> > > that uses IF NOT EXISTS is to have that command not fail if, indeed,\n> > > that role already exists, but for the rest of the script to be run.\n> > > That there's some potential attacker with CREATEROLE running around\n> > > creating roles that they think someone *else* might create is really\n> > > stretching things to a very questionable level- especially with\n> > > CREATEROLE where Charlie could just CREATE a new role which is a member\n> > > of Bob anyway after the fact and then GRANT that role to themselves.\n> >\n> > I don't see why this is \"stretching things to a very questionable level\".\n> > It might help this discussion if you could provide pseudo-code or similar\n> > for adding roles which is well-written and secure, and which benefits from\n> > this syntax. I would expect the amount of locking and checking for\n> > pre-existing roles that such logic would require would make the IF NOT\n> > EXIST option useless. 
Perhaps I'm wrong?\n> >\n> \n> The main motivator for me writing this was trying to increase idempotency\n> for things like scripting, where you want to be able to minimize the effort\n> required to get things into a particular state. I agree with Stephen that\n> whether or not this is a best practices approach, this is something that is\n> being done in the wild via DO blocks or similar, so providing a tool to\n> handle this better seems useful on its own.\n\nAgreed.\n\n> This originally came from me looking into the failures to load certain\n> `pg_dump` or `pg_dumpall` output when generated with the `--clean` flag,\n> which necessarily cannot work, as it fails with the error `current user\n> cannot be dropped`. Not that I am promoting the use of `pg_dumpall\n> --clean`, as there are clearly better solutions here, but something which\n> generates unusable output does not seem that useful. Instead, you could\n> generate `CREATE ROLE IF NOT EXISTS username` statements and emit `ALTER\n> ROLE ...`, which is what it is already doing (modulo `IF NOT EXISTS`).\n\nThe other very common case that I've seen is where the role ends up\nowning objects and therefore can't be dropped without also dropping\nthose objects- possibly just GRANTs but may also be tables or other\nthings. In other words, a script like this:\n\nDROP ROLE IF EXISTS r1;\nCREATE ROLE r1;\nCREATE SCHEMA IF NOT EXISTS r1 AUTHORIZATION r1;\n\nisn't able to be re-run, while this is able to be:\n\nCREATE ROLE IF NOT EXISTS r1;\nCREATE SCHEMA IF NOT EXISTS r1 AUTHORIZATION r1;\n\n> This seems to introduce no further security vectors compared to field work\n> and increases utility in some cases, so seems generally useful to me.\n\nYeah.\n\n> If CINE semantics are at issue, what about the CREATE OR REPLACE semantics\n> with some sort of merge into the existing role? 
I don't care strongly\n> about which approach is taken, just think the overall \"make this role exist\n> in this form\" without an error is useful in my own work, and CINE was\n> easier to implement as a first pass.\n\nI don't really see how we could do CREATE OR REPLACE here, at least for\nthe cases that I'm thinking about. How would that work with existing\nGRANTs, for example? Perhaps it'd be alright if we limited it to just\nwhat can be specified in the CREATE ROLE and then left anything else in\nplace. I do generally like the idea of being able to explicitly say how\nthe role should look in one shot.\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 Nov 2021 11:15:00 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Nov 9, 2021, at 7:36 AM, David Christensen <david.christensen@crunchydata.com> wrote:\n> > If CINE semantics are at issue, what about the CREATE OR REPLACE semantics with some sort of merge into the existing role? I don't care strongly about which approach is taken, just think the overall \"make this role exist in this form\" without an error is useful in my own work, and CINE was easier to implement as a first pass.\n> \n> CREATE OR REPLACE might be a better option, not with the \"merge into the existing role\" part, but rather as drop+create. If a malicious actor has already added other roles to the role, or created a table with a malicious trigger definition, the drop part will fail, which is good from a security viewpoint. 
Of course, the drop portion will also fail under other conditions which don't entail any security concerns, but maybe they could be addressed in a series of follow-on patches?\n> \n> I understand this idea is not as useful for creating idempotent scripts, but maybe it gets you part of the way there?\n\nIf it's actually drop+create then, no, that isn't really useful because\nit'll fail when that role owns objects (see my other email). If we can\navoid that issue then CREATE OR REPLACE might work, we just need to make\nsure that we document what is, and isn't, done in such a case.\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 Nov 2021 11:16:50 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Tue, Nov 9, 2021 at 9:55 AM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Nov 9, 2021, at 7:36 AM, David Christensen <\n> david.christensen@crunchydata.com> wrote:\n> >\n> > If CINE semantics are at issue, what about the CREATE OR REPLACE\n> semantics with some sort of merge into the existing role? I don't care\n> strongly about which approach is taken, just think the overall \"make this\n> role exist in this form\" without an error is useful in my own work, and\n> CINE was easier to implement as a first pass.\n>\n> CREATE OR REPLACE might be a better option, not with the \"merge into the\n> existing role\" part, but rather as drop+create. If a malicious actor has\n> already added other roles to the role, or created a table with a malicious\n> trigger definition, the drop part will fail, which is good from a security\n> viewpoint. 
Of course, the drop portion will also fail under other\n> conditions which don't entail any security concerns, but maybe they could\n> be addressed in a series of follow-on patches?\n>\n> I understand this idea is not as useful for creating idempotent scripts,\n> but maybe it gets you part of the way there?\n\n\nWell, the CREATE OR REPLACE via just setting the role's attributes\nexplicitly based on what you passed it could work (not strictly DROP +\nCREATE, in that you're keeping existing ownerships, etc, and can avoid\ncross-db permissions/ownership checks). Seems like some sort of merge\nlogic could be in order, as you wouldn't really want to lose existing\npermissions granted to a role, but you want to ensure that /at least/ the\npermissions granted exist for this role.\n\nDavid", "msg_date": "Tue, 9 Nov 2021 10:19:09 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Greetings,\n\n* David Christensen (david.christensen@crunchydata.com) wrote:\n> On Tue, Nov 9, 2021 at 9:55 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > On Nov 9, 2021, at 7:36 AM, David Christensen <\n> > david.christensen@crunchydata.com> wrote:\n> > > If CINE semantics are at issue, what about the CREATE OR REPLACE\n> > semantics with some sort of merge into the existing role? I don't care\n> > strongly about which approach is taken, just think the overall \"make this\n> > role exist in this form\" without an error is useful in my own work, and\n> > CINE was easier to implement as a first pass.\n> >\n> > CREATE OR REPLACE might be a better option, not with the \"merge into the\n> > existing role\" part, but rather as drop+create. If a malicious actor has\n> > already added other roles to the role, or created a table with a malicious\n> > trigger definition, the drop part will fail, which is good from a security\n> > viewpoint.
Of course, the drop portion will also fail under other\n> > conditions which don't entail any security concerns, but maybe they could\n> > be addressed in a series of follow-on patches?\n> >\n> > I understand this idea is not as useful for creating idempotent scripts,\n> > but maybe it gets you part of the way there?\n> \n> Well, the CREATE OR REPLACE via just setting the role's attributes\n> explicitly based on what you passed it could work (not strictly DROP +\n> CREATE, in that you're keeping existing ownerships, etc, and can avoid\n> cross-db permissions/ownership checks). Seems like some sort of merge\n> logic could be in order, as you wouldn't really want to lose existing\n> permissions granted to a role, but you want to ensure that /at least/ the\n> permissions granted exist for this role.\n\nWhat happens with role attributes that aren't explicitly mentioned\nthough? Do those get reset to 'default' or are they left as-is?\n\nI suspect that most implementations will end up just explicitly setting\nall of the role attributes, of course, because they want the role to\nlook like how it is defined to in whatever manifest is declaring the\nrole, but we should still think about how we want this to work if we're\ngoing in this direction.\n\nIn terms of least-surprise, I do tend to think that the answer is \"only\ncare about what is explicitly put into the command\"- that is, if it\nisn't in the CREATE ROLE statement then it gets left as-is. 
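To make that concrete with the hypothetical (unimplemented) syntax under discussion, names and options illustrative only:

```sql
CREATE ROLE charlie CREATEDB NOLOGIN;

-- a later declarative re-run that names only some attributes:
CREATE OR REPLACE ROLE charlie LOGIN;

-- under "only care about what's explicitly in the command", charlie
-- keeps CREATEDB and gains LOGIN; under "reset to defaults", charlie
-- would lose CREATEDB as well
```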
Not sure\nhow others feel about that though.\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 Nov 2021 11:22:45 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Tue, Nov 9, 2021 at 10:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David Christensen (david.christensen@crunchydata.com) wrote:\n> > Well, the CREATE OR REPLACE via just setting the role's attributes\n> > explicitly based on what you passed it could work (not strictly DROP +\n> > CREATE, in that you're keeping existing ownerships, etc, and can avoid\n> > cross-db permissions/ownership checks). Seems like some sort of merge\n> > logic could be in order, as you wouldn't really want to lose existing\n> > permissions granted to a role, but you want to ensure that /at least/ the\n> > permissions granted exist for this role.\n>\n> What happens with role attributes that aren't explicitly mentioned\n> though? Do those get reset to 'default' or are they left as-is?\n>\n\nSince we have the ability to specify explicit negative options\n(NOCREATEDB vs CREATEDB, etc), I'd say leave as-is if not specified,\notherwise ensure it matches what you included in the command. Would also\nensure forward compatibility if new permissions/attributes were introduced,\nas we don't want to explicitly require that all permissions be itemized to\nutilize.\n\n\n> I suspect that most implementations will end up just explicitly setting\n> all of the role attributes, of course, because they want the role to\n> look like how it is defined to in whatever manifest is declaring the\n> role, but we should still think about how we want this to work if we're\n> going in this direction.\n>\n\nAgreed.\n\n\n> In terms of least-surprise, I do tend to think that the answer is \"only\n> care about what is explicitly put into the command\"- that is, if it\n> isn't in the CREATE ROLE statement then it gets left as-is. 
Not sure\n> how others feel about that though.\n>\n\nThis is also what would make the most sense to me.\n\nDavid", "msg_date": "Tue, 9 Nov 2021 10:28:14 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "\n\n> On Nov 9, 2021, at 8:22 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> In terms of least-surprise, I do tend to think that the answer is \"only\n> care about what is explicitly put into the command\"- that is, if it\n> isn't in the CREATE ROLE statement then it gets left as-is. Not sure\n> how others feel about that though.\n\nbob: CREATE ROLE charlie;\nbob: GRANT charlie TO david;\n\nsuper_alice: CREATE OR REPLACE ROLE charlie SUPERUSER;\n\nI think this is the sort of thing Tom and I are worried about. \"david\" is now a member of a superuser role, and it is far from clear that \"super_alice\" intended that. Even if \"bob\" is not malicious, having this happen by accident is pretty bad.\n\nIf we fix the existing bug that the pg_auth_members.grantor field can end up as a dangling reference, instead making sure that it is always accurate, then perhaps this would be ok if all roles granted into \"charlie\" had grantor=\"super_alice\".
I'm not sure that is really good enough, but it is a lot closer to making this safe than allowing the command to succeed when role \"charlie\" has been granted away by someone else.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 9 Nov 2021 08:32:22 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Nov 9, 2021, at 8:22 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > In terms of least-surprise, I do tend to think that the answer is \"only\n> > care about what is explicitly put into the command\"- that is, if it\n> > isn't in the CREATE ROLE statement then it gets left as-is. Not sure\n> > how others feel about that though.\n> \n> bob: CREATE ROLE charlie;\n> bob: GRANT charlie TO david;\n> \n> super_alice: CREATE OR REPLACE ROLE charlie SUPERUSER;\n> \n> I think this is the sort of thing Tom and I are worried about. \"david\" is now a member of a superuser role, and it is far from clear that \"super_alice\" intended that. Even if \"bob\" is not malicious, having this happen by accident is pretty bad.\n\nI understand the concern that you and Tom have raised, I just don't see\nit as such an issue that we can't give users this option. They're\nalready doing it via DO blocks and that's surely not any better.\nDocumenting that you should care about who is able to create roles in\nyour system when thinking about this is certainly reasonable, but just\nsaying we won't add it because someone might somewhere mis-use it isn't.\n\n> If we fix the existing bug that the pg_auth_members.grantor field can end up as a dangling reference, instead making sure that it is always accurate, then perhaps this would be ok if all roles granted into \"charlie\" had grantor=\"super_alice\". 
I'm not sure that is really good enough, but it is a lot closer to making this safe than allowing the command to succeed when role \"charlie\" has been granted away by someone else.\n\nI agree we should fix the issue of the grantor field being a dangling\nreference, that's clearly not a good thing.\n\nI'm not sure what is meant by making sure they're always 'accurate' or\nwhy 'accurate' in this case means that the grantor is always\n'super_alice'..? Are you suggesting that the CREATE OR REPLACE ROLE run\nby super_alice would remove the GRANT that bob made of granting charlie\nto david? I would argue that it's entirely possible that super_alice\nknows exactly what is going on and intends for charlie to have superuser\naccess and understands that any role which charlie has been GRANT'd to\nwould therefore be able to become charlie, that's not a surprise.\n\nNow, bringing this around to the more general discussion about making it\npossible for folks who aren't superuser to be able to create roles, I\nthink there's another way to address this that might satisfy everyone,\nparticularly with the CREATE OR REPLACE approach- to wit: if the role\ncreate isn't one that you've got appropriate rights on, then you\nshouldn't be able to CREATE OR REPLACE it. This, perhaps, gets to a\ndistinction between having ADMIN rights on a role vs. 
the ability to\nredefine the role (perhaps by virtue of being the 'owner' of that role)\nthat's useful.\n\nIn other words, in the case outlined above:\n\nbob: CREATE ROLE charlie;\n -- charlie is now a role 'owned' by bob and that isn't able to be\n -- changed by bob to be some other owner, unless bob can become the\n -- role to which they want to change ownership to\nbob: GRANT charlie TO david;\n\nalice: CREATE OR REPLACE ROLE charlie;\n -- This now fails because while alice is able to create roles, alice\n -- can only 'replace' roles which alice owns.\n\nI appreciate that the case of 'super_alice' doing things where they're\nan actual superuser might still be an issue, but running around doing\nthings with superuser is already risky business and the point here is to\nget away from doing that by splitting superuser up, ideally in a way\nthat privileges can be given out to non-superusers in a manner that's\nsafer than doing things as a superuser and where independent\nnon-superusers aren't able to do bad things to each other.\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 Nov 2021 11:50:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "\n\n> On Nov 9, 2021, at 8:50 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> If we fix the existing bug that the pg_auth_members.grantor field can end up as a dangling reference, instead making sure that it is always accurate, then perhaps this would be ok if all roles granted into \"charlie\" had grantor=\"super_alice\". I'm not sure that is really good enough, but it is a lot closer to making this safe than allowing the command to succeed when role \"charlie\" has been granted away by someone else.\n> \n> I agree we should fix the issue of the grantor field being a dangling\n> reference, that's clearly not a good thing.\n\nJust FYI, I have a patch underway to fix it. 
I'm not super close to posting it, though.\n\n> I'm not sure what is meant by making sure they're always 'accurate' or\n> why 'accurate' in this case means that the grantor is always\n> 'super_alice'..?\n\nI mean that the dangling reference could point at a role that no longer exists, but if the oid gets recycled, it could point at the *wrong* role rather than merely at no role. So we'd need that fixed before we could rely on the \"grantor\" field for anything. I think Robert mentioned this issue already, on another thread.\n\n> Are you suggesting that the CREATE OR REPLACE ROLE run\n> by super_alice would remove the GRANT that bob made of granting charlie\n> to david?\n\nSuppose user \"stephen.frost\" owns a database and runs a script which creates roles, schemas, etc:\n\n CREATE OR REPLACE ROLE super_alice SUPERUSER;\n SET SESSION AUTHORIZATION super_alice;\n CREATE OR REPLACE ROLE charlie;\n CREATE OR REPLACE ROLE david IN ROLE charlie;\n\nUser \"stephen.frost\" runs that script again. The system cannot tell, as things currently are implemented, that \"stephen.frost\" was the original creator of role \"super_alice\", nor that \"super_alice\" was the original creator of \"charlie\" and \"david\". \n\nThe \"grantor\" field for \"david\"'s membership in \"charlie\" points at \"super_alice\", so we know enough to allow the \"IN ROLE charlie\" part, at least if we fix the dangling reference bug.\n\nIf we add an \"owner\" (or perhaps a \"creator\") field to pg_authid, the first time the script runs, it could be set to \"stephen.frost\" for \"super_alice\" and to \"super_alice\" for \"charlie\" and \"david\". 
When the script gets re-run, the CREATE OR REPLACE commands can succeed because that field matches.\n\n> I would argue that it's entirely possible that super_alice\n> knows exactly what is going on and intends for charlie to have superuser\n> access and understands that any role which charlie has been GRANT'd to\n> would therefore be able to become charlie, that's not a surprise.\n\nI agree that super_alice might know that, but perhaps we can make this feature less of a foot-gun and still achieve the goal of making idempotent role creation scripts work?\n\n> Now, bringing this around to the more general discussion about making it\n> possible for folks who aren't superuser to be able to create roles, I\n> think there's another way to address this that might satisfy everyone,\n> particularly with the CREATE OR REPLACE approach- to wit: if the role\n> create isn't one that you've got appropriate rights on, then you\n> shouldn't be able to CREATE OR REPLACE it.\n\nAgreed. You shouldn't be able to CREATE OR REPLACE a role that you couldn't CREATE in the first case.\n\n> This, perhaps, gets to a\n> distinction between having ADMIN rights on a role vs. the ability to\n> redefine the role (perhaps by virtue of being the 'owner' of that role)\n> that's useful.\n> \n> In other words, in the case outlined above:\n> \n> bob: CREATE ROLE charlie;\n> -- charlie is now a role 'owned' by bob and that isn't able to be\n> -- changed by bob to be some other owner, unless bob can become the\n> -- role to which they want to change ownership to\n> bob: GRANT charlie TO david;\n> \n> alice: CREATE OR REPLACE ROLE charlie;\n> -- This now fails because while alice is able to create roles, alice\n> -- can only 'replace' roles which alice owns.\n\nThis sounds reasonable. It means, of course, implementing a role ownership system. 
I thought you had other concerns about doing so.\n\n> I appreciate that the case of 'super_alice' doing things where they're\n> an actual superuser might still be an issue, but running around doing\n> things with superuser is already risky business and the point here is to\n> get away from doing that by splitting superuser up, ideally in a way\n> that privileges can be given out to non-superusers in a manner that's\n> safer than doing things as a superuser and where independent\n> non-superusers aren't able to do bad things to each other.\n\nI'm not sure what to do about this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 9 Nov 2021 09:17:55 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "Modulo other issues/discussions, here is a version of this patch that\nimplements CREATE OR REPLACE ROLE just by handing off to AlterRole if it's\ndetermined that the role already exists; presumably any/all additional\nconsiderations would need to be added in both places were there a separate\ncode path for this.\n\nIt might be worth refactoring the AlterRole into a helper if there are any\ndeviations in messages, etc, but could be a decent approach to handling the\nproblem (which arguably would have similar restrictions/requirements in\nALTER ROLE itself) in a single location.\n\nBest,\n\nDavid", "msg_date": "Wed, 10 Nov 2021 11:14:51 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "> On 10 Nov 2021, at 18:14, David Christensen <david.christensen@crunchydata.com> wrote:\n\n> Modulo other issues/discussions, here is a version of this patch..\n\nThis patch fails to compile since you renamed the if_not_exists member in\nCreateRoleStmt but still set it in the parser.\n\n--\nDaniel 
Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 13:49:37 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "On Mon, Nov 22, 2021 at 6:49 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 10 Nov 2021, at 18:14, David Christensen <\n> david.christensen@crunchydata.com> wrote:\n>\n> > Modulo other issues/discussions, here is a version of this patch..\n>\n> This patch fails to compile since you renamed the if_not_exists member in\n> CreateRoleStmt but still set it in the parser.\n>\n\nD'oh! Enclosed is a fixed/rebased version.\n\nBest,\n\nDavid", "msg_date": "Mon, 22 Nov 2021 07:47:41 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nWouldn't using opt_or_replace rule be a better option?\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 10 Jan 2022 14:43:26 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE IF NOT EXISTS" } ]
[ { "msg_contents": "Hi,\n\nWhen I read the documents and source code of wait evens,\nI found that the following wait events are never reported.\n\n* LogicalChangesRead: Waiting for a read from a logical changes file.\n* LogicalChangesWrite: Waiting for a write to a logical changes file.\n* LogicalSubxactRead: Waiting for a read from a logical subxact file.\n* LogicalSubxactWrite: Waiting for a write to a logical subxact file.\n\n\nThe wait events are introduced in the following patch.\n\n Add support for streaming to built-in logical replication.\n Amit Kapila on 2020/9/3 11:24:07\n 464824323e57dc4b397e8b05854d779908b55304\n\nI read the above discussion and found the wait events were reported at first.\nBut they seemed to be removed because they are not necessary because\nBufFileWrite/BufFileRead are enough([1]).\n\n\nIf my understanding is right, it's better to remove them since they make\nusers confused. Please see the attached patch. I confirmed that to make\ncheck-world passes all tests.\n\n[1]\nhttps://www.postgresql.org/message-id/CAA4eK1JV37jXUT5LeWzkBDNNnSntwQbLUZAj6m82QMiR1ZuuHQ%40mail.gmail.com\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Wed, 20 Oct 2021 14:12:20 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "LogicalChanges* and LogicalSubxact* wait events are never reported" }, { "msg_contents": "On Wed, Oct 20, 2021 at 02:12:20PM +0900, Masahiro Ikeda wrote:\n> If my understanding is right, it's better to remove them since they make\n> users confused. Please see the attached patch. I confirmed that to make\n> check-world passes all tests.\n\nYeah, I don't see the point in keeping these events around if they are\nnot used. 
Perhaps Amit has some plans for them, though.\n--\nMichael", "msg_date": "Wed, 20 Oct 2021 14:20:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: LogicalChanges* and LogicalSubxact* wait events are never\n reported" }, { "msg_contents": "On Wed, Oct 20, 2021 at 10:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 20, 2021 at 02:12:20PM +0900, Masahiro Ikeda wrote:\n> > If my understanding is right, it's better to remove them since they make\n> > users confused. Please see the attached patch. I confirmed that to make\n> > check-world passes all tests.\n>\n> Yeah, I don't see the point in keeping these events around if they are\n> not used. Perhaps Amit has some plans for them, though.\n>\n\nNo, there is no plan for these. As far as I remember, during\ndevelopment, we have decided to use BufFile interface and forgot to\nremove these events which were required by the previous versions of\nthe patch. I'll take care of this.\n\nThanks for the report and patch!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Oct 2021 14:47:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LogicalChanges* and LogicalSubxact* wait events are never\n reported" }, { "msg_contents": "\n\nOn 2021/10/20 18:17, Amit Kapila wrote:\n> On Wed, Oct 20, 2021 at 10:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Wed, Oct 20, 2021 at 02:12:20PM +0900, Masahiro Ikeda wrote:\n>>> If my understanding is right, it's better to remove them since they make\n>>> users confused. Please see the attached patch. I confirmed that to make\n>>> check-world passes all tests.\n>>\n>> Yeah, I don't see the point in keeping these events around if they are\n>> not used. Perhaps Amit has some plans for them, though.\n>>\n> \n> No, there is no plan for these. 
As far as I remember, during\n> development, we have decided to use BufFile interface and forgot to\n> remove these events which were required by the previous versions of\n> the patch. I'll take care of this.\n> \n> Thanks for the report and patch!\nThanks for your replies and for handling it!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Oct 2021 19:16:20 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: LogicalChanges* and LogicalSubxact* wait events are never\n reported" }, { "msg_contents": "On Wed, Oct 20, 2021 at 3:46 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2021/10/20 18:17, Amit Kapila wrote:\n> > On Wed, Oct 20, 2021 at 10:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Wed, Oct 20, 2021 at 02:12:20PM +0900, Masahiro Ikeda wrote:\n> >>> If my understanding is right, it's better to remove them since they make\n> >>> users confused. Please see the attached patch. I confirmed that to make\n> >>> check-world passes all tests.\n> >>\n> >> Yeah, I don't see the point in keeping these events around if they are\n> >> not used. Perhaps Amit has some plans for them, though.\n> >>\n> >\n> > No, there is no plan for these. As far as I remember, during\n> > development, we have decided to use BufFile interface and forgot to\n> > remove these events which were required by the previous versions of\n> > the patch. 
I'll take care of this.\n> >\n> > Thanks for the report and patch!\n> Thanks for your replies and for handling it!\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Oct 2021 14:10:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LogicalChanges* and LogicalSubxact* wait events are never\n reported" }, { "msg_contents": "\n\nOn 2021/10/21 17:40, Amit Kapila wrote:\n> On Wed, Oct 20, 2021 at 3:46 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>\n>> On 2021/10/20 18:17, Amit Kapila wrote:\n>>> On Wed, Oct 20, 2021 at 10:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>>>\n>>>> On Wed, Oct 20, 2021 at 02:12:20PM +0900, Masahiro Ikeda wrote:\n>>>>> If my understanding is right, it's better to remove them since they make\n>>>>> users confused. Please see the attached patch. I confirmed that to make\n>>>>> check-world passes all tests.\n>>>>\n>>>> Yeah, I don't see the point in keeping these events around if they are\n>>>> not used. Perhaps Amit has some plans for them, though.\n>>>>\n>>>\n>>> No, there is no plan for these. As far as I remember, during\n>>> development, we have decided to use BufFile interface and forgot to\n>>> remove these events which were required by the previous versions of\n>>> the patch. I'll take care of this.\n>>>\n>>> Thanks for the report and patch!\n>> Thanks for your replies and for handling it!\n>>\n> \n> Pushed!\n\nThanks!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 22 Oct 2021 09:38:57 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: LogicalChanges* and LogicalSubxact* wait events are never\n reported" } ]
[ { "msg_contents": "Hi hackers,\n\nOne of our test runs under the memory sanitizer cathed [1] the\nfollowing stacktrace:\n\n```\nheaptuple.c:1044:13: runtime error: load of value 111, which is not a\nvalid value for type '_Bool'\n #0 0x55fbb5e0857b in heap_form_tuple\n/home/runner/pgbuild/src/backend/access/common/heaptuple.c:1044\n #1 0x55fbb679f62d in tts_heap_materialize\n/home/runner/pgbuild/src/backend/executor/execTuples.c:381\n #2 0x55fbb67addcf in ExecFetchSlotHeapTuple\n/home/runner/pgbuild/src/backend/executor/execTuples.c:1654\n #3 0x55fbb5f8127d in heap_multi_insert\n/home/runner/pgbuild/src/backend/access/heap/heapam.c:2330\n #4 0x55fbb6261b50 in CatalogTuplesMultiInsertWithInfo\n/home/runner/pgbuild/src/backend/catalog/indexing.c:268\n #5 0x55fbb62ce5aa in copyTemplateDependencies\n/home/runner/pgbuild/src/backend/catalog/pg_shdepend.c:933\n #6 0x55fbb650eb98 in createdb\n/home/runner/pgbuild/src/backend/commands/dbcommands.c:590\n #7 0x55fbb7062b30 in standard_ProcessUtility\n/home/runner/pgbuild/src/backend/tcop/utility.c:773\n #8 0x7fa942a63c13 in loader_process_utility_hook\n/home/runner/work/timescaledb/timescaledb/src/loader/loader.c:522\n #9 0x55fbb7063807 in ProcessUtility\n/home/runner/pgbuild/src/backend/tcop/utility.c:523\n #10 0x55fbb705bac3 in PortalRunUtility\n/home/runner/pgbuild/src/backend/tcop/pquery.c:1147\n #11 0x55fbb705c6fe in PortalRunMulti\n/home/runner/pgbuild/src/backend/tcop/pquery.c:1304\n #12 0x55fbb705d485 in PortalRun\n/home/runner/pgbuild/src/backend/tcop/pquery.c:786\n #13 0x55fbb704f613 in exec_simple_query\n/home/runner/pgbuild/src/backend/tcop/postgres.c:1214\n #14 0x55fbb7054b30 in PostgresMain\n/home/runner/pgbuild/src/backend/tcop/postgres.c:4486\n #15 0x55fbb6d78551 in BackendRun\n/home/runner/pgbuild/src/backend/postmaster/postmaster.c:4506\n #16 0x55fbb6d8334c in BackendStartup\n/home/runner/pgbuild/src/backend/postmaster/postmaster.c:4228\n #17 0x55fbb6d840cd in 
ServerLoop\n/home/runner/pgbuild/src/backend/postmaster/postmaster.c:1745\n #18 0x55fbb6d86611 in PostmasterMain\n/home/runner/pgbuild/src/backend/postmaster/postmaster.c:1417\n #19 0x55fbb6970b9b in main /home/runner/pgbuild/src/backend/main/main.c:209\n```\n\nIt seems to be a bug in the PostgreSQL core. The memory corruption\nhappens @ pg_shdepend.c:914:\n\n```\n slot[slot_stored_count]->tts_values[Anum_pg_shdepend_refobjid\n] = shdep->refobjid;\n slot[slot_stored_count]->tts_values[Anum_pg_shdepend_deptype]\n= shdep->deptype; <--- HERE\n\n ExecStoreVirtualTuple(slot[slot_stored_count]);\n```\n\nThe shdep->deptype value gets written to slot[0]->tts_isnull:\n\n```\n(lldb) p shdep->deptype\n(char) $0 = 'o'\n(lldb) p ((uint8_t*)slot[0]->tts_isnull)[0]\n(uint8_t) $2 = 'o'\n(lldb) p/d 'o'\n(char) $4 = 111\n```\n\nI checked the rest of the PostgreSQL code and apparently, it should\nhave been tts_values[Anum_pg_shdepend_FOO - 1].\n\nThe patch is attached. The problem was first reported offlist by Sven\nKlemm. Investigated and fixed by me.\n\n[1]: https://github.com/timescale/timescaledb/actions/runs/1343346998\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 20 Oct 2021 13:01:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Wed, Oct 20, 2021 at 01:01:31PM +0300, Aleksander Alekseev wrote:\n> I checked the rest of the PostgreSQL code and apparently, it should\n> have been tts_values[Anum_pg_shdepend_FOO - 1].\n> \n> The patch is attached. The problem was first reported offlist by Sven\n> Klemm. Investigated and fixed by me.\n\nYes, that's indeed a one-off bug when copying shared dependencies of a\ntemplate database to the new one. 
This is new as of e3931d0, so I'll\ntake care of that and double-check the area while on.\n--\nMichael", "msg_date": "Wed, 20 Oct 2021 19:55:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "> On 20 Oct 2021, at 12:55, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Oct 20, 2021 at 01:01:31PM +0300, Aleksander Alekseev wrote:\n>> I checked the rest of the PostgreSQL code and apparently, it should\n>> have been tts_values[Anum_pg_shdepend_FOO - 1].\n>> \n>> The patch is attached. The problem was first reported offlist by Sven\n>> Klemm. Investigated and fixed by me.\n> \n> Yes, that's indeed a one-off bug when copying shared dependencies of a\n> template database to the new one. This is new as of e3931d0, so I'll\n> take care of that and double-check the area while on.\n\nThe attached patch looks correct to me. Skimming the referenced commit I see\nnothing else sticking out.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:47:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On 2021-Oct-20, Michael Paquier wrote:\n\n> On Wed, Oct 20, 2021 at 01:01:31PM +0300, Aleksander Alekseev wrote:\n> > I checked the rest of the PostgreSQL code and apparently, it should\n> > have been tts_values[Anum_pg_shdepend_FOO - 1].\n> > \n> > The patch is attached. The problem was first reported offlist by Sven\n> > Klemm. Investigated and fixed by me.\n> \n> Yes, that's indeed a one-off bug when copying shared dependencies of a\n> template database to the new one. This is new as of e3931d0, so I'll\n> take care of that and double-check the area while on.\n\nOuch ... this means that pg_shdepends contents are broken for databases\ncreated with 14.0? hmm ... 
yes.\n\nalvherre=# create role rol;\nCREATE ROLE\nalvherre=# create table blarg() ;\nCREATE TABLE\nalvherre=# alter table blarg owner to rol;\nALTER TABLE\nalvherre=# create database bar template alvherre;\nCREATE DATABASE\nalvherre=# \\c bar\nAhora está conectado a la base de datos «bar» con el usuario «alvherre».\nbar=# select * from pg_shdepend;\n dbid | classid | objid | objsubid | refclassid | refobjid | deptype \n-------+---------+-------+----------+------------+----------+---------\n 0 | 0 | 0 | 0 | 1260 | 10 | p\n 0 | 0 | 0 | 0 | 1260 | 6171 | p\n 0 | 0 | 0 | 0 | 1260 | 6181 | p\n 0 | 0 | 0 | 0 | 1260 | 6182 | p\n 0 | 0 | 0 | 0 | 1260 | 3373 | p\n 0 | 0 | 0 | 0 | 1260 | 3374 | p\n 0 | 0 | 0 | 0 | 1260 | 3375 | p\n 0 | 0 | 0 | 0 | 1260 | 3377 | p\n 0 | 0 | 0 | 0 | 1260 | 4569 | p\n 0 | 0 | 0 | 0 | 1260 | 4570 | p\n 0 | 0 | 0 | 0 | 1260 | 4571 | p\n 0 | 0 | 0 | 0 | 1260 | 4200 | p\n 12975 | 1259 | 37686 | 0 | 1260 | 37685 | o\n | 37689 | 1259 | 37686 | 0 | 1260 | 5\n(14 filas)\n\nbar=# select 37689::regclass;\n regclass \n----------\n 37689\n(1 fila)\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 20 Oct 2021 09:19:51 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Wed, Oct 20, 2021 at 09:19:51AM -0300, Alvaro Herrera wrote:\n> Ouch ... this means that pg_shdepends contents are broken for databases\n> created with 14.0? hmm ... yes.\n\nYes, it means so :(\n\nI have fixed the issue for now, and monitored the rest of the tree.\n\nAnother issue is that we have zero coverage for this area of the code\nwhen creating a database from a template and copying over shared\ndependencies:\nhttps://coverage.postgresql.org/src/backend/catalog/pg_shdepend.c.gcov.html\n\nIt is easy enough to get an error on the new database with\npg_describe_object(). 
Your part about adding a shared dependency with\na table on a given role is simple enough, as well. While looking for\na place where to put such a test, 020_createdb.pl felt like a natural\nplace and we don't have any coverage for the case of TEMPLATE with\ncreatedb. So I would like to suggest something like the attached for\nHEAD.\n--\nMichael", "msg_date": "Thu, 21 Oct 2021 11:42:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Wed, Oct 20, 2021 at 5:20 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Ouch ... this means that pg_shdepends contents are broken for databases\n> created with 14.0? hmm ... yes.\n\nI think that EDB's pg_catcheck tool can detect problems like this one.\nPerhaps it can be converted into an amcheck/pg_amcheck patch, and\nsubmitted. That would give us very broad coverage.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 Oct 2021 19:59:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Wed, Oct 20, 2021 at 07:59:50PM -0700, Peter Geoghegan wrote:\n> I think that EDB's pg_catcheck tool can detect problems like this one.\n\nYes, pg_catcheck is able to catch that.\n\n> Perhaps it can be converted into an amcheck/pg_amcheck patch, and\n> submitted. That would give us very broad coverage.\n\nPerhaps. This means the creation of a new database with shared deps\nin contrib/amcheck/t/. But is amcheck really a correct target here?\nThe fields involved here are an int, some OIDs and a char with a given\nsubset of values making them harder to check. 
pg_catcheck does checks\nacross catalogs, maintaining a mapping list as of its definitions.c.\n--\nMichael", "msg_date": "Thu, 21 Oct 2021 12:27:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Wed, Oct 20, 2021 at 8:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps. This means the creation of a new database with shared deps\n> in contrib/amcheck/t/. But is amcheck really a correct target here?\n> The fields involved here are an int, some OIDs and a char with a given\n> subset of values making them harder to check. pg_catcheck does checks\n> across catalogs, maintaining a mapping list as of its definitions.c.\n\nUsers should be able to use pg_amcheck as a high-level corruption\ndetection tool, which should include any new pg_catcheck style catalog\nchecking functionality. Whether or not we need to involve\ncontrib/amcheck itself doesn't seem important to me right now. Offhand\nI think that we wouldn't, because as you point out pg_catcheck is a\nfrontend program that checks multiple databases.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 Oct 2021 20:35:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On 2021-Oct-21, Michael Paquier wrote:\n\n> On Wed, Oct 20, 2021 at 09:19:51AM -0300, Alvaro Herrera wrote:\n> > Ouch ... this means that pg_shdepends contents are broken for databases\n> > created with 14.0? hmm ... yes.\n> \n> Yes, it means so :(\n\nFor the upcoming release notes in 14.1 I think we'd do well to document\nhow to find out if you're affected by this; and if you are, how to fix\nit.\n\nI suppose pg_describe_object can be used on the contents of pg_shdepend\nto detect it. 
I'm less sure what to do to correct it -- delete the\nbogus entries and regenerate them with some bulk query?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n", "msg_date": "Thu, 21 Oct 2021 11:55:02 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I suppose pg_describe_object can be used on the contents of pg_shdepend\n> to detect it. I'm less sure what to do to correct it -- delete the\n> bogus entries and regenerate them with some bulk query?\n\nSeems that what copyTemplateDependencies wants to do can easily be\nmodeled by a SQL query, assuming you know which DB was cloned to\nwhich other one, and that the source's shdeps didn't change since\nthen. However, I'm not sure how we can get rid of existing bogus\nentries, especially if we'd like to preserve not-bogus ones\n(which very likely have gotten added to the destination DB since\nit was created).\n\nOn the whole I'm afraid that people messing with this manually are\nlikely to do more harm than good. pg_shdepend entries that don't\nmatch any object probably won't cause a problem, and the lack of\nprotection against untimely dropping a role is unlikely to be much\nof an issue for a role you're referencing in a template database.\nSo I suspect that practical issues will be rare. We're fortunate\nthat cloning a nonempty template database is rare already.\n\nBTW, I think there is an additional bug in copyTemplateDependencies:\nI do not see it initializing slot->tts_isnull[] anywhere. 
It\nprobably accidentally works (at least in devel builds) because we zero\nthat memory somewhere else, but surely this code shouldn't assume that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 11:52:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Thu, Oct 21, 2021 at 8:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We're fortunate\n> that cloning a nonempty template database is rare already.\n>\n>\nThat, and a major use case for doing so is to quickly stage up testing data\nin a new database (i.e., not a production use case). Though I could see\ntenant-based products using this to bootstrap new clients I'd hope that is\na minority case.\n\nDavid J.\n\nOn Thu, Oct 21, 2021 at 8:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:We're fortunate\nthat cloning a nonempty template database is rare already.That, and a major use case for doing so is to quickly stage up testing data in a new database (i.e., not a production use case).  Though I could see tenant-based products using this to bootstrap new clients I'd hope that is a minority case.David J.", "msg_date": "Thu, 21 Oct 2021 08:58:28 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Hi Tom,\n\n> BTW, I think there is an additional bug in copyTemplateDependencies:\n> I do not see it initializing slot->tts_isnull[] anywhere. It\n> probably accidentally works (at least in devel builds) because we zero\n> that memory somewhere else, but surely this code shouldn't assume that?\n\ntts_isnull[] is zeroed in:\n- copyTemplateDependencies\n-- MakeSingleTupleTableSlot, which simply wraps:\n--- MakeTupleTableSlot\n\n... where the slot is allocated with palloc0. 
The assumption that\nMakeSingleTupleTableSlot() returns valid TupleTableSlot* with zeroed\ntts_isnull[] seems reasonable, no?\n\nWhat confuses me is the fact that we have two procedures that do the\nsame thing. Maybe one is redundant.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:48:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Fri, Oct 22, 2021 at 10:48:57AM +0300, Aleksander Alekseev wrote:\n> ... where the slot is allocated with palloc0. The assumption that\n> MakeSingleTupleTableSlot() returns valid TupleTableSlot* with zeroed\n> tts_isnull[] seems reasonable, no?\n\nYes, I don't see any need to do something more here. The number of\narguments is fetched from the tuple descriptor itself, so the\nallocation is sufficient.\n\n> What confuses me is the fact that we have two procedures that do the\n> same thing. Maybe one is redundant.\n\nDo you have something in mind here?\n\nTaking advantage of the catalog types and knowing that this is a\none-off, it is possible to recover dbid, classid, objid, objsubid and\nrefclassid. deptype can be mostly guessed from refclassid, but the\nreal problem is that refobjid is just lost because of the casting to a\nchar from and Oid.\n\n[ ... Thinks more ... ]\n\nHmm. Wouldn't it be as simple as removing the entries in pg_shdepend\nwhere dbid is NULL, and do an INSERT/SELECT with the existing entries\nin pg_shdepend from the template database, updating dbid to the new\ndatabase? That would require users to know which template they used\nas origin, as well as we could assume that no shared deps have changed\nbut that can be guessed by looking at all the misplaced fields. 
It is\ntrue enough that users could do a lot of damage with surgical DMLs\non catalogs, though.\n--\nMichael", "msg_date": "Fri, 22 Oct 2021 18:24:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Oct 22, 2021 at 10:48:57AM +0300, Aleksander Alekseev wrote:\n>> ... where the slot is allocated with palloc0. The assumption that\n>> MakeSingleTupleTableSlot() returns valid TupleTableSlot* with zeroed\n>> tts_isnull[] seems reasonable, no?\n\n> Yes, I don't see any need to do something more here.\n\nThat assumption is exactly what I'm objecting to. I don't think\nwe make it in other places, and I don't like making it here.\n(By \"here\" I mean all of e3931d0, because it made the same omission\nin several places.)\n\nThe primary reason why I think it's a bad idea is that only one\npath in MakeSingleTupleTableSlot provides a pre-zeroed tts_isnull\narray --- if you don't supply a tuple descriptor at creation,\nthe assumption falls down. So even if this coding technique is\nsafe where it is, it is a hazard for anyone copying the code into\nsome other context.\n\nI might be happier if we tried to guarantee that *every* way of\ncreating a slot will end with a pre-zeroed isnull array, and then\ngot rid of any thereby-duplicative memsets. But that would be\na lot more invasive than just making these places get in step.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 09:38:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Hi Michael,\n\n> Do you have something in mind here?\n\nYep. 
This is not a priority though, thus I created a separate CF entry:\n\nhttps://commitfest.postgresql.org/35/3371/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Oct 2021 16:44:09 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "> On 22 Oct 2021, at 15:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Fri, Oct 22, 2021 at 10:48:57AM +0300, Aleksander Alekseev wrote:\n>>> ... where the slot is allocated with palloc0. The assumption that\n>>> MakeSingleTupleTableSlot() returns valid TupleTableSlot* with zeroed\n>>> tts_isnull[] seems reasonable, no?\n> \n>> Yes, I don't see any need to do something more here.\n> \n> That assumption is exactly what I'm objecting to. I don't think\n> we make it in other places, and I don't like making it here.\n> (By \"here\" I mean all of e3931d0, because it made the same omission\n> in several places.)\n\nThe attached fixes the the two ones I spotted, are there any I missed?\nRegardless of if we want to change the API (as discussed elsewhere here and in\na new thread), something like the attached should be done first and in 14 I\nthink.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 22 Oct 2021 20:22:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 22 Oct 2021, at 15:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (By \"here\" I mean all of e3931d0, because it made the same omission\n>> in several places.)\n\n> The attached fixes the the two ones I spotted, are there any I missed?\n\nAh, you're right, InsertPgAttributeTuples is the only other spot in\nthat patch that's actually touching slots. 
I'd skimmed it a little\ntoo quickly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 14:34:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "> On 22 Oct 2021, at 20:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 22 Oct 2021, at 15:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> (By \"here\" I mean all of e3931d0, because it made the same omission\n>>> in several places.)\n> \n>> The attached fixes the the two ones I spotted, are there any I missed?\n> \n> Ah, you're right, InsertPgAttributeTuples is the only other spot in\n> that patch that's actually touching slots. I'd skimmed it a little\n> too quickly.\n\nThanks for confirming, unless there are objections I'll apply the fix to master\nand backpatch to 14.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 22:49:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Fri, Oct 22, 2021 at 10:49:38PM +0200, Daniel Gustafsson wrote:\n> On 22 Oct 2021, at 20:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ah, you're right, InsertPgAttributeTuples is the only other spot in\n>> that patch that's actually touching slots. I'd skimmed it a little\n>> too quickly.\n> \n> Thanks for confirming, unless there are objections I'll apply the fix to master\n> and backpatch to 14.\n\nFine by me. The patch looks OK.\n--\nMichael", "msg_date": "Sat, 23 Oct 2021 07:47:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Thu, Oct 21, 2021 at 11:42:31AM +0900, Michael Paquier wrote:\n> It is easy enough to get an error on the new database with\n> pg_describe_object(). 
Your part about adding a shared dependency with\n> a table on a given role is simple enough, as well. While looking for\n> a place where to put such a test, 020_createdb.pl felt like a natural\n> place and we don't have any coverage for the case of TEMPLATE with\n> createdb. So I would like to suggest something like the attached for\n> HEAD.\n\nI was thinking on this one over the last couple of days, and doing a\ncopy of shared deps from a template within createdb still feels\nnatural, as of this patch:\nhttps://www.postgresql.org/message-id/YXDTl+PfSnqmbbkE@paquier.xyz\n\nWould somebody object to the addition of this test? Or perhaps\nsomebody has a better idea?\n--\nMichael", "msg_date": "Mon, 25 Oct 2021 16:51:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I was thinking on this one over the last couple of days, and doing a\n> copy of shared deps from a template within createdb still feels\n> natural, as of this patch:\n> https://www.postgresql.org/message-id/YXDTl+PfSnqmbbkE@paquier.xyz\n> Would somebody object to the addition of this test? Or perhaps\n> somebody has a better idea?\n\nI agree that we're not testing that area well enough. Proposed\npatch seems basically OK, but I think the test needs to be stricter\nabout what the expected output looks like --- for instance, it\nwouldn't complain if tab_foobar were described as something other\nthan a table.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:59:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:59:52AM -0400, Tom Lane wrote:\n> I agree that we're not testing that area well enough. 
Proposed\n> patch seems basically OK, but I think the test needs to be stricter\n> about what the expected output looks like --- for instance, it\n> wouldn't complain if tab_foobar were described as something other\n> than a table.\n\nIndeed. There was also a problem in the regex itself, where '|' was\nnot escaped so the regex was not strict enough. While on it, I have\nadded a policy in the set copied to the new database. Testing the\ncase where the set of slots is full would require 2300~ entries, that\nwould take some time..\n--\nMichael", "msg_date": "Tue, 26 Oct 2021 14:43:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "> On 23 Oct 2021, at 00:47, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Oct 22, 2021 at 10:49:38PM +0200, Daniel Gustafsson wrote:\n>> On 22 Oct 2021, at 20:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Ah, you're right, InsertPgAttributeTuples is the only other spot in\n>>> that patch that's actually touching slots. I'd skimmed it a little\n>>> too quickly.\n>> \n>> Thanks for confirming, unless there are objections I'll apply the fix to master\n>> and backpatch to 14.\n> \n> Fine by me. The patch looks OK.\n\nApplied to master and 14, thanks.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 10:49:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" }, { "msg_contents": "On Tue, Oct 26, 2021 at 02:43:26PM +0900, Michael Paquier wrote:\n> Indeed. There was also a problem in the regex itself, where '|' was\n> not escaped so the regex was not strict enough. While on it, I have\n> added a policy in the set copied to the new database. 
Testing the\n> case where the set of slots is full would require 2300~ entries, that\n> would take some time..\n\nApplied this one as of 70bfc5a.\n--\nMichael", "msg_date": "Thu, 28 Oct 2021 10:55:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory corruption in pg_shdepend.c" } ]
[ { "msg_contents": "Hi all,\n\nI have been wondering about some things related to schema privileges:\n\n1) Why do visibility rules apply to the \\d command, but not to system\ntables? What is the purpose of hiding stuff from \\d output while users\ncan get the same info another way?\n\n2) What is the reasoning behind separating schema privileges\nspecifically into CREATE and USAGE? And is it something that may be\nchanged in PG in the future?\n\nThe current logic allows a situation where after creating a table, a\nuser is not able to do anything with it despite being the owner. This\ncan be confusing, and I can't really imagine a scenario where it would\nbe useful from a security standpoint.\n\nAlternative approaches could be:\n- Separating schema privileges into more categories, such as CREATE,\nALTER, DROP, SELECT, UPDATE, INSERT etc, like it was done here [1] for\nexample. Then it allows more granular control which seems useful for\nsecurity.\n- To avoid many categories, only have USAGE to fully allow or fully\nprohibit someone to do stuff in the schema. Then it at least prevents\nthe weird situation where a user can create an object but can't do\nanything with it.\n\n[1] https://www.ibm.com/docs/en/db2/11.5?topic=privileges-schema\n\nThank you,\nAnna\n\n\n", "msg_date": "Wed, 20 Oct 2021 15:53:02 +0100", "msg_from": "Anna Akenteva <akenteva.annie@gmail.com>", "msg_from_op": true, "msg_subject": "Some questions about schema privileges" }, { "msg_contents": "On Wed, Oct 20, 2021 at 8:59 AM Anna Akenteva <akenteva.annie@gmail.com>\nwrote:\n\n> Hi all,\n>\n> I have been wondering about some things related to schema privileges:\n>\n> 1) Why do visibility rules apply to the \\d command, but not to system\n> tables? What is the purpose of hiding stuff from \\d output while users\n> can get the same info another way?\n>\n\nIMO the intended usage for \\d is to help people write queries. 
It seems\nreasonable to only show those things that would be resolved to if included\nin such a query. Its a convenience thing, not a security thing.\n\n\n> 2) What is the reasoning behind separating schema privileges\n> specifically into CREATE and USAGE? And is it something that may be\n> changed in PG in the future?\n>\n\nWell, because \"it is defined this way in the SQL Standard\" seems to apply\nhere (at least, the grant command compatibility notes doesn't indicate we\nare non-compliant here).\n\n\n> The current logic allows a situation where after creating a table, a\n> user is not able to do anything with it despite being the owner. This\n> can be confusing, and I can't really imagine a scenario where it would\n> be useful from a security standpoint.\n>\n\nYes, granting create but not usage isn't all that useful. But granting\nusage without create is. That is only possible if they are separate\ngrants. I suppose create could imply usage, but that just isn't how it\nworks, and isn't going to be changed.\n\n>\n> Alternative approaches could be:\n> - Separating schema privileges into more categories, such as CREATE,\n> ALTER, DROP, SELECT, UPDATE, INSERT etc, [...]\n\nThen it allows more granular control which seems useful for security.\n\n\nSo, kinda like default privileges but done at the schema, not\ndatabase/dba-role, scope. I'd rather there be better tools for managing\npermissions but still have them applied at the individual object level.\nAdding a layer of indirection takes an already complicated model and just\ncomplicates it further. It doesn't seem like a development and maintenance\nburden that the core project would benefit from taking on.\n\n\n> - To avoid many categories, only have USAGE to fully allow or fully\n> prohibit someone to do stuff in the schema. 
Then it at least prevents\n> the weird situation where a user can create an object but can't do\n> anything with it.\n>\n>\nThis doesn't seem like a problem that it is worth spending time avoiding.", "msg_date": "Wed, 20 Oct 2021 09:35:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some questions about schema privileges" } ]
[ { "msg_contents": "Hi,\n\nThe FATAL error \"recovery ended before configured recovery target was\nreached\" introduced by commit at [1] in PG 14 is causing the standby\nto go down after having spent a good amount of time in recovery. There\ncan be cases where the arrival of required WAL (for reaching recovery\ntarget) from the archive location to the standby may take time and\nmeanwhile the standby failing with the FATAL error isn't good.\nInstead, how about we make the standby wait for a certain amount of\ntime (with a GUC) so that it can keep looking for the required WAL. If\nit gets the required WAL during the wait time, then it succeeds in\nreaching the recovery target (no FATAL error of course). If it\ndoesn't, the timeout occurs and the standby fails with the FATAL\nerror. The value of the new GUC can probably be set to the average\ntime it takes for the WAL to reach archive location from the primary +\nfrom archive location to the standby, default 0 i.e. disabled.\n\nI'm attaching a WIP patch. I've tested it on my dev system and the\nrecovery regression tests are passing with it. I will provide a better\nversion later, probably with a test case.\n\nThoughts?\n\n[1] commit dc788668bb269b10a108e87d14fefd1b9301b793\n\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Wed Jan 29 15:43:32 2020 +0100\n\n Fail if recovery target is not reached\n\n Before, if a recovery target is configured, but the archive ended\n before the target was reached, recovery would end and the server would\n promote without further notice. 
That was deemed to be pretty wrong.\n With this change, if the recovery target is not reached, it is a fatal\n error.\n\n Based-on-patch-by: Leif Gunnar Erlandsen <leif@lako.no>\n Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n Discussion:\nhttps://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 20 Oct 2021 21:35:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "add retry mechanism for achieving recovery target before emitting\n FATA error \"recovery ended before configured recovery target was reached\"" }, { "msg_contents": "On Wed, 2021-10-20 at 21:35 +0530, Bharath Rupireddy wrote:\n> The FATAL error \"recovery ended before configured recovery target\n> was\n> reached\" introduced by commit at [1] in PG 14 is causing the standby\n> to go down after having spent a good amount of time in recovery.\n> There\n> can be cases where the arrival of required WAL (for reaching recovery\n> target) from the archive location to the standby may take time and\n> meanwhile the standby failing with the FATAL error isn't good.\n> Instead, how about we make the standby wait for a certain amount of\n> time (with a GUC) so that it can keep looking for the required WAL. 
\n\nHow is archiving configured, and would it be possible to introduce\nlogic into the restore_command to handle slow-to-arrive WAL?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 21 Oct 2021 17:24:34 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: add retry mechanism for achieving recovery target before\n emitting FATA error \"recovery ended before configured recovery target was\n reached\"" }, { "msg_contents": "On Fri, Oct 22, 2021 at 5:54 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2021-10-20 at 21:35 +0530, Bharath Rupireddy wrote:\n> > The FATAL error \"recovery ended before configured recovery target\n> > was\n> > reached\" introduced by commit at [1] in PG 14 is causing the standby\n> > to go down after having spent a good amount of time in recovery.\n> > There\n> > can be cases where the arrival of required WAL (for reaching recovery\n> > target) from the archive location to the standby may take time and\n> > meanwhile the standby failing with the FATAL error isn't good.\n> > Instead, how about we make the standby wait for a certain amount of\n> > time (with a GUC) so that it can keep looking for the required WAL.\n>\n> How is archiving configured, and would it be possible to introduce\n> logic into the restore_command to handle slow-to-arrive WAL?\n\nThanks Jeff!\n\nIf the suggestion is to have the wait and retry logic embedded into\nthe user-written restore_command, IMHO, it's not a good idea as the\nrestore_command is external to the core PG and the FATAL error\n\"recovery ended before configured recovery target was reached\" is an\ninternal thing. Having the retry logic (controlled with a GUC) within\nthe core, when the startup process hits the recovery end before the\ntarget, is a better way and it is something the core PG can offer.\nWith this, the amount of work spent in recovery by the standby isn't\nwasted if the GUC is enabled with the right value. 
The optimal value\nsomeone can set is the average time it takes for the WAL to reach\narchive location from the primary + from archive location to the\nstandby. By default, we can disable the new GUC with value 0 so that\nwhoever wants can set it.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 22 Oct 2021 15:34:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add retry mechanism for achieving recovery target before emitting\n FATA error \"recovery ended before configured recovery target was reached\"" }, { "msg_contents": "On Fri, 2021-10-22 at 15:34 +0530, Bharath Rupireddy wrote:\n> If the suggestion is to have the wait and retry logic embedded into\n> the user-written restore_command, IMHO, it's not a good idea as the\n> restore_command is external to the core PG and the FATAL error\n> \"recovery ended before configured recovery target was reached\" is an\n> internal thing. \n\nIt seems likely that you'd want to tweak the exact behavior for the\ngiven system. For instance, if the files are making some progress, and\nyou can estimate that in 2 more minutes everything will be fine, then\nyou may be more willing to wait those two minutes. But if no progress\nhas happened since recovery began 15 minutes ago, you may want to fail\nimmediately.\n\nAll of this nuance would be better captured in a specialized script\nthan a generic timeout in the server code.\n\nWhat do you want to do after the timeout happens? If you want to issue\na WARNING instead of failing outright, perhaps that makes sense for\nexploratory PITR cases. That could be a simple boolean GUC without\nneeding to introduce the timeout logic into the server.\n\nI think it's an interesting point that it can be hard to choose a\nreasonable recovery target if the system is completely down. We could\nuse some better tooling or metadata around the lsns, xids or timestamp\nranges available in a pg_wal directory or an archive. 
Even better would\nbe to see the available named restore points. This would make is easier\nto calculate how long recovery might take for a given restore point, or\nwhether it's not going to work at all because there's not enough WAL.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 22 Oct 2021 13:16:54 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: add retry mechanism for achieving recovery target before\n emitting FATA error \"recovery ended before configured recovery target was\n reached\"" }, { "msg_contents": "On Sat, Oct 23, 2021 at 1:46 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2021-10-22 at 15:34 +0530, Bharath Rupireddy wrote:\n> > If the suggestion is to have the wait and retry logic embedded into\n> > the user-written restore_command, IMHO, it's not a good idea as the\n> > restore_command is external to the core PG and the FATAL error\n> > \"recovery ended before configured recovery target was reached\" is an\n> > internal thing.\n>\n> What do you want to do after the timeout happens? If you want to issue\n> a WARNING instead of failing outright, perhaps that makes sense for\n> exploratory PITR cases. That could be a simple boolean GUC without\n> needing to introduce the timeout logic into the server.\n\nIf you are suggesting to give the user more control on what should\nhappen to the standby even after the timeout, then, the 2 new GUCs\nrecovery_target_retry_timeout (int) and\nrecovery_target_continue_after_timeout (bool) will really help users\nchoose what they want. I'm not sure if it is okay to have 2 new GUCs.\nLet's hear from other hackers what they think about this.\n\n> I think it's an interesting point that it can be hard to choose a\n> reasonable recovery target if the system is completely down. We could\n> use some better tooling or metadata around the lsns, xids or timestamp\n> ranges available in a pg_wal directory or an archive. 
Even better would\n> be to see the available named restore points. This would make is easier\n> to calculate how long recovery might take for a given restore point, or\n> whether it's not going to work at all because there's not enough WAL.\n\nI think pg_waldump can help here to do some exploratory analysis of\nthe available WAL in the directory where the WAL files are present.\nSince it is an independent C program, it can run even when the server\nis down and also run on archive location.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 23 Oct 2021 09:31:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add retry mechanism for achieving recovery target before emitting\n FATA error \"recovery ended before configured recovery target was reached\"" }, { "msg_contents": "On Sat, 2021-10-23 at 09:31 +0530, Bharath Rupireddy wrote:\n> If you are suggesting ...\n\nYour complaint seems to be coming from commit dc788668, so the most\ndirect answer would be to make that configurable to the old behavior,\nnot to invent a new timeout behavior.\n\nIf I understand correctly, you are doing PITR from an archive, right?\nSo would restore_command be a reasonable place for the timeout?\n\nAnd can you provide some approximate numbers to help me understand\nwhere the timeout would be helpful? E.g. you have W GB of WAL to\nreplay, and restore would take X minutes, but some WAL is missing so\nyou fail after X-Y minutes, but if you has timeout Z everything would\nbe great.\n\n> I think pg_waldump can help here to do some exploratory analysis of\n> the available WAL in the directory where the WAL files are present.\n> Since it is an independent C program, it can run even when the server\n> is down and also run on archive location.\n\nRight, it's possible to do, but I think there's room for improvement so\nwe don't have to always scan the WAL. I'm getting a bit off-topic from\nyour proposal though. 
I'll bring it up in another thread when my\nthoughts on this are more clear.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 23 Oct 2021 11:54:53 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: add retry mechanism for achieving recovery target before\n emitting FATA error \"recovery ended before configured recovery target was\n reached\"" }, { "msg_contents": "At Wed, 20 Oct 2021 21:35:44 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> The FATAL error \"recovery ended before configured recovery target was\n> reached\" introduced by commit at [1] in PG 14 is causing the standby\n> to go down after having spent a good amount of time in recovery. There\n> can be cases where the arrival of required WAL (for reaching recovery\n> target) from the archive location to the standby may take time and\n> meanwhile the standby failing with the FATAL error isn't good.\n> Instead, how about we make the standby wait for a certain amount of\n> time (with a GUC) so that it can keep looking for the required WAL. If\n> it gets the required WAL during the wait time, then it succeeds in\n> reaching the recovery target (no FATAL error of course). If it\n> doesn't, the timeout occurs and the standby fails with the FATAL\n> error. The value of the new GUC can probably be set to the average\n> time it takes for the WAL to reach archive location from the primary +\n> from archive location to the standby, default 0 i.e. disabled.\n> \n> I'm attaching a WIP patch. I've tested it on my dev system and the\n> recovery regression tests are passing with it. I will provide a better\n> version later, probably with a test case.\n> \n> Thoughts?\n\nIt looks like starting a server in non-hot standby mode only fetching\nfrom archive. The only difference is it doesn't have timeout.\n\nDoesn't that cofiguration meet your requirements?\n\nOr, if timeout matters, I agree with Jeff. 
Retrying in restore_command\nlooks fine.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Oct 2021 09:59:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add retry mechanism for achieving recovery target before\n emitting FATA error \"recovery ended before configured recovery target was\n reached\"" }, { "msg_contents": "On Sat, Oct 23, 2021 at 1:46 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> What do you want to do after the timeout happens? If you want to issue\n> a WARNING instead of failing outright, perhaps that makes sense for\n> exploratory PITR cases. That could be a simple boolean GUC without\n> needing to introduce the timeout logic into the server.\n\nThanks Jeff. I posted the patch in a separate thread[1] for new GUC\n(WARN + promotion or shutdown with FATAL error) in case the recovery\ntarget isn't reached.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACWR4iaph7AWCr5-V9dXqpf2p5B%3D3fTyvLfL8VD_E%2Bx0tA%40mail.gmail.com.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 12 Nov 2021 15:47:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add retry mechanism for achieving recovery target before emitting\n FATA error \"recovery ended before configured recovery target was reached\"" } ]
[ { "msg_contents": "Hi,\n\nPerennially our users have complaints about slow count(*) when coming from\nsome other systems. Index-only scans help, but I think we can do better. I\nrecently wondered if a BRIN index could be used to answer min/max aggregate\nqueries over the whole table, and it turns out it doesn't. However, then it\noccurred to me that if we had an opclass that keeps track of the count in\neach page range, that would be a way to do a fast count(*) by creating the\nright index. That would require planner support and other work, but it\nseems doable. Any opinions on whether this is worth the effort?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 20 Oct 2021 13:51:41 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[RFC] speed up count(*)" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Perennially our users have complaints about slow count(*) when coming from\n> some other systems. Index-only scans help, but I think we can do better. I\n> recently wondered if a BRIN index could be used to answer min/max aggregate\n> queries over the whole table, and it turns out it doesn't. 
However, then it\n> occurred to me that if we had an opclass that keeps track of the count in\n> each page range, that would be a way to do a fast count(*) by creating the\n> right index. That would require planner support and other work, but it\n> seems doable. Any opinions on whether this is worth the effort?\n\nThe core reason why this is hard is that we insist on giving the right\nanswer. In particular, count(*) is supposed to count the rows that\nsatisfy the asker's snapshot. So I don't see a good way to answer it\nfrom an index only, given that we don't track visibility accurately\nin indexes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:57:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "Hi, \n\nOn October 20, 2021 10:57:50 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>John Naylor <john.naylor@enterprisedb.com> writes:\n>> Perennially our users have complaints about slow count(*) when coming from\n>> some other systems. Index-only scans help, but I think we can do better. I\n>> recently wondered if a BRIN index could be used to answer min/max aggregate\n>> queries over the whole table, and it turns out it doesn't. However, then it\n>> occurred to me that if we had an opclass that keeps track of the count in\n>> each page range, that would be a way to do a fast count(*) by creating the\n>> right index. That would require planner support and other work, but it\n>> seems doable. Any opinions on whether this is worth the effort?\n>\n>The core reason why this is hard is that we insist on giving the right\n>answer. In particular, count(*) is supposed to count the rows that\n>satisfy the asker's snapshot. 
So I don't see a good way to answer it\n>from an index only, given that we don't track visibility accurately\n>in indexes.\n\nYeah.\n\nIf we really wanted to, we could accelerate unqualified count(*) substantially by computing the count inside heapam. There's a *lot* of overhead associated with returning tuples, grouping them, etc. Especially with all_visible set that's bound to be way faster (I'd guess are least 3-5x) if done in heapam (like we do the visibility determinations in heapgetpage for all tuples on a page at once).\n\n\nBut it's doubtful the necessary infrastructure is worth it. Perhaps that changes with the infrastructure some columnar AMs are asking for. They have a need to push more stuff down to the AM that's more generic than just count(*).\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Wed, 20 Oct 2021 11:22:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On 10/20/21 19:57, Tom Lane wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>> Perennially our users have complaints about slow count(*) when coming from\n>> some other systems. Index-only scans help, but I think we can do better. I\n>> recently wondered if a BRIN index could be used to answer min/max aggregate\n>> queries over the whole table, and it turns out it doesn't. However, then it\n>> occurred to me that if we had an opclass that keeps track of the count in\n>> each page range, that would be a way to do a fast count(*) by creating the\n>> right index. That would require planner support and other work, but it\n>> seems doable. Any opinions on whether this is worth the effort?\n> \n> The core reason why this is hard is that we insist on giving the right\n> answer. In particular, count(*) is supposed to count the rows that\n> satisfy the asker's snapshot. 
So I don't see a good way to answer it\n> from an index only, given that we don't track visibility accurately\n> in indexes.\n> \n\nCouldn't we simply inspect the visibility map, use the index data only \nfor fully visible/summarized ranges, and inspect the heap for the \nremaining pages? That'd still be a huge improvement for tables with most \nonly a few pages modified recently, which is a pretty common case.\n\nI think the bigger issue is that people rarely do COUNT(*) on the whole \ntable. There are usually other conditions and/or GROUP BY, and I'm not \nsure how would that work.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 Oct 2021 20:23:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On Wed, Oct 20, 2021 at 2:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n> Couldn't we simply inspect the visibility map, use the index data only\n> for fully visible/summarized ranges, and inspect the heap for the\n> remaining pages? That'd still be a huge improvement for tables with most\n> only a few pages modified recently, which is a pretty common case.\n>\n> I think the bigger issue is that people rarely do COUNT(*) on the whole\n> table. There are usually other conditions and/or GROUP BY, and I'm not\n> sure how would that work.\n\nRight. My (possibly hazy) recollection is that people don't have quite as\nhigh an expectation for queries with more complex predicates and/or\ngrouping. It would be interesting to see what the balance is.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 20 Oct 2021 14:33:15 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "\n\nOn 10/20/21 20:33, John Naylor wrote:\n> \n> On Wed, Oct 20, 2021 at 2:23 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n> >\n> > Couldn't we simply inspect the visibility map, use the index data only\n> > for fully visible/summarized ranges, and inspect the heap for the\n> > remaining pages? That'd still be a huge improvement for tables with most\n> > only a few pages modified recently, which is a pretty common case.\n> >\n> > I think the bigger issue is that people rarely do COUNT(*) on the whole\n> > table. There are usually other conditions and/or GROUP BY, and I'm not\n> > sure how would that work.\n> \n> Right. My (possibly hazy) recollection is that people don't have quite \n> as high an expectation for queries with more complex predicates and/or \n> grouping. It would be interesting to see what the balance is.\n> \n\nI don't know where the balance is, and I doubt it's possible to answer \nthat in general - I'm sure some workloads might benefit significantly.\n\nI wonder if multi-column BRIN indexes would help in cases with \nadditional predicates.
Seems possible.\n\nBTW you mentioned using BRIN indexes for min/max - I've been thinking \nabout using BRIN indexes for ordering/sorting, which seems related. And \nI think it's actually doable, so I wonder why you concluded using BRIN \nindexes for min/max is not possible?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 Oct 2021 20:40:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On Wed, Oct 20, 2021 at 2:41 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> BTW you mentioned using BRIN indexes for min/max - I've been thinking\n> about using BRIN indexes for ordering/sorting, which seems related. And\n> I think it's actually doable, so I wonder why you concluded using BRIN\n> indexes for min/max is not possible?\n\nI just gathered it was not implemented in the planner, going by the explain\nplan in the toy query I tried, and then I got the lightbulb in my head\nabout count(*).\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 20 Oct 2021 15:01:13 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On 10/20/21 2:33 PM, John Naylor wrote:\n> \n> On Wed, Oct 20, 2021 at 2:23 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n> >\n> > Couldn't we simply inspect the visibility map, use the index data only\n> > for fully visible/summarized ranges, and inspect the heap for the\n> > remaining pages? That'd still be a huge improvement for tables with most\n> > only a few pages modified recently, which is a pretty common case.\n> >\n> > I think the bigger issue is that people rarely do COUNT(*) on the whole\n> > table. There are usually other conditions and/or GROUP BY, and I'm not\n> > sure how would that work.\n> \n> Right. My (possibly hazy) recollection is that people don't have quite \n> as high an expectation for queries with more complex predicates and/or \n> grouping. It would be interesting to see what the balance is.\n\nI think you are exactly correct.
\nThe stock answer for that has been to do\n\n select reltuples from pg_class\n where relname = 'foo';\n\nBut that is unsatisfying because the problem is often with some \nbenchmark or another that cannot be changed.\n\nI'm sure this idea will be shot down in flames <donning flameproof \nsuit>, but what if we had a default \"off\" GUC which could be turned on \ncausing the former to be transparently rewritten into the latter \n</donning flameproof suit>?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Thu, 21 Oct 2021 09:09:26 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On Thu, Oct 21, 2021 at 9:09 AM Joe Conway <mail@joeconway.com> wrote:\n> I think you are exactly correct. People seem to understand that with a\n> predicate it is harder, but they expect\n>\n> select count(*) from foo;\n>\n> to be nearly instantaneous, and they don't really need it to be exact.\n> The stock answer for that has been to do\n>\n> select reltuples from pg_class\n> where relname = 'foo';\n>\n> But that is unsatisfying because the problem is often with some\n> benchmark or another that cannot be changed.\n\nI would also expect it to almost always give the wrong answer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:06:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On 10/21/21 4:06 PM, Robert Haas wrote:\n> On Thu, Oct 21, 2021 at 9:09 AM Joe Conway <mail@joeconway.com> wrote:\n>> I think you are exactly correct. 
People seem to understand that with a\n>> predicate it is harder, but they expect\n>>\n>> select count(*) from foo;\n>>\n>> to be nearly instantaneous, and they don't really need it to be exact.\n>> The stock answer for that has been to do\n>>\n>> select reltuples from pg_class\n>> where relname = 'foo';\n>>\n>> But that is unsatisfying because the problem is often with some\n>> benchmark or another that cannot be changed.\n> \n> I would also expect it to almost always give the wrong answer.\n\n\nThat is a grossly overstated position. When I have looked, it is often \nnot that terribly far off. And for many use cases that I have heard of \nat least, quite adequate.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:19:05 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On Thu, Oct 21, 2021 at 4:19 PM Joe Conway <mail@joeconway.com> wrote:\n> That is a grossly overstated position. When I have looked, it is often\n> not that terribly far off. And for many use cases that I have heard of\n> at least, quite adequate.\n\nI don't think it's grossly overstated. If you need an approximation it\nmay be good enough, but I don't think it will often be exactly correct\n- probably only if the table is small and rarely modified.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:23:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On 10/21/21 4:23 PM, Robert Haas wrote:\n> On Thu, Oct 21, 2021 at 4:19 PM Joe Conway <mail@joeconway.com> wrote:\n>> That is a grossly overstated position. When I have looked, it is often\n>> not that terribly far off. 
And for many use cases that I have heard of\n>> at least, quite adequate.\n> \n> I don't think it's grossly overstated. If you need an approximation it\n> may be good enough, but I don't think it will often be exactly correct\n> - probably only if the table is small and rarely modified.\n\nmeh -- the people who expect this to be impossibly fast don't typically \nneed or expect it to be exactly correct, and there is no way to make it \n\"exactly correct\" in someone's snapshot without doing all the work.\n\nThat is why I didn't suggest making it the default. If you flip the \nswitch, you would get a very fast approximation. If you care about \naccuracy, you accept it has to be slow.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:29:09 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "\nOn 10/21/21 4:29 PM, Joe Conway wrote:\n> On 10/21/21 4:23 PM, Robert Haas wrote:\n>> On Thu, Oct 21, 2021 at 4:19 PM Joe Conway <mail@joeconway.com> wrote:\n>>> That is a grossly overstated position. When I have looked, it is often\n>>> not that terribly far off. And for many use cases that I have heard of\n>>> at least, quite adequate.\n>>\n>> I don't think it's grossly overstated. If you need an approximation it\n>> may be good enough, but I don't think it will often be exactly correct\n>> - probably only if the table is small and rarely modified.\n>\n> meh -- the people who expect this to be impossibly fast don't\n> typically need or expect it to be exactly correct, and there is no way\n> to make it \"exactly correct\" in someone's snapshot without doing all\n> the work.\n>\n> That is why I didn't suggest making it the default. If you flip the\n> switch, you would get a very fast approximation. 
If you care about\n> accuracy, you accept it has to be slow.\n>\n\nI don't think we really want a switch for \"inaccurate results\nacceptable\", and I doubt the standard would accept an approximation for\ncount(*).\n\nBut something else that gave a fast approximate answer\n(\"count_estimate(*)\"?) would be useful to many. Not portable but still\nuseful, if someone could come up with a reasonable implementation.\n\n\ncheers\n\n\nandrew\n\n \n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:51:49 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" }, { "msg_contents": "On Thu, Oct 21, 2021 at 4:29 PM Joe Conway <mail@joeconway.com> wrote:\n> meh -- the people who expect this to be impossibly fast don't typically\n> need or expect it to be exactly correct, and there is no way to make it\n> \"exactly correct\" in someone's snapshot without doing all the work.\n\nI think it could actually be WAY faster than it is if, as Andres says,\nwe had the ability to push the count operation inside the heap AM. I\nbelieve we have a tendency to attribute complaints like this to people\nhave unreasonable expectations, but here I'm not sure the expectation\nis unreasonable. I vaguely recall writing a special-purpose code to\ncount the number of tuples in relation years ago, and IIRC it was\nblazingly fast compared to letting our executor do it. I agree,\nhowever, that an approximation can be faster still.\n\n> That is why I didn't suggest making it the default. If you flip the\n> switch, you would get a very fast approximation. If you care about\n> accuracy, you accept it has to be slow.\n\nI'm not really here to take a position on the proposal. It doesn't\nexcite me, because I have not run across any users in the combination\nof circumstances you mention: query can't be changed, exact answer not\nactually required, whole table being counted. 
But I am not here to\ncall you a liar either. If you run across users in that situation all\nthe time, then you do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:56:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] speed up count(*)" } ]
[ { "msg_contents": "These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n\nThe purpose of these patches is to allow non-superusers to configure most aspects of a system, so long as they belong to the appropriate privileged role(s):\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Oct 2021 11:40:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "New privileged roles which can SET and ALTER SYSTEM SET" } ]
[ { "msg_contents": "These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n\nThe purpose of these patches is to allow ordinary users to create and own event triggers without introducing escalation attack vectors:\n\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Oct 2021 11:40:32 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Non-superuser event trigger owners" }, { "msg_contents": "Over in [1], you wrote:\n\n> On Oct 20, 2021, at 11:27 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2021-10-20 at 10:32 -0700, Mark Dilger wrote:\n>> I'd like to have a much clearer understanding of Noah's complaint\n>> first. There are multiple things to consider: (1) the role which\n>> owns the trigger, (2) the role which is performing an action which\n>> would cause the trigger to fire, (3) the set of roles role #1 belongs\n>> to, (4) the set of roles role #1 has ADMIN privilege on, (5) the set\n>> of roles that role #2 belongs to, and (6) the set of roles that role\n>> #2 has ADMIN privilege on. Maybe more?\n>> \n>> And that's before we even get into having roles own other roles,\n>> which the event trigger patches *do not depend on*. In the patch set\n>> associated with this thread, the event trigger stuff is in patches\n>> 0014 and 0015. The changes to CREATEROLE and role ownership are not\n>> until patches 0019, 0020, and 0021. (I'm presently writing another\n>> set of emails to split this all into four threads/patch sets.) \n>> \n>> I'd like to know precisely which combinations of these six things are\n>> objectionable, and why. 
There may be a way around the objections\n>> without needing to create new user options or new privileged roles.\n> \n> I can't speak for Noah, but my interpretation is that it would be\n> surprising if GRANT/REVOKE or membership in an ordinary role had\n> effects other than \"permission denied\" errors. It might make sense for\n> event trigger firing in all the cases we can think of, but it would\n> certainly be strange if we started accumulating a collection of\n> behaviors that implicitly change when you move in or out of a role.\n> \n> That's pretty general, so to answer your question: it seems like a\n> problem to use #3-6 in the calculation about whether to fire an event\n> trigger.\n\nRight. The patch as currently written requires that the trigger owner (role #1) be a member of role #2, as determined by is_member_of_role(item->fnowner, GetUserId()). The idea is that role #1 cannot force an action to be performed as role #2 that role #1 couldn't do independently through a SET ROLE followed by the same action.\n\nI admit that the patch has an achilles heal, in that the patch does not run SetUserIdAndSecContext with SECURITY_LOCAL_USERID_CHANGE to avoid the trigger changing role to the SessionUserId, but that issue exists all over the system with table triggers and user defined functions (including on indexes), and those don't even have the protection of requiring the function owner to be a member of the role invoking the function. As such, nailing that down is probably the work for an entirely separate patch set. \n\nAs for whether it strikes users as strange that event triggers sometimes fire and sometimes do not, depending on which role is the CurrentUserId, I think it's more a question of whether the trigger owner finds that strange. Triggers are used for things like auditing, and it's not really on behalf of the person whose actions are being audited, but rather on behalf of the auditor. 
Setting up the owner of the trigger to be a powerful enough user to catch everyone you mean to catch is the responsibility of whoever sets up the auditing system.\n\n> However, if we have a concept of role *ownership*, that's something\n> new. It may be less surprising to use that to determine additional\n> behaviors, like whether event triggers fire.\n\nI hadn't really thought about it that way. The two things were not all that connected, except perhaps indirectly.\n\n> We can also consider adding some additional language to the CREATE\n> EVENT TRIGGER syntax to make it more explicit what the scope is. For\n> instance:\n> \n> CREATE EVENT TRIGGER name\n> ON event\n> [ FOR {ALL|OWNED} ROLES ]\n> [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]\n> EXECUTE { FUNCTION | PROCEDURE } function_name()\n> \n> For a superuser ALL and OWNED would be the same, but regular users\n> would need to specify \"FOR OWNED ROLES\" or they'd get an error.\n\nI'll postpone taking any position on this, as role ownership is now a separate patch set and there is no connection between when/if that one gets committed and when/if this one does.\n\n\n[1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 13:14:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser event trigger owners" } ]
[ { "msg_contents": "These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n\nThe purpose of these patches is to fix the CREATEROLE escalation attack vector misfeature. (Not everyone will see CREATEROLE that way, but the perceived value of the patch set likely depends on how much you see CREATEROLE in that light.)\n\n\n\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Oct 2021 11:40:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 10/20/21, 11:46 AM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n> The purpose of these patches is to fix the CREATEROLE escalation\r\n> attack vector misfeature. (Not everyone will see CREATEROLE that\r\n> way, but the perceived value of the patch set likely depends on how\r\n> much you see CREATEROLE in that light.)\r\n\r\nRegarding the \"attack vector misfeature\" comment, I remember being\r\nsurprised when I first learned how much roles with CREATEROLE can do.\r\nWhen I describe CREATEROLE to others, I am sure to emphasize the note\r\nin the docs about such roles being \"almost-superuser\" roles.\r\nCREATEROLE is a rather big hammer at the moment, so I certainly think\r\nthere is value in reducing its almost-superuser-ness.\r\n\r\nI mentioned this in the other thread [0] already, but the first thing\r\nthat comes to mind when I look at these patches is how upgrades might\r\nwork. Will we just make the bootstrap superuser the owner for all\r\nroles when you first upgrade to v15? Also, are we just going to strip\r\nthe current CREATEROLE roles of much of their powers? 
Maybe it's\r\nworth keeping a legacy CREATEROLE role attribute for upgraded clusters\r\nthat could eventually be removed down the road.\r\n\r\nI'd also like to bring up my note about allowing users to transfer\r\nrole ownership. When I tested the patches earlier, REASSIGN OWNED BY\r\nwas failing with an \"unexpected classid\" ERROR. Besides REASSIGN\r\nOWNED BY, perhaps there should be another mechanism for transferring\r\nownership on a role-by-role basis (i.e., ALTER ROLE OWNER TO). I\r\nhaven't looked at this new patch set too closely, so my apologies if\r\nthis has already been added.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/53C7DF4C-8463-4647-9DFD-779B5E1861C4%40amazon.com\r\n\r\n", "msg_date": "Thu, 21 Oct 2021 23:04:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Oct 21, 2021, at 4:04 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> Regarding the \"attack vector misfeature\" comment, I remember being\n> surprised when I first learned how much roles with CREATEROLE can do.\n> When I describe CREATEROLE to others, I am sure to emphasize the note\n> in the docs about such roles being \"almost-superuser\" roles.\n> CREATEROLE is a rather big hammer at the moment, so I certainly think\n> there is value in reducing its almost-superuser-ness.\n\nIt is hard to know how many people are using CREATEROLE currently. There isn't much reason to give it out, since if you care enough about security to not give out superuser, you probably care too much about security to give away CREATEROLE.\n\n> I mentioned this in the other thread [0] already, but the first thing\n> that comes to mind when I look at these patches is how upgrades might\n> work. Will we just make the bootstrap superuser the owner for all\n> roles when you first upgrade to v15?\n\nYes, that's the idea. 
After upgrade, all roles will form a tree, with the bootstrap superuser at the root of the tree. The initial tree structure isn't very interesting, with all other roles directly owned by it, but from there the superuser can rearrange the tree, and after that non-superuser roles can manage whatever subtree of roles they are the root of.\n\n> Also, are we just going to strip\n> the current CREATEROLE roles of much of their powers? Maybe it's\n> worth keeping a legacy CREATEROLE role attribute for upgraded clusters\n> that could eventually be removed down the road.\n\nThe patch as written drastically reduces the power of the CREATEROLE attribute, in a non-backwards compatible way. I wondered if there would be complaints about that. If so, we could instead leave CREATEROLE alone, and create some other privileged role for the same thing, but it does start to look funny having a CREATEROLE privilege bit and also a privileged role named, perhaps, pg_can_create_roles.\n\n> I'd also like to bring up my note about allowing users to transfer\n> role ownership. When I tested the patches earlier, REASSIGN OWNED BY\n> was failing with an \"unexpected classid\" ERROR. Besides REASSIGN\n> OWNED BY, perhaps there should be another mechanism for transferring\n> ownership on a role-by-role basis (i.e., ALTER ROLE OWNER TO). I\n> haven't looked at this new patch set too closely, so my apologies if\n> this has already been added.\n\nYes, I completely agree with you on that. Both REASSIGN OWNED BY and ALTER ROLE OWNER TO should work. 
I'll take a look at the patches and repost with any adjustments that I find necessary to make those work.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:21:09 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 2021-10-21 03:40, Mark Dilger wrote:\n> These patches have been split off the now deprecated monolithic\n> \"Delegating superuser tasks to new security roles\" thread at [1].\n> \n> The purpose of these patches is to fix the CREATEROLE escalation\n> attack vector misfeature. (Not everyone will see CREATEROLE that way,\n> but the perceived value of the patch set likely depends on how much\n> you see CREATEROLE in that light.)\n\nHi! Thank you for the patch.\nI too think that CREATEROLE escalation attack is problem.\n\nI have three comments.\n1. Is there a function to check the owner of a role, it would be nice to \nbe able to check with \\du or pg_roles view.\n2. Is it correct that REPLICATION/BYPASSRLS can be granted even if you \nare not a super user, but have CREATEROLE and REPLICATION/BYPASSRLS?\n3. I think it would be better to have an \"DROP ROLE [ IF EXISTS ] name \n[, ...] [CASCADE | RESTRICT]\" like \"DROP TABLE [ IF EXISTS ] name [, \n...] [ CASCADE | RESTRICT ]\". 
What do you think?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:09:09 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Oct 25, 2021, at 10:09 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> \n> On 2021-10-21 03:40, Mark Dilger wrote:\n>> These patches have been split off the now deprecated monolithic\n>> \"Delegating superuser tasks to new security roles\" thread at [1].\n>> The purpose of these patches is to fix the CREATEROLE escalation\n>> attack vector misfeature. (Not everyone will see CREATEROLE that way,\n>> but the perceived value of the patch set likely depends on how much\n>> you see CREATEROLE in that light.)\n> \n> Hi! Thank you for the patch.\n> I too think that CREATEROLE escalation attack is problem.\n> \n> I have three comments.\n> 1. Is there a function to check the owner of a role, it would be nice to be able to check with \\du or pg_roles view.\n\nNo, but that is a good idea.\n\n> 2. Is it correct that REPLICATION/BYPASSRLS can be granted even if you are not a super user, but have CREATEROLE and REPLICATION/BYPASSRLS?\n\nIt is intentional, yes. Whether it is correct is up for debate, but I think it is. \n\n> 3. I think it would be better to have an \"DROP ROLE [ IF EXISTS ] name [, ...] [CASCADE | RESTRICT]\" like \"DROP TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]\". What do you think?\n\nI agree it would be nice to have, but roles are cluster-global and there are technical difficulties in cascading into multiple databases to drop all objects owned by the role. 
There was also a debate [1] about whether we would even want such behavior, leading to no real conclusion regarding how or if such a command should be implemented.\n\nThe current solution is to run REASSIGN OWNED in each database where the role owns objects before running DROP ROLE. At that point, the CASCADE option (not implemented) won't be needed. Of course, I need to post the next revision of this patch set addressing the deficiencies that Nathan pointed out upthread to make that work. \n\n[1] https://www.postgresql.org/message-id/flat/20211005025746.GN20998%40tamriel.snowman.net\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 08:12:47 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 10/21/21 19:21, Mark Dilger wrote:\n>> Also, are we just going to strip\n>> the current CREATEROLE roles of much of their powers? Maybe it's\n>> worth keeping a legacy CREATEROLE role attribute for upgraded clusters\n>> that could eventually be removed down the road.\n> The patch as written drastically reduces the power of the CREATEROLE attribute, in a non-backwards compatible way. I wondered if there would be complaints about that. If so, we could instead leave CREATEROLE alone, and create some other privileged role for the same thing, but it does start to look funny having a CREATEROLE privilege bit and also a privileged role named, perhaps, pg_can_create_roles.\n\n\nGiven that CREATEROLE currently just about amounts to being a superuser,\nmaybe there should be a pg_upgrade option to convert CREATEROLE to\nSUPERUSER. 
I don't want to perpetuate the misfeature though, so let's\njust bring it to an end.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:50:49 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": ">> On Oct 25, 2021, at 10:09 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n\n>> Hi! Thank you for the patch.\n>> I too think that CREATEROLE escalation attack is problem.\n>> \n>> I have three comments.\n>> 1. Is there a function to check the owner of a role, it would be nice to be able to check with \\du or pg_roles view.\n> \n> No, but that is a good idea.\n\nThese two ideas are implemented in v2. Both \\du and pg_roles show the owner information.\n\n> The current solution is to run REASSIGN OWNED in each database where the role owns objects before running DROP ROLE. At that point, the CASCADE option (not implemented) won't be needed. Of course, I need to post the next revision of this patch set addressing the deficiencies that Nathan pointed out upthread to make that work. \n\nREASSIGN OWNED and ALTER ROLE..OWNER TO now work in v2.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 27 Oct 2021 15:21:43 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 2021-10-28 07:21, Mark Dilger wrote:\n>>> On Oct 25, 2021, at 10:09 PM, Shinya Kato \n>>> <Shinya11.Kato@oss.nttdata.com> wrote:\n> \n>>> Hi! Thank you for the patch.\n>>> I too think that CREATEROLE escalation attack is problem.\n>>> \n>>> I have three comments.\n>>> 1. 
Is there a function to check the owner of a role, it would be nice \n>>> to be able to check with \\du or pg_roles view.\n>> \n>> No, but that is a good idea.\n> \n> These two ideas are implemented in v2. Both \\du and pg_roles show the\n> owner information.\nThank you. It seems good to me.\n\nBy the way, I got the following execution result.\nI was able to add the membership of a role with a different owner.\nIn brief, \"a\" was able to change the membership of owner \"shinya\".\nIs this the correct behavior?\n---\npostgres=# CREATE ROLE a LOGIN;\nCREATE ROLE\npostgres=# GRANT pg_execute_server_program TO a WITH ADMIN OPTION;\nGRANT ROLE\npostgres=# CREATE ROLE b;\nCREATE ROLE\npostgres=# \\du a\n List of roles\n Role name | Owner | Attributes | Member of\n-----------+--------+------------+-----------------------------\n a | shinya | | {pg_execute_server_program}\n\npostgres=# \\du b\n List of roles\n Role name | Owner | Attributes | Member of\n-----------+--------+--------------+-----------\n b | shinya | Cannot login | {}\n\npostgres=# \\c - a\nYou are now connected to database \"postgres\" as user \"a\".\npostgres=> GRANT pg_execute_server_program TO b;\nGRANT ROLE\npostgres=> \\du b\n List of roles\n Role name | Owner | Attributes | Member of\n-----------+--------+--------------+-----------------------------\n b | shinya | Cannot login | {pg_execute_server_program}\n---\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 28 Oct 2021 11:32:03 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Oct 27, 2021, at 7:32 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> \n> I was able to add the membership of a role with a different owner.\n> In brief, \"a\" was able to change the membership of owner \"shinya\".\n> Is this 
the correct behavior?\n\nI believe it is required for backwards compatibility. In a green field, we might consider doing things differently.\n\nThe only intentional backward compatibility break in this patch set is the behavior of CREATEROLE. The general hope is that such a compatibility break will help far more than it hurts, as CREATEROLE does not appear to be a well adopted feature. I would expect that breaking the behavior of the WITH ADMIN OPTION feature would cause a lot more pain.\n\n\nTrying your example on both the unpatched and the patched sources, things appear to work as they should:\n\n\nUNPATCHED\n------------------\nmark.dilger=# CREATE ROLE a LOGIN;\nCREATE ROLE\nmark.dilger=# GRANT pg_execute_server_program TO a WITH ADMIN OPTION;\nGRANT ROLE\nmark.dilger=# CREATE ROLE b;\nCREATE ROLE\nmark.dilger=# \\du+ a\n List of roles\n Role name | Attributes | Member of | Description \n-----------+------------+-----------------------------+-------------\n a | | {pg_execute_server_program} | \n\nmark.dilger=# \\du+ b\n List of roles\n Role name | Attributes | Member of | Description \n-----------+--------------+-----------+-------------\n b | Cannot login | {} | \n\nmark.dilger=# \\c - a\nYou are now connected to database \"mark.dilger\" as user \"a\".\nmark.dilger=> GRANT pg_execute_server_program TO b;\nGRANT ROLE\nmark.dilger=> \\du+ b\n List of roles\n Role name | Attributes | Member of | Description \n-----------+--------------+-----------------------------+-------------\n b | Cannot login | {pg_execute_server_program} | \n\nmark.dilger=> \\du+ \"mark.dilger\"\n List of roles\n Role name | Attributes | Member of | Description \n-------------+------------------------------------------------------------+-----------+-------------\n mark.dilger | Superuser, Create role, Create DB, Replication, Bypass RLS | {} | \n\n\nPATCHED:\n---------------\nmark.dilger=# CREATE ROLE a LOGIN;\nCREATE ROLE\nmark.dilger=# GRANT pg_execute_server_program TO a WITH ADMIN 
OPTION;\nGRANT ROLE\nmark.dilger=# CREATE ROLE b;\nCREATE ROLE\nmark.dilger=# \\du+ a\n List of roles\n Role name | Owner | Attributes | Member of | Description \n-----------+-------------+------------+-----------------------------+-------------\n a | mark.dilger | | {pg_execute_server_program} | \n\nmark.dilger=# \\du+ b\n List of roles\n Role name | Owner | Attributes | Member of | Description \n-----------+-------------+--------------+-----------+-------------\n b | mark.dilger | Cannot login | {} | \n\nmark.dilger=# \\c - a\nYou are now connected to database \"mark.dilger\" as user \"a\".\nmark.dilger=> GRANT pg_execute_server_program TO b;\nGRANT ROLE\nmark.dilger=> \\du+ b\n List of roles\n Role name | Owner | Attributes | Member of | Description \n-----------+-------------+--------------+-----------------------------+-------------\n b | mark.dilger | Cannot login | {pg_execute_server_program} | \n\nmark.dilger=> \\du+ \"mark.dilger\"\n List of roles\n Role name | Owner | Attributes | Member of | Description \n-------------+-------------+------------------------------------------------------------+-----------+-------------\n mark.dilger | mark.dilger | Superuser, Create role, Create DB, Replication, Bypass RLS | {} | \n\n\n\nYou should notice that the owner of role \"b\" is the superuser \"mark.dilger\", and that owner's attributes are unchanged. 
But your point that role \"a\" can change the attributes of role \"mark.dilger\" is correct, as shown here:\n\nmark.dilger=> GRANT pg_execute_server_program TO \"mark.dilger\";\nGRANT ROLE\nmark.dilger=> \\du+ \"mark.dilger\"\n List of roles\n Role name | Owner | Attributes | Member of | Description \n-------------+-------------+------------------------------------------------------------+-----------------------------+-------------\n mark.dilger | mark.dilger | Superuser, Create role, Create DB, Replication, Bypass RLS | {pg_execute_server_program} | \n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Oct 2021 08:24:23 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The only intentional backward compatibility break in this patch set is the the behavior of CREATEROLE. The general hope is that such a compatibility break will help far more than it hurts, as CREATEROLE does not appear to be a well adopted feature. I would expect that breaking the behavior of the WITH ADMIN OPTION feature would cause a lot more pain.\n\nEven more to the point, WITH ADMIN OPTION is defined by the SQL standard.\nThe only way you get to mess with that is if you can convince people we\nmis-implemented the standard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Oct 2021 12:14:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 2021-10-29 01:14, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> The only intentional backward compatibility break in this patch set is \n>> the the behavior of CREATEROLE. 
The general hope is that such a \n>> compatibility break will help far more than it hurts, as CREATEROLE \n>> does not appear to be a well adopted feature. I would expect that \n>> breaking the behavior of the WITH ADMIN OPTION feature would cause a \n>> lot more pain.\n> \n> Even more to the point, WITH ADMIN OPTION is defined by the SQL \n> standard.\n> The only way you get to mess with that is if you can convince people we\n> mis-implemented the standard.\nThank you for the detailed explanation.\nI now understand what you said.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Oct 2021 11:05:10 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 2021-10-28 07:21, Mark Dilger wrote:\n>>> On Oct 25, 2021, at 10:09 PM, Shinya Kato \n>>> <Shinya11.Kato@oss.nttdata.com> wrote:\n> \n>>> Hi! Thank you for the patch.\n>>> I too think that CREATEROLE escalation attack is problem.\n>>> \n>>> I have three comments.\n>>> 1. Is there a function to check the owner of a role, it would be nice \n>>> to be able to check with \\du or pg_roles view.\n>> \n>> No, but that is a good idea.\n> \n> These two ideas are implemented in v2. Both \\du and pg_roles show the\n> owner information.\n> \n>> The current solution is to run REASSIGN OWNED in each database where \n>> the role owns objects before running DROP ROLE. At that point, the \n>> CASCADE option (not implemented) won't be needed. 
Of course, I need \n>> to post the next revision of this patch set addressing the \n>> deficiencies that Nathan pointed out upthread to make that work.\n> \n> REASSIGN OWNED and ALTER ROLE..OWNER TO now work in v2.\n\nWhen ALTER ROLE with the privilege of REPLICATION, only the superuser is \nchecked.\nTherefore, we have a strange situation where we can create a role but \nnot change it.\n---\npostgres=> SELECT current_user;\n current_user\n--------------\n test\n(1 row)\n\npostgres=> \\du test\n List of roles\n Role name | Owner | Attributes | Member of\n-----------+--------+--------------------------+-----------\n test | shinya | Create role, Replication | {}\n\npostgres=> CREATE ROLE test2 REPLICATION;\nCREATE ROLE\npostgres=> ALTER ROLE test2 NOREPLICATION;\n2021-11-04 14:24:02.687 JST [2615016] ERROR: must be superuser to alter \nreplication roles or change replication attribute\n2021-11-04 14:24:02.687 JST [2615016] STATEMENT: ALTER ROLE test2 \nNOREPLICATION;\nERROR: must be superuser to alter replication roles or change \nreplication attribute\n---\nWouldn't it be better to check if the role has CREATEROLE and \nREPLICATION?\nThe same is true for BYPASSRLS.\n\nBy the way, is this thread registered to CommitFest?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 04 Nov 2021 16:00:06 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On 2021-11-04 16:00, Shinya Kato wrote:\n> On 2021-10-28 07:21, Mark Dilger wrote:\n>>>> On Oct 25, 2021, at 10:09 PM, Shinya Kato \n>>>> <Shinya11.Kato@oss.nttdata.com> wrote:\n>> \n>>>> Hi! Thank you for the patch.\n>>>> I too think that CREATEROLE escalation attack is problem.\n>>>> \n>>>> I have three comments.\n>>>> 1. 
Is there a function to check the owner of a role, it would be \n>>>> nice to be able to check with \\du or pg_roles view.\n>>> \n>>> No, but that is a good idea.\n>> \n>> These two ideas are implemented in v2. Both \\du and pg_roles show the\n>> owner information.\n>> \n>>> The current solution is to run REASSIGN OWNED in each database where \n>>> the role owns objects before running DROP ROLE. At that point, the \n>>> CASCADE option (not implemented) won't be needed. Of course, I need \n>>> to post the next revision of this patch set addressing the \n>>> deficiencies that Nathan pointed out upthread to make that work.\n>> \n>> REASSIGN OWNED and ALTER ROLE..OWNER TO now work in v2.\n> \n> When ALTER ROLE with the privilege of REPLICATION, only the superuser\n> is checked.\n> Therefore, we have a strange situation where we can create a role but\n> not change it.\n> ---\n> postgres=> SELECT current_user;\n> current_user\n> --------------\n> test\n> (1 row)\n> \n> postgres=> \\du test\n> List of roles\n> Role name | Owner | Attributes | Member of\n> -----------+--------+--------------------------+-----------\n> test | shinya | Create role, Replication | {}\n> \n> postgres=> CREATE ROLE test2 REPLICATION;\n> CREATE ROLE\n> postgres=> ALTER ROLE test2 NOREPLICATION;\n> 2021-11-04 14:24:02.687 JST [2615016] ERROR: must be superuser to\n> alter replication roles or change replication attribute\n> 2021-11-04 14:24:02.687 JST [2615016] STATEMENT: ALTER ROLE test2\n> NOREPLICATION;\n> ERROR: must be superuser to alter replication roles or change\n> replication attribute\n> ---\n> Wouldn't it be better to check if the role has CREATEROLE and \n> REPLICATION?\n> The same is true for BYPASSRLS.\n> \n> By the way, is this thread registered to CommitFest?\n\nI fixed the patches because they cannot be applied to HEAD.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 22 Dec 
2021 10:11:15 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Dec 21, 2021, at 5:11 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> \n> I fixed the patches because they cannot be applied to HEAD.\n\nThank you.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 21 Dec 2021 17:25:54 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Tue, Dec 21, 2021 at 8:26 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Dec 21, 2021, at 5:11 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> >\n> > I fixed the patches because they cannot be applied to HEAD.\n>\n> Thank you.\n\nI reviewed and tested these and they LGTM. FYI the rebased v3 patches\nupthread are raw diffs so git am won't apply them. I can add myself to\nthe CF as a reviewer if it is helpful.\n\n\n", "msg_date": "Thu, 23 Dec 2021 16:06:45 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 12/23/21 16:06, Joshua Brindle wrote:\n> On Tue, Dec 21, 2021 at 8:26 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>\n>>\n>>> On Dec 21, 2021, at 5:11 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n>>>\n>>> I fixed the patches because they cannot be applied to HEAD.\n>> Thank you.\n> I reviewed and tested these and they LGTM. FYI the rebased v3 patches\n> upthread are raw diffs so git am won't apply them. \n\n\nThat's not at all unusual. 
I normally apply patches just using\n\n   patch -p 1 < $patchfile\n\n> I can add myself to\n> the CF as a reviewer if it is helpful.\n\n\nPlease do.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 3 Jan 2022 17:08:05 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Jan 3, 2022 at 5:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 12/23/21 16:06, Joshua Brindle wrote:\n> > On Tue, Dec 21, 2021 at 8:26 PM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> >>\n> >>\n> >>> On Dec 21, 2021, at 5:11 PM, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> >>>\n> >>> I fixed the patches because they cannot be applied to HEAD.\n> >> Thank you.\n> > I reviewed and tested these and they LGTM. FYI the rebased v3 patches\n> > upthread are raw diffs so git am won't apply them.\n>\n>\n> That's not at all unusual. 
I normally apply patches just using\n>\n> patch -p 1 < $patchfile\n>\n> > I can add myself to\n> > the CF as a reviewer if it is helpful.\n>\n>\n> Please do.\n\nI just ran across this and I don't know if it is intended behavior or\nnot, can you tell me why this happens?\n\npostgres=> \\du+\n List of roles\n Role name | Owner | Attributes\n | Member of | Description\n-----------+----------+------------------------------------------------------------+-----------+-------------\n brindle | brindle | Password valid until 2022-01-05 00:00:00-05\n | {} |\n joshua | postgres | Create role\n | {} |\n postgres | postgres | Superuser, Create role, Create DB,\nReplication, Bypass RLS | {} |\n\npostgres=> \\password\nEnter new password for user \"brindle\":\nEnter it again:\nERROR: role \"brindle\" with OID 16384 owns itself\n\n\n", "msg_date": "Tue, 4 Jan 2022 09:35:31 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 4, 2022, at 6:35 AM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> \n> I just ran across this and I don't know if it is intended behavior or\n> not\n\n<snip>\n\n> postgres=> \\password\n> Enter new password for user \"brindle\":\n> Enter it again:\n> ERROR: role \"brindle\" with OID 16384 owns itself\n\nNo, that looks like a bug. Thanks for reviewing!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 4 Jan 2022 09:07:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 4, 2022, at 9:07 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> No, that looks like a bug.\n\nI was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. 
Is that how you did it, or is there yet another way to get into that state?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 4 Jan 2022 12:39:32 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Tue, Jan 4, 2022 at 3:39 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jan 4, 2022, at 9:07 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > No, that looks like a bug.\n>\n> I was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. Is that how you did it, or is there yet another way to get into that state?\n\nI did:\nALTER ROLE brindle OWNER TO brindle;\n\n\n", "msg_date": "Tue, 4 Jan 2022 15:47:31 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "> On Jan 4, 2022, at 12:47 PM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> \n>> I was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. Is that how you did it, or is there yet another way to get into that state?\n> \n> I did:\n> ALTER ROLE brindle OWNER TO brindle;\n\nOk, thanks. I have rebased, fixed both REASSIGN OWNED BY and ALTER ROLE .. OWNER TO cases, and added regression coverage for them.\n\nThe last patch set to contain significant changes was v2, with v3 just being a rebase. 
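The self-ownership fix just described (a role must not end up as its own owner, whether via ALTER ROLE .. OWNER TO or REASSIGN OWNED BY) generalizes to rejecting any cycle in the ownership graph. As a rough, hypothetical illustration — a simplified standalone sketch, not the patch code, which performs this check against pg_authid — the test amounts to walking up the ownership chain from the proposed new owner:

```c
#include <assert.h>
#include <stdbool.h>

#define NO_OWNER (-1)

/*
 * Simplified sketch, not the actual patch code.  owner[r] holds the
 * owner of role r; the bootstrap role owns itself.  Assigning
 * new_owner as the owner of role would create a cycle exactly when
 * role is reachable by walking up the ownership chain from new_owner;
 * role == new_owner is the self-ownership case reported above.
 */
bool
would_create_ownership_cycle(const int owner[], int role, int new_owner)
{
    int cur = new_owner;

    while (cur != NO_OWNER)
    {
        if (cur == role)
            return true;    /* role already (indirectly) owns new_owner */
        if (owner[cur] == cur)
            return false;   /* reached the self-owned bootstrap role */
        cur = owner[cur];
    }
    return false;
}
```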
Relative to those sets:\n\n0001 -- rebased.\n0002 -- rebased; extend AlterRoleOwner_internal to disallow making a role its own immediate owner.\n0003 -- rebased; extend AlterRoleOwner_internal to disallow cycles in the role ownership graph.\n0004 -- rebased.\n0005 -- new; removes the broken pg_auth_members.grantor field.\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 5 Jan 2022 16:05:41 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Wed, Jan 5, 2022 at 7:05 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Jan 4, 2022, at 12:47 PM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> >\n> >> I was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. Is that how you did it, or is there yet another way to get into that state?\n> >\n> > I did:\n> > ALTER ROLE brindle OWNER TO brindle;\n>\n> Ok, thanks. I have rebased, fixed both REASSIGN OWNED BY and ALTER ROLE .. OWNER TO cases, and added regression coverage for them.\n>\n> The last patch set to contain significant changes was v2, with v3 just being a rebase. 
Relative to those sets:\n>\n> 0001 -- rebased.\n> 0002 -- rebased; extend AlterRoleOwner_internal to disallow making a role its own immediate owner.\n> 0003 -- rebased; extend AlterRoleOwner_internal to disallow cycles in the role ownership graph.\n> 0004 -- rebased.\n> 0005 -- new; removes the broken pg_auth_members.grantor field.\n>\n\nLGTM +1\n\n\n", "msg_date": "Fri, 7 Jan 2022 09:51:41 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 1/5/22 19:05, Mark Dilger wrote:\n>\n>> On Jan 4, 2022, at 12:47 PM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n>>\n>>> I was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. Is that how you did it, or is there yet another way to get into that state?\n>> I did:\n>> ALTER ROLE brindle OWNER TO brindle;\n> Ok, thanks. I have rebased, fixed both REASSIGN OWNED BY and ALTER ROLE .. OWNER TO cases, and added regression coverage for them.\n>\n> The last patch set to contain significant changes was v2, with v3 just being a rebase. Relative to those sets:\n>\n> 0001 -- rebased.\n> 0002 -- rebased; extend AlterRoleOwner_internal to disallow making a role its own immediate owner.\n> 0003 -- rebased; extend AlterRoleOwner_internal to disallow cycles in the role ownership graph.\n> 0004 -- rebased.\n> 0005 -- new; removes the broken pg_auth_members.grantor field.\n\n\nIn general this looks good. Some nitpicks:\n\n\n+/*\n+ * Ownership check for a role (specified by OID)\n+ */\n+bool\n+pg_role_ownercheck(Oid role_oid, Oid roleid)\n\n\nThis is a bit confusing. 
Let's rename these params so it's clear which\nis the owner and which the owned role.\n\n\n+ * Note: In versions prior to PostgreSQL version 15, roles did not have\nowners\n+ * per se; instead we used this test in places where an ownership-like\n+ * permissions test was needed for a role.\n\n\nNo need to talk about what we used to do. People who want to know can\nlook back at older branches.\n\n\n+bool\n+has_rolinherit_privilege(Oid roleid)\n+{\n\n\nThis and similar functions should have header comments.\n\n\n+   /* Owners of roles have every privilge the owned role has */\n\ns/privlge/privilege/\n\n\n+CREATE ROLE regress_role_1 CREATEDB CREATEROLE REPLICATION BYPASSRLS;\n\n\nI don't really like this business of just numbering large numbers of\nroles in the tests. Let's give them more meaningful names.\n\n\n+   Role owners can change any of these settings on roles they own except\n\n\nI would say \"on roles they directly or indirectly own\", here and\nsimilarly in one or two other places.\n\n\n...\n\n\nI will probably do one or two more passes over the patches, but as I say\nin general they look fairly good.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 Jan 2022 17:34:56 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "> On Jan 10, 2022, at 2:34 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> In general this looks good. Some nitpicks:\n\nThanks. Some responses...\n\n> +/*\n> + * Ownership check for a role (specified by OID)\n> + */\n> +bool\n> +pg_role_ownercheck(Oid role_oid, Oid roleid)\n> \n> \n> This is a bit confusing. Let's rename these params so it's clear which\n> is the owner and which the owned role.\n\nYeah, I wondered about that when I was writing it. 
All the neighboring functions follow the pattern:\n\n (Oid <something>_oid, Oid roleid)\n\nso I followed that, but it isn't great. I've changed that in v5-0002 to use\n\n (Oid owned_role_oid, Oid owner_roleid)\n\nI wouldn't choose this naming in a green field, but I'm trying to stay close to the naming scheme of the surrounding functions.\n\n> + * Note: In versions prior to PostgreSQL version 15, roles did not have\n> owners\n> + * per se; instead we used this test in places where an ownership-like\n> + * permissions test was needed for a role.\n> \n> \n> No need to talk about what we used to do. People who want to know can\n> look back at older branches.\n\nRemoved in v5-0003.\n\n> +bool\n> +has_rolinherit_privilege(Oid roleid)\n> +{\n> \n> \n> This and similar functions should have header comments.\n\nHeader comments added for this and similar functions in v5-0004. This function was misnamed in prior patch sets; the privilege is INHERIT, not ROLINHERIT, so I also fixed the name in v5-0004.\n\n> + /* Owners of roles have every privilge the owned role has */\n> \n> s/privlge/privilege/\n\nFixed in v5-0003.\n\n> +CREATE ROLE regress_role_1 CREATEDB CREATEROLE REPLICATION BYPASSRLS;\n> \n> \n> I don't really like this business of just numbering large numbers of\n> roles in the tests. Let's give them more meaningful names.\n\nChanged in v5-0001.\n\n> + Role owners can change any of these settings on roles they own except\n> \n> \n> I would say \"on roles they directly or indirectly own\", here and\n> similarly in one or two other places.\n\nChanged a few sentences of doc/src/sgml/ref/alter_role.sgml in v5-0004 as you suggest. Please advise if you have other locations in mind. 
A quick grep -i 'role owner' doesn't show any other relevant locations.\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 11 Jan 2022 13:24:53 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Rebased:\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 18 Jan 2022 12:51:46 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 4, 2022, at 12:47 PM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > \n> >> I was able to reproduce that using REASSIGN OWNED BY to cause a user to own itself. Is that how you did it, or is there yet another way to get into that state?\n> > \n> > I did:\n> > ALTER ROLE brindle OWNER TO brindle;\n> \n> Ok, thanks. I have rebased, fixed both REASSIGN OWNED BY and ALTER ROLE .. OWNER TO cases, and added regression coverage for them.\n> \n> The last patch set to contain significant changes was v2, with v3 just being a rebase. Relative to those sets:\n> \n> 0001 -- rebased.\n> 0002 -- rebased; extend AlterRoleOwner_internal to disallow making a role its own immediate owner.\n> 0003 -- rebased; extend AlterRoleOwner_internal to disallow cycles in the role ownership graph.\n> 0004 -- rebased.\n> 0005 -- new; removes the broken pg_auth_members.grantor field.\n\n> Subject: [PATCH v4 1/5] Add tests of the CREATEROLE attribute.\n\nNo particular issue with this one.\n\n> Subject: [PATCH v4 2/5] Add owners to roles\n> \n> All roles now have owners. By default, roles belong to the role\n> that created them, and initdb-time roles are owned by POSTGRES.\n\n... 
database superuser, not 'POSTGRES'.\n\n> +++ b/src/backend/catalog/aclchk.c\n> @@ -5430,6 +5434,57 @@ pg_statistics_object_ownercheck(Oid stat_oid, Oid roleid)\n> \treturn has_privs_of_role(roleid, ownerId);\n> }\n> \n> +/*\n> + * Ownership check for a role (specified by OID)\n> + */\n> +bool\n> +pg_role_ownercheck(Oid role_oid, Oid roleid)\n> +{\n> +\tHeapTuple\t\ttuple;\n> +\tForm_pg_authid\tauthform;\n> +\tOid\t\t\t\towner_oid;\n> +\n> +\t/* Superusers bypass all permission checking. */\n> +\tif (superuser_arg(roleid))\n> +\t\treturn true;\n> +\n> +\t/* Otherwise, look up the owner of the role */\n> +\ttuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(role_oid));\n> +\tif (!HeapTupleIsValid(tuple))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> +\t\t\t\t errmsg(\"role with OID %u does not exist\",\n> +\t\t\t\t\t\trole_oid)));\n> +\tauthform = (Form_pg_authid) GETSTRUCT(tuple);\n> +\towner_oid = authform->rolowner;\n> +\n> +\t/*\n> +\t * Roles must necessarily have owners. Even the bootstrap user has an\n> +\t * owner. (It owns itself). 
Other roles must form a proper tree.\n> +\t */\n> +\tif (!OidIsValid(owner_oid))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u has invalid owner\",\n> +\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n> +\tif (authform->oid != BOOTSTRAP_SUPERUSERID &&\n> +\t\tauthform->rolowner == authform->oid)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owns itself\",\n> +\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n> +\tif (authform->oid == BOOTSTRAP_SUPERUSERID &&\n> +\t\tauthform->rolowner != BOOTSTRAP_SUPERUSERID)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owned by role with OID %u\",\n> +\t\t\t\t\t\tauthform->rolname.data, authform->oid,\n> +\t\t\t\t\t\tauthform->rolowner)));\n> +\tReleaseSysCache(tuple);\n> +\n> +\treturn (owner_oid == roleid);\n> +}\n\nDo we really need all of these checks on every call of this function..?\nAlso, there isn't much point in including the role OID twice in the last\nerror message, is there? 
Unless things have gotten quite odd, it's\ngoing to be the same value both times as we just proved to ourselves\nthat it is, in fact, the same value (and that it's not the\nBOOTSTRAP_SUPERUSERID).\n\nThis function also doesn't actually do any kind of checking to see if\nthe role ownership forms a proper tree, so it seems a bit odd to have\nthe comment talking about that here where it's doing other checks.\n\n> +++ b/src/backend/commands/user.c\n> @@ -77,6 +79,9 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \tDatum\t\tnew_record[Natts_pg_authid];\n> \tbool\t\tnew_record_nulls[Natts_pg_authid];\n> \tOid\t\t\troleid;\n> +\tOid\t\t\towner_uid;\n> +\tOid\t\t\tsaved_uid;\n> +\tint\t\t\tsave_sec_context;\n\nSeems a bit odd to introduce 'uid' into this file, which hasn't got any\nsuch anywhere in it, and I'm not entirely sure that any of these are\nactually needed..?\n\n> @@ -108,6 +113,16 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \tDefElem *dvalidUntil = NULL;\n> \tDefElem *dbypassRLS = NULL;\n> \n> +\tGetUserIdAndSecContext(&saved_uid, &save_sec_context);\n> +\n> +\t/*\n> +\t * Who is supposed to own the new role?\n> +\t */\n> +\tif (stmt->authrole)\n> +\t\towner_uid = get_rolespec_oid(stmt->authrole, false);\n> +\telse\n> +\t\towner_uid = saved_uid;\n> +\n> \t/* The defaults can vary depending on the original statement type */\n> \tswitch (stmt->stmt_type)\n> \t{\n> @@ -254,6 +269,10 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"must be superuser to create superusers\")));\n> +\t\tif (!superuser_arg(owner_uid))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t\t errmsg(\"must be superuser to own superusers\")));\n> \t}\n> \telse if (isreplication)\n> \t{\n\nSo, we're telling a superuser (which is the only way you could get to\nthis point...) 
that they aren't allowed to create a superuser role which\nis owned by a non-superuser... Why?\n\n> @@ -310,6 +329,19 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \t\t\t\t errmsg(\"role \\\"%s\\\" already exists\",\n> \t\t\t\t\t\tstmt->role)));\n> \n> +\t/*\n> +\t * If the requested authorization is different from the current user,\n> +\t * temporarily set the current user so that the object(s) will be created\n> +\t * with the correct ownership.\n> +\t *\n> +\t * (The setting will be restored at the end of this routine, or in case of\n> +\t * error, transaction abort will clean things up.)\n> +\t */\n> +\tif (saved_uid != owner_uid)\n> +\t\tSetUserIdAndSecContext(owner_uid,\n> +\t\t\t\t\t\t\t save_sec_context | SECURITY_LOCAL_USERID_CHANGE);\n\nErr, why is this needed? This looks copied from the CreateSchemaCommand\nbut, unlike with the create schema command, CreateRole doesn't actually\nallow sub-commands to be run to create other objects in the way that\nCreateSchemaCommand does.\n\n> @@ -478,6 +513,9 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \t */\n> \ttable_close(pg_authid_rel, NoLock);\n> \n> +\t/* Reset current user and security context */\n> +\tSetUserIdAndSecContext(saved_uid, save_sec_context);\n> +\n> \treturn roleid;\n> }\n\n... ditto with this.\n\n> @@ -1675,3 +1714,110 @@ DelRoleMems(const char *rolename, Oid roleid,\n> +static void\n> +AlterRoleOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)\n> +{\n> +\tForm_pg_authid authForm;\n> +\n> +\tAssert(tup->t_tableOid == AuthIdRelationId);\n> +\tAssert(RelationGetRelid(rel) == AuthIdRelationId);\n> +\n> +\tauthForm = (Form_pg_authid) GETSTRUCT(tup);\n> +\n> +\t/*\n> +\t * If the new owner is the same as the existing owner, consider the\n> +\t * command to have succeeded. 
This is for dump restoration purposes.\n> +\t */\n> +\tif (authForm->rolowner != newOwnerId)\n> +\t{\n> +\t\t/* Otherwise, must be owner of the existing object */\n> +\t\tif (!pg_role_ownercheck(authForm->oid, GetUserId()))\n> +\t\t\taclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_ROLE,\n> +\t\t\t\t\t\t NameStr(authForm->rolname));\n> +\n> +\t\t/* Must be able to become new owner */\n> +\t\tcheck_is_member_of_role(GetUserId(), newOwnerId);\n\nFeels like we should be saying a bit more about why we check for role\nmembership vs. has_privs_of_role() here. I'm generally of the opinion\nthat membership is the right thing to check here, just feel like we\nshould try to explain more why that's the right thing.\n\n> +\t\t/*\n> +\t\t * must have CREATEROLE rights\n> +\t\t *\n> +\t\t * NOTE: This is different from most other alter-owner checks in that\n> +\t\t * the current user is checked for create privileges instead of the\n> +\t\t * destination owner. This is consistent with the CREATE case for\n> +\t\t * roles. 
Because superusers will always have this right, we need no\n> +\t\t * special case for them.\n> +\t\t */\n> +\t\tif (!have_createrole_privilege())\n> +\t\t\taclcheck_error(ACLCHECK_NO_PRIV, OBJECT_ROLE,\n> +\t\t\t\t\t\t NameStr(authForm->rolname));\n> +\n\nI would think we'd be trying to get away from the role attribute stuff.\n\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n\n> +\t\t\tCREATE ROLE RoleId AUTHORIZATION RoleSpec opt_with OptRoleList\n> +\t\t\t\t{\n> +\t\t\t\t\tCreateRoleStmt *n = makeNode(CreateRoleStmt);\n> +\t\t\t\t\tn->stmt_type = ROLESTMT_ROLE;\n> +\t\t\t\t\tn->role = $3;\n> +\t\t\t\t\tn->authrole = $5;\n> +\t\t\t\t\tn->options = $7;\n> +\t\t\t\t\t$$ = (Node *)n;\n> +\t\t\t\t}\n> \t\t;\n\n...\n\n> @@ -1218,6 +1229,10 @@ CreateOptRoleElem:\n> \t\t\t\t{\n> \t\t\t\t\t$$ = makeDefElem(\"addroleto\", (Node *)$3, @1);\n> \t\t\t\t}\n> +\t\t\t| OWNER RoleSpec\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeDefElem(\"owner\", (Node *)$2, @1);\n> +\t\t\t\t}\n> \t\t;\n\nNot sure why we'd have both AUTHORIZATION and OWNER for CREATE ROLE..?\nWe don't do that for other objects.\n\n> diff --git a/src/test/regress/sql/create_role.sql b/src/test/regress/sql/create_role.sql\n\n> @@ -1,6 +1,7 @@\n> -- ok, superuser can create users with any set of privileges\n> CREATE ROLE regress_role_super SUPERUSER;\n> CREATE ROLE regress_role_1 CREATEDB CREATEROLE REPLICATION BYPASSRLS;\n> +GRANT CREATE ON DATABASE regression TO regress_role_1;\n\nSeems odd to add this as part of this patch, or am I missing something?\n\n> From 1784a5b51d4dbebf99798b5832d92b0f585feb08 Mon Sep 17 00:00:00 2001\n> From: Mark Dilger <mark.dilger@enterprisedb.com>\n> Date: Tue, 4 Jan 2022 11:42:27 -0800\n> Subject: [PATCH v4 3/5] Give role owners control over owned roles\n> \n> Create a role ownership hierarchy. The previous commit added owners\n> to roles. This goes further, making role ownership transitive. 
If\n> role A owns role B, and role B owns role C, then role A can act as\n> the owner of role C. Also, roles A and B can perform any action on\n> objects belonging to role C that role C could itself perform.\n> \n> This is a preparatory patch for changing how CREATEROLE works.\n\nThis feels odd to have be an independent commit.\n\n> diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\n> index ddd205d656..ef36fad700 100644\n> --- a/src/backend/catalog/aclchk.c\n> +++ b/src/backend/catalog/aclchk.c\n> @@ -5440,61 +5440,20 @@ pg_statistics_object_ownercheck(Oid stat_oid, Oid roleid)\n> bool\n> pg_role_ownercheck(Oid role_oid, Oid roleid)\n> {\n> -\tHeapTuple\t\ttuple;\n> -\tForm_pg_authid\tauthform;\n> -\tOid\t\t\t\towner_oid;\n> -\n> \t/* Superusers bypass all permission checking. */\n> \tif (superuser_arg(roleid))\n> \t\treturn true;\n> \n> -\t/* Otherwise, look up the owner of the role */\n> -\ttuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(role_oid));\n> -\tif (!HeapTupleIsValid(tuple))\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> -\t\t\t\t errmsg(\"role with OID %u does not exist\",\n> -\t\t\t\t\t\trole_oid)));\n> -\tauthform = (Form_pg_authid) GETSTRUCT(tuple);\n> -\towner_oid = authform->rolowner;\n> -\n> -\t/*\n> -\t * Roles must necessarily have owners. Even the bootstrap user has an\n> -\t * owner. (It owns itself). 
Other roles must form a proper tree.\n> -\t */\n> -\tif (!OidIsValid(owner_oid))\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> -\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u has invalid owner\",\n> -\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n> -\tif (authform->oid != BOOTSTRAP_SUPERUSERID &&\n> -\t\tauthform->rolowner == authform->oid)\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> -\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owns itself\",\n> -\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n> -\tif (authform->oid == BOOTSTRAP_SUPERUSERID &&\n> -\t\tauthform->rolowner != BOOTSTRAP_SUPERUSERID)\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> -\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owned by role with OID %u\",\n> -\t\t\t\t\t\tauthform->rolname.data, authform->oid,\n> -\t\t\t\t\t\tauthform->rolowner)));\n> -\tReleaseSysCache(tuple);\n> -\n> -\treturn (owner_oid == roleid);\n> +\t/* Otherwise, check the role ownership hierarchy */\n> +\treturn is_owner_of_role_nosuper(roleid, role_oid);\n> }\n\nThe function being basically entirely rewritten in this patch would be\none reason why it seems an odd split.\n\n> /*\n> * Check whether specified role has CREATEROLE privilege (or is a superuser)\n> *\n> - * Note: roles do not have owners per se; instead we use this test in\n> - * places where an ownership-like permissions test is needed for a role.\n> - * Be sure to apply it to the role trying to do the operation, not the\n> - * role being operated on!\tAlso note that this generally should not be\n> - * considered enough privilege if the target role is a superuser.\n> - * (We don't handle that consideration here because we want to give a\n> - * separate error message for such cases, so the caller has to deal with it.)\n> + * Note: In versions prior to PostgreSQL version 15, roles did not have owners\n> + * per se; instead we used this test in places where an ownership-like\n> 
+ * permissions test was needed for a role.\n> */\n> bool\n> has_createrole_privilege(Oid roleid)\n\nSurely this should be in the prior commit, if the split is kept..\n\n> diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c\n\n> @@ -363,7 +363,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)\n> \t\t/*\n> \t\t * must have create-schema rights\n> \t\t *\n> -\t\t * NOTE: This is different from other alter-owner checks in that the\n> +\t\t * NOTE: This is different from most other alter-owner checks in that the\n> \t\t * current user is checked for create privileges instead of the\n> \t\t * destination owner. This is consistent with the CREATE case for\n> \t\t * schemas. Because superusers will always have this right, we need\n\nNot a fan of just dropping 'most' in here, doesn't really help someone\nunderstand what is being talked about. I'd suggest adjusting the\ncomment to talk about alter-owner checks for objects which exist in\nschemas, as that's really what is being referred to.\n\n> diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c\n> index 14820744bf..11d5dffc90 100644\n> --- a/src/backend/commands/user.c\n> +++ b/src/backend/commands/user.c\n> @@ -724,7 +724,7 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n> \t\t\t !rolemembers &&\n> \t\t\t !validUntil &&\n> \t\t\t dpassword &&\n> -\t\t\t roleid == GetUserId()))\n> +\t\t\t !pg_role_ownercheck(roleid, GetUserId())))\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"permission denied\")));\n> @@ -925,7 +925,8 @@ AlterRoleSet(AlterRoleSetStmt *stmt)\n> \t\t}\n> \t\telse\n> \t\t{\n> -\t\t\tif (!have_createrole_privilege() && roleid != GetUserId())\n> +\t\t\tif (!have_createrole_privilege() &&\n> +\t\t\t\t!pg_role_ownercheck(roleid, GetUserId()))\n> \t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t\t errmsg(\"permission denied\")));\n> 
@@ -977,11 +978,6 @@ DropRole(DropRoleStmt *stmt)\n> \t\t\t\tpg_auth_members_rel;\n> \tListCell *item;\n> \n> -\tif (!have_createrole_privilege())\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t errmsg(\"permission denied to drop role\")));\n> -\n> \t/*\n> \t * Scan the pg_authid relation to find the Oid of the role(s) to be\n> \t * deleted.\n> @@ -1053,6 +1049,12 @@ DropRole(DropRoleStmt *stmt)\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"must be superuser to drop superusers\")));\n> \n> +\t\tif (!have_createrole_privilege() &&\n> +\t\t\t!pg_role_ownercheck(roleid, GetUserId()))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t\t errmsg(\"permission denied to drop role\")));\n> +\n> \t\t/* DROP hook for the role being removed */\n> \t\tInvokeObjectDropHook(AuthIdRelationId, roleid, 0);\n> \n> @@ -1811,6 +1813,18 @@ AlterRoleOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)\n> \t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> \t\t\t\t\t errmsg(\"role may not own itself\")));\n> \n> +\t\t/*\n> +\t\t * Must not create cycles in the role ownership hierarchy. If this\n> +\t\t * role owns (directly or indirectly) the proposed new owner, disallow\n> +\t\t * the ownership transfer.\n> +\t\t */\n> +\t\tif (is_owner_of_role_nosuper(authForm->oid, newOwnerId))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t errmsg(\"role \\\"%s\\\" may not both own and be owned by role \\\"%s\\\"\",\n> +\t\t\t\t\t\t\tNameStr(authForm->rolname),\n> +\t\t\t\t\t\t\tGetUserNameFromId(newOwnerId, false))));\n> +\n> \t\tauthForm->rolowner = newOwnerId;\n> \t\tCatalogTupleUpdate(rel, &tup->t_self, tup);\n\n> diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c\n\n> +/*\n> + * Get a list of roles which own the given role, directly or indirectly.\n> + *\n> + * Each role has only one direct owner. 
The returned list contains the given\n> + * role's owner, that role's owner, etc., up to the top of the ownership\n> + * hierarchy, which is always the bootstrap superuser.\n> + *\n> + * Raises an error if any role ownership invariant is violated. Returns NIL if\n> + * the given roleid is invalid.\n> + */\n> +static List *\n> +roles_is_owned_by(Oid roleid)\n> +{\n> +\tList\t *owners_list = NIL;\n> +\tOid\t\t\trole_oid = roleid;\n> +\n> +\t/*\n> +\t * Start with the current role and follow the ownership chain upwards until\n> +\t * we reach the bootstrap superuser. To defend against getting into an\n> +\t * infinite loop, we must check for ownership cycles. We choose to perform\n> +\t * other corruption checks on the ownership structure while iterating, too.\n> +\t */\n> +\twhile (OidIsValid(role_oid))\n> +\t{\n> +\t\tHeapTuple\t\ttuple;\n> +\t\tForm_pg_authid\tauthform;\n> +\t\tOid\t\t\t\towner_oid;\n> +\n> +\t\t/* Find the owner of the current iteration's role */\n> +\t\ttuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(role_oid));\n> +\t\tif (!HeapTupleIsValid(tuple))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> +\t\t\t\t\t errmsg(\"role with OID %u does not exist\", role_oid)));\n> +\n> +\t\tauthform = (Form_pg_authid) GETSTRUCT(tuple);\n> +\t\towner_oid = authform->rolowner;\n> +\n> +\t\t/*\n> +\t\t * Roles must necessarily have owners. Even the bootstrap user has an\n> +\t\t * owner. 
(It owns itself).\n> +\t\t */\n> +\t\tif (!OidIsValid(owner_oid))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u has invalid owner\",\n> +\t\t\t\t\t\t\tNameStr(authform->rolname), authform->oid)));\n> +\n> +\t\t/* The bootstrap user must own itself */\n> +\t\tif (authform->oid == BOOTSTRAP_SUPERUSERID &&\n> +\t\t\towner_oid != BOOTSTRAP_SUPERUSERID)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owned by role with OID %u\",\n> +\t\t\t\t\t\t\tNameStr(authform->rolname), authform->oid,\n> +\t\t\t\t\t\t\tauthform->rolowner)));\n> +\n> +\t\t/*\n> +\t\t * Roles other than the bootstrap user must not be their own direct\n> +\t\t * owners.\n> +\t\t */\n> +\t\tif (authform->oid != BOOTSTRAP_SUPERUSERID &&\n> +\t\t\tauthform->oid == owner_oid)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owns itself\",\n> +\t\t\t\t\t\t\tNameStr(authform->rolname), authform->oid)));\n> +\n> +\t\tReleaseSysCache(tuple);\n> +\n> +\t\t/* If we have reached the bootstrap user, we're done. 
*/\n> +\t\tif (role_oid == BOOTSTRAP_SUPERUSERID)\n> +\t\t{\n> +\t\t\tif (!owners_list)\n> +\t\t\t\towners_list = lappend_oid(owners_list, owner_oid);\n> +\t\t\tbreak;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * For all other users, check they do not own themselves indirectly\n> +\t\t * through an ownership cycle.\n> +\t\t *\n> +\t\t * Scanning the list each time through this loop results in overall\n> +\t\t * quadratic work in the depth of the ownership chain, but we're\n> +\t\t * not on a critical performance path, nor do we expect ownership\n> +\t\t * hierarchies to be deep.\n> +\t\t */\n> +\t\tif (owners_list && list_member_oid(owners_list,\n> +\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(owner_oid)))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u indirectly owns itself\",\n> +\t\t\t\t\t\t\tGetUserNameFromId(owner_oid, false),\n> +\t\t\t\t\t\t\towner_oid)));\n> +\n> +\t\t/* Done with sanity checks. Add this owner to the list. */\n> +\t\towners_list = lappend_oid(owners_list, owner_oid);\n> +\n> +\t\t/* Otherwise, iterate on this iteration's owner_oid. */\n> +\t\trole_oid = owner_oid;\n> +\t}\n> +\n> +\treturn owners_list;\n> +}\n\n> @@ -4850,6 +4955,10 @@ has_privs_of_role(Oid member, Oid role)\n\n> +\t/* Owners of roles have every privilge the owned role has */\n> +\tif (pg_role_ownercheck(role, member))\n> +\t\treturn true;\n\nWhoah, really? 
No, I don't agree with this, it's throwing away the\nentire concept around inheritance of role rights and how you can have\nroles which you can get the privileges of by doing a SET ROLE to them\nbut you don't automatically have those rights.\n\n> +/*\n> + * Is owner a direct or indirect owner of the role, not considering\n> + * superuserness?\n> + */\n> +bool\n> +is_owner_of_role_nosuper(Oid owner, Oid role)\n> +{\n> +\treturn list_member_oid(roles_is_owned_by(role), owner);\n> +}\n\n\nSurely if you're a member of a role which owns another role, you should\nbe considered to be an owner of that role too..? Just checking if the\ncurrent role is a member of the roles which directly own the specified\nrole misses that case.\n\nThat is:\n\nCREATE ROLE r1;\nCREATE ROLE r2;\n\nGRANT r2 to r1;\n\nCREATE ROLE r3 AUTHORIZATION r2;\n\nSurely, r1 is to be considered an owner of r3 in this case, but the\nabove check wouldn't consider that to be the case- it would only return\ntrue if the current role is r2.\n\nWe do need some kind of direct membership check in the list of owners to\navoid creating loops, so maybe this function is kept as that and the\npg_role_ownership() check is changed to address the above case, but I\ndon't think we should just ignore role membership when it comes to role\nownership- we don't do that for any other kind of ownership check.\n\n> Subject: [PATCH v4 4/5] Restrict power granted via CREATEROLE.\n\nI would think this would be done independently of the other patches and\nprobably be first.\n\n> diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml\n\n> @@ -70,18 +70,18 @@ ALTER ROLE { <replaceable class=\"parameter\">role_specification</replaceable> | A\n> <link linkend=\"sql-revoke\"><command>REVOKE</command></link> for that.)\n> Attributes not mentioned in the command retain their previous settings.\n> Database superusers can change any of these settings for any role.\n> - Roles having <literal>CREATEROLE</literal> 
privilege can change any of these\n> - settings except <literal>SUPERUSER</literal>, <literal>REPLICATION</literal>,\n> - and <literal>BYPASSRLS</literal>; but only for non-superuser and\n> - non-replication roles.\n> - Ordinary roles can only change their own password.\n> + Role owners can change any of these settings on roles they own except\n> + <literal>SUPERUSER</literal>, <literal>REPLICATION</literal>, and\n> + <literal>BYPASSRLS</literal>; but only for non-superuser and non-replication\n> + roles, and only if the role owner does not alter the target role to have a\n> + privilege which the role owner itself lacks. Ordinary roles can only change\n> + their own password.\n> </para>\n\nHaving contemplated this a bit more, I don't like it, and it's not how\nthings work when it comes to regular privileges.\n\nConsider that I can currently GRANT someone UPDATE privileges on an\nobject, but they can't GRANT that privilege to someone else unless I\nexplicitly allow it. The same could certainly be said for roles-\nperhaps I want to allow someone the privilege to create non-login roles,\nbut I don't want them to be able to create new login roles, even if they\nthemselves have LOGIN.\n\nAs another point, I might want to have an 'admin' role that I want\nadmins to SET ROLE to before they go creating other roles, because I\ndon't want them to be creating roles as their regular user and so that\nthose other roles are owned by the 'admin' role, but I don't want that\nrole to have the 'login' attribute.\n\nIn other words, we should really consider what role attributes a given\nrole has to be independent of what role attributes that role is allowed\nto set on roles they create. 
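To make the GRANT analogy concrete, here is a sketch of how ordinary privileges already separate holding a privilege from the right to pass it on (table and role names here are hypothetical):

```sql
-- alice can update t, but cannot pass that privilege along:
GRANT UPDATE ON TABLE t TO alice;

-- bob can update t and may also re-grant it, because the grantor
-- explicitly allowed that:
GRANT UPDATE ON TABLE t TO bob WITH GRANT OPTION;
```

The suggestion is that role attributes could work analogously: what a role has, and what it may confer on roles it creates, would be controlled separately.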
I appreciate that \"just whatever the\ncurrent role has\" is simpler and less work but also will be difficult to\nwalk back from once it's in the wild.\n\n> @@ -1457,7 +1449,7 @@ AddRoleMems(const char *rolename, Oid roleid,\n\n> \t/*\n> -\t * Check permissions: must have createrole or admin option on the role to\n> +\t * Check permissions: must be owner or have admin option on the role to\n> \t * be changed. To mess with a superuser role, you gotta be superuser.\n> \t */\n> \tif (superuser_arg(roleid))\n\n...\n\n> @@ -1467,9 +1459,9 @@ AddRoleMems(const char *rolename, Oid roleid,\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"must be superuser to alter superusers\")));\n> \t}\n> -\telse\n> +\telse if (!superuser())\n> \t{\n> -\t\tif (!have_createrole_privilege() &&\n> +\t\tif (!pg_role_ownercheck(roleid, grantorId) &&\n> \t\t\t!is_admin_of_role(grantorId, roleid))\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n\nI'm not entirely sure about including owners here though I'm not\ncompletely against it either. This conflation of what the 'admin'\nprivileges on a role means vs. the 'ownership' of a role is part of what\nI dislike about having two distinct systems for saying who is allowed to\nGRANT one role to another.\n\nAlso, if we're going to always consider owners to be admins of roles\nthey own, why not push that into is_admin_of_role()?\n\n> Subject: [PATCH v4 5/5] Remove grantor field from pg_auth_members\n\nWhile I do think we should fix the issue with dangling references, I\ndislike just getting rid of this entirely. While I don't really agree\nwith the spec about running around DROP'ing objects when a user's\nprivilege to create those objects has been revoked, I do think we should\nbe REVOKE'ing rights when a user's right to GRANT has been revoked, and\ntracking the information about who GRANT'd what role to what other role\nis needed for that. 
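For context, the grantor tracking under discussion is visible in the catalog today; one way to inspect it:

```sql
-- Each role membership row records who granted it; grantor is the
-- field this patch proposes to remove:
SELECT roleid::regrole, member::regrole, grantor::regrole, admin_option
FROM pg_auth_members;
```

The dangling references mentioned above arise because these rows are not cleaned up or rewritten when the granting role is later dropped.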
Further, we track who GRANT'd access to what in the\nregular ACL system and I don't like the idea of removing that for roles.\n\n> We could fix the bug, but there is no clear solution to the problem\n> that existing installations may have broken data. Since the field\n> is not used for any purpose, removing it seems the best option.\n\nExisting broken systems will have to eventually be upgraded and the\nadmin will have to deal with such cases then, so I don't really consider\nthis to be that big of an issue or reason to entirely remove this.\n\nIf we're going to do this, it should also be done independently of the\nrole ownership stuff too.\n\nThanks,\n\nStephen", "msg_date": "Sat, 22 Jan 2022 16:20:33 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 1/22/22 16:20, Stephen Frost wrote:\n>> Subject: [PATCH v4 1/5] Add tests of the CREATEROLE attribute.\n> No particular issue with this one.\n>\n>\n\nI'm going to commit this piece forthwith so we get it out of the way.\nThat will presumably make the cfbot unhappy until Mark submits a new\npatch set.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 14:19:50 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Sat, Jan 22, 2022 at 4:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Whoah, really? No, I don't agree with this, it's throwing away the\n> entire concept around inheritance of role rights and how you can have\n> roles which you can get the privileges of by doing a SET ROLE to them\n> but you don't automatically have those rights.\n\nI see it differently. In my opinion, what that does is make the patch\nactually useful instead of largely a waste of time. 
If you are a\nservice provider, you want to give your customers a super-user-like\nexperience without actually making them superuser. You don't want to\nactually make them superuser, because then they could do things like\nchange archive_command or install plperlu and shell out to the OS\naccount, which you don't want. But you do want them to be able to\nadminister objects within the database just as a superuser could. And\na superuser has privileges over objects they own and objects belonging\nto other users automatically, without needing to SET ROLE.\n\nImagine what happens if we adopt your proposal here. Everybody now has\nto understand the behavior of a regular account, the behavior of a\nsuperuser account, and the behavior of this third type of account\nwhich is sort of like a superuser but requires a lot more SET ROLE\ncommands. And also every tool. So for example pg_dump and restore\nisn't going to work, not even on the set of objects this\nelevated-privilege user can access. pgAdmin isn't going to understand\nthat it needs to insert a bunch of extra SET ROLE commands to\nadminister objects. Ditto literally every other tool anyone has ever\nwritten to administer PostgreSQL. And for all of that pain, we get\nexactly zero extra security.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jan 2022 15:33:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 1/24/22 15:33, Robert Haas wrote:\n> On Sat, Jan 22, 2022 at 4:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Whoah, really? No, I don't agree with this, it's throwing away the\n>> entire concept around inheritance of role rights and how you can have\n>> roles which you can get the privileges of by doing a SET ROLE to them\n>> but you don't automatically have those rights.\n> I see it differently. 
In my opinion, what that does is make the patch\n> actually useful instead of largely a waste of time. If you are a\n> service provider, you want to give your customers a super-user-like\n> experience without actually making them superuser. You don't want to\n> actually make them superuser, because then they could do things like\n> change archive_command or install plperlu and shell out to the OS\n> account, which you don't want. But you do want them to be able to\n> administer objects within the database just as a superuser could. And\n> a superuser has privileges over objects they own and objects belonging\n> to other users automatically, without needing to SET ROLE.\n>\n\n+many\n\n\nI encountered such issues on a cloud provider several years ago, and\nblogged about the difficulties, which would have been solved very nicely\nand cleanly by this proposal. It was when I understood properly how this\nproposal worked, precisely as Robert states, that I became more\nenthusiastic about it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 16:00:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\nOn Mon, Jan 24, 2022 at 15:33 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Jan 22, 2022 at 4:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Whoah, really? No, I don't agree with this, it's throwing away the\n> > entire concept around inheritance of role rights and how you can have\n> > roles which you can get the privileges of by doing a SET ROLE to them\n> > but you don't automatically have those rights.\n>\n> I see it differently. 
In my opinion, what that does is make the patch\n> actually useful instead of largely a waste of time.\n\n\nThe idea behind this patch is to enable creation and dropping of roles,\nwhich isn’t possible now without being effectively a superuser.\n\nForcing owners to also implicitly have all rights of the roles they create\nis orthogonal to that and an unnecessary change.\n\nIf you are a\n> service provider, you want to give your customers a super-user-like\n> experience without actually making them superuser. You don't want to\n> actually make them superuser, because then they could do things like\n> change archive_command or install plperlu and shell out to the OS\n> account, which you don't want. But you do want them to be able to\n> administer objects within the database just as a superuser could. And\n> a superuser has privileges over objects they own and objects belonging\n> to other users automatically, without needing to SET ROLE.\n\n\nI am not saying that we would explicitly set all cases to be noninherit or\nthat we would even change the default away from what it is today, only that\nwe should use the existing role system and it’s concept of\ninherit-vs-noninherit rather than throwing all of that away.\n\nEverybody now has\n> to understand the behavior of a regular account, the behavior of a\n> superuser account, and the behavior of this third type of account\n> which is sort of like a superuser but requires a lot more SET ROLE\n> commands.\n\n\nInherit vs. noninherit roles is not a new concept, it has existed since the\nrole system was implemented. Further, that system does not require a lot\nof SET ROLE commands unless and until an admin sets up a non-inherit role.\nAt that time, however, it’s expected that the rights of a role which has\ninherit set to false are not automatically allowed for the role to which it\nwas GRANT’d. That’s how roles have always worked since they were\nintroduced.\n\nAnd also every tool. 
So for example pg_dump and restore\n> isn't going to work, not even on the set of objects this\n> elevated-privilege user can access. pgAdmin isn't going to understand\n> that it needs to insert a bunch of extra SET ROLE commands to\n> administer objects. Ditto literally every other tool anyone has ever\n> written to administer PostgreSQL. And for all of that pain, we get\n> exactly zero extra security.\n\n\n
We have an inherit system today and pg_dump works just fine, as far as I’m\naware, and it does, indeed, issue SET ROLE at various points. Perhaps you\ncould explain with PG today what the issue is that is caused? Or what\nissue pgAdmin has with PG’s existing role inherit system?\n\n
Further, being able to require a SET ROLE before running a given operation\nis certainly a benefit in much the same way that having a user have to sudo\nbefore running an operation is.\n\nThanks,\n\nStephen\n\n>\n\n", "msg_date": "Mon, 24 Jan 2022 16:23:24 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Jan 24, 2022 at 4:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n
> The idea behind this patch is to enable creation and dropping of roles, which isn’t possible now without being effectively a superuser.\n>\n> Forcing owners to also implicitly have all rights of the roles they create is orthogonal to that and an unnecessary change.\n\n
I just took a look at the first email on this thread and it says this:\n\n>>> These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n\n
Therefore I think it is pretty clear that the goals of this patch set\ninclude being able to delegate superuser tasks to new security roles.\nAnd having those tasks be delegated but *work randomly differently* is\nmuch less useful.\n\n
> I am not saying that we would explicitly set all cases to be noninherit or that we would even change the default away from what it is today, only that we should use the existing role system and it’s concept of inherit-vs-noninherit rather than throwing all of that away.\n\n
INHERIT vs. NOINHERIT is documented to control the behavior of role\n*membership*. This patch is introducing a new concept of role\n*ownership*. 
It's not self-evident that what applies to one case\nshould apply to the other.\n\n> Further, being able to require a SET ROLE before running a given operation is certainly a benefit in much the same way that having a user have to sudo before running an operation is.\n\nThat's a reasonable point of view, but having things work similarly to\nwhat happens for a superuser is ALSO a very big benefit. In my\nopinion, in fact, it is a far larger benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jan 2022 16:41:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\nOn Mon, Jan 24, 2022 at 16:42 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 24, 2022 at 4:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > The idea behind this patch is to enable creation and dropping of roles,\n> which isn’t possible now without being effectively a superuser.\n> >\n> > Forcing owners to also implicitly have all rights of the roles they\n> create is orthogonal to that and an unnecessary change.\n>\n> I just took a look at the first email on this thread and it says this:\n>\n> >>> These patches have been split off the now deprecated monolithic\n> \"Delegating superuser tasks to new security roles\" thread at [1].\n>\n> Therefore I think it is pretty clear that the goals of this patch set\n> include being able to delegate superuser tasks to new security roles.\n> And having those tasks be delegated but *work randomly differently* is\n> much less useful.\n\n\nBeing able to create and drop users is, in fact, effectively a\nsuperuser-only task today. 
We could throw out the entire idea of role\nownership, in fact, as being entirely unnecessary when talking about that\nspecific task.\n\n> I am not saying that we would explicitly set all cases to be noninherit\n> or that we would even change the default away from what it is today, only\n> that we should use the existing role system and it’s concept of\n> inherit-vs-noninherit rather than throwing all of that away.\n>\n> INHERIT vs. NOINHERIT is documented to control the behavior of role\n> *membership*. This patch is introducing a new concept of role\n> *ownership*. It's not self-evident that what applies to one case\n> should apply to the other.\n\n\nThis is an argument to drop the role ownership concept, as I view it.\nPrivileges are driven by membership today and inventing some new\nindependent way to do that is increasing confusion, not improving things.\nI disagree that adding role ownership should necessarily change how the\nregular GRANT privilege system works or throw away basic concepts of that\nsystem which have been in place for decades. Increasing the number of\nindependent ways to answer the question of “what users have what rights on\nobject X” is an active bad thing. Anything that cares about object access\nwill now also have to address role ownership to answer that question, while\nif we don’t include this one change then they don’t need to directly have\nany concern for ownership because regular object privileges still work the\nsame way they did before.\n\n> Further, being able to require a SET ROLE before running a given\n> operation is certainly a benefit in much the same way that having a user\n> have to sudo before running an operation is.\n>\n> That's a reasonable point of view, but having things work similarly to\n> what happens for a superuser is ALSO a very big benefit. 
In my\n> opinion, in fact, it is a far larger benefit.\n\n\nSuperuser is a problem specifically because it gives people access to do\nabsolutely anything, both for security and safety concerns. Disallowing a\nway to curtail that same risk when it comes to role ownership invites\nexactly those same problems.\n\nI appreciate that there’s an edge between the ownership system being\nproposed and the existing role membership system, but we’d be much better\noff trying to minimize the amount that they end up overlapping- role\nownership should be about managing roles.\n\nTo push back on the original “tenant” argument, consider that one of the\nbigger issues in cloud computing today is exactly the problem that the\ncloud managers can potentially gain access to the sensitive data of their\ntenants and that’s not generally viewed as a positive thing. This change\nwould make it so that every landlord can go and SELECT from the tables of\ntheir tenants without so much as a by-your-leave. The tenants likely don’t\nlike that idea, and almost as likely the landlords in many cases aren’t\nthrilled with it either. Should the landlords be able to DROP the tenant\ndue to the tenant not paying their bill? 
Of course, and that should then\neliminate the tenant’s tables and other objects which take up resources,\nbut that’s not the same thing as saying that a landlord should be able to\nunlock a tenant’s old phone that they left behind (and yeah, maybe the\nanalogy falls apart a bit there, but the point I’m trying to get at is that\nit’s not as simple as it’s being made out to be here and we should think\nabout these things and not just implicitly grant all access to the owner\nbecause that’s an easy thing to do- and is exactly what viewing owners as\n“mini superusers” does and leads to many of the same issues we already have\nwith superusers).\n\nThanks,\n\nStephen\n\n>\n\n
", "msg_date": "Mon, 24 Jan 2022 17:21:40 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Being able to create and drop users is, in fact, effectively 
a superuser-only task today. We could throw out the entire idea of role ownership, in fact, as being entirely unnecessary when talking about that specific task.\n\nWow, that's totally contrary to how I see this patch. The heart and soul of this patch is to fix the fact that CREATEROLE is currently overpowered. Everything else is gravy.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 14:49:34 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Superuser is a problem specifically because it gives people access to do absolutely anything, both for security and safety concerns. Disallowing a way to curtail that same risk when it comes to role ownership invites exactly those same problems.\n\nBefore the patch, users with CREATEROLE can do mischief. After the patch, users with CREATEROLE can do mischief. The difference is that the mischief that can be done after the patch is a proper subset of the mischief that can be done before the patch. (Counter-examples highly welcome.)\n\nSpecifically, I claim that before the patch, non-superuser \"bob\" with CREATEROLE can interfere with *any* non-superuser. After the patch, non-superuser \"bob\" with CREATEROLE can interfere with *some* non-superusers; specifically, with non-superusers he created himself, or which have had ownership transferred to him.\n\nRestricting the scope of bob's mischief is a huge win, in my view.\n\nThe argument about whether owners should always implicitly inherit privileges from roles they own is a bit orthogonal to my point about mischief-making. Do we at least agree on the mischief-abatement aspect of this patch set? 
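\n\nFor concreteness, here is a hypothetical psql session sketching the sort of mischief I mean (role names invented for illustration; assume \"bob\" has CREATEROLE and neither created nor owns \"alice\"):\n\n
=> CREATE ROLE alice LOGIN;  -- done by someone other than bob\nCREATE ROLE\n\n
Then, connected as bob:\n\n=> ALTER ROLE alice PASSWORD 'gotcha';\nALTER ROLE\n=> DROP ROLE alice;\nDROP ROLE\n\n
Before the patch both commands succeed; after the patch both should fail, because bob neither created nor owns alice.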
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 15:18:03 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\nOn 2022/01/25 8:18, Mark Dilger wrote:\n> \n> \n
>> On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n>>\n>> Superuser is a problem specifically because it gives people access to do absolutely anything, both for security and safety concerns. Disallowing a way to curtail that same risk when it comes to role ownership invites exactly those same problems.\n> \n
> Before the patch, users with CREATEROLE can do mischief. After the patch, users with CREATEROLE can do mischief. The difference is that the mischief that can be done after the patch is a proper subset of the mischief that can be done before the patch. (Counter-examples highly welcome.)\n> \n
> Specifically, I claim that before the patch, non-superuser \"bob\" with CREATEROLE can interfere with *any* non-superuser. After the patch, non-superuser \"bob\" with CREATEROLE can interfere with *some* non-superusers; specifically, with non-superusers he created himself, or which have had ownership transferred to him.\n> \n
> Restricting the scope of bob's mischief is a huge win, in my view.\n\n+1\n\n
One of the \"mischiefs\" I think is problematic is that users with CREATEROLE can grant any predefined role, even one they don't have, to other users including themselves. For example, users with CREATEROLE can grant pg_execute_server_program to themselves and run arbitrary OS commands via COPY ... PROGRAM. This would be an issue when providing something like a PostgreSQL cloud service that wants to prevent end users from running OS commands but allow them to create/drop roles. Does the proposed patch also fix this issue?\n\n
Regards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 25 Jan 2022 15:55:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 24, 2022, at 10:55 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> +1\n> \n
> One of the \"mischiefs\" I think is problematic is that users with CREATEROLE can grant any predefined role, even one they don't have, to other users including themselves. For example, users with CREATEROLE can grant pg_execute_server_program to themselves and run arbitrary OS commands via COPY ... PROGRAM. This would be an issue when providing something like a PostgreSQL cloud service that wants to prevent end users from running OS commands but allow them to create/drop roles. Does the proposed patch also fix this issue?\n\n
Yes, the patch prevents users with CREATEROLE from granting any privilege they themselves lack. There is a regression test in the patch set which demonstrates this. See src/test/regress/expected/create_role.out. 
The diffs from v6-0004-Restrict-power-granted-via-CREATEROLE.patch are quoted here for ease of viewing:\n\n--- ok, having CREATEROLE is enough to create roles in privileged roles\n+-- fail, having CREATEROLE is not enough to create roles in privileged roles\n CREATE ROLE regress_read_all_data IN ROLE pg_read_all_data;\n+ERROR: must have admin option on role \"pg_read_all_data\"\n CREATE ROLE regress_write_all_data IN ROLE pg_write_all_data;\n+ERROR: must have admin option on role \"pg_write_all_data\"\n CREATE ROLE regress_monitor IN ROLE pg_monitor;\n+ERROR: must have admin option on role \"pg_monitor\"\n CREATE ROLE regress_read_all_settings IN ROLE pg_read_all_settings;\n+ERROR: must have admin option on role \"pg_read_all_settings\"\n CREATE ROLE regress_read_all_stats IN ROLE pg_read_all_stats;\n+ERROR: must have admin option on role \"pg_read_all_stats\"\n CREATE ROLE regress_stat_scan_tables IN ROLE pg_stat_scan_tables;\n+ERROR: must have admin option on role \"pg_stat_scan_tables\"\n CREATE ROLE regress_read_server_files IN ROLE pg_read_server_files;\n+ERROR: must have admin option on role \"pg_read_server_files\"\n CREATE ROLE regress_write_server_files IN ROLE pg_write_server_files;\n+ERROR: must have admin option on role \"pg_write_server_files\"\n CREATE ROLE regress_execute_server_program IN ROLE pg_execute_server_program;\n+ERROR: must have admin option on role \"pg_execute_server_program\"\n CREATE ROLE regress_signal_backend IN ROLE pg_signal_backend;\n+ERROR: must have admin option on role \"pg_signal_backend\"\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 08:21:16 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> To push back on the original “tenant” 
argument, consider that one of the bigger issues in cloud computing today is exactly the problem that the cloud managers can potentially gain access to the sensitive data of their tenants and that’s not generally viewed as a positive thing.\n\n+1. This is a real problem. I have been viewing this problem as separate from the one which role ownership is intended to fix. Do you have a suggestion about how to tackle the problems together with less work than tackling them separately?\n\n> This change would make it so that every landlord can go and SELECT from the tables of their tenants without so much as a by-your-leave.\n\nI would expect that is already true. A user with CREATEROLE can do almost everything. This patch closes some CREATEROLE related security problems, but not this one you mention.\n\n> The tenants likely don’t like that idea\n\n+1\n\n> , and almost as likely the landlords in many cases aren’t thrilled with it either.\n\n+1\n\n> Should the landlords be able to DROP the tenant due to the tenant not paying their bill? Of course, and that should then eliminate the tenant’s tables and other objects which take up resources, but that’s not the same thing as saying that a landlord should be able to unlock a tenant’s old phone that they left behind (and yeah, maybe the analogy falls apart a bit there, but the point I’m trying to get at is that it’s not as simple as it’s being made out to be here and we should think about these things and not just implicitly grant all access to the owner because that’s an easy thing to do- and is exactly what viewing owners as “mini superusers” does and leads to many of the same issues we already have with superusers).\n\nThis is a pretty interesting argument. I don't believe it will work to do as you say unconditionally, as there is still a need to have CREATEROLE users who have privileges on their created roles' objects, even if for no other purpose than to be able to REASSIGN OWNED BY those objects before dropping roles. 
But maybe there is also a need to have CREATEROLE users who lack that privilege? Would that be a privilege bit akin to (but not the same as!) the INHERIT privilege? Should I redesign for something like that?\n\nI like that the current patch restricts CREATEROLE users from granting privileges they themselves lack. Would such a new privilege bit work the same way? Imagine that you, \"stephen\", have CREATEROLE but not this new bit, and you create me, \"mark\" as a tenant with CREATEROLE. Can you give me the bit? Or does the fact that you lack the bit mean you can't give it to me, either?\n\nOther suggestions?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 08:42:14 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Jan 24, 2022 at 5:21 PM Stephen Frost <sfrost@snowman.net> wrote:\n> This is an argument to drop the role ownership concept, as I view it. Privileges are driven by membership today and inventing some new independent way to do that is increasing confusion, not improving things. I disagree that adding role ownership should necessarily change how the regular GRANT privilege system works or throw away basic concepts of that system which have been in place for decades. Increasing the number of independent ways to answer the question of “what users have what rights on object X” is an active bad thing. Anything that cares about object access will now also have to address role ownership to answer that question, while if we don’t include this one change then they don’t need to directly have any concern for ownership because regular object privileges still work the same way they did before.\n\nIt really feels to me like you just keep moving the goalposts. 
We\nstarted out with a conversation where Mark said he'd like to be able\nto grant permissions on GUCs to non-superusers.[1] You argued\nrepeatedly that we really needed to do something about CREATEROLE\n[2,3,4]. Mark argued that this was an unrelated problem[5] but you\nargued that unless it were addressed, users would still be able to\nbreak out of the sandbox[6] which must mean either the OS user, or at\nleast PostgreSQL users other than the ones they were supposed to be\nable to control.\n\nThat led *directly* to the patch at hand, which solves the problem by\ninventing the notion of role ownership, so that you can distinguish\nthe roles you can administer from the ones you drop. You are now\nproposing that we get rid of that concept, a concept that was added\nfour months ago[7] as a direct response to your previous feedback.\nIt's completely unfair to make an argument that results in the\naddition of a complex piece of machinery to a body of work that was\ninitially on an only marginally related topic and then turn around and\nargue, quite close to the end of the release cycle, for the removal of\nthat exact same mechanism.\n\nAnd your argument about whether the privileges should be able to be\nexercised without SET ROLE is also just completely baffling to me\ngiven the previous conversation. It seems 100% clear from the previous\ndiscussion that we were talking about service provider environments\nand trying to deliver a good user experience to \"lead tenants\" in such\nenvironments. 
Regardless of the technical details of how INHERIT or\nanything else work, an actual superuser would not be subject to a\nrestriction similar to the one you're talking about, so arguing that\nit ought to be present here for some technical reason is placing\ntechnicalities ahead of what seemed at the time to be a shared goal.\nThere's a perfectly good argument to be made that the superuser role\nshould not work the way it does, but it's too late to relitigate that.\nAnd I can't imagine why any service provider would find any value in a\nnew role that requires all of the extra push-ups you're trying to\nimpose on it.\n\nI just can't shake the feeling that you're trying to redesign this\npatch out of (a) getting committed and (b) solving any of the problems\nit intends to solve, problems with which you largely seemed to agree.\nI assume that is not actually your intention, but I can't think of\nanything you'd be doing differently here if it were.\n\n[1] https://www.postgresql.org/message-id/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/20210726200542.GX20766%40tamriel.snowman.net\n[3] https://www.postgresql.org/message-id/20210726205433.GA20766%40tamriel.snowman.net\n[4] https://www.postgresql.org/message-id/20210823181351.GB17906%40tamriel.snowman.net\n[5] https://www.postgresql.org/message-id/92AA9A52-A644-42FE-B699-8ECAEE12E635%40enterprisedb.com\n[6] https://www.postgresql.org/message-id/20210823195130.GF17906%40tamriel.snowman.net\n[7] https://www.postgresql.org/message-id/67BB2F92-704B-415C-8D47-149327CA8F4B%40enterprisedb.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jan 2022 14:04:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 24, 2022, at 2:21 PM, Stephen Frost 
<sfrost@snowman.net> wrote:\n> > Being able to create and drop users is, in fact, effectively a superuser-only task today. We could throw out the entire idea of role ownership, in fact, as being entirely unnecessary when talking about that specific task.\n> \n> Wow, that's totally contrary to how I see this patch. The heart and soul of this patch is to fix the fact that CREATEROLE is currently overpowered. Everything else is gravy.\n\nI agree that CREATEROLE is overpowered and that the goal of this should\nbe to provide a way for roles to be created and dropped that doesn't\ngive the user who has that power everything that CREATEROLE currently\ndoes. The point I was making is that the concept of role ownership\nisn't intrinsically linked to that and is, therefore, as you say, gravy.\nThat isn't to say that I'm entirely against the role ownership idea but\nI'd want it to be focused on the goal of providing ways of creating and\ndropping users and otherwise performing that kind of administration and\nthat doesn't require the specific change to make owners be members of\nall roles they own and automatically have all privileges of those roles\nall the time.\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > \n> > Superuser is a problem specifically because it gives people access to do absolutely anything, both for security and safety concerns. Disallowing a way to curtail that same risk when it comes to role ownership invites exactly those same problems.\n> \n> Before the patch, users with CREATEROLE can do mischief. After the patch, users with CREATEROLE can do mischief. The difference is that the mischief that can be done after the patch is a proper subset of the mischief that can be done before the patch. (Counter-examples highly welcome.)\n> \n> Specifically, I claim that before the patch, non-superuser \"bob\" with CREATEROLE can interfere with *any* non-superuser. 
After the patch, non-superuser \"bob\" with CREATEROLE can interfere with *some* non-superusers; specifically, with non-superusers he created himself, or which have had ownership transferred to him.\n> \n> Restricting the scope of bob's mischief is a huge win, in my view.\n> \n> The argument about whether owners should always implicitly inherit privileges from roles they own is a bit orthogonal to my point about mischief-making. Do we at least agree on the mischief-abatement aspect of this patch set? \n\nI don't know how many bites at this particular apple we're going to get,\nbut I doubt folks are going to be happy if we change our minds every\nrelease. Further, I suspect we'll be better off going too far in the\ndirection of 'mischief reduction' than not far enough. If we restrict\nthings too far then we can provide ways to add those things back, but\nit's harder to remove things we didn't take away.\n\nThis particular case is even an oddity on that spectrum though-\nCREATEROLE users, today, don't have access to all the objects created by\nroles which they create. Yes, they can get such access if they go\nthrough some additional hoops, but that could then be caught by someone\nauditing the logs, a consideration that I don't think we appreciate\nenough today.\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 24, 2022, at 2:21 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > \n> > To push back on the original “tenant” argument, consider that one of the bigger issues in cloud computing today is exactly the problem that the cloud managers can potentially gain access to the sensitive data of their tenants and that’s not generally viewed as a positive thing.\n> \n> +1. This is a real problem. I have been viewing this problem as separate from the one which role ownership is intended to fix. 
Do you have a suggestion about how to tackle the problems together with less work than tackling them separately?\n\nI don't know about less work or not, but in this particular case I was\nasking for a few lines to be removed from the patch. I can believe that\ndoing so would create some issues in terms of the use-cases that you\nwant to solve with this and if we agree on those being sensible cases to\naddress then we'd need to implement something to address those, though\nit's also possibly not the case and maybe removing those few lines\ndoesn't impact anything beyond then allowing owners to not automatically\ninherit the rights of the roles they own if they don't wish to.\n\nInstead of talking about those cases concretely though, it seems like\nwe've shifted to abstractly talking about ownership and landlords.\nMaybe some of that is helpful, but it seems to increasingly be an area\nthat's causing more division than helping to move forward towards a\nmutually agreeable result.\n\n> > This change would make it so that every landlord can go and SELECT from the tables of their tenants without so much as a by-your-leave.\n> \n> I would expect that is already true. A user with CREATEROLE can do almost everything. This patch closes some CREATEROLE related security problems, but not this one you mention.\n\nYes, such a role *can* do almost anything, but they can't do this today:\n\n=> create role r3;\nCREATE ROLE\n=*> set role r3;\nERROR: permission denied to set role \"r3\"\n\nNor, should 'r3' log in and create tables, can the creating role SELECT\nfrom r3's tables or otherwise have any effect on them. That has\npositives and negatives- we do want the 'owning' role to be able to do\ncertain things, like DROP the role, and once a role has created objects\nthat isn't able to be done unless those objects are reassigned or\ndropped themselves. How do we allow explicitly that then? 
That's the\ngeneral direction I would think we'd be wanting to go in, rather than\njust blanketly giving the owner all privileges of the roles they create\nwithout any further say by anyone.\n\n> > The tenants likely don’t like that idea\n> \n> +1\n> \n> > , and almost as likely the landlords in many cases aren’t thrilled with it either.\n> \n> +1\n\nGlad we agree on those.\n\n> > Should the landlords be able to DROP the tenant due to the tenant not paying their bill? Of course, and that should then eliminate the tenant’s tables and other objects which take up resources, but that’s not the same thing as saying that a landlord should be able to unlock a tenant’s old phone that they left behind (and yeah, maybe the analogy falls apart a bit there, but the point I’m trying to get at is that it’s not as simple as it’s being made out to be here and we should think about these things and not just implicitly grant all access to the owner because that’s an easy thing to do- and is exactly what viewing owners as “mini superusers” does and leads to many of the same issues we already have with superusers).\n> \n> This is a pretty interesting argument. I don't believe it will work to do as you say unconditionally, as there is still a need to have CREATEROLE users who have privileges on their created roles' objects, even if for no other purpose than to be able to REASSIGN OWNED BY those objects before dropping roles. But maybe there is also a need to have CREATEROLE users who lack that privilege? Would that be a privilege bit akin to (but not the same as!) the INHERIT privilege? 
Should I redesign for something like that?\n\nWe have INHERIT today already for roles and I'm not really thrilled with\nthe idea of coming up with some new and independent way to make that\nwork, or having something that works effectively the same way as role\nmembership does today but is called something else (which is what this\npatch set is doing with ownership, hence my concern).\n\nThere's a couple of thoughts I have about addressing things around DROP\nand REASSIGN- one is that those could perhaps just be made to work for\nowners, but another is to allow owners to manage the role memberships of\nroles they own, to include allowing the role to be granted to\nthemselves, and maybe that's even the default? With today's CREATEROLE,\nthat looks like:\n\n=> create role r3 admin sfrost;\nCREATE ROLE\n=*> set role r3;\nSET\n\nbut we could possibly change that to be the default, or maybe we don't,\nsince that isn't how it works today.\n\nEither way, we likely would need to allow owners to modify the role\nmembership of roles they own, but that doesn't strike me as a terribly\ndifficult thing to allow. A more interesting question is about if a\nrole can manage their *own* membership- something we allow today but, as\nI've brought up before, we should probably curtail to some extent.\n\nUltimately, that makes it possible for this:\n\nSELECT * FROM secret_table;\n\nto fail when secret_table was created by a tenant and the query is run\nby a landlord. A landlord would still be able to get access to\nsecret_table, but they'd have to do:\n\nGRANT tenant TO landlord;\nSELECT * FROM secret_table;\n\nwhich may not seem like a lot to us, but it shows clear forethought and\nvery likely that GRANT would be an audited statement. 
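(A sketch of that whole sequence, assuming the landlord is entitled to perform the GRANT in the first place, e.g. via ADMIN OPTION on tenant; the names are illustrative:)

```sql
-- As the tenant: create an object the landlord has no implicit rights on.
SET ROLE tenant;
CREATE TABLE secret_table (payload text);
RESET ROLE;

-- As the landlord: direct access fails until membership is granted.
SELECT * FROM secret_table;    -- permission denied
GRANT tenant TO landlord;      -- the explicit, auditable escalation step
SELECT * FROM secret_table;    -- allowed, assuming landlord inherits
```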
If tenant also\ndoesn't have 'inherit' set then a SET ROLE might also be required.\nPerhaps additional requirements could be added to the GRANT/SET ROLE to\nmake those operations not be trivial to do (certainly we've been asked\nin the past for a way for SET ROLE to require a password, and, indeed,\nsome other database systems support that; consider that one day a\nlandlord might have to reset the PW for the role, GRANT themselves into\nthe role, and then SET ROLE with the reset password...).\n\nI'd also like to share that while we talk about 'landlords' and\n'tenants' here, the real world is more complicated- I'm sure the various\ncloud providers have employees who have different levels of access,\nperhaps some of whom are able to reset passwords for users, while others\nare able to create new accounts, and yet others are able to authorize\naccess to customer data, something which hopefully most of the\norganization isn't able to do and requires some additional hoops.\n\n> I like that the current patch restricts CREATEROLE users from granting privileges they themselves lack. Would such a new privilege bit work the same way? Imagine that you, \"stephen\", have CREATEROLE but not this new bit, and you create me, \"mark\" as a tenant with CREATEROLE. Can you give me the bit? Or does the fact that you lack the bit mean you can't give it to me, either?\n> \n> Other suggestions?\n\nAs I mentioned in the patch review, having a particular bit set doesn't\nnecessarily mean you should be able to pass it on- the existing object\nGRANT system distinguishes those two and it seems like we should too.\nIn other words, I'm saying that we should be able to explicitly say just\nwhat privileges a CREATEROLE user is able to grant to some other role\nrather than basing it on what that user themselves has. 
This might\nalready be possible with the proposed patch by creating a role with\nCREATEROLE that then has the privileges we want to be allowed to be\npassed on, and then GRANT'ing that role to the user who we want to allow\nto create roles, though they would then have to SET ROLE to that role to\nrun the CREATE ROLE since role attributes aren't inherited by role\nmemberships. That doesn't seem like a terrible approach to solving that\nparticular issue, but then perhaps others feel differently.\n\nThanks,\n\nStephen", "msg_date": "Tue, 25 Jan 2022 15:44:29 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 25, 2022, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> As I mentioned in the patch review, having a particular bit set doesn't\n> necessarily mean you should be able to pass it on- the existing object\n> GRANT system distinguishes those two and it seems like we should too.\n> In other words, I'm saying that we should be able to explicitly say just\n> what privileges a CREATEROLE user is able to grant to some other role\n> rather than basing it on what that user themselves has.\n\nI like the way you are thinking, but I'm not sure I agree with the facts you are asserting.\n\nI agree that \"CREATE ROLE.. ROLE ..\" differs from \"CREATE ROLE .. ADMIN ..\", and \"GRANT..WITH GRANT OPTION\" differs from \"GRANT..\", but those only cover privileges tracked in an aclitem array. The privileges CREATEDB, CREATEROLE, REPLICATION, and BYPASSRLS don't work that way. There isn't a with/without grant option distinction for them. 
So I'm forced to say that a role without those privileges must not give them away.\n\nI'd be happier if we could get rid of all privileges of that kind, leaving only those that can be granted with/without grant option, tracked in an aclitem, and use that to determine if the user creating the role can give them away. But that's a bigger redesign of the system. Just touching how CREATEROLE works entails backwards compatibility problems. I'd hate to try to change all these other things; we'd be breaking a lot more, and features that appear more commonly used.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 13:29:55 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "> On Jan 22, 2022, at 1:20 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Subject: [PATCH v4 1/5] Add tests of the CREATEROLE attribute.\n> \n> No particular issue with this one.\n\nAndrew already committed this, forcing the remaining patches to be renumbered. Per your comments below, I have combined what was 0002+0003 into 0001, renumbered 0004 as 0002, and abandoned 0005. (It may come back as an independent patch.) Also owing to the fact that 0001 has been committed, I really need to post another patch set right away, to make the cfbot happy. I'm fixing non-controversial deficits you call out in your review, but leaving other things unchanged, in the interest of getting a patch posted sooner rather than later.\n\n>> Subject: [PATCH v4 2/5] Add owners to roles\n>> \n>> All roles now have owners. By default, roles belong to the role\n>> that created them, and initdb-time roles are owned by POSTGRES.\n> \n> ... 
database superuser, not 'POSTGRES'.\n\nI rephrased this as \"bootstrap superuser\" in the commit message.\n\n>> +++ b/src/backend/catalog/aclchk.c\n>> @@ -5430,6 +5434,57 @@ pg_statistics_object_ownercheck(Oid stat_oid, Oid roleid)\n>> \treturn has_privs_of_role(roleid, ownerId);\n>> }\n>> \n>> +/*\n>> + * Ownership check for a role (specified by OID)\n>> + */\n>> +bool\n>> +pg_role_ownercheck(Oid role_oid, Oid roleid)\n>> +{\n>> +\tHeapTuple\t\ttuple;\n>> +\tForm_pg_authid\tauthform;\n>> +\tOid\t\t\t\towner_oid;\n>> +\n>> +\t/* Superusers bypass all permission checking. */\n>> +\tif (superuser_arg(roleid))\n>> +\t\treturn true;\n>> +\n>> +\t/* Otherwise, look up the owner of the role */\n>> +\ttuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(role_oid));\n>> +\tif (!HeapTupleIsValid(tuple))\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n>> +\t\t\t\t errmsg(\"role with OID %u does not exist\",\n>> +\t\t\t\t\t\trole_oid)));\n>> +\tauthform = (Form_pg_authid) GETSTRUCT(tuple);\n>> +\towner_oid = authform->rolowner;\n>> +\n>> +\t/*\n>> +\t * Roles must necessarily have owners. Even the bootstrap user has an\n>> +\t * owner. (It owns itself). 
Other roles must form a proper tree.\n>> +\t */\n>> +\tif (!OidIsValid(owner_oid))\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n>> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u has invalid owner\",\n>> +\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n>> +\tif (authform->oid != BOOTSTRAP_SUPERUSERID &&\n>> +\t\tauthform->rolowner == authform->oid)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n>> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owns itself\",\n>> +\t\t\t\t\t\tauthform->rolname.data, authform->oid)));\n>> +\tif (authform->oid == BOOTSTRAP_SUPERUSERID &&\n>> +\t\tauthform->rolowner != BOOTSTRAP_SUPERUSERID)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n>> +\t\t\t\t errmsg(\"role \\\"%s\\\" with OID %u owned by role with OID %u\",\n>> +\t\t\t\t\t\tauthform->rolname.data, authform->oid,\n>> +\t\t\t\t\t\tauthform->rolowner)));\n>> +\tReleaseSysCache(tuple);\n>> +\n>> +\treturn (owner_oid == roleid);\n>> +}\n> \n> Do we really need all of these checks on every call of this function..?\n\nSince the function is following the ownership chain upwards, it seems necessary to check that the chain is wellformed, else we might get into an infinite loop or return the wrong answer. These would only happen under corrupt conditions, but it seems sensible to check for those, since they are cheap to check. (Actually, the check for nontrivial cycles included in the patch is not as efficient as it could be, but I'm punting the work of improving that algorithm from quadratic to linear until a later patch version, in the interest of posting the patch soon.) \n\n> Also, there isn't much point in including the role OID twice in the last\n> error message, is there? 
Unless things have gotten quite odd, it's\n> going to be the same value both times as we just proved to ourselves\n> that it is, in fact, the same value (and that it's not the\n> BOOTSTRAP_SUPERUSERID).\n\nIt is comparing the authform->oid against the authform->rolowner, which are not the same. The first is the owned role, the second is the owning role. We could hardcode the message to say something like \"bootstrap superuser owned by role with Oid %u\", but that hardcodes \"bootstrap superuser\" into the message, rather than something like \"stephen\". I don't feel strongly about the wording. Let me know if you still want me to change it.\n\n> This function also doesn't actually do any kind of checking to see if\n> the role ownership forms a proper tree, so it seems a bit odd to have\n> the comment talking about that here where it's doing other checks.\n\nRight. The comment simply explains the structure we expect, not the structure we are fully validating. The point is that each link in the hierarchy must be compatible with the expected structure. It would be overkill to validate the whole tree in this one function. I don't mind rewording the code comment, if you have a less confusing suggestion.\n\n>> +++ b/src/backend/commands/user.c\n>> @@ -77,6 +79,9 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n>> \tDatum\t\tnew_record[Natts_pg_authid];\n>> \tbool\t\tnew_record_nulls[Natts_pg_authid];\n>> \tOid\t\t\troleid;\n>> +\tOid\t\t\towner_uid;\n>> +\tOid\t\t\tsaved_uid;\n>> +\tint\t\t\tsave_sec_context;\n> \n> Seems a bit odd to introduce 'uid' into this file, which hasn't got any\n> such anywhere in it, and I'm not entirely sure that any of these are\n> actually needed..?\n\n\nGood catch!
It didn't enforce that the creating role was a member of the target owner, something this next patch set does.\n\n>> @@ -108,6 +113,16 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n>> \tDefElem *dvalidUntil = NULL;\n>> \tDefElem *dbypassRLS = NULL;\n>> \n>> +\tGetUserIdAndSecContext(&saved_uid, &save_sec_context);\n>> +\n>> +\t/*\n>> +\t * Who is supposed to own the new role?\n>> +\t */\n>> +\tif (stmt->authrole)\n>> +\t\towner_uid = get_rolespec_oid(stmt->authrole, false);\n>> +\telse\n>> +\t\towner_uid = saved_uid;\n>> +\n>> \t/* The defaults can vary depending on the original statement type */\n>> \tswitch (stmt->stmt_type)\n>> \t{\n>> @@ -254,6 +269,10 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n>> \t\t\tereport(ERROR,\n>> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> \t\t\t\t\t errmsg(\"must be superuser to create superusers\")));\n>> +\t\tif (!superuser_arg(owner_uid))\n>> +\t\t\tereport(ERROR,\n>> +\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> +\t\t\t\t\t errmsg(\"must be superuser to own superusers\")));\n>> \t}\n>> \telse if (isreplication)\n>> \t{\n> \n> So, we're telling a superuser (which is the only way you could get to\n> this point...) that they aren't allowed to create a superuser role which\n> is owned by a non-superuser... Why?\n\nThe reason is one you won't like very much. Given that roles have the privileges of roles they own (which you don't like), allowing a non-superuser to own a superuser effectively promotes that owner to superuser status. That's a pretty obscure way of making someone a superuser, probably not what was intended, and quite a high-caliber foot-gun.\n\nEven if roles didn't inherit privileges from roles they own, I think it would be odd for a non-superuser to own a superuser. 
The definition of \"ownership\" would have to be extremely restricted to prevent the owner from using their ownership to obtain superuser.\n\n>> @@ -310,6 +329,19 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n>> \t\t\t\t errmsg(\"role \\\"%s\\\" already exists\",\n>> \t\t\t\t\t\tstmt->role)));\n>> \n>> +\t/*\n>> +\t * If the requested authorization is different from the current user,\n>> +\t * temporarily set the current user so that the object(s) will be created\n>> +\t * with the correct ownership.\n>> +\t *\n>> +\t * (The setting will be restored at the end of this routine, or in case of\n>> +\t * error, transaction abort will clean things up.)\n>> +\t */\n>> +\tif (saved_uid != owner_uid)\n>> +\t\tSetUserIdAndSecContext(owner_uid,\n>> +\t\t\t\t\t\t\t save_sec_context | SECURITY_LOCAL_USERID_CHANGE);\n> \n> Err, why is this needed? This looks copied from the CreateSchemaCommand\n> but, unlike with the create schema command, CreateRole doesn't actually\n> allow sub-commands to be run to create other objects in the way that\n> CreateSchemaCommand does.\n\nNot quite. There are still the check_password_hook and RunObjectPostCreateHook() to consider. The check_password_hook might want to validate the validuntil_time parameter against the owner's validuntil time, or some other property of the owner. And the RunObjectPostCreateHook (called via InvokeObjectPostCreateHook(AuthIdRelationId, roleid, 0)) may want the information, too.\n\nI'm not saying these are super strong arguments. If people generally feel that CREATE ROLE ... 
AUTHORIZATION shouldn't call SetUserIdAndSecContext, feel free to argue that.\n\n>> @@ -1675,3 +1714,110 @@ DelRoleMems(const char *rolename, Oid roleid,\n>> +static void\n>> +AlterRoleOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)\n>> +{\n>> +\tForm_pg_authid authForm;\n>> +\n>> +\tAssert(tup->t_tableOid == AuthIdRelationId);\n>> +\tAssert(RelationGetRelid(rel) == AuthIdRelationId);\n>> +\n>> +\tauthForm = (Form_pg_authid) GETSTRUCT(tup);\n>> +\n>> +\t/*\n>> +\t * If the new owner is the same as the existing owner, consider the\n>> +\t * command to have succeeded. This is for dump restoration purposes.\n>> +\t */\n>> +\tif (authForm->rolowner != newOwnerId)\n>> +\t{\n>> +\t\t/* Otherwise, must be owner of the existing object */\n>> +\t\tif (!pg_role_ownercheck(authForm->oid, GetUserId()))\n>> +\t\t\taclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_ROLE,\n>> +\t\t\t\t\t\t NameStr(authForm->rolname));\n>> +\n>> +\t\t/* Must be able to become new owner */\n>> +\t\tcheck_is_member_of_role(GetUserId(), newOwnerId);\n> \n> Feels like we should be saying a bit more about why we check for role\n> membership vs. has_privs_of_role() here. I'm generally of the opinion\n> that membership is the right thing to check here, just feel like we\n> should try to explain more why that's the right thing.\n\nFor orthogonality with how ALTER .. OWNER TO works for everything else? AlterEventTriggerOwner_internal doesn't check this explicitly, but that's because it has already checked that the new owner is superuser, so the check must necessarily succeed. I'm not aware of any ALTER .. OWNER TO commands that don't require this, at least implicitly.\n\nWe could explain this in AlterRoleOwner_internal, as you suggest, but if we need it there, do we need to put the same explanation in functions which handle other object types? 
I don't see why this one function would require the explanation if other equivalent functions do not.\n\n>> +\t\t/*\n>> +\t\t * must have CREATEROLE rights\n>> +\t\t *\n>> +\t\t * NOTE: This is different from most other alter-owner checks in that\n>> +\t\t * the current user is checked for create privileges instead of the\n>> +\t\t * destination owner. This is consistent with the CREATE case for\n>> +\t\t * roles. Because superusers will always have this right, we need no\n>> +\t\t * special case for them.\n>> +\t\t */\n>> +\t\tif (!have_createrole_privilege())\n>> +\t\t\taclcheck_error(ACLCHECK_NO_PRIV, OBJECT_ROLE,\n>> +\t\t\t\t\t\t NameStr(authForm->rolname));\n>> +\n> \n> I would think we'd be trying to get away from the role attribute stuff.\n\nThat's not a bad idea, but I thought it was discussed months ago. The two options were (1) keep using CREATEROLE but change it to be less powerful, and (2) add a new built-in role, say \"pg_create_role\", and have membership in that role be what we use. 
Option (2) was generally viewed less favorably, or that was my sense of people's opinions, on the theory that we'd be better off fixing how CREATEROLE works than having two different ways of doing roughly the same thing.\n\n>> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> \n>> +\t\t\tCREATE ROLE RoleId AUTHORIZATION RoleSpec opt_with OptRoleList\n>> +\t\t\t\t{\n>> +\t\t\t\t\tCreateRoleStmt *n = makeNode(CreateRoleStmt);\n>> +\t\t\t\t\tn->stmt_type = ROLESTMT_ROLE;\n>> +\t\t\t\t\tn->role = $3;\n>> +\t\t\t\t\tn->authrole = $5;\n>> +\t\t\t\t\tn->options = $7;\n>> +\t\t\t\t\t$$ = (Node *)n;\n>> +\t\t\t\t}\n>> \t\t;\n> \n> ...\n> \n>> @@ -1218,6 +1229,10 @@ CreateOptRoleElem:\n>> \t\t\t\t{\n>> \t\t\t\t\t$$ = makeDefElem(\"addroleto\", (Node *)$3, @1);\n>> \t\t\t\t}\n>> +\t\t\t| OWNER RoleSpec\n>> +\t\t\t\t{\n>> +\t\t\t\t\t$$ = makeDefElem(\"owner\", (Node *)$2, @1);\n>> +\t\t\t\t}\n>> \t\t;\n> \n> Not sure why we'd have both AUTHORIZATION and OWNER for CREATE ROLE..?\n> We don't do that for other objects.\n\nGood catch! The \"OWNER RoleSpec\" here was unused. I have removed it from the new patch set.\n\n>> diff --git a/src/test/regress/sql/create_role.sql b/src/test/regress/sql/create_role.sql\n> \n>> @@ -1,6 +1,7 @@\n>> -- ok, superuser can create users with any set of privileges\n>> CREATE ROLE regress_role_super SUPERUSER;\n>> CREATE ROLE regress_role_1 CREATEDB CREATEROLE REPLICATION BYPASSRLS;\n>> +GRANT CREATE ON DATABASE regression TO regress_role_1;\n> \n> Seems odd to add this as part of this patch, or am I missing something?\n\nIt's not used much in patch 0001 where it gets introduced, but gets used more in patch 0002. 
I put it here to reduce the number of diffs the next patch creates.\n\n>> From 1784a5b51d4dbebf99798b5832d92b0f585feb08 Mon Sep 17 00:00:00 2001\n>> From: Mark Dilger <mark.dilger@enterprisedb.com>\n>> Date: Tue, 4 Jan 2022 11:42:27 -0800\n>> Subject: [PATCH v4 3/5] Give role owners control over owned roles\n>> \n>> Create a role ownership hierarchy. The previous commit added owners\n>> to roles. This goes further, making role ownership transitive. If\n>> role A owns role B, and role B owns role C, then role A can act as\n>> the owner of role C. Also, roles A and B can perform any action on\n>> objects belonging to role C that role C could itself perform.\n>> \n>> This is a preparatory patch for changing how CREATEROLE works.\n> \n> This feels odd to have be an independent commit.\n\nReworked the v6-0002 and v6-0003 patches into just one, as discussed at the top of this email.\n\n>> diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c\n> \n>> @@ -363,7 +363,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)\n>> \t\t/*\n>> \t\t * must have create-schema rights\n>> \t\t *\n>> -\t\t * NOTE: This is different from other alter-owner checks in that the\n>> +\t\t * NOTE: This is different from most other alter-owner checks in that the\n>> \t\t * current user is checked for create privileges instead of the\n>> \t\t * destination owner. This is consistent with the CREATE case for\n>> \t\t * schemas. Because superusers will always have this right, we need\n> \n> Not a fan of just dropping 'most' in here, doesn't really help someone\n> understand what is being talked about. I'd suggest adjusting the\n> comment to talk about alter-owner checks for objects which exist in\n> schemas, as that's really what is being referred to.\n\nYeah, that's a better approach. This next patch set changes the comment in both AlterSchemaOwner_internal and AlterRoleOwner_internal to make that clear.\n\n...<snip>...\n\n> Whoah, really? 
No, I don't agree with this, it's throwing away the\n> entire concept around inheritance of role rights and how you can have\n> roles which you can get the privileges of by doing a SET ROLE to them\n> but you don't automatically have those rights.\n\nI didn't change any of this for the next patch set, not because I'm ignoring you, but because we're still arguing out what the right behavior should be. Whatever we come up with, I think it should allow the use case that Robert has been talking about. Doing that and also doing what you are talking about might be hard, but I'm still hoping to find some solution.\n\nRecall that upthread, months ago, we discussed that it is abnormal for any role to be a member of a login role. You can think of \"login role\" as a synonym for \"user\", and \"non-login role\" as a synonym for \"group\", and that language makes it easier to think about how weird it is for users to be members of other users.\n\nIt's perfectly sensible to have users own users, but not for users to be members of users. If not for that, I'd be in favor of what you suggest, excepting that I'd accommodate Robert's requirements by having the owner of a role have ADMIN on that role by default, with grammar for requesting the alternative. Maybe there is something I'm forgetting to consider just now, but I'd think that would handle Robert's \"tenant\" type argument while also making it easy to operate the way that you want. But, again, it does require having users be members of users, something which was rejected in the discussion months ago.\n\n>> +/*\n>> + * Is owner a direct or indirect owner of the role, not considering\n>> + * superuserness?\n>> + */\n>> +bool\n>> +is_owner_of_role_nosuper(Oid owner, Oid role)\n>> +{\n>> +\treturn list_member_oid(roles_is_owned_by(role), owner);\n>> +}\n> \n> \n> Surely if you're a member of a role which owns another role, you should\n> be considered to be an owner of that role too..? 
Just checking if the\n> current role is a member of the roles which directly own the specified\n> role misses that case.\n> \n> That is:\n> \n> CREATE ROLE r1;\n> CREATE ROLE r2;\n> \n> GRANT r2 to r1;\n> \n> CREATE ROLE r3 AUTHORIZATION r2;\n> \n> Surely, r1 is to be considered an owner of r3 in this case, but the\n> above check wouldn't consider that to be the case- it would only return\n> true if the current role is r2.\n> \n> We do need some kind of direct membership check in the list of owners to\n> avoid creating loops, so maybe this function is kept as that and the\n> pg_role_ownership() check is changed to address the above case, but I\n> don't think we should just ignore role membership when it comes to role\n> ownership- we don't do that for any other kind of ownership check.\n\nI like this line of reasoning, and it appears to be an argument in your favor where the larger question is concerned. If role ownership is transitive, and role membership is transitive, it gets weird trying to work out larger relationship chains.\n\nThis deserves more attention.\n\n>> Subject: [PATCH v4 4/5] Restrict power granted via CREATEROLE.\n> \n> I would think this would be done independently of the other patches and\n> probably be first.\n\nThe way I'm trying to fix CREATEROLE is first by introducing the concept of role owners, then second by restricting what roles can do based on whether they own a target role. 
I don't see how I can reverse the order.\n\n>> diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml\n> \n>> @@ -70,18 +70,18 @@ ALTER ROLE { <replaceable class=\"parameter\">role_specification</replaceable> | A\n>> <link linkend=\"sql-revoke\"><command>REVOKE</command></link> for that.)\n>> Attributes not mentioned in the command retain their previous settings.\n>> Database superusers can change any of these settings for any role.\n>> - Roles having <literal>CREATEROLE</literal> privilege can change any of these\n>> - settings except <literal>SUPERUSER</literal>, <literal>REPLICATION</literal>,\n>> - and <literal>BYPASSRLS</literal>; but only for non-superuser and\n>> - non-replication roles.\n>> - Ordinary roles can only change their own password.\n>> + Role owners can change any of these settings on roles they own except\n>> + <literal>SUPERUSER</literal>, <literal>REPLICATION</literal>, and\n>> + <literal>BYPASSRLS</literal>; but only for non-superuser and non-replication\n>> + roles, and only if the role owner does not alter the target role to have a\n>> + privilege which the role owner itself lacks. Ordinary roles can only change\n>> + their own password.\n>> </para>\n> \n> Having contemplated this a bit more, I don't like it, and it's not how\n> things work when it comes to regular privileges.\n> \n> Consider that I can currently GRANT someone UPDATE privileges on an\n> object, but they can't GRANT that privilege to someone else unless I\n> explicitly allow it. The same could certainly be said for roles-\n> perhaps I want to allow someone the privilege to create non-login roles,\n> but I don't want them to be able to create new login roles, even if they\n> themselves have LOGIN.\n\nThis comment conflates privileges like LOGIN for which there isn't any \"with grant option\" logic with privileges that do. 
Granting someone UPDATE privileges on a relation will be tracked in an aclitem including whether the \"with grant option\" bit is set. Nothing like that will exist for LOGIN. I'm not dead-set against having that functionality for the privileges that currently lack it, but we'd have to do so in a way that doesn't gratuitously break backward compatibility, and how to do so has not been discussed.\n\n> As another point, I might want to have an 'admin' role that I want\n> admins to SET ROLE to before they go creating other roles, because I\n> don't want them to be creating roles as their regular user and so that\n> those other roles are owned by the 'admin' role, but I don't want that\n> role to have the 'login' attribute.\n\nSame problem. We don't have aclitem bits for this.\n\n> In other words, we should really consider what role attributes a given\n> role has to be independent of what role attributes that role is allowed\n> to set on roles they create. I appreciate that \"just whatever the\n> current role has\" is simpler and less work but also will be difficult to\n> walk back from once it's in the wild.\n\nI don't feel there is any fundamental disagreement here, except perhaps whether it needs to be done as part of this patch, vs. implemented in a future development cycle. We don't currently have any syntax for \"CREATE ROLE bob LOGIN WITH GRANT OPTION\". I can see some advantages in doing it all in one go, but also some advantage in being incremental. More discussion is needed here.\n\n>> @@ -1457,7 +1449,7 @@ AddRoleMems(const char *rolename, Oid roleid,\n> \n>> \t/*\n>> -\t * Check permissions: must have createrole or admin option on the role to\n>> +\t * Check permissions: must be owner or have admin option on the role to\n>> \t * be changed. 
To mess with a superuser role, you gotta be superuser.\n>> \t */\n>> \tif (superuser_arg(roleid))\n> \n> ...\n> \n>> @@ -1467,9 +1459,9 @@ AddRoleMems(const char *rolename, Oid roleid,\n>> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> \t\t\t\t\t errmsg(\"must be superuser to alter superusers\")));\n>> \t}\n>> -\telse\n>> +\telse if (!superuser())\n>> \t{\n>> -\t\tif (!have_createrole_privilege() &&\n>> +\t\tif (!pg_role_ownercheck(roleid, grantorId) &&\n>> \t\t\t!is_admin_of_role(grantorId, roleid))\n>> \t\t\tereport(ERROR,\n>> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \n> I'm not entirely sure about including owners here though I'm not\n> completely against it either. This conflation of what the 'admin'\n> privileges on a role means vs. the 'ownership' of a role is part of what\n> I dislike about having two distinct systems for saying who is allowed to\n> GRANT one role to another.\n> \n> Also, if we're going to always consider owners to be admins of roles\n> they own, why not push that into is_admin_of_role()?\n\nUnchanged in this patch set, but worth further discussion and evaluation.\n\n>> Subject: [PATCH v4 5/5] Remove grantor field from pg_auth_members\n\n...<snip>...\n\n> If we're going to do this, it should also be done independently of the\n> role ownership stuff too.\n\nI've withdrawn 0005 from this patch set, and we can come back to it separately.\n\n> Thanks,\n> \n> Stephen\n\nThanks for the review! I hope we can keep pushing this forward. 
Again, no offense is intended in having not addressed all your concerns in the v7 patch set:\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 Jan 2022 15:17:47 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "> On Jan 25, 2022, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I agree that CREATEROLE is overpowered and that the goal of this should\n> be to provide a way for roles to be created and dropped that doesn't\n> give the user who has that power everything that CREATEROLE currently\n> does.\n\nI'm attaching a patch that attempts to fix CREATEROLE without any connection to role ownership.\n\n> The point I was making is that the concept of role ownership\n> isn't intrinsically linked to that and is, therefore, as you say, gravy.\n\nI agree, they aren't intrinsically linked, though the solution to one might interact in some ways with the solution to the other.\n\n> That isn't to say that I'm entirely against the role ownership idea but\n> I'd want it to be focused on the goal of providing ways of creating and\n> dropping users and otherwise performing that kind of administration and\n> that doesn't require the specific change to make owners be members of\n> all roles they own and automatically have all privileges of those roles\n> all the time.\n\nThe attached WIP patch attempts to solve most of the CREATEROLE problems but not the problem of which role who can drop which other role. That will likely require an ownership concept.\n\nThe main idea here is that having CREATEROLE doesn't give you ADMIN on roles, nor on role attributes. For role attributes, the syntax has been extended. 
An excerpt from the patch's regression test illustrates some of that concept:\n\n-- ok, superuser can create a role that can create login replication users, but\n-- cannot itself login, nor perform replication\nCREATE ROLE regress_role_repladmin\n CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n\n-- ok, superuser can create a role with CREATEROLE but restrict give-aways\nCREATE ROLE regress_role_minoradmin\n NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n CREATEROLE WITHOUT ADMIN OPTION\n NOCREATEDB WITHOUT ADMIN OPTION\n NOLOGIN WITHOUT ADMIN OPTION\n NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n NOBYPASSRLS -- WITHOUT ADMIN OPTION is implied\n NOINHERIT WITHOUT ADMIN OPTION\n CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n VALID ALWAYS WITHOUT ADMIN OPTION\n PASSWORD NULL WITHOUT ADMIN OPTION;\n\n-- fail, having CREATEROLE is not enough to create roles in privileged roles\nSET SESSION AUTHORIZATION regress_role_minoradmin;\nCREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\nERROR: must have admin option on role \"pg_read_all_data\"\n\n-- fail, cannot change attributes without ADMIN for them\nSET SESSION AUTHORIZATION regress_role_minoradmin;\nALTER ROLE regress_role_login LOGIN;\nERROR: must have admin on login to change login attribute\nALTER ROLE regress_role_login NOLOGIN;\nERROR: must have admin on login to change login attribute\n\n\nWhether \"WITH ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied hinges on whether the role is given CREATEROLE. That hackery is necessary to preserve backwards compatibility. 
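(The defaulting rule the regression-test comments above illustrate can be stated as a small executable model — hypothetical Python for illustration only, not the patch's actual C logic in src/backend/commands/user.c: when an attribute's WITH/WITHOUT ADMIN OPTION clause is unspecified, it defaults to WITHOUT, unless the new role is given CREATEROLE, in which case everything except the superuser-only attributes SUPERUSER, REPLICATION and BYPASSRLS defaults to WITH:

```python
# Hypothetical model of the attribute-default rule in the WIP patch;
# attribute names mirror the adm* columns the patch adds to pg_authid.

SUPERUSER_ONLY = {"superuser", "replication", "bypassrls"}
ATTRIBUTES = SUPERUSER_ONLY | {"createrole", "createdb", "login"}

def implied_admin_option(attribute, role_has_createrole):
    """Default for an attribute's ADMIN OPTION when CREATE ROLE does not
    say WITH/WITHOUT ADMIN OPTION explicitly for that attribute."""
    if not role_has_createrole:
        return False            # everything defaults to WITHOUT
    # backwards compatibility: CREATEROLE historically let its holder
    # hand out everything except the superuser-only attributes
    return attribute not in SUPERUSER_ONLY
```

So a plain `CREATE ROLE admin CREATEROLE` keeps the historical power to create login/createdb/createrole users, while a role created without CREATEROLE gets no admin options at all by default.)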
If we don't care about compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\" implied for all attributes when not specified.\n\nI'd appreciate feedback on the direction this patch is going.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 29 Jan 2022 21:58:38 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 29, 2022 at 09:58:38PM -0800, Mark Dilger wrote:\n> > On Jan 25, 2022, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I agree that CREATEROLE is overpowered and that the goal of this should\n> > be to provide a way for roles to be created and dropped that doesn't\n> > give the user who has that power everything that CREATEROLE currently\n> > does.\n> \n> I'm attaching a patch that attempts to fix CREATEROLE without any\n> connection to role ownership.\n\nSounds like a useful way forward.\n \n> > The point I was making is that the concept of role ownership\n> > isn't intrinsically linked to that and is, therefore, as you say, gravy.\n> \n> I agree, they aren't intrinsically linked, though the solution to one\n> might interact in some ways with the solution to the other.\n> \n> > That isn't to say that I'm entirely against the role ownership idea but\n> > I'd want it to be focused on the goal of providing ways of creating and\n> > dropping users and otherwise performing that kind of administration and\n> > that doesn't require the specific change to make owners be members of\n> > all roles they own and automatically have all privileges of those roles\n> > all the time.\n> \n> The attached WIP patch attempts to solve most of the CREATEROLE\n> problems but not the problem of which role who can drop which other\n> role. 
That will likely require an ownership concept.\n> \n> The main idea here is that having CREATEROLE doesn't give you ADMIN on\n> roles, nor on role attributes. For role attributes, the syntax has\n> been extended. An excerpt from the patch's regression test\n> illustrates some of that concept:\n> \n> -- ok, superuser can create a role that can create login replication users, but\n> -- cannot itself login, nor perform replication\n> CREATE ROLE regress_role_repladmin\n> CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n> NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n> NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n> NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n> NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n> \n> -- ok, superuser can create a role with CREATEROLE but restrict give-aways\n> CREATE ROLE regress_role_minoradmin\n> NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n> CREATEROLE WITHOUT ADMIN OPTION\n> NOCREATEDB WITHOUT ADMIN OPTION\n> NOLOGIN WITHOUT ADMIN OPTION\n> NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n> NOBYPASSRLS -- WITHOUT ADMIN OPTION is implied\n> NOINHERIT WITHOUT ADMIN OPTION\n> CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n> VALID ALWAYS WITHOUT ADMIN OPTION\n> PASSWORD NULL WITHOUT ADMIN OPTION;\n> \n> -- fail, having CREATEROLE is not enough to create roles in privileged roles\n> SET SESSION AUTHORIZATION regress_role_minoradmin;\n> CREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\n> ERROR: must have admin option on role \"pg_read_all_data\"\n\nGreat.\n \n> -- fail, cannot change attributes without ADMIN for them\n> SET SESSION AUTHORIZATION regress_role_minoradmin;\n> ALTER ROLE regress_role_login LOGIN;\n> ERROR: must have admin on login to change login attribute\n>\n> ALTER ROLE regress_role_login NOLOGIN;\n> ERROR: must have admin on login to change login attribute\n> \n> Whether \"WITH 
ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied\n> hinges on whether the role is given CREATEROLE. That hackery is\n> necessary to preserve backwards compatibility. If we don't care about\n> compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\"\n> implied for all attributes when not specified.\n> \n> I'd appreciate feedback on the direction this patch is going.\n \nOne thing I noticed (and which will likely make DBAs grumpy) is that it\nseems being able to create users (as opposed to non-login roles/groups)\ndepends on when you get the CREATEROLE attribute (on role creation or\nlater), viz:\n\npostgres=# CREATE USER admin CREATEROLE;\nCREATE ROLE\npostgres=# SET ROLE admin;\nSET\npostgres=> CREATE USER testuser; -- this works\nCREATE ROLE\npostgres=> RESET ROLE;\nRESET\npostgres=# CREATE USER admin2;\nCREATE ROLE\npostgres=# ALTER ROLE admin2 CREATEROLE; -- we get CREATEROLE after the fact\nALTER ROLE\npostgres=# SET ROLE admin2;\nSET\npostgres=> CREATE USER testuser2; -- bam\nERROR: must have grant option on LOGIN privilege to create login users\npostgres=# SELECT rolname, admcreaterole, admcanlogin FROM pg_authid\nWHERE rolname LIKE 'admin%';\n rolname | admcreaterole | admcanlogin \n---------+---------------+-------------\n admin | t | t\n admin2 | f | f\n(2 rows)\n\nIs that intentional? If it is, I think it would be nice if this could be\nchanged, unless I'm missing some serious security concerns or so. \n\nSome light review of the patch (I haven't read all the previous ones, so\nplease excuse me if I rehash old discussions):\n\n> From 82d235b39b32ca0cd0b94d47a54ee6806645a365 Mon Sep 17 00:00:00 2001\n> From: Mark Dilger <mark.dilger@enterprisedb.com>\n> Date: Fri, 28 Jan 2022 07:57:57 -0800\n> Subject: [PATCH v8] Adding admin options for role attributes\n> \n> When creating roles, attributes such as BYPASSRLS can be optionally\n> specified WITH ADMIN OPTION or WITHOUT ADMIN OPTION. 
If these\n> optional clauses are unspecified, they all default to WITHOUT\n> unless the role being created is given CREATEROLE, in which case\n> they default to WITHOUT for SUPERUSER, REPLICATION, and BYPASSRLS\n> and true for all others. This preserves backwards compatible\n> behavior.\n> \n> The CREATEROLE attribute no longer makes up for lacking the ADMIN\n> option on a role. The creator of a role only has the ADMIN-like\n> right to grant other roles into the new role during the creation\n> statement itself. After that, the creator may only do so if the\n> creator has ADMIN on the role. Note that creators may add\n> themselves to the list of ADMINs on the new role during creation\n> time.\n> \n> SUPERUSER can still only be granted by superusers.\n> ---\n> doc/src/sgml/ref/create_role.sgml | 50 ++--\n> src/backend/catalog/aclchk.c | 179 ++++++++++++--\n> src/backend/commands/user.c | 278 +++++++++++++++++-----\n> src/backend/parser/gram.y | 161 ++++++++++---\n> src/include/catalog/pg_authid.dat | 52 +++-\n> src/include/catalog/pg_authid.h | 10 +\n> src/include/nodes/nodes.h | 1 +\n> src/include/nodes/parsenodes.h | 11 +-\n> src/include/utils/acl.h | 12 +\n> src/test/regress/expected/create_role.out | 188 ++++++++++++++-\n> src/test/regress/expected/privileges.out | 4 +\n> src/test/regress/sql/create_role.sql | 153 +++++++++++-\n> src/test/regress/sql/privileges.sql | 3 +\n> src/tools/pgindent/typedefs.list | 1 +\n> 14 files changed, 936 insertions(+), 167 deletions(-)\n> \n> diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml\n> index b6a4ea1f72..7163779e0a 100644\n> --- a/doc/src/sgml/ref/create_role.sgml\n> +++ b/doc/src/sgml/ref/create_role.sgml\n> @@ -26,15 +26,22 @@ CREATE ROLE <replaceable class=\"parameter\">name</replaceable> [ [ WITH ] <replac\n> <phrase>where <replaceable class=\"parameter\">option</replaceable> can be:</phrase>\n> \n> SUPERUSER | NOSUPERUSER\n> - | CREATEDB | NOCREATEDB\n> - | CREATEROLE | NOCREATEROLE\n> 
- | INHERIT | NOINHERIT\n> - | LOGIN | NOLOGIN\n> - | REPLICATION | NOREPLICATION\n> - | BYPASSRLS | NOBYPASSRLS\n> - | CONNECTION LIMIT <replaceable class=\"parameter\">connlimit</replaceable>\n> - | [ ENCRYPTED ] PASSWORD '<replaceable class=\"parameter\">password</replaceable>' | PASSWORD NULL\n> - | VALID UNTIL '<replaceable class=\"parameter\">timestamp</replaceable>'\n> + | INHERIT [ { WITH | WITHOUT } GRANT OPTION ]\n> +\t| NOINHERIT [ { WITH | WITHOUT } GRANT OPTION ]\n\nSpaces vs. tabs here...\n\n> + | CREATEDB [ { WITH | WITHOUT } GRANT OPTION ]\n> + | NOCREATEDB [ { WITH | WITHOUT } GRANT OPTION ]\n> + | CREATEROLE [ { WITH | WITHOUT } GRANT OPTION ]\n> + | NOCREATEROLE [ { WITH | WITHOUT } GRANT OPTION ]\n> + | LOGIN [ { WITH | WITHOUT } GRANT OPTION ]\n> + | NOLOGIN [ { WITH | WITHOUT } GRANT OPTION ]\n> + | REPLICATION [ { WITH | WITHOUT } GRANT OPTION ]\n> + | NOREPLICATION [ { WITH | WITHOUT } GRANT OPTION ]\n> + | BYPASSRLS [ { WITH | WITHOUT } GRANT OPTION ]\n> + | NOBYPASSRLS [ { WITH | WITHOUT } GRANT OPTION ]\n> + | CONNECTION LIMIT [ <replaceable class=\"parameter\">connlimit</replaceable> | NONE ] [ { WITH | WITHOUT } GRANT OPTION ]\n> + | [ ENCRYPTED ] PASSWORD '<replaceable class=\"parameter\">password</replaceable>' [ { WITH | WITHOUT } GRANT OPTION ]\n> +\t| PASSWORD NULL [ { WITH | WITHOUT } GRANT OPTION ]\n\n... and here, is that intentional?\n\n> @@ -356,6 +363,18 @@ in sync when changing the above synopsis!\n> <link linkend=\"sql-revoke\"><command>REVOKE</command></link>.\n> </para>\n> \n> + <para>\n> + Some parameters allow the <literal>WITH ADMIN OPTION</literal> or\n> + <literal>WITHOUT ADMIN OPTION</literal> clause to be specified. For roles\n> + with the <literal>CREATEROLE</literal> attribute, these clauses govern\n> + whether new roles may be created with the attribute. 
If not given, for\n> + reasons of backwards compatibility, <literal>WITHOUT ADMIN OPTION</literal>\n> + is the default for <literal>REPLICATION</literal> and\n> + <literal>BYPASSRLS</literal>, but <literal>WITH ADMIN OPTION</literal> is\n> + the default for <literal>CREATEDB</literal>, <literal>CREATEROLE</literal>,\n> + and <literal>LOGIN</literal>.\n> + </para>\n> +\n> <para>\n> The <literal>VALID UNTIL</literal> clause defines an expiration time for a\n> password only, not for the role per se. In\n> diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\n> index 1dd03a8e51..c66f545f36 100644\n> --- a/src/backend/catalog/aclchk.c\n> +++ b/src/backend/catalog/aclchk.c\n> @@ -5430,6 +5430,91 @@ pg_statistics_object_ownercheck(Oid stat_oid, Oid roleid)\n> \treturn has_privs_of_role(roleid, ownerId);\n> }\n> \n> +typedef enum ROLPRIV\n\nI think typdefs usually go at the top of the file, not at line 5441...\n\n> +{\n> +\tCREATEROLE,\n> +\tCREATEDB,\n> +\tCANLOGIN,\n> +\tREPLICATION,\n> +\tBYPASSRLS,\n> +\tINHERIT,\n> +\tCONNLIMIT,\n> +\tVALIDUNTIL,\n> +\tPASSWORD\n> +} ROLPRIV;\n> +\n\n[...]\n\n> /*\n> * Check whether specified role has CREATEROLE privilege (or is a superuser)\n> *\n\nI feel this function comment needs revision; we now have a dozen similar\nfunctions that all do the same, but only the first one\n(has_createrole_privilege) is being explained.\n\nI guess the comment overall is still applicable, so as a minimum maybe\njust change the CREATEROLE above for a generic \"has some privilege\", and\nadd a space in order to make it clear this applies to all of the\nfollowing functions.\n\nHrm, maybe also mention why there may_admin_*_privilege for all\nprivileges, but has_*_privilege only for some.\n\n> diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c\n> index f9d3c1246b..501613a840 100644\n> --- a/src/backend/commands/user.c\n> +++ b/src/backend/commands/user.c\n> @@ -255,27 +305,36 @@ CreateRole(ParseState *pstate, 
CreateRoleStmt *stmt)\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"must be superuser to create superusers\")));\n> \t}\n> -\telse if (isreplication)\n> -\t{\n> -\t\tif (!superuser())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"must be superuser to create replication users\")));\n> -\t}\n> -\telse if (bypassrls)\n> -\t{\n> -\t\tif (!superuser())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"must be superuser to create bypassrls users\")));\n> -\t}\n> -\telse\n> -\t{\n> -\t\tif (!have_createrole_privilege())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"permission denied to create role\")));\n> -\t}\n> +\n> +\tif (createrole && !may_admin_createrole_privilege(GetUserId()))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t errmsg(\"must have grant option on CREATEROLE privilege to create createrole users\")));\n\nShouldn't this (and the following) be \"must have admin option on\nCREATEROLE\"?\n\n> @@ -311,7 +370,7 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n> \t\t\t\t\t\tstmt->role)));\n> \n> \t/* Convert validuntil to internal form */\n> -\tif (validUntil)\n> +\tif (validUntil && strcmp(validUntil, \"always\") != 0)\n\nThis (there are other similar hunks further down) looks like an\nindependent patch/feature?\n\n> @@ -637,32 +727,57 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> \t\t\t\t\t errmsg(\"must be superuser to alter superuser roles or change superuser attribute\")));\n> \t}\n> -\telse if (authform->rolreplication || disreplication)\n> -\t{\n> -\t\tif (!superuser())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"must be superuser to alter replication roles or change replication 
attribute\")));\n> -\t}\n> -\telse if (dbypassRLS)\n> -\t{\n> -\t\tif (!superuser())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"must be superuser to change bypassrls attribute\")));\n> -\t}\n> -\telse if (!have_createrole_privilege())\n> -\t{\n> -\t\t/* check the rest */\n> -\t\tif (dinherit || dcreaterole || dcreatedb || dcanlogin || dconnlimit ||\n> -\t\t\tdrolemembers || dvalidUntil || !dpassword || roleid != GetUserId())\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"permission denied\")));\n> -\t}\n> +\n> +\t/* To mess with replication roles, must have admin on REPLICATION */\n> +\tif ((authform->rolreplication || disreplication) &&\n> +\t\t!may_admin_replication_privilege(GetUserId()))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t errmsg(\"must have admin on replication to alter replication roles or change replication attribute\")));\n\n\"have admin\" sounds a bit weird, but I understand the error message is\ntoo long already to spell out \"must have admin option\"? Or am I mistaken\nand \"admin\" is what it's actually called (same for the ones below)?\n\nAlso, I think those role options are usually capitalized like\nREPLICATION in other error messages.\n\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index b5966712ce..7503d3ead6 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -1131,67 +1140,111 @@ AlterOptRoleElem:\n[...]\n\n> +\t\t\t| VALID ALWAYS opt_admin_spec\n> +\t\t\t\t{\n> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n> +\t\t\t\t\tn->elem = makeDefElem(\"validUntil\", (Node *)makeString(\"always\"), @1);\n> +\t\t\t\t\tn->admin_spec = $3;\n> +\t\t\t\t\t$$ = (Node *)n;\n\nThis one is from another patch as well I think.\n\n> \t\t\t\t}\n> \t\t/*\tSupported but not documented for roles, for use by ALTER GROUP. 
*/\n> -\t\t\t| USER role_list\n> +\t\t\t| USER role_list opt_admin_spec\n> \t\t\t\t{\n> -\t\t\t\t\t$$ = makeDefElem(\"rolemembers\", (Node *)$2, @1);\n> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n> +\t\t\t\t\tn->elem = makeDefElem(\"rolemembers\", (Node *)$2, @1);\n> +\t\t\t\t\tn->admin_spec = $3;\n> +\t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t\t| IDENT\n> +\t\t\t| IDENT opt_admin_spec\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * We handle identifiers that aren't parser keywords with\n> \t\t\t\t\t * the following special-case codes, to avoid bloating the\n> \t\t\t\t\t * size of the main parser.\n> \t\t\t\t\t */\n> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Record whether the user specified WITH GRANT OPTION.\n\nWITH ADMIN OPTION rather?\n\n> +\t\t\t\t\t * Note that for some privileges this is always implied,\n> +\t\t\t\t\t * such as SUPERUSER, but we don't reflect that here.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tn->admin_spec = $2;\n> +\n\n> diff --git a/src/include/catalog/pg_authid.dat b/src/include/catalog/pg_authid.dat\n> index 6c28119fa1..4829a6dbd2 100644\n> --- a/src/include/catalog/pg_authid.dat\n> +++ b/src/include/catalog/pg_authid.dat\n> @@ -22,67 +22,93 @@\n> { oid => '10', oid_symbol => 'BOOTSTRAP_SUPERUSERID',\n> rolname => 'POSTGRES', rolsuper => 't', rolinherit => 't',\n> rolcreaterole => 't', rolcreatedb => 't', rolcanlogin => 't',\n> - rolreplication => 't', rolbypassrls => 't', rolconnlimit => '-1',\n> + rolreplication => 't', rolbypassrls => 't', adminherit => 't', admcreaterole => 't',\n> + admcreatedb => 't', admcanlogin => 't', admreplication => 't', admbypassrls => 't',\n> + admconnlimit => 't', admpassword => 't', admvaliduntil => 't', rolconnlimit => '-1',\n> rolpassword => '_null_', rolvaliduntil => '_null_' },\n\nThose sure are a couple of new columns in pg_authid, but oh well...\n\n> diff --git a/src/include/catalog/pg_authid.h b/src/include/catalog/pg_authid.h\n> index 4b65e39a1f..4acdcaa685 
100644\n> --- a/src/include/catalog/pg_authid.h\n> +++ b/src/include/catalog/pg_authid.h\n> @@ -39,6 +39,16 @@ CATALOG(pg_authid,1260,AuthIdRelationId) BKI_SHARED_RELATION BKI_ROWTYPE_OID(284\n> \tbool\t\trolcanlogin;\t/* allowed to log in as session user? */\n> \tbool\t\trolreplication; /* role used for streaming replication */\n> \tbool\t\trolbypassrls;\t/* bypasses row-level security? */\n> +\n> +\tbool\t\tadminherit;\t\t/* allowed to administer inherit? */\n> +\tbool\t\tadmcreaterole;\t/* allowed to administer createrole? */\n> +\tbool\t\tadmcreatedb;\t/* allowed to administer createdb?? */\n> +\tbool\t\tadmcanlogin;\t/* allowed to administer login? */\n> +\tbool\t\tadmreplication; /* allowed to administer replication? */\n> +\tbool\t\tadmbypassrls;\t/* allowed to administer bypassesrls? */\n> +\tbool\t\tadmconnlimit;\t/* allowed to administer connlimit? */\n> +\tbool\t\tadmpassword;\t/* allowed to administer password? */\n> +\tbool\t\tadmvaliduntil;\t/* allowed to administer validuntil? */\n> \tint32\t\trolconnlimit;\t/* max connections allowed (-1=no limit) */\n\nIt's cosmetic, but the space between rolbypassrls and adminherit is\nmaybe not needed, and I'd put rolconnlimit first (even though it has a\ndifferent type).\n\n\nMichael\n\n-- \nMichael Banck\nTeamleiter PostgreSQL-Team\nProjektleiter\nTel.: +49 2166 9901-171\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nGesch�ftsf�hrung: Dr. 
Michael Meskes, Geoff Richardson, Peter Lilley\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Sun, 30 Jan 2022 23:38:10 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 30, 2022, at 2:38 PM, Michael Banck <michael.banck@credativ.de> wrote:\n> \n> Hi,\n\nYour review is greatly appreciated!\n\n>> The attached WIP patch attempts to solve most of the CREATEROLE\n\nI'm mostly looking for whether the general approach in this Work In Progress patch is acceptable, so I was a bit sloppy with whitespace and such....\n\n> One thing I noticed (and which will likely make DBAs grumpy) is that it\n> seems being able to create users (as opposed to non-login roles/groups)\n> depends on when you get the CREATEROLE attribute (on role creation or\n> later), viz:\n> \n> postgres=# CREATE USER admin CREATEROLE;\n> CREATE ROLE\n> postgres=# SET ROLE admin;\n> SET\n> postgres=> CREATE USER testuser; -- this works\n> CREATE ROLE\n> postgres=> RESET ROLE;\n> RESET\n> postgres=# CREATE USER admin2;\n> CREATE ROLE\n> postgres=# ALTER ROLE admin2 CREATEROLE; -- we get CREATEROLE after the fact\n> ALTER ROLE\n> postgres=# SET ROLE admin2;\n> SET\n> postgres=> CREATE USER testuser2; -- bam\n> ERROR: must have grant option on LOGIN privilege to create login users\n> postgres=# SELECT rolname, admcreaterole, admcanlogin FROM pg_authid\n> WHERE rolname LIKE 'admin%';\n> rolname | admcreaterole | admcanlogin \n> ---------+---------------+-------------\n> admin | t | t\n> admin2 | f | f\n> (2 rows)\n> \n> Is that intentional? If it is, I think it would be nice if this could be\n> changed, unless I'm missing some serious security concerns or so. \n\nIt's intentional, but part of what I wanted review comments about. 
The issue is that historically:\n\n CREATE USER michael CREATEROLE\n\nmeant that you could go on to do things like create users with LOGIN privilege. I could take that away, which would be a backwards compatibility break, or I can do the weird thing this patch does. Or I could have your\n\n ALTER ROLE admin2 CREATEROLE;\n\nalso grant the other privileges like LOGIN unless you explicitly say otherwise with a bunch of explicit WITHOUT ADMIN OPTION clauses. Finding out which of those this is preferred was a big part of why I put this up for review. Thanks for calling it out in under 24 hours!\n\n> Some light review of the patch (I haven't read all the previous ones, so\n> please excuse me if I rehash old discussions):\n\nNot a problem.\n\n> Spaces vs. tabs here...\n> \n> ... and here, is that intentional?\n\n\n> I think typdefs usually go at the top of the file, not at line 5441...\n\n> I feel this function comment needs revision...\n\n> Hrm, maybe also mention ...\n\nAll good comments, but I'm not doing code cleanup on this WIP patch just yet. Forgive me.\n\n> Shouldn't this (and the following) be \"must have admin option on\n> CREATEROLE\"?\n\nYes, there may be other places where I failed to replace the verbiage \"grant option\" with \"admin option\". Earlier drafts of the patch were using that language. I wouldn't mind review comments on which language people thinks is better.\n\n> \n>> @@ -311,7 +370,7 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)\n>> \t\t\t\t\t\tstmt->role)));\n>> \n>> \t/* Convert validuntil to internal form */\n>> -\tif (validUntil)\n>> +\tif (validUntil && strcmp(validUntil, \"always\") != 0)\n> \n> This (there are other similar hunks further down) looks like an\n> independent patch/feature?\n\nPart of the problem with the grammar introduced in this patch is that you are not normally required to mention attributes like VALID UNTIL, but if you want to change whether the created role gets WITH ADMIN OPTION, you have to. 
That leaves the problem of what to do if you *only* want to specify the ADMIN part. The grammar needs some sort of \"dummy\" value that intentionally has no effect, but sets up for the WITH/WITHOUT ADMIN OPTION clause. I think I left a few bits of cruft around like that. But what I'd really like to know is if people think this sort of thing is even headed in the right direction? Are there problems with SQL spec compliance? Does it just feel icky? I don't have any pride-of-ownership in the grammar this WIP patch introduces. I just needed something to put out there for people to attack/improve.\n\n>> +\n>> +\t/* To mess with replication roles, must have admin on REPLICATION */\n>> +\tif ((authform->rolreplication || disreplication) &&\n>> +\t\t!may_admin_replication_privilege(GetUserId()))\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> +\t\t\t\t errmsg(\"must have admin on replication to alter replication roles or change replication attribute\")));\n> \n> \"have admin\" sounds a bit weird, but I understand the error message is\n> too long already to spell out \"must have admin option\"? Or am I mistaken\n> and \"admin\" is what it's actually called (same for the ones below)?\n\nIf it is the officially correct language, I arrived at it by accident. I didn't take any time to wordsmith those error messages. Improvements welcome!\n\n> Also, I think those role options are usually capitalized like\n> REPLICATION in other error messages.\n\nYeah, I noticed some amount of inconsistency there. For a brief time I was trying to make them all the same, but got a bit confused on what would be correct, and didn't waste the time. The sort of thing I'm thinking about is the pre-existing message text, \"must be superuser to change bypassrls attribute\". Note that neither \"superuser\" nor \"bypassrls\" are capitalized. 
If people like where this patch is going, I'll no doubt need to clean it up.\n\n>> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n>> index b5966712ce..7503d3ead6 100644\n>> --- a/src/backend/parser/gram.y\n>> +++ b/src/backend/parser/gram.y\n>> @@ -1131,67 +1140,111 @@ AlterOptRoleElem:\n> [...]\n> \n>> +\t\t\t| VALID ALWAYS opt_admin_spec\n>> +\t\t\t\t{\n>> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n>> +\t\t\t\t\tn->elem = makeDefElem(\"validUntil\", (Node *)makeString(\"always\"), @1);\n>> +\t\t\t\t\tn->admin_spec = $3;\n>> +\t\t\t\t\t$$ = (Node *)n;\n> \n> This one is from another patch as well I think.\n\nThat was an attempt at a \"dummy\" type value. I agree it probably doesn't belong.\n\n>> \t\t\t\t}\n>> \t\t/*\tSupported but not documented for roles, for use by ALTER GROUP. */\n>> -\t\t\t| USER role_list\n>> +\t\t\t| USER role_list opt_admin_spec\n>> \t\t\t\t{\n>> -\t\t\t\t\t$$ = makeDefElem(\"rolemembers\", (Node *)$2, @1);\n>> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n>> +\t\t\t\t\tn->elem = makeDefElem(\"rolemembers\", (Node *)$2, @1);\n>> +\t\t\t\t\tn->admin_spec = $3;\n>> +\t\t\t\t\t$$ = (Node *)n;\n>> \t\t\t\t}\n>> -\t\t\t| IDENT\n>> +\t\t\t| IDENT opt_admin_spec\n>> \t\t\t\t{\n>> \t\t\t\t\t/*\n>> \t\t\t\t\t * We handle identifiers that aren't parser keywords with\n>> \t\t\t\t\t * the following special-case codes, to avoid bloating the\n>> \t\t\t\t\t * size of the main parser.\n>> \t\t\t\t\t */\n>> +\t\t\t\t\tRoleElem *n = makeNode(RoleElem);\n>> +\n>> +\t\t\t\t\t/*\n>> +\t\t\t\t\t * Record whether the user specified WITH GRANT OPTION.\n> \n> WITH ADMIN OPTION rather?\n\nYes.\n\n>> +\t\t\t\t\t * Note that for some privileges this is always implied,\n>> +\t\t\t\t\t * such as SUPERUSER, but we don't reflect that here.\n>> +\t\t\t\t\t */\n>> +\t\t\t\t\tn->admin_spec = $2;\n>> +\n> \n>> diff --git a/src/include/catalog/pg_authid.dat b/src/include/catalog/pg_authid.dat\n>> index 6c28119fa1..4829a6dbd2 100644\n>> --- 
a/src/include/catalog/pg_authid.dat\n>> +++ b/src/include/catalog/pg_authid.dat\n>> @@ -22,67 +22,93 @@\n>> { oid => '10', oid_symbol => 'BOOTSTRAP_SUPERUSERID',\n>> rolname => 'POSTGRES', rolsuper => 't', rolinherit => 't',\n>> rolcreaterole => 't', rolcreatedb => 't', rolcanlogin => 't',\n>> - rolreplication => 't', rolbypassrls => 't', rolconnlimit => '-1',\n>> + rolreplication => 't', rolbypassrls => 't', adminherit => 't', admcreaterole => 't',\n>> + admcreatedb => 't', admcanlogin => 't', admreplication => 't', admbypassrls => 't',\n>> + admconnlimit => 't', admpassword => 't', admvaliduntil => 't', rolconnlimit => '-1',\n>> rolpassword => '_null_', rolvaliduntil => '_null_' },\n> \n> Those sure are a couple of new columns in pg_authid, but oh well...\n\nYes, that's also a big part of what people might object to. I think it's a reasonable objection, but I don't know where else to put the information, given the lack of an aclitem[]?\n\n>> diff --git a/src/include/catalog/pg_authid.h b/src/include/catalog/pg_authid.h\n>> index 4b65e39a1f..4acdcaa685 100644\n>> --- a/src/include/catalog/pg_authid.h\n>> +++ b/src/include/catalog/pg_authid.h\n>> @@ -39,6 +39,16 @@ CATALOG(pg_authid,1260,AuthIdRelationId) BKI_SHARED_RELATION BKI_ROWTYPE_OID(284\n>> \tbool\t\trolcanlogin;\t/* allowed to log in as session user? */\n>> \tbool\t\trolreplication; /* role used for streaming replication */\n>> \tbool\t\trolbypassrls;\t/* bypasses row-level security? */\n>> +\n>> +\tbool\t\tadminherit;\t\t/* allowed to administer inherit? */\n>> +\tbool\t\tadmcreaterole;\t/* allowed to administer createrole? */\n>> +\tbool\t\tadmcreatedb;\t/* allowed to administer createdb?? */\n>> +\tbool\t\tadmcanlogin;\t/* allowed to administer login? */\n>> +\tbool\t\tadmreplication; /* allowed to administer replication? */\n>> +\tbool\t\tadmbypassrls;\t/* allowed to administer bypassesrls? */\n>> +\tbool\t\tadmconnlimit;\t/* allowed to administer connlimit? 
*/\n>> +\tbool\t\tadmpassword;\t/* allowed to administer password? */\n>> +\tbool\t\tadmvaliduntil;\t/* allowed to administer validuntil? */\n>> \tint32\t\trolconnlimit;\t/* max connections allowed (-1=no limit) */\n> \n> It's cosmetic, but the space between rolbypassrls and adminherit is\n> maybe not needed, and I'd put rolconnlimit first (even though it has a\n> different type).\n\nOh, totally agree. I had that blank there during development because the \"rol...\" and \"adm...\" all started to blur together.\n\nThanks again! If the patch stays mostly like it is, I'll incorporate all your review comments into a next version.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 30 Jan 2022 17:11:48 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Hi,\n\nAm Sonntag, dem 30.01.2022 um 17:11 -0800 schrieb Mark Dilger:\n> > On Jan 30, 2022, at 2:38 PM, Michael Banck < \n> > michael.banck@credativ.de> wrote:\n> > > The attached WIP patch attempts to solve most of the CREATEROLE\n> \n> I'm mostly looking for whether the general approach in this Work In\n> Progress patch is acceptable, so I was a bit sloppy with whitespace\n> and such....\n\nOk, sure. 
I think this topic is hugely important and as I read the\npatch anyway, I added some comments, but yeah, we need to figure out\nthe fundamentals first.\n> \n\n> > One thing I noticed (and which will likely make DBAs grumpy) is that it\n> > seems being able to create users (as opposed to non-login roles/groups)\n> > depends on when you get the CREATEROLE attribute (on role creation or\n> > later), viz:\n> > \n> > postgres=# CREATE USER admin CREATEROLE;\n> > CREATE ROLE\n> > postgres=# SET ROLE admin;\n> > SET\n> > postgres=> CREATE USER testuser; -- this works\n> > CREATE ROLE\n> > postgres=> RESET ROLE;\n> > RESET\n> > postgres=# CREATE USER admin2;\n> > CREATE ROLE\n> > postgres=# ALTER ROLE admin2 CREATEROLE; -- we get CREATEROLE after the fact\n> > ALTER ROLE\n> > postgres=# SET ROLE admin2;\n> > SET\n> > postgres=> CREATE USER testuser2; -- bam\n> > ERROR:  must have grant option on LOGIN privilege to create login users\n> > postgres=# SELECT rolname, admcreaterole, admcanlogin FROM\n> > pg_authid\n> > WHERE rolname LIKE 'admin%';\n> > rolname | admcreaterole | admcanlogin \n> > ---------+---------------+-------------\n> > admin   | t             | t\n> > admin2  | f             | f\n> > (2 rows)\n> > \n> > Is that intentional? If it is, I think it would be nice if this\n> > could be\n> > changed, unless I'm missing some serious security concerns or so. \n> \n> It's intentional, but part of what I wanted review comments about. \n> The issue is that historically:\n> \n>   CREATE USER michael CREATEROLE\n> \n> meant that you could go on to do things like create users with LOGIN\n> privilege.  I could take that away, which would be a backwards\n> compatibility break, or I can do the weird thing this patch does.  Or\n> I could have your\n> \n>   ALTER ROLE admin2 CREATEROLE;\n> \n> also grant the other privileges like LOGIN unless you explicitly say\n> otherwise with a bunch of explicit WITHOUT ADMIN OPTION clauses. 
\n> Finding out which of those this is preferred was a big part of why I\n> put this up for review.  Thanks for calling it out in under 24 hours!\n\nOk, so what I would have needed to do in the above in order to have\n\"admin2\" and \"admin\" be the same as far as creating login users is (I\nbelieve):\n\nALTER ROLE admin2 CREATEROLE LOGIN WITH ADMIN OPTION;\n\nI think if possible, it would be nice to just have this part as default\nif possible. I.e. CREATEROLE and HASLOGIN are historically so much\nintertwined that I think the above should be implicit (again, if that\nis possible); I don't care and/or haven't made up my mind about any of\nthe other options so far...\n\nOk, so now that I had another look, I see we are going down Pandora's\nbox: For any of the other things a role admin would like to do (change\npassword, change conn limit), one would have to go with this weird\ndisconnect between CREATE USER admin CREATEROLE and ALTER USER admin2\nCREATEROLE [massive list of WITH ADMIN OPTION], and then I'm not sure\nwhere we stop.\n\nBy the way, is there now even a way to add admpassword to a role after\nit got created?\n\npostgres=# SET ROLE admin2;\nSET\npostgres=> \\password test\nEnter new password for user \"test\": \nEnter it again: \nERROR: must have admin on password to change password attribute\npostgres=> RESET ROLE;\nRESET\npostgres=# ALTER ROLE admin2 PASSWORD WITH ADMIN OPTION;\nERROR: syntax error at or near \"WITH\"\nUPDATE pg_authid SET admpassword = 't' WHERE rolname = 'admin2';\nUPDATE 1\npostgres=# SET ROLE admin2;\nSET\npostgres=> \\password test\nEnter new password for user \"test\": \nEnter it again: \npostgres=> \n\nHowever, the next thing is:\n\npostgres=# SET ROLE admin;\nSET\npostgres=> CREATE GROUP testgroup;\nCREATE ROLE\npostgres=> GRANT testgroup TO test;\nERROR: must have admin option on role \"testgroup\"\n\nFirst off, what does \"admin option\" mean on a role?\n\nI then tried this:\n\npostgres=# CREATE USER admin3 CREATEROLE WITH 
ADMIN OPTION;\nCREATE ROLE\npostgres=# SET ROLE admin3;\nSET\npostgres=> CREATE USER test3;\nCREATE ROLE\npostgres=> CREATE GROUP testgroup3;\nCREATE ROLE\npostgres=> GRANT testgroup3 TO test3;\nERROR: must have admin option on role \"testgroup3\"\n\nSo I created both user and group, I have the CREATEROLE priv (with or\nwithout admin option), but I still can't assign the group. Is that\n(tracking who created a role and letting the creator do more things) the\npart that got chopped away in your last patch in order to find a common\nground?\n\nIs there now any way non-Superusers can assign groups to other users? I\nfeel this (next to creating users/groups) is the primary thing those\nCREATEROLE admins are supposed to do/were doing up to now.\n\n\nAgain, sorry if this was all discussed previously, I only skimmed this\nthread.\n\nTwo more comments regarding the code:\n\n> > > b/src/include/catalog/pg_authid.dat\n> > > index 6c28119fa1..4829a6dbd2 100644\n> > > --- a/src/include/catalog/pg_authid.dat\n> > > +++ b/src/include/catalog/pg_authid.dat\n> > > @@ -22,67 +22,93 @@\n> > > { oid => '10', oid_symbol => 'BOOTSTRAP_SUPERUSERID',\n> > > rolname => 'POSTGRES', rolsuper => 't', rolinherit => 't',\n> > > rolcreaterole => 't', rolcreatedb => 't', rolcanlogin => 't',\n> > > - rolreplication => 't', rolbypassrls => 't', rolconnlimit =>\n> > > '-1',\n> > > + rolreplication => 't', rolbypassrls => 't', adminherit => 't',\n> > > admcreaterole => 't',\n> > > + admcreatedb => 't', admcanlogin => 't', admreplication => 't',\n> > > admbypassrls => 't',\n> > > + admconnlimit => 't', admpassword => 't', admvaliduntil => 't',\n> > > rolconnlimit => '-1',\n> > > rolpassword => '_null_', rolvaliduntil => '_null_' },\n> > \n> > Those sure are a couple of new columns in pg_authid, but oh well...\n> \n> Yes, that's also a big part of what people might object to. 
I think\n> it's a reasonable objection, but I don't know where else to put the\n> information, given the lack of an aclitem[]?\n\nYeah, it crossed my mind that an array might not be bad. In any case,\nif we can fix CREATEROLE for good, a couple of extra columns in\npg_authid might be a small price to pay.\n\ndiff --git a/src/include/catalog/pg_authid.h\n> > > b/src/include/catalog/pg_authid.h\n> > > index 4b65e39a1f..4acdcaa685 100644\n> > > --- a/src/include/catalog/pg_authid.h\n> > > +++ b/src/include/catalog/pg_authid.h\n> > > @@ -39,6 +39,16 @@ CATALOG(pg_authid,1260,AuthIdRelationId)\n> > > BKI_SHARED_RELATION BKI_ROWTYPE_OID(284\n> > > bool rolcanlogin; /* allowed to log in as\n> > > session user? */\n> > > bool rolreplication; /* role used for\n> > > streaming replication */\n> > > bool rolbypassrls; /* bypasses row-level\n> > > security? */\n> > > +\n> > > + bool adminherit; /* allowed to\n> > > administer inherit? */\n> > > + bool admcreaterole; /* allowed to administer\n> > > createrole? */\n> > > + bool admcreatedb; /* allowed to administer\n> > > createdb?? */\n> > > + bool admcanlogin; /* allowed to administer\n> > > login? */\n> > > + bool admreplication; /* allowed to administer\n> > > replication? */\n> > > + bool admbypassrls; /* allowed to administer\n> > > bypassesrls? */\n> > > + bool admconnlimit; /* allowed to administer\n> > > connlimit? */\n> > > + bool admpassword; /* allowed to administer\n> > > password? */\n> > > + bool admvaliduntil; /* allowed to administer\n> > > validuntil? */\n> > > int32 rolconnlimit; /* max connections\n> > > allowed (-1=no limit) */\n> > \n> > It's cosmetic, but the space between rolbypassrls and adminherit is\n> > maybe not needed, and I'd put rolconnlimit first (even though it\n> > has a different type).\n> \n> Oh, totally agree. 
I had that blank there during development because\n> the \"rol...\" and \"adm...\" all started to blur together.\n\nThe way the adm* privs are now somewhere in the middle of the rol*\nprivs also looks weird for the end-user and there does not seem to be\nsome greater scheme behind it:\n\npostgres=# SELECT * FROM pg_authid WHERE rolname = 'admin' \\gx \n-[ RECORD 1 ]--+------\noid            | 16385\nrolname        | admin\nrolsuper       | f\nrolinherit     | t\nrolcreaterole  | t\nrolcreatedb    | f\nrolcanlogin    | t\nrolreplication | f\nrolbypassrls   | f\nadminherit     | t\nadmcreaterole  | t\nadmcreatedb    | t\nadmcanlogin    | t\nadmreplication | f\nadmbypassrls   | f\nadmconnlimit   | t\nadmpassword    | t\nadmvaliduntil  | t\nrolconnlimit   | -1\nrolpassword    | \nrolvaliduntil  | \n\n\nMichael\n\n-- \nMichael Banck\nTeam Lead, PostgreSQL Team\nProject Lead\nTel.: +49 2166 9901-171\nE-Mail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nManaging Directors: Dr. 
Michael Meskes, Geoff Richardson, Peter Lilley\n\nOur handling of personal data is subject to\nthe following provisions: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 09:43:27 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 25, 2022, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I agree that CREATEROLE is overpowered and that the goal of this should\n> > be to provide a way for roles to be created and dropped that doesn't\n> > give the user who has that power everything that CREATEROLE currently\n> > does.\n> \n> I'm attaching a patch that attempts to fix CREATEROLE without any connection to role ownership.\n\nAlright.\n\n> > The point I was making is that the concept of role ownership\n> > isn't intrinsically linked to that and is, therefore, as you say, gravy.\n> \n> I agree, they aren't intrinsically linked, though the solution to one might interact in some ways with the solution to the other.\n\nSure.\n\n> > That isn't to say that I'm entirely against the role ownership idea but\n> > I'd want it to be focused on the goal of providing ways of creating and\n> > dropping users and otherwise performing that kind of administration and\n> > that doesn't require the specific change to make owners be members of\n> > all roles they own and automatically have all privileges of those roles\n> > all the time.\n> \n> The attached WIP patch attempts to solve most of the CREATEROLE problems but not the problem of which roles can drop which other roles. 
That will likely require an ownership concept.\n\nYeah, we do need to have a way to determine who is allowed to drop\nroles and role ownership seems like it's one possible approach to that.\n\n> The main idea here is that having CREATEROLE doesn't give you ADMIN on roles, nor on role attributes. For role attributes, the syntax has been extended. An excerpt from the patch's regression test illustrates some of that concept:\n> \n> -- ok, superuser can create a role that can create login replication users, but\n> -- cannot itself login, nor perform replication\n> CREATE ROLE regress_role_repladmin\n> CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n> NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n> NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n> NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n> NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n> \n> -- ok, superuser can create a role with CREATEROLE but restrict give-aways\n> CREATE ROLE regress_role_minoradmin\n> NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n> CREATEROLE WITHOUT ADMIN OPTION\n> NOCREATEDB WITHOUT ADMIN OPTION\n> NOLOGIN WITHOUT ADMIN OPTION\n> NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n> NOBYPASSRLS -- WITHOUT ADMIN OPTION is implied\n> NOINHERIT WITHOUT ADMIN OPTION\n> CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n> VALID ALWAYS WITHOUT ADMIN OPTION\n> PASSWORD NULL WITHOUT ADMIN OPTION;\n\nRight, this was one of the approaches that I was thinking could work for\nmanaging role attributes and it's very similar to roles and the admin\noption for them. As I suggested at least once, another possible\napproach could be to have login users not be able to create roles but\nfor them to be able to SET ROLE to a role which is able to create roles,\nand then, using your prior method, only allow the attributes which that\nrole has to be able to be given to other roles. 
That essentially makes\na role be a proxy for the per-attribute admin options. There are pros and\ncons for each approach and so I'm curious as to which you feel is the\nbetter approach? I get the feeling that you're more inclined to go with\nthe approach of having an admin option for each role attribute (having\nwritten this WIP patch) but I'm not sure if that is because you\ncontemplated both and felt this was better for some reason or more\nbecause I wasn't explaining the other approach very well, or if there\nwas some other reason.\n\n> -- fail, having CREATEROLE is not enough to create roles in privileged roles\n> SET SESSION AUTHORIZATION regress_role_minoradmin;\n> CREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\n> ERROR: must have admin option on role \"pg_read_all_data\"\n\nI would say not just privileged roles, but any roles that the user\ndoesn't have admin rights on.\n\n> Whether \"WITH ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied hinges on whether the role is given CREATEROLE. That hackery is necessary to preserve backwards compatibility. If we don't care about compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\" implied for all attributes when not specified.\n\nGiven the relative size of the changes we're talking about regarding\nCREATEROLE, I don't really think we need to stress about backwards\ncompatibility too much.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Jan 2022 11:53:30 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 31, 2022, at 12:43 AM, Michael Banck <michael.banck@credativ.de> wrote:\n\n> Ok, sure. I think this topic is hugely important and as I read the\n> patch anyway, I added some comments, but yeah, we need to figure out\n> the fundamentals first.\n\nRight.\n\nPerhaps some background on this patch series will help. 
The patch versions before v8 were creating an owner-owned relationship between the creator and the createe, and a lot of privileges were dependent on that ownership. Stephen objected that we were creating parallel tracks on which the privilege system was running; things like belonging to a role or having admin on a role were partially conflated with owning a role. He also objected that the pre-v8 patch sets allowed a creator role with the CREATEROLE privilege to give away any privilege the creator had, rather than needing to have GRANT or ADMIN option on the privilege being given.\n\nThe v8-WIP patch is not a complete replacement for the pre-v8 patches. It's just a balloon I'm floating to try out candidate solutions to some of Stephen's objections. In the long run, I want the solution to Stephen's objections to not create problems for anybody who liked the way the pre-v8 patches worked (Robert, Andrew, and to some extent me.)\n\nIn this WIP patch, for a creator to give *anything* away to a createe, the creator must have GRANT or ADMIN on the thing being given. That includes attributes like BYPASSRLS, CREATEDB, LOGIN, etc., and also ADMIN on any role the createe is granted into.\n\nI tried to structure things for backwards compatibility, considering which things roles with CREATEROLE could give away historically. It turns out they can give away most everything, but not SUPERUSER, BYPASSRLS, or REPLICATION. So I structured the default privileges for CREATEROLE to match. But I'm uncertain that design is any good, and your comments below suggest that you find it pretty hard to use.\n\nPart of the problem with trying to be backwards compatible is that we must break compatibility anyway, to address the problem that historically having CREATEROLE meant you effectively had ADMIN on all non-superuser roles. That's got to change. 
So in part I'm asking pgsql-hackers if partial backwards compatibility is worth the bother.\n\nIf we don't go with backwards compatibility, then CREATEROLE would only allow you to create a new role, but not to give that role LOGIN, nor CREATEDB, etc. You'd need to also have admin option on those things. To create a role that can give those things away, you'd need to run something like:\n\nCREATE ROLE michael\n\tCREATEROLE WITH ADMIN OPTION -- can further give away \"createrole\"\n\tCREATEDB WITH ADMIN OPTION -- can further give away \"createdb\"\n\tLOGIN WITH ADMIN OPTION -- can further give away \"login\"\n\tNOREPLICATION WITHOUT ADMIN OPTION -- this would be implied anyway\n\tNOBYPASSRLS WITHOUT ADMIN OPTION -- this would be implied anyway\n\tCONNECTION LIMIT WITH ADMIN OPTION -- can specify connection limits\n\tPASSWORD WITH ADMIN OPTION -- can specify passwords\n\tVALID UNTIL WITH ADMIN OPTION -- can specify expiration\n\n(I'm on the fence about the phrase \"WITH ADMIN OPTION\" vs. the phrase \"WITH GRANT OPTION\".)\n\nEven then, when \"michael\" creates new roles, if he wants to be able to further administer those roles, he needs to remember to give himself ADMIN membership in that role at creation time. After the role is created, if he doesn't have ADMIN, he can't give it to himself. So, at create time, he needs to remember to do this:\n\nSET ROLE michael;\nCREATE ROLE mark ADMIN michael;\n\nBut that's still a bit strange, because \"ADMIN michael\" means that michael can grant other roles membership in \"mark\", not that michael can, for example, change mark's password. If we don't want CREATEROLE to imply that you can mess around with arbitrary roles (rather than only roles that you created or have been transferred control over) then we need the concept of role ownership. This patch doesn't go that far, so for now, only superusers can do those things. 
Assuming some form of this patch is acceptable, the v9 series will resurrect some of the pre-v7 logic for role ownership and say that the owner can do those things.\n\n\n>>> One thing I noticed (and which will likely make DBAs grumpy) is that it\n>>> seems being able to create users (as opposed to non-login roles/groups)\n>>> depends on when you get the CREATEROLE attribute (on role creation or\n>>> later), viz:\n>>> \n>>> postgres=# CREATE USER admin CREATEROLE;\n>>> CREATE ROLE\n>>> postgres=# SET ROLE admin;\n>>> SET\n>>> postgres=> CREATE USER testuser; -- this works\n>>> CREATE ROLE\n>>> postgres=> RESET ROLE;\n>>> RESET\n>>> postgres=# CREATE USER admin2;\n>>> CREATE ROLE\n>>> postgres=# ALTER ROLE admin2 CREATEROLE; -- we get CREATEROLE after the fact\n>>> ALTER ROLE\n>>> postgres=# SET ROLE admin2;\n>>> SET\n>>> postgres=> CREATE USER testuser2; -- bam\n>>> ERROR: must have grant option on LOGIN privilege to create login users\n>>> postgres=# SELECT rolname, admcreaterole, admcanlogin FROM\n>>> pg_authid\n>>> WHERE rolname LIKE 'admin%';\n>>> rolname | admcreaterole | admcanlogin \n>>> ---------+---------------+-------------\n>>> admin | t | t\n>>> admin2 | f | f\n>>> (2 rows)\n>>> \n>>> Is that intentional? If it is, I think it would be nice if this\n>>> could be\n>>> changed, unless I'm missing some serious security concerns or so. \n>> \n>> It's intentional, but part of what I wanted review comments about. \n>> The issue is that historically:\n>> \n>> CREATE USER michael CREATEROLE\n>> \n>> meant that you could go on to do things like create users with LOGIN\n>> privilege. I could take that away, which would be a backwards\n>> compatibility break, or I can do the weird thing this patch does. Or\n>> I could have your\n>> \n>> ALTER ROLE admin2 CREATEROLE;\n>> \n>> also grant the other privileges like LOGIN unless you explicitly say\n>> otherwise with a bunch of explicit WITHOUT ADMIN OPTION clauses. 
\n>> Finding out which of those this is preferred was a big part of why I\n>> put this up for review. Thanks for calling it out in under 24 hours!\n> \n> Ok, so what I would have needed to do in the above in order to have\n> \"admin2\" and \"admin\" be the same as far as creating login users is (I\n> believe):\n> \n> ALTER ROLE admin2 CREATEROLE LOGIN WITH ADMIN OPTION;\n\nYes, though it's more likely admin2 would have been created with these privileges to begin with, if the creator intended admin2 to do such things. \n\n> I think if possible, it would be nice to just have this part as default\n> if possible. I.e. CREATEROLE and HASLOGIN are historically so much\n> intertwined that I think the above should be implicit (again, if that\n> is possible); I don't care and/or haven't made up my mind about any of\n> the other options so far...\n\nPossibly. But then, if you really wanted to grant someone CREATEROLE but not anything else, you'd need to remember which other things are implicit, and explicitly disavow them, like:\n\nALTER ROLE admin2 CREATEROLE (WITHOUT this, WITHOUT that, WITHOUT the other)\n\nand I think that mostly stinks.\n\n> Ok, so now that I had another look, I see we are going down Pandora's
That's a good argument for just breaking backward compatibility.\n\n> By the way, is there now even a way to add admpassword to a role after\n> it got created?\n> \n> postgres=# SET ROLE admin2;\n> SET\n> postgres=> \\password test\n> Enter new password for user \"test\": \n> Enter it again: \n> ERROR: must have admin on password to change password attribute\n> postgres=> RESET ROLE;\n> RESET\n> postgres=# ALTER ROLE admin2 PASSWORD WITH ADMIN OPTION;\n> ERROR: syntax error at or near \"WITH\"\n> UPDATE pg_authid SET admpassword = 't' WHERE rolname = 'admin2';\n> UPDATE 1\n> postgres=# SET ROLE admin2;\n> SET\n> postgres=> \\password test\n> Enter new password for user \"test\": \n> Enter it again: \n> postgres=> \n\nI don't really have this worked out yet. That's mostly because I'm planning to fix it with role ownership, but perhaps there is a better way?\n\n> However, the next thing is:\n> \n> postgres=# SET ROLE admin;\n> SET\n> postgres=> CREATE GROUP testgroup;\n> CREATE ROLE\n> postgres=> GRANT testgroup TO test;\n> ERROR: must have admin option on role \"testgroup\"\n> \n> First off, what does \"admin option\" mean on a role?\n\nFrom the docs for \"CREATE ROLE\", https://www.postgresql.org/docs/14/sql-createrole.html\n\n The ADMIN clause is like ROLE, but the named roles are added to the new role WITH ADMIN OPTION, giving them the right to grant membership in this role to others.\n\n> I then tried this:\n> \n> postgres=# CREATE USER admin3 CREATEROLE WITH ADMIN OPTION;\n> CREATE ROLE\n> postgres=# SET ROLE admin3;\n> SET\n> postgres=> CREATE USER test3;\n> CREATE ROLE\n> postgres=> CREATE GROUP testgroup3;\n> CREATE ROLE\n> postgres=> GRANT testgroup3 TO test3;\n> ERROR: must have admin option on role \"testgroup3\"\n> \n> So I created both user and group, I have the CREATEROLE priv (with or\n> without admin option), but I still can't assign the group. 
Is that\n> (tracking who created a role and letting the creator do more things) the\n> part that got chopped away in your last patch in order to find a common\n> ground?\n\nYou need ADMIN on the role, not on CREATEROLE. To add members to a target role, you must have ADMIN on that target role. To create new roles with CREATEROLE privilege, you must have ADMIN on the CREATEROLE privilege.\n\n> Is there now any way non-Superusers can assign groups to other users?\n\nYes, by having ADMIN on those groups.\n\n> I\n> feel this (next to creating users/groups) is the primary thing those\n> CREATEROLE admins are supposed to do/were doing up to now.\n\nRight. In the past, having CREATEROLE implied having ADMIN on every role. I'm intentionally breaking that.\n\n> The way the adm* privs are now somewhere in the middle of the rol*\n> privs also looks weird for the end-user and there does not seem to be\n> some greater scheme behind it:\n\nBecause they are not variable length nor nullable, they must come before such fields (namely, rolpassword and rolvaliduntil). They don't really need to come before rolconnlimit, but I liked the idea of packing twelve booleans together, since with \"bool\" typedef'd to unsigned char, that's twelve contiguous bytes, starting after oid (4 bytes) and rolname (64 bytes) and likely fitting nicely without padding bytes on at least some platforms. 
If I split them on either side of rolconnlimit (which is 4 bytes), there'd be seven bools before it and five bools after, which wouldn't pack nicely.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 09:18:12 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 31, 2022, at 8:53 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Yeah, we do need to have a way to determine who is allowed to drop\n> roles and role ownership seems like it's one possible approach to that.\n\nWhich other ways are on the table? Having ADMIN on a role doesn't allow you to do that, but maybe it could? What else?\n\n>> The main idea here is that having CREATEROLE doesn't give you ADMIN on roles, nor on role attributes. For role attributes, the syntax has been extended. An excerpt from the patch's regression test illustrates some of that concept:\n>> \n>> -- ok, superuser can create a role that can create login replication users, but\n>> -- cannot itself login, nor perform replication\n>> CREATE ROLE regress_role_repladmin\n>> CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n>> NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n>> NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n>> NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n>> NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n>> \n>> -- ok, superuser can create a role with CREATEROLE but restrict give-aways\n>> CREATE ROLE regress_role_minoradmin\n>> NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n>> CREATEROLE WITHOUT ADMIN OPTION\n>> NOCREATEDB WITHOUT ADMIN OPTION\n>> NOLOGIN WITHOUT ADMIN OPTION\n>> NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n>> NOBYPASSRLS -- WITHOUT ADMIN OPTION is 
implied\n>> NOINHERIT WITHOUT ADMIN OPTION\n>> CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n>> VALID ALWAYS WITHOUT ADMIN OPTION\n>> PASSWORD NULL WITHOUT ADMIN OPTION;\n> \n> Right, this was one of the approaches that I was thinking could work for\n> managing role attributes and it's very similar to roles and the admin\n> option for them. As I suggested at least once, another possible\n> approach could be to have login users not be able to create roles but\n> for them to be able to SET ROLE to a role which is able to create roles,\n> and then, using your prior method, only allow the attributes which that\n> role has to be able to be given to other roles.\n\nI'm not sure how that works. If I have a group named \"administrators\" which has multiple attributes like BYPASSRLS and such, and user \"stephen\" is a member of \"administrators\", then stephen can not only give away bypassrls to new users but also has it himself. How is that an improvement? (I mean this as a question, not as criticism.)\n\n> That essentially makes\n> a role be a proxy for the per-attribute admin options. There are pros and\n> cons for each approach and so I'm curious as to which you feel is the\n> better approach? I get the feeling that you're more inclined to go with\n> the approach of having an admin option for each role attribute (having\n> written this WIP patch) but I'm not sure if that is because you\n> contemplated both and felt this was better for some reason or more\n> because I wasn't explaining the other approach very well, or if there\n> was some other reason.
My apologies if I'm being thick-headed.\n\n>> -- fail, having CREATEROLE is not enough to create roles in privileged roles\n>> SET SESSION AUTHORIZATION regress_role_minoradmin;\n>> CREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\n>> ERROR: must have admin option on role \"pg_read_all_data\"\n> \n> I would say not just privileged roles, but any roles that the user\n> doesn't have admin rights on.\n\nYes, that's how it works. But this portion of the test is only checking the interaction between CREATEROLE and built-in privileged roles, hence the comment.\n\n>> Whether \"WITH ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied hinges on whether the role is given CREATEROLE. That hackery is necessary to preserve backwards compatibility. If we don't care about compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\" implied for all attributes when not specified.\n> \n> Given the relative size of the changes we're talking about regarding\n> CREATEROLE, I don't really think we need to stress about backwards\n> compatibility too much.\n\nYeah, I'm leaning pretty strongly that way, too.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 09:29:40 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 31, 2022, at 8:53 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > Yeah, we do need to have a way to determine who is allowed to drop\n> > roles and role ownership seems like it's one possible approach to that.\n> \n> Which other ways are on the table? Having ADMIN on a role doesn't allow you to do that, but maybe it could? 
What else?\n\nSupporting that through ADMIN is one option, another would be a\n'DROPROLE' attribute, though we'd want a way to curtail that from being\nable to be used for just any role and that does lead down a path similar\nto ownership or just generally the concept that some roles have certain\nrights over certain other roles (whether you create them or not...).\n\nI do think there's a lot of value in being able to segregate certain\nrights- consider that you may want a role that's able to create other\nroles, perhaps grant them into some set of roles, can lock those roles\n(prevent them from logging in, maybe do a password reset, something like\nthat), but which *isn't* able to drop those roles (and all their\nobjects) as that's dangerous and mistakes can certainly happen, or be\nable to become that role because the creating role simply doesn't have\nany need to be able to do that (or desire to in many cases, as we\ndiscussed in the landlord-vs-tenant sub-thread).\n\nNaturally, you'd want *some* role to be able to drop that role (and one\nthat doesn't have full superuser access) but that might be a role that's\nnot able to create new roles or take over accounts.\n\nSeparation of concerns and powers and all of that is what we want to be\ngoing for here, more generically, which is why I was opposed to the\nblanket \"owners have all rights of all roles they own\" implementation.\nThat approach doesn't support the ability to have a relatively\nunprivileged role that's able to create other roles, which seems like a\npretty important use-case for us to be considering.\n\nThe terminology seems to also be driving us in a certain direction and I\ndon't know that it's necessarily a good one. 
That is- the term 'owner'\nimplies certain things and maybe that's where some of the objection to\nmy argument that owners shouldn't necessarily have all rights of the\nroles they 'own' comes from (ok- I'll also put out there for general\nconsideration that since we're talking about roles, and login roles are\ngenerally associated with people, that maybe 'owner' isn't a great term\nto use for this anyway ...). I feel like the 'owner' concept came from\nthe way we have table owners and function owners and database owners\ntoday rather than from a starting point of what do we wish to\nspecifically enable.\n\nPerhaps instead of starting from the 'owner' concept, we start from the\nquestion about the kinds of things we want roles to be able to do and\nperhaps that will help inform the terminology.\n\n- Create new roles\n- Drop an existing role\n- Drop objects which belong to a role\n- Lock existing roles\n- Change/reset the PW of existing roles\n- Give roles to other roles\n- Revoke access to some roles from other roles\n- Give select role attributes to a role\n- Revoke role attributes from a role\n- Traditional role-based access control (group memberships, SET ROLE)\n\nCertain of the above are already covered by the existing role membership\nsystem and with the admin option, though there's definitely an argument\nto be made as to if that is as capable as we'd like it to be (there's no\nway to, today at least, GRANT *just* the admin option, for example, and\nmaybe that's something that it would actually be sensible to support).\n\nPerhaps there is a need to have a user who has all of the above\ncapabilities and maybe that would be an 'owner' or 'manager', but as I\ntried to illustrate above, there's definitely use-cases for giving\na role only some of the above capabilities rather than all of them\ntogether at once.\n\n> >> The main idea here is that having CREATEROLE doesn't give you ADMIN on roles, nor on role attributes. 
For role attributes, the syntax has been extended. An excerpt from the patch's regression test illustrates some of that concept:\n> >> \n> >> -- ok, superuser can create a role that can create login replication users, but\n> >> -- cannot itself login, nor perform replication\n> >> CREATE ROLE regress_role_repladmin\n> >> CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n> >> NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n> >> NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n> >> NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n> >> NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n> >> \n> >> -- ok, superuser can create a role with CREATEROLE but restrict give-aways\n> >> CREATE ROLE regress_role_minoradmin\n> >> NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n> >> CREATEROLE WITHOUT ADMIN OPTION\n> >> NOCREATEDB WITHOUT ADMIN OPTION\n> >> NOLOGIN WITHOUT ADMIN OPTION\n> >> NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n> >> NOBYPASSRLS -- WITHOUT ADMIN OPTION is implied\n> >> NOINHERIT WITHOUT ADMIN OPTION\n> >> CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n> >> VALID ALWAYS WITHOUT ADMIN OPTION\n> >> PASSWORD NULL WITHOUT ADMIN OPTION;\n> > \n> > Right, this was one of the approaches that I was thinking could work for\n> > managing role attributes and it's very similar to roles and the admin\n> > option for them. As I suggested at least once, another possible\n> > approach could be to have login users not be able to create roles but\n> > for them to be able to SET ROLE to a role which is able to create roles,\n> > and then, using your prior method, only allow the attributes which that\n> > role has to be able to be given to other roles.\n> \n> I'm not sure how that works. 
If I have a group named \"administrators\" which as multiple attributes like BYPASSRLS and such, and user \"stephen\" is a member of \"administrators\", then stephen can not only give away bypassrls to new users but also has it himself. How is that an improvement? (I mean this as a question, not as criticism.)\n\nThat's not how role attributes work though- \"stephen\" only has the\n'bypassrls' role attribute after a 'set role administrators'. This has\nbeen one of the issues with role attributes in general as there's no way\nto change that (unlike the 'inherit' option for roles themselves) but in\nthis particular case it might be to our advantage.\n\n> > That essentially makes\n> > a role be a proxy for the per-attribute admin options. There's pros and\n> > cons for each approach and so I'm curious as to which you feel is the\n> > better approach? I get the feeling that you're more inclined to go with\n> > the approach of having an admin option for each role attribute (having\n> > written this WIP patch) but I'm not sure if that is because you\n> > contempltaed both and felt this was better for some reason or more\n> > because I wasn't explaining the other approach very well, or if there\n> > was some other reason.\n> \n> I need more explanation of the other option you are contemplating. My apologies if I'm being thick-headed.\n\nHopefully the above helps. 
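To spell the attribute behavior out (this is stock behavior today,\nnothing from the WIP patch, and the role names are just for\nillustration):\n\nCREATE ROLE administrators BYPASSRLS;\nCREATE ROLE stephen LOGIN IN ROLE administrators;\n-- Connected as \"stephen\", RLS still applies: role attributes\n-- such as BYPASSRLS are never inherited through membership the\n-- way ordinary object privileges are.\nSET ROLE administrators;\n-- Only now does BYPASSRLS take effect, and the explicit SET ROLE\n-- is something an audit log can record.\n\n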
Note that in order to not allow the\n\"stephen\" role to simply alter itself to have the bypassrls role attribute,\nwe'd need to consider roles to not have 'ownership' (or whatever) over\nthemselves, which leads into the prior complaint I made around roles\nhaving 'admin' rights on themselves which I generally don't feel is\ncorrect either.\n\n> >> -- fail, having CREATEROLE is not enough to create roles in privileged roles\n> >> SET SESSION AUTHORIZATION regress_role_minoradmin;\n> >> CREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\n> >> ERROR: must have admin option on role \"pg_read_all_data\"\n> >\n> > I would say not just privileged roles, but any roles that the user\n> > doesn't have admin rights on.\n>\n> Yes, that's how it works. But this portion of the test is only checking the interaction between CREATEROLE and built-in privileged roles, hence the comment.\n\nBut.. predefined roles aren't actually different in this regard from any\nother role, so I disagree that such a test of explicitly predefined\nroles makes sense..?\n\n> >> Whether \"WITH ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied hinges on whether the role is given CREATEROLE. That hackery is necessary to preserve backwards compatibility. 
If we don't care about compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\" implied for all attributes when not specified.\n> > \n> > Given the relative size of the changes we're talking about regarding\n> > CREATEROLE, I don't really think we need to stress about backwards\n> > compatibility too much.\n> \n> Yeah, I'm leaning pretty strongly that way, too.\n\nGreat.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Jan 2022 13:50:14 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Jan 31, 2022 at 1:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > > On Jan 31, 2022, at 8:53 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > > Yeah, we do need to have a way to determine who is allowed to drop\n> > > roles and role ownership seems like it's one possible approach to that.\n> >\n> > Which other ways are on the table? Having ADMIN on a role doesn't allow you to do that, but maybe it could? 
What else?\n>\n> Supporting that through ADMIN is one option, another would be a\n> 'DROPROLE' attribute, though we'd want a way to curtail that from being\n> able to be used for just any role and that does lead down a path similar\n> to ownership or just generally the concept that some roles have certain\n> rights over certain other roles (whether you create them or not...).\n>\n> I do think there's a lot of value in being able to segregate certain\n> rights- consider that you may want a role that's able to create other\n> roles, perhaps grant them into some set of roles, can lock those roles\n> (prevent them from logging in, maybe do a password reset, something like\n> that), but which *isn't* able to drop those roles (and all their\n> objects) as that's dangerous and mistakes can certainly happen, or be\n> able to become that role because the creating role simply doesn't have\n> any need to be able to do that (or desire to in many cases, as we\n> discussed in the landlord-vs-tenant sub-thread).\n>\n\nThis is precisely the use case I am trying to accomplish with this\npatchset, roughly:\n\n- An automated bot that creates users and adds them to the employees role\n- Bot cannot access any employee (or other roles) table data\n- Bot cannot become any employee\n- Bot can disable the login of any employee\n\nYes there are attack surfaces around the fringes of login, etc but\nthose can be mitigated with certificate authentication. My pg_hba\nwould require any role in the employees role to use cert auth.\n\nThis would adequately mitigate many threats while greatly enhancing\nuser management.\n\n> Naturally, you'd want *some* role to be able to drop that role (and one\n> that doesn't have full superuser access) but that might be a role that's\n> not able to create new roles or take over accounts.\n\nI suspect some kind of web backend to handle manual user pruning. 
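Circling back to the pg_hba point above, the entry I have in mind\nlooks roughly like this (a sketch; the network range is made up,\nbut matching on role membership with \"+role\" and the cert method\nare standard pg_hba.conf features):\n\n# TYPE   DATABASE  USER        ADDRESS    METHOD\nhostssl  all       +employees  0.0.0.0/0  cert\n\n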
I\ndon't expect Bot to automatically drop users because mistakes can\nhappen, and disabling the login ability seems like an adequate\ntradeoff.\n\n> Separation of concerns and powers and all of that is what we want to be\n> going for here, more generically, which is why I was opposed to the\n> blanket \"owners have all rights of all roles they own\" implementation.\n> That approach doesn't support the ability to have a relatively\n> unprivileged role that's able to create other roles, which seems like a\n> pretty important use-case for us to be considering.\n\nAgreed.\n\n> The terminology seems to also be driving us in a certain direction and I\n> don't know that it's necessarily a good one. That is- the term 'owner'\n> implies certain things and maybe that's where some of the objection to\n> my argument that owners shouldn't necessarily have all rights of the\n> roles they 'own' comes from (ok- I'll also put out there for general\n> consideration that since we're talking about roles, and login roles are\n> generally associated with people, that maybe 'owner' isn't a great term\n> to use for this anyway ...). 
I feel like the 'owner' concept came from\n> the way we have table owners and function owners and database owners\n> today rather than from a starting point of what do we wish to\n> specifically enable.\n>\n> Perhaps instead of starting from the 'owner' concept, we start from the\n> question about the kinds of things we want roles to be able to do and\n> perhaps that will help inform the terminology.\n>\n> - Create new roles\n> - Drop an existing role\n> - Drop objects which belong to a role\n> - Lock existing roles\n> - Change/reset the PW of existing roles\n> - Give roles to other roles\n> - Revoke access to some roles from other roles\n> - Give select role attributes to a role\n> - Revoke role attributes from a role\n> - Traditional role-based access control (group memberships, SET ROLE)\n>\n> Certain of the above are already covered by the existing role membership\n> system and with the admin option, though there's definitely an argument\n> to be made as to if that is as capable as we'd like it to be (there's no\n> way to, today at least, GRANT *just* the admin option, for example, and\n> maybe that's something that it would actually be sensible to support).\n>\n> Perhaps there is a need to have a user who has all of the above\n> capabilities and maybe that would be an 'owner' or 'manager', but as I\n> tried to illustrate above, there's definitely use-cases for giving\n> a role only some of the above capabilities rather than all of them\n> together at once.\n>\n> > >> The main idea here is that having CREATEROLE doesn't give you ADMIN on roles, nor on role attributes. For role attributes, the syntax has been extended. 
An excerpt from the patch's regression test illustrates some of that concept:\n> > >>\n> > >> -- ok, superuser can create a role that can create login replication users, but\n> > >> -- cannot itself login, nor perform replication\n> > >> CREATE ROLE regress_role_repladmin\n> > >> CREATEROLE WITHOUT ADMIN OPTION -- can create roles, but cannot give it away\n> > >> NOCREATEDB WITHOUT ADMIN OPTION -- cannot create db, nor give it away\n> > >> NOLOGIN WITH ADMIN OPTION -- cannot log in, but can give it away\n> > >> NOREPLICATION WITH ADMIN OPTION -- cannot replicate, but can give it away\n> > >> NOBYPASSRLS WITHOUT ADMIN OPTION; -- cannot bypassrls, nor give it away\n> > >>\n> > >> -- ok, superuser can create a role with CREATEROLE but restrict give-aways\n> > >> CREATE ROLE regress_role_minoradmin\n> > >> NOSUPERUSER -- WITHOUT ADMIN OPTION is implied\n> > >> CREATEROLE WITHOUT ADMIN OPTION\n> > >> NOCREATEDB WITHOUT ADMIN OPTION\n> > >> NOLOGIN WITHOUT ADMIN OPTION\n> > >> NOREPLICATION -- WITHOUT ADMIN OPTION is implied\n> > >> NOBYPASSRLS -- WITHOUT ADMIN OPTION is implied\n> > >> NOINHERIT WITHOUT ADMIN OPTION\n> > >> CONNECTION LIMIT NONE WITHOUT ADMIN OPTION\n> > >> VALID ALWAYS WITHOUT ADMIN OPTION\n> > >> PASSWORD NULL WITHOUT ADMIN OPTION;\n> > >\n> > > Right, this was one of the approaches that I was thinking could work for\n> > > managing role attributes and it's very similar to roles and the admin\n> > > option for them. As I suggested at least once, another possible\n> > > approach could be to have login users not be able to create roles but\n> > > for them to be able to SET ROLE to a role which is able to create roles,\n> > > and then, using your prior method, only allow the attributes which that\n> > > role has to be able to be given to other roles.\n> >\n> > I'm not sure how that works. 
If I have a group named \"administrators\" which as multiple attributes like BYPASSRLS and such, and user \"stephen\" is a member of \"administrators\", then stephen can not only give away bypassrls to new users but also has it himself. How is that an improvement? (I mean this as a question, not as criticism.)\n>\n> That's not how role attributes work though- \"stephen\" only has the\n> 'bypassrls' role attribute after a 'set role administrators'. This has\n> been one of the issues with role attributes in general as there's no way\n> to change that (unlike the 'inherit' option for roles themselves) but in\n> this particular case it might be to our advantage.\n>\n> > > That essentially makes\n> > > a role be a proxy for the per-attribute admin options. There's pros and\n> > > cons for each approach and so I'm curious as to which you feel is the\n> > > better approach? I get the feeling that you're more inclined to go with\n> > > the approach of having an admin option for each role attribute (having\n> > > written this WIP patch) but I'm not sure if that is because you\n> > > contempltaed both and felt this was better for some reason or more\n> > > because I wasn't explaining the other approach very well, or if there\n> > > was some other reason.\n> >\n> > I need more explanation of the other option you are contemplating. My apologies if I'm being thick-headed.\n>\n> Hopefully the above helps. 
Note that in order to not allow the\n> \"stephen\" role simply alter itself to have the bypassrls role attribute,\n> we'd need to consider roles to not have 'ownership' (or whatever) over\n> themselves, which leads into the prior complaint I made around roles\n> having 'admin' rights on themselves which I generally don't feel is\n> correct either.\n>\n> > >> -- fail, having CREATEROLE is not enough to create roles in privileged roles\n> > >> SET SESSION AUTHORIZATION regress_role_minoradmin;\n> > >> CREATE ROLE regress_nosuch_read_all_data IN ROLE pg_read_all_data;\n> > >> ERROR: must have admin option on role \"pg_read_all_data\"\n> > >\n> > > I would say not just privileged roles, but any roles that the user\n> > > doesn't have admin rights on.\n> >\n> > Yes, that's how it works. But this portion of the test is only checking the interaction between CREATEROLE and built-in privileged roles, hence the comment.\n>\n> But.. predefined roles aren't actually different in this regard from any\n> other role, so I disagree that such a test of explicitly predefined\n> roles makes sense..?\n>\n> > >> Whether \"WITH ADMIN OPTION\" or \"WITHOUT ADMIN OPTION\" is implied hinges on whether the role is given CREATEROLE. That hackery is necessary to preserve backwards compatibility. 
If we don't care about compatibility, I could change the patch to make \"WITHOUT ADMIN OPTION\" implied for all attributes when not specified.\n> > >\n> > > Given the relative size of the changes we're talking about regarding\n> > > CREATEROLE, I don't really think we need to stress about backwards\n> > > compatibility too much.\n> >\n> > Yeah, I'm leaning pretty strongly that way, too.\n>\n> Great.\n>\n> Thanks,\n>\n> Stephen\n\n\n", "msg_date": "Mon, 31 Jan 2022 13:57:34 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Hi,\n\nAm Montag, dem 31.01.2022 um 09:18 -0800 schrieb Mark Dilger:\n> On Jan 31, 2022, at 12:43 AM, Michael Banck < \n> michael.banck@credativ.de> wrote:\n\n> Ok, sure. I think this topic is hugely important and as I read the\n> patch anyway, I added some comments, but yeah, we need to figure out\n> the fundamentals first.\n\nRight.\n\nPerhaps some background on this patch series will help.  \n[...]\n\nThanks a lot!\n\n\nIf we don't go with backwards compatibility, then CREATEROLE would only\nallow you to create a new role, but not to give that role LOGIN, nor\nCREATEDB, etc.  You'd need to also have admin option on those things. 
\nTo create a role that can give those things away, you'd need to run\nsomething like:\n\nCREATE ROLE michael\n        CREATEROLE WITH ADMIN OPTION    -- can further give away\n\"createrole\"\n        CREATEDB WITH ADMIN OPTION    -- can further give away\n\"createdb\"\n        LOGIN WITH ADMIN OPTION    -- can further give away \"login\"\n        NOREPLICATION WITHOUT ADMIN OPTION    -- this would be implied\nanyway\n        NOBYPASSRLS WITHOUT ADMIN OPTION    -- this would be implied anyway\n\n\n        CONNECTION LIMIT WITH ADMIN OPTION    -- can specify connection\nlimits\n        PASSWORD WITH ADMIN OPTION    -- can specify passwords\n        VALID UNTIL WITH ADMIN OPTION    -- can specify expiration\n\nThose last three don't work for me:\n\npostgres=# CREATE ROLE admin3 VALID UNTIL WITH ADMIN OPTION;\nERROR: syntax error at or near \"WITH\"\n\npostgres=# CREATE ROLE admin3 PASSWORD WITH ADMIN OPTION;\nERROR: syntax error at or near \"WITH\"\n\npostgres=# CREATE ROLE admin3 CONNECTION LIMIT WITH ADMIN OPTION;\nERROR: syntax error at or near \"WITH\"\n\n> (I'm on the fence about the phrase \"WITH ADMIN OPTION\" vs. the phrase\n> \"WITH GRANT OPTION\".)\n> \n> Even then, when \"michael\" creates new roles, if he wants to be able\n> to further administer those roles, he needs to remember to give\n> himself ADMIN membership in that role at creation time.  After the\n> role is created, if he doesn't have ADMIN, he can't give it to\n> himself.  So, at create time, he needs to remember to do this:\n> \n> SET ROLE michael;\n> CREATE ROLE mark ADMIN michael;\n\nWhat would happen if ADMIN was implicit if michael is a non-superuser\nand there's no ADMIN in the CREATE ROLE statement? It would be\nbackwards-compatible, one could still let somebody else be ADMIN, but \nISTM a CREATEROLE role could no longer admin a role already existing\npreviously/it didn't create/got assigned admin for (e.g. the predefined\nroles).\n\nI.e. 
(responding what you wrote much further below), the CREATEROLE\nrole would no longer be ADMIN for all roles, just automatically for the\nones it created.\n\n> But that's still a bit strange, because \"ADMIN michael\" means that\n> michael can grant other roles membership in \"mark\", not that michael\n> can, for example, change mark's password.\n\nYeah, changing a password is one of the important tasks of a delegated\nrole admin, if no superusers are around.\n\n> If we don't want CREATEROLE to imply that you can mess around with\n> arbitrary roles (rather than only roles that you created or have been\n> transferred control over) then we need the concept of role\n> ownership.  This patch doesn't go that far, so for now, only\n> superusers can do those things.  Assuming some form of this patch is\n> acceptable, the v9 series will resurrect some of the pre-v7 logic for\n> role ownership and say that the owner can do those things.\n\n> > Ok, so what I would have needed to do in the above in order to have\n> > \"admin2\" and \"admin\" be the same as far as creating login users is (I\n> > believe):\n> > \n> > ALTER ROLE admin2 CREATEROLE LOGIN WITH ADMIN OPTION;\n> \n> Yes, those it's more likely admin2 would have been created with these\n> privileges to begin with, if the creator intended admin2 to do such\n> things.\n\nRight, maybe people just have to adjust to the new way. It still feels\nstrange that whatever you do at role creation time is more meaningful\nthan when altering a role. 
\n\n> \n> > By the way, is there now even a way to add admpassword to a role\n> > after it got created?\n> > \n> > postgres=# SET ROLE admin2;\n> > SET\n> > postgres=> \\password test\n> > Enter new password for user \"test\": \n> > Enter it again: \n> > ERROR:  must have admin on password to change password attribute\n> > postgres=> RESET ROLE;\n> > RESET\n> > postgres=# ALTER ROLE admin2 PASSWORD WITH ADMIN OPTION;\n> > ERROR:  syntax error at or near \"WITH\"\n> > UPDATE pg_authid SET admpassword = 't' WHERE rolname = 'admin2';\n> > UPDATE 1\n> > postgres=# SET ROLE admin2;\n> > SET\n> > postgres=> \\password test\n> > Enter new password for user \"test\": \n> > Enter it again: \n> > postgres=> \n> \n> I don't really have this worked out yet.  That's mostly because I'm\n> planning to fix it with role ownership, but perhaps there is a better\n> way?\n\nWell see above, maybe the patch is just broken/unfinished with respect\nto PASSWORD and the others? It works for REPLICATION e.g.:\n\npostgres=# ALTER ROLE admin2 REPLICATION WITH ADMIN OPTION;\nALTER ROLE\n\n> > However, the next thing is:\n> > \n> > postgres=# SET ROLE admin;\n> > SET\n> > postgres=> CREATE GROUP testgroup;\n> > CREATE ROLE\n> > postgres=> GRANT testgroup TO test;\n> > ERROR:  must have admin option on role \"testgroup\"\n> > \n> > First off, what does \"admin option\" mean on a role?\n> \n> From the docs for \"CREATE ROLE\", \n> https://www.postgresql.org/docs/14/sql-createrole.html\n> \n>   The ADMIN clause is like ROLE, but the named roles are added to the\n> new role WITH ADMIN OPTION, giving them the right to grant membership\n> in this role to others.\n\nHrm, I see; I guess I never paid attention to that part so far. 
The\nCREATEROLE thing or SUPERUSER was all I ever needed so far.\n\nAnd with that I guess I should really bow out of this thread and start\nreading from the beginning.\n\n> > I then tried this:\n> > \n> > postgres=# CREATE USER admin3 CREATEROLE WITH ADMIN OPTION;\n> > CREATE ROLE\n> > postgres=# SET ROLE admin3;\n> > SET\n> > postgres=> CREATE USER test3;\n> > CREATE ROLE\n> > postgres=> CREATE GROUP testgroup3;\n> > CREATE ROLE\n> > postgres=> GRANT testgroup3 TO test3;\n> > ERROR:  must have admin option on role \"testgroup3\"\n> > \n> > So I created both user and group, I have the CREATEROLE priv (with or\n> > without admin option), but I still can't assign the group. Is that\n> > (tracking who created a role and letting the creator do more thing) the\n> > part that got chopped away in your last patch in order to find a common\n> > ground?\n> \n> You need ADMIN on the role, not on CREATEROLE.  To add members to a\n> target role, you must have ADMIN on that target role.  To create new\n> roles with CREATEROLE privilege, you must have ADMIN on the\n> CREATEROLE privilege.\n\nRight ok. Maybe it's just me, but I feel a lot of people will need to\nlearn a lot more than they'd like to know about the ADMIN thing after\nthis patch goes in.\n\n> \n> > I\n> > feel this (next to creating users/groups) is the primary thing those\n> > CREATEROLE admins are supposed to do/where doing up to now.\n> \n> Right.  In the past, having CREATEROLE implied having ADMIN on every\n> role.  I'm intentionally breaking that.\n\n\nRight; I commented on that above.\n\n> > The way the adm* privs are now somewhere in the middle of the rol*\n> > privs also looks weird for the end-user and there does not seems to be\n> > some greater scheme behind it:\n> \n> Because they are not variable length nor nullable, they must come\n> before such fields (namely, rolpassword and rolvaliduntil).  
They\n> don't really need to come before rolconnlimit, but I liked the idea\n> of packing twelve booleans together, since with \"bool\" typedef'd to\n> unsigned char, that's twelve contiguous bytes, starting after oid (4\n> bytes) and rolname (64 bytes) and likely fitting nicely without\n> padding bytes on at least some platforms.  If I split them on either\n> side of rolconnlimit (which is 4 bytes), there'd be seven bools\n> before it and five bools after, which wouldn't pack nicely.\n\nHrm ok, but it's a user-visible column ordering, so I'm wondering\nwhether that should trump efficiency here.\n\n\nMichael\n\n-- \nMichael Banck\nTeamleiter PostgreSQL-Team\nProjektleiter\nTel.: +49 2166 9901-171\nE-Mail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Geoff Richardson, Peter Lilley\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 20:55:56 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Jan 31, 2022, at 10:50 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Supporting that through ADMIN is one option, another would be a\n> 'DROPROLE' attribute, though we'd want a way to curtail that from being\n> able to be used for just any role and that does lead down a path similar\n> to ownership or just generally the concept that some roles have certain\n> rights over certain other roles (whether you create them or not...).\n\nI've been operating under the assumption that I have a lot more freedom to create new features than to change how existing features behave, for two reasons: backwards compatibility and sql-spec compliance.\n\nChanging how having ADMIN on a role works seems problematic for both those 
reasons. My family got me socks for Christmas, not what I actually wanted, a copy of the SQL-spec. So I'm somewhat guessing here. But I believe we'd have problems if we \"fixed\" the part where a role can revoke ADMIN from others on themselves. Whatever we have, whether we call it \"ownership\", it can't be something a role can unilaterally revoke.\n\nAs for a 'DROPROLE' attribute, I don't think that gets us anywhere. You don't seem to think so, either. So that leaves us with \"ownership\", perhaps by another word? I only chose that word because it's what we use elsewhere, but if we want to call it \"managementship\" and \"manager\" or whatever, that's fine. I'm not to the point of debating the terminology just yet. I'm still trying to get the behavior nailed down.\n\n> I do think there's a lot of value in being able to segregate certain\n> rights- consider that you may want a role that's able to create other\n> roles, perhaps grant them into some set of roles, can lock those roles\n> (prevent them from logging in, maybe do a password reset, something like\n> that), but which *isn't* able to drop those roles (and all their\n> objects) as that's dangerous and mistakes can certainly happen, or be\n> able to become that role because the creating role simply doesn't have\n> any need to be able to do that (or desire to in many cases, as we\n> discussed in the landlord-vs-tenant sub-thread).\n\nI'm totally on the same page. Your argument upthread about wanting any malfeasance on the part of a service provider showing up in the audit logs was compelling. 
Even for those things the \"owner\"/\"manager\" has the rights to do, we might want to make them choose to do it explicitly and not merely do it by accident.\n\n> Naturally, you'd want *some* role to be able to drop that role (and one\n> that doesn't have full superuser access) but that might be a role that's\n> not able to create new roles or take over accounts.\n\nI think it's important to go beyond the idea of a role attribute here. It's not that role \"bob\" can drop roles. It's that \"bob\" can drop *specific* roles, and for that, there has to be some kind of dependency tracked between \"bob\" and those other roles. I'm calling that \"ownership\". I think that language isn't just arbitrary, but actually helpful (technically, not politically) because REASSIGN OWNED should treat this kind of relationship exactly the same as it treats ownership of schemas, tables, functions, etc.\n\n> Separation of concerns and powers and all of that is what we want to be\n> going for here, more generically, which is why I was opposed to the\n> blanket \"owners have all rights of all roles they own\" implementation.\n\nI'm hoping to bring back, in v9, the idea of ownership/managership. The real sticking point here is that we (Robert, Andrew, I, and possibly others) want to be able to drop in a non-superuser-creator-role into existing systems that use superuser for role management. We'd like it to be as transparent a switch as possible.\n\nWith a superuser creating a role, that superuser can come back and muck with the role afterward, and the role can't revoke the superuser's right to do so. It's not enough that a non-superuser-creator-role (henceforth, \"manager\") can grant itself ADMIN on the created role. It also needs to be able to set passwords, transfer object ownerships to/from the role, grant the role into other roles or other roles into it, etc. 
All of that has to be sandboxed such that the \"manager\" can't touch stuff outside the manager's sandbox, but within the sandbox, it shouldn't make any practical difference that the manager isn't actually a superuser.\n\nI think what I had in v7 was almost right. I'm hoping that we just need to adjust things like the idea that managers always have implicit membership in and ADMIN on roles they manage. I think that needs to be optional, and the audit logs could show if the manager granted themselves such things, as it might violate policy and be a red flag in the audit log.\n\n> That approach doesn't support the ability to have a relatively\n> unprivileged role that's able to create other roles, which seems like a\n> pretty important use-case for us to be considering.\n\nI think we have that ability. It's just that the creator role isn't \"relatively unprivileged\" vis-a-vis the created role. But that could be handled by creating the role and then transferring the ownership to some other role, or specifying in the CREATE ROLE command that the creator doesn't want those privileges, etc. That requires some tinkering with the design, though, because the permission to perform the ownership transfer to that other role would need to be circumscribed to not give away other privileges, like the right to become that other role, or the specification that the creator disavows certain privileges over the created role might need to be something the creator could get back by force with some subsequent GRANT command, or ...?\n\n> The terminology seems to also be driving us in a certain direction and I\n> don't know that it's necessarily a good one. That is- the term 'owner'\n> implies certain things and maybe that's where some of the objection to\n> my argument that owners shouldn't necessarily have all rights of the\n> roles they 'own' comes from\n\nI think it does follow pretty closely the concept of ownership of objects, though. 
So closely, in fact, that I don't really see any daylight between the two concepts.\n\n> (ok- I'll also put out there for general\n> consideration that since we're talking about roles, and login roles are\n> generally associated with people, that maybe 'owner' isn't a great term\n> to use for this anyway ...).\n\nTechnically, we're talking about roles within computers owning other roles within computers, not about people owning people. We already have a command called REASSIGN OWNED, and if we don't call this ownership, then that command gets really squirrelly. Does it also reassign \"managed\"?\n\nOn the other hand, I'm not looking to create offense, so if this language seems unacceptable, perhaps you could propose something else?\n\n> I feel like the 'owner' concept came from\n> the way we have table owners and function owners and database owners\n> today rather than from a starting point of what do we wish to\n> specifically enable.\n\nLet's compare this to the idea of owning a table. Can the owner of a table revoke SELECT from themselves? Yes, they can. They can also give it back to themselves:\n\nCREATE ROLE michael;\nSET ROLE michael;\nCREATE TABLE michael_table (i INTEGER);\nREVOKE SELECT ON michael_table FROM PUBLIC, michael;\nSELECT * FROM michael_table;\nERROR: permission denied for table michael_table\nGRANT SELECT ON michael_table TO michael;\nSELECT * FROM michael_table;\n i \n---\n(0 rows)\n\nSo I'm curious if we can have the same idea for ADMIN of a role? The owner can revoke the role from themselves, and they can also grant it back. 
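In SQL terms, the role-level analogue of that revoke-and-regrant cycle might look something like the sketch below. This is purely illustrative: the role names "creator" and "mark" are invented here, and whether the final GRANT should succeed after the self-revoke is exactly the open design question, not settled behavior.

```sql
-- Hypothetical sketch; "creator" and "mark" are invented names.
CREATE ROLE creator CREATEROLE;
CREATE ROLE mark ADMIN creator;            -- creator starts out with ADMIN OPTION on mark

SET ROLE creator;
REVOKE ADMIN OPTION FOR mark FROM creator; -- the "owner" revokes ADMIN from themselves

-- Under the proposal being floated, the owner could later grant it back,
-- just as a table owner can re-grant SELECT on their own table to themselves:
GRANT mark TO creator WITH ADMIN OPTION;
```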
Would that be acceptable?\n\n> Perhaps instead of starting from the 'owner' concept, we start from the\n> question about the kinds of things we want roles to be able to do and\n> perhaps that will help inform the terminology.\n> \n> - Create new roles\n> - Drop an existing role\n> - Drop objects which belong to a role\n> - Lock existing roles\n> - Change/reset the PW of existing roles\n> - Give roles to other roles\n> - Revoke access to some roles from other roles\n> - Give select role attributes to a role\n> - Revoke role attributes from a role\n> - Traditional role-based access control (group memberships, SET ROLE)\n\nI agree we want the ability to do these things, and not as a single CREATEROLE privilege, but separable. The pre-v8 patch separated out only one of them, namely who the role owner was, but v8 is attempting to separate these further, and I think that's the right way to go.\n\n> Certain of the above are already covered by the existing role membership\n> system and with the admin option, though there's definitely an argument\n> to be made as to if that is as capable as we'd like it to be (there's no\n> way to, today at least, GRANT *just* the admin option, for example, and\n> maybe that's something that it would actually be sensible to support).\n\nI think the ADMIN stuff *would* be the way to go, but for its weird self-administration feature. That to me seems to kill the idea. What do you think?\n\n> Perhaps there is a need to have a user who has all of the above\n> capabilities and maybe that would be an 'owner' or 'manager', but as I\n> tried to illustrate above, there's definitely use-cases for giving\n> a role only some of the above capabilities rather than all of them\n> together at once.\n\nI'm using the terms \"owner\"/\"manager\" without regard for whether they have all those abilities or just some of them. However, I think these terms don't apply for just the traditional ADMIN option on the role. 
In that case, calling it \"ownership\" or \"managership\" is inappropriate.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 13:09:44 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 1/31/22 12:18, Mark Dilger wrote:\n>\n>> On Jan 31, 2022, at 12:43 AM, Michael Banck <michael.banck@credativ.de> wrote:\n>> Ok, sure. I think this topic is hugely important and as I read the\n>> patch anyway, I added some comments, but yeah, we need to figure out\n>> the fundamentals first.\n> Right.\n>\n> Perhaps some background on this patch series will help. The patch versions before v8 were creating an owner-owned relationship between the creator and the createe, and a lot of privileges were dependent on that ownership. Stephen objected that we were creating parallel tracks on which the privilege system was running; things like belonging to a role or having admin on a role were partially conflated with owning a role. He also objected that the pre-v8 patch sets allowed a creator role with the CREATEROLE privilege to give away any privilege the creator had, rather than needing to have GRANT or ADMIN option on the privilege being given.\n>\n> The v8-WIP patch is not a complete replacement for the pre-v8 patches. It's just a balloon I'm floating to try out candidate solutions to some of Stephen's objections. In the long run, I want the solution to Stephen's objections to not create problems for anybody who liked the way the pre-v8 patches worked (Robert, Andrew, and to some extent me.)\n>\n> In this WIP patch, for a creator to give *anything* away to a createe, the creator must have GRANT or ADMIN on the thing being given. 
That includes attributes like BYPASSRLS, CREATEDB, LOGIN, etc., and also ADMIN on any role the createe is granted into.\n>\n> I tried to structure things for backwards compatibility, considering which things roles with CREATEROLE could give away historically. It turns out they can give away most everything, but not SUPERUSER, BYPASSRLS, or REPLICATION. So I structured the default privileges for CREATEROLE to match. But I'm uncertain that design is any good, and your comments below suggest that you find it pretty hard to use.\n>\n> Part of the problem with trying to be backwards compatible is that we must break compatibility anyway, to address the problem that historically having CREATEROLE meant you effectively had ADMIN on all non-superuser roles. That's got to change. So in part I'm asking pgsql-hackers if partial backwards compatibility is worth the bother.\n>\n> If we don't go with backwards compatibility, then CREATEROLE would only allow you to create a new role, but not to give that role LOGIN, nor CREATEDB, etc. You'd need to also have admin option on those things. To create a role that can give those things away, you'd need to run something like:\n>\n> CREATE ROLE michael\n> \tCREATEROLE WITH ADMIN OPTION -- can further give away \"createrole\"\n> \tCREATEDB WITH ADMIN OPTION -- can further give away \"createdb\"\n> \tLOGIN WITH ADMIN OPTION -- can further give away \"login\"\n> \tNOREPLICATION WITHOUT ADMIN OPTION -- this would be implied anyway\n> \tNOBYPASSRLS WITHOUT ADMIN OPTION -- this would be implied anyway\n> \tCONNECTION LIMIT WITH ADMIN OPTION -- can specify connection limits\n> \tPASSWORD WITH ADMIN OPTION -- can specify passwords\n> \tVALID UNTIL WITH ADMIN OPTION -- can specify expiration\n>\n> (I'm on the fence about the phrase \"WITH ADMIN OPTION\" vs. 
the phrase \"WITH GRANT OPTION\".)\n>\n> Even then, when \"michael\" creates new roles, if he wants to be able to further administer those roles, he needs to remember to give himself ADMIN membership in that role at creation time. After the role is created, if he doesn't have ADMIN, he can't give it to himself. So, at create time, he needs to remember to do this:\n>\n> SET ROLE michael;\n> CREATE ROLE mark ADMIN michael;\n>\n> But that's still a bit strange, because \"ADMIN michael\" means that michael can grant other roles membership in \"mark\", not that michael can, for example, change mark's password. If we don't want CREATEROLE to imply that you can mess around with arbitrary roles (rather than only roles that you created or have been transferred control over) then we need the concept of role ownership. This patch doesn't go that far, so for now, only superusers can do those things. Assuming some form of this patch is acceptable, the v9 series will resurrect some of the pre-v7 logic for role ownership and say that the owner can do those things.\n>\n\nThis seems complicated. Maybe the previous proposal was too simple, but\nsimplicity has some virtues. It seemed to me that more complex rules\ncould possibly have been implemented for those who really needed them by\nusing SECURITY DEFINER functions. The whole 'NOFOO WITH ADMIN OPTION'\nthing seems to me a bit like a POLA violation. Nevertheless I can\nprobably live with it as long as it's *really* well documented. 
Even so\nI suspect it would be too complex for many, and they will just continue\nto use superusers to create and manage roles if possible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 1 Feb 2022 16:10:13 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Feb 1, 2022, at 1:10 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> The whole 'NOFOO WITH ADMIN OPTION'\n> thing seems to me a bit like a POLA violation. Nevertheless I can\n> probably live with it as long as it's *really* well documented. Even so\n> I suspect it would be too complex for many, and they will just continue\n> to use superusers to create and manage roles if possible.\n\nI agree with the sentiment, but it might help to distinguish between surprising behavior vs. surprising grammar.\n\nIn existing postgresql releases, having CREATEROLE means you can give away most attributes, including ones you yourself don't have (createdb, login). So we already have the concept of NOFOO WITH ADMIN OPTION, we just don't call it that. In pre-v8 patches on this thread, I got rid of that; you *must* have the attribute to give it away. But maybe that was too restrictive, and we need a way to specify, attribute by attribute, how this works. Is this just a problem of surprising grammar? Is it surprising behavior? If the latter, I'm inclined to give up this WIP as having been a bad move. 
If the former, I'll try to propose some less objectionable grammar.\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 1 Feb 2022 14:27:35 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\nOn 2/1/22 17:27, Mark Dilger wrote:\n>\n>> On Feb 1, 2022, at 1:10 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> The whole 'NOFOO WITH ADMIN OPTION'\n>> thing seems to me a bit like a POLA violation. Nevertheless I can\n>> probably live with it as long as it's *really* well documented. Even so\n>> I suspect it would be too complex for many, and they will just continue\n>> to use superusers to create and manage roles if possible.\n> I agree with the sentiment, but it might help to distinguish between surprising behavior vs. surprising grammar.\n>\n> In existing postgresql releases, having CREATEROLE means you can give away most attributes, including ones you yourself don't have (createdb, login). So we already have the concept of NOFOO WITH ADMIN OPTION, we just don't call it that. In pre-v8 patches on this thread, I got rid of that; you *must* have the attribute to give it away. But maybe that was too restrictive, and we need a way to specify, attribute by attribute, how this works. Is this just a problem of surprising grammar? Is it surprising behavior? If the latter, I'm inclined to give up this WIP as having been a bad move. If the former, I'll try to propose some less objectionable grammar.\n> \n>\n\nCertainly the grammar would need to be better. But I'm not sure any\ngrammar that expresses what is supported here is not going to be\nconfusing, because the underlying scheme seems complex. But I'm\npersuadable. 
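For reference, the current-release behavior described above — CREATEROLE letting a role hand out attributes it doesn't itself have, but not the superuser-adjacent ones — can be sketched roughly like this (a sketch only; the role names are invented, and the first statement is assumed to run as a superuser):

```sql
CREATE ROLE creator CREATEROLE NOCREATEDB NOLOGIN; -- run as superuser

SET ROLE creator;
CREATE ROLE newbie LOGIN CREATEDB;   -- allowed today, even though creator lacks both
CREATE ROLE boss SUPERUSER;          -- fails: only a superuser can give away SUPERUSER
-- REPLICATION and BYPASSRLS are likewise off-limits to a mere CREATEROLE role.
```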
I'd like to hear from others on the subject.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 1 Feb 2022 18:38:49 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Tue, Feb 1, 2022 at 6:38 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > In existing postgresql releases, having CREATEROLE means you can give away most attributes, including ones you yourself don't have (createdb, login). So we already have the concept of NOFOO WITH ADMIN OPTION, we just don't call it that. In pre-v8 patches on this thread, I got rid of that; you *must* have the attribute to give it away. But maybe that was too restrictive, and we need a way to specify, attribute by attribute, how this works. Is this just a problem of surprising grammar? Is it surprising behavior? If the latter, I'm inclined to give up this WIP as having been a bad move. If the former, I'll try to propose some less objectionable grammar.\n> >\n>\n> Certainly the grammar would need to be better. But I'm not sure any\n> grammar that expresses what is supported here is not going to be\n> confusing, because the underlying scheme seems complex. But I'm\n> persuadable. I'd like to hear from others on the subject.\n\nWell, we've been moving more and more in the direction of using\npredefined roles to manage access. The things that are basically\nBoolean flags on the role are mostly legacy stuff. So my tentative\nopinion (and I'm susceptible to being persuaded that I'm wrong here)\nis that putting a lot of work into fleshing out that infrastructure\ndoes not necessarily make a ton of sense. Are we ever going to add\neven one more flag that works that way?\n\nAlso, any account that can create roles is a pretty high-privilege\naccount. Maybe it's superuser, or maybe not, but it's certainly\npowerful. 
In my opinion, that makes fine distinctions here less\nimportant. Is there really an argument for saying \"well, we're going\nto let you bypass RLS, but we're not going to let you give that\nprivilege to others\"? It seems contrived to think of restricting a\nrole that is powerful enough to create whole new accounts in such a\nway. I'm not saying that someone couldn't have a use case for it, but\nI think it'd be a pretty thin use case.\n\nIn short, I think it makes tons of sense to say that CREATEROLE lets\nyou give to others those role flags which you have, but not the ones\nyou lack. However, to me, it feels like overengineering to distinguish\nbetween things you have and can give away, things you have and can't\ngive away, and things you don't even have.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 11:45:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jan 31, 2022, at 10:50 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > Supporting that through ADMIN is one option, another would be a\n> > 'DROPROLE' attribute, though we'd want a way to curtail that from being\n> > able to be used for just any role and that does lead down a path similar\n> > to ownership or just generally the concept that some roles have certain\n> > rights over certain other roles (whether you create them or not...).\n> \n> I've been operating under the assumption that I have a lot more freedom to create new features than to change how existing features behave, for two reasons: backwards compatibility and sql-spec compliance.\n\nI agree that those are concerns that need to be considered, though I'm\nmore concerned about the SQL compliance and less about backwards\ncompatibility in this case. 
For one thing, I'm afraid that we're not as\ncompliant as we really should be and that should really drive us to make\nchanges here anyway, to get closer to what the spec calls for.\n\n> Changing how having ADMIN on a role works seems problematic for both those reasons. My family got me socks for Christmas, not what I actually wanted, a copy of the SQL-spec. So I'm somewhat guessing here. But I believe we'd have problems if we \"fixed\" the part where a role can revoke ADMIN from others on themselves. Whatever we have, whether we call it \"ownership\", it can't be something a role can unilaterally revoke.\n> \n> As for a 'DROPROLE' attribute, I don't think that gets us anywhere. You don't seem to think so, either. So that leaves us with \"ownership\", perhaps by another word? I only chose that word because it's what we use elsewhere, but if we want to call it \"managementship\" and \"manager\" or whatever, that's fine. I'm not to the point of debating the terminology just yet. I'm still trying to get the behavior nailed down.\n\nYeah, didn't mean to imply that those were great ideas or that I was\nparticularly advocating for them, but just to bring up some other ideas\nto try and get more thought going into this.\n\n> > I do think there's a lot of value in being able to segregate certain\n> > rights- consider that you may want a role that's able to create other\n> > roles, perhaps grant them into some set of roles, can lock those roles\n> > (prevent them from logging in, maybe do a password reset, something like\n> > that), but which *isn't* able to drop those roles (and all their\n> > objects) as that's dangerous and mistakes can certainly happen, or be\n> > able to become that role because the creating role simply doesn't have\n> > any need to be able to do that (or desire to in many cases, as we\n> > discussed in the landlord-vs-tenant sub-thread).\n> \n> I'm totally on the same page. 
Your argument upthread about wanting any malfeasance on the part of a service provider showing up in the audit logs was compelling. Even for those things the \"owner\"/\"manager\" has the rights to do, we might want to make them choose to do it explicitly and not merely do it by accident.\n\nGlad to hear that.\n\n> > Naturally, you'd want *some* role to be able to drop that role (and one\n> > that doesn't have full superuser access) but that might be a role that's\n> > not able to create new roles or take over accounts.\n> \n> I think it's important to go beyond the idea of a role attribute here. It's not that role \"bob\" can drop roles. It's that \"bob\" can drop *specific* roles, and for that, there has to be some kind of dependency tracked between \"bob\" and those other roles. I'm calling that \"ownership\". I think that language isn't just arbitrary, but actually helpful (technically, not politically) because REASSIGN OWNED should treat this kind of relationship exactly the same as it treats ownership of schemas, tables, functions, etc.\n\nI agree that role attributes aren't a good approach and that we should be\nmoving away from them.\n\nI'm less sure that the existence of REASSIGN OWNED for schemas and\ntables and such should be the driver for what this capability of one\nrole being able to drop another role needs to be called.\n\n> > Separation of concerns and powers and all of that is what we want to be\n> > going for here, more generically, which is why I was opposed to the\n> > blanket \"owners have all rights of all roles they own\" implementation.\n> \n> I'm hoping to bring back, in v9, the idea of ownership/managership. The real sticking point here is that we (Robert, Andrew, I, and possibly others) want to be able to drop in a non-superuser-creator-role into existing systems that use superuser for role management. 
We'd like it to be as transparent a switch as possible.\n\nThat description itself really makes me wonder about the sense of what\nwas proposed. Specifically \"existing systems that use superuser for\nrole management\" doesn't make me picture a system where this manager\nrole has any need to run SELECT statements against the tables created by\nthe role that it created- SELECT'ing data from tables isn't in the\npurview of 'role management' (and before someone complains that pg_dump\nis part of this, I wouldn't call running pg_dump role management but\nrather data export or, used very loosely, 'backup'). To that end, I push\nback with: what exactly is this existing superuser doing that's role\nmanagement? The specific use-case, not just 'role management'. What\nJoshua outlined was a reasonably defined use-case and that's what I'm\ntrying to get at here.\n\n> With a superuser creating a role, that superuser can come back and muck with the role afterward, and the role can't revoke the superuser's right to do so. It's not enough that a non-superuser-creator-role (henceforth, \"manager\") can grant itself ADMIN on the created role. It also needs to be able to set passwords, transfer object ownerships to/from the role, grant the role into other roles or other roles into it, etc. All of that has to be sandboxed such that the \"manager\" can't touch stuff outside the manager's sandbox, but within the sandbox, it shouldn't make any practical difference that the manager isn't actually a superuser.\n\nI appreciate that there needs to be a role who has certain rights over\nother roles and that those rights can't be revoked by the role. The\nright to grant ADMIN on a created role is, itself, a right and what I was\nsuggesting is that it could be one that the created role isn't able to\nrevoke. I disagree that the only possible role that could create some\nother role must necessarily be able to be essentially superuser when it\ncomes to that created role. 
I pointed out exactly the use-case where\nthat isn't the case and nothing here has said anything to refute the\nexistence of that use-case but seems to instead just focus on this idea\nthat we must have a 'mini superuser'.\n\n> I think what I had in v7 was almost right. I'm hoping that we just need to adjust things like the idea that managers always have implicit membership in and ADMIN on roles they manage. I think that needs to be optional, and the audit logs could show if the manager granted themselves such things, as it might violate policy and be a red flag in the audit log.\n\nI'd like to also move in a direction where implicit membership in and\nADMIN rights on the role is optional and potentially not even something\nthat the creating role is able to grant themselves- though *some* role\nwould need that ability and, ideally, it would be one that can be\ngranted out individually without being a full superuser.\n\n> > That approach doesn't support the ability to have a relatively\n> > unprivileged role that's able to create other roles, which seems like a\n> > pretty important use-case for us to be considering.\n> \n> I think we have that ability. It's just that the creator role isn't \"relatively unprivileged\" vis-a-vis the created role. But that could be handled by creating the role and then transferring the ownership to some other role, or specifying in the CREATE ROLE command that the creator doesn't want those privileges, etc. 
That requires some tinkering with the design, though, because the permission to perform the ownership transfer to that other role would need to be circumscribed to not give away other privileges, like the right to become that other role, or the specification that the creator disavows certain privileges over the created role might need to be something the creator could get back by force with some subsequent GRANT command, or ...?\n\nIf it's not relatively unprivileged regarding the created role then it's\nnot the ability which I outlined and therefore doesn't solve the\ndescribed use-case. I don't really feel that making it possible for the\ncreating role to give up those rights actually solves for the attack\nvector that is someone gaining access to the creating role's access,\nwhich is what we're talking about trying to address by having a separate\nrole whose only ability is to create roles which it then isn't able to\nbecome or otherwise impact.\n\n> > The terminology seems to also be driving us in a certain direction and I\n> > don't know that it's necessarily a good one. That is- the term 'owner'\n> > implies certain things and maybe that's where some of the objection to\n> > my argument that owners shouldn't necessarily have all rights of the\n> > roles they 'own' comes from\n> \n> I think it does follow pretty closely the concept of ownership of objects, though. So closely, in fact, that I don't really see any daylight between the two concepts.\n\nExcept that at least in the case we're contemplating, it's not desired\nfor the creator of the role to have absolute authority over the created\nrole. That's a pretty big difference between roles and objects. 
That\nwe aren't seeing the distinction here is part of what I'm getting at\nwith the above paragraph.\n\n> > (ok- I'll also put out there for general\n> > consideration that since we're talking about roles, and login roles are\n> > generally associated with people, that maybe 'owner' isn't a great term\n> > to use for this anyway ...).\n> \n> Technically, we're talking about roles within computers owning other roles within computers, not about people owning people. We already have a command called REASSIGN OWNED, and if we don't call this ownership, then that command gets really squirrelly. Does it also reassign \"managed\"?\n\nTechnically we were talking about PostgreSQL clusters that are just data\nfiles and processes within computers when it came to primaries and\nreplicas, but other terms were used previously and we generally agreed\nthat we should probably move away from those terms. Today REASSIGN\nOWNED only talks about tables and views and other things which are\nquite distinct from individuals.\n\n> On the other hand, I'm not looking to create offense, so if this language seems unacceptable, perhaps you could propose something else?\n\nManager might be one, but as I try to get at below, what I'm thinking\nabout is a set of privileges that roles have and there isn't a concept\nof \"owner\" or \"manager\" but rather \"role X has S, T, V privileges on\nroles A, B, C\". Conceptually perhaps we can consider a role that has\nALL privileges over another role to be that role's 'manager' or 'owner'\nbut we don't really even need to go into that once we've broken down the\nprivileges.\n\n> > I feel like the 'owner' concept came from\n> > the way we have table owners and function owners and database owners\n> > today rather than from a starting point of what do we wish to\n> > specifically enable.\n> \n> Let's compare this to the idea of owning a table. Can the owner of a table revoke SELECT from themselves? Yes, they can. 
They can also give it back to themselves:\n> \n> CREATE ROLE michael;\n> SET ROLE michael;\n> CREATE TABLE michael_table (i INTEGER);\n> REVOKE SELECT ON michael_table FROM PUBLIC, michael;\n> SELECT * FROM michael_table;\n> ERROR: permission denied for table michael_table\n> GRANT SELECT ON michael_table TO michael;\n> SELECT * FROM michael_table;\n> i \n> ---\n> (0 rows)\n> \n> So I'm curious if we can have the same idea for ADMIN of a role? The owner can revoke the role from themselves, and they can also grant it back. Would that be acceptable?\n\nThat might be acceptable for the ADMIN privilege of a role itself though\nI'm not sure if that's really all that distinct from ADMIN.\n\n> > Perhaps instead of starting from the 'owner' concept, we start from the\n> > question about the kinds of things we want roles to be able to do and\n> > perhaps that will help inform the terminology.\n> > \n> > - Create new roles\n> > - Drop an existing role\n> > - Drop objects which belong to a role\n> > - Lock existing roles\n> > - Change/reset the PW of existing roles\n> > - Give roles to other roles\n> > - Revoke access to some roles from other roles\n> > - Give select role attributes to a role\n> > - Revoke role attributes from a role\n> > - Traditional role-based access control (group memberships, SET ROLE)\n> \n> I agree we want the ability to do these things, and not as a single CREATEROLE privilege, but separable. The pre-v8 patch was separating only one who the role owner was, but v8 is attempting to separate these further, and I think that's the right way to go.\n\nRight, these should be separable, and I don't mean just the role\nattributes but rather the above as distinct privileges. 
Whereby a role\ncould have the right to create other roles but *not* have the right to\ndrop roles (either the one they created, or perhaps any others, or maybe\neven to have some distinct set of roles that they're able to drop that's\ndifferent from the roles they created), as an example.\n\n> > Certain of the above are already covered by the existing role membership\n> > system and with the admin option, though there's definitely an argument\n> > to be made as to if that is as capable as we'd like it to be (there's no\n> > way to, today at least, GRANT *just* the admin option, for example, and\n> > maybe that's something that it would actually be sensible to support).\n> \n> I think the ADMIN stuff *would* be the way to go, but for its weird self-administration feature. That to me seems to kill the idea. What do you think?\n\nI don't think the self-administration stuff that we have for role ADMIN\nrights is actually sensible and, while I know it's a backwards\ncompatibility break, it's something we should fix.\n\n> > Perhaps there is a need to have a user who has all of the above\n> > capabilities and maybe that would be an 'owner' or 'manager', but as I\n> > tried to illustrate above, there's definitely use-cases for giving\n> > a role only some of the above capabilities rather than all of them\n> > together at once.\n> \n> I'm using the terms \"owner\"/\"manager\" without regard for whether they have all those abilities or just some of them. However, I think these terms don't apply for just the traditional ADMIN option on the role. In that case, calling it \"ownership\" or \"managership\" is inappropriate.\n\nI don't think it's sensible to have one term that means \"all\" and then\nuse that same term to also mean \"only some\". 
That strikes me as\nconfusing and I don't know that we need to even have an explicit name\nfor the role that has 'all' of the rights or that we need to provide a\nname for one that only has 'some' of them- they're just roles that have\ncertain privileges. The question that we need to solve is how to give\nusers the ability to choose what roles have which of the privileges that\nwe've outlined above and agreed should be separable.\n\nThanks,\n\nStephen", "msg_date": "Wed, 2 Feb 2022 14:52:07 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "\n\n> On Feb 2, 2022, at 11:52 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> The question that we need to solve is how to give\n> users the ability to choose what roles have which of the privileges that\n> we've outlined above and agreed should be separable.\n\nOk, there are really two different things going on here, and the conversation keeps conflating them. Maybe I'm wrong, but I think the conflation of these things is the primary problem preventing us from finishing up the design.\n\nThing 1: The superuser needs to be able to create roles who can create other roles. Let's call them \"creators\". Not every organization will want the same level of privilege to be given to a creator, or even that all creators have equal levels of privilege. So when the superuser creates a creator, the superuser needs to be able to configure exactly what that creator can do. This includes which attributes the creator can give to new roles. It *might* include whether the creator maintains a dependency link with the created role, called \"ownership\" or somesuch. It *might* include whether the creator can create roles into which the creator is granted membership/administership. But there really isn't any reason that these things should be all-or-nothing. 
Maybe one creator maintains a dependency link with created roles, and that dependency link entails some privileges. Maybe other creators do not maintain such a link. It seems like superuser can define a creator in many different ways, as long as we nail down what those ways are, and what they mean.\n\nThing 2: The creator needs to be able to specify which attributes and role memberships are set up with for roles the creator creates. To the extent that the creator has been granted the privilege to create yet more creators, this recurses to Thing 1. But not all creators will have that ability.\n\n\nI think the conversation gets off topic and disagreement abounds when Thing 1 is assumed to be hardcoded, leaving just the details of Thing 2 to be discussed.\n\nIt's perfectly reasonable (in my mind) that Robert, acting as superuser, may want to create a creator who acts like a superuser over the sandbox, while at the same time Stephen, acting as superuser, may want to create a creator who acts as a low privileged bot that only adds and removes roles, but cannot read their tables, SET ROLE to them, etc.\n\nI don't see any reason that Robert and Stephen can't both get what they want. We just have to make Thing 1 flexible enough.\n\nDo you agree at least with this much? If so, I think we can hammer out what to do about Thing 1 and get something committed in time for postgres 15. 
If not, then I'm probably going to stop working on this until next year, because at this point, we don't have enough time to finish.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 12:23:26 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Wed, Feb 2, 2022 at 3:23 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> It's perfectly reasonable (in my mind) that Robert, acting as superuser, may want to create a creator who acts like a superuser over the sandbox, while at the same time Stephen, acting as superuser, may want to create a creator who acts as a low privileged bot that only adds and removes roles, but cannot read their tables, SET ROLE to them, etc.\n>\n> I don't see any reason that Robert and Stephen can't both get what they want. We just have to make Thing 1 flexible enough.\n\nHmm, that would be fine with me. I don't mind a bit if other people\nget what they want, as long as I can get what I want, too! In fact,\nI'd prefer it if other people also get what they want...\n\nThat having been said, I have some reservations if it involves tightly\ncoupling new features that we're trying to add to existing things that\nmay or may not be that well designed, like the role-level INHERIT\nflag, or WITH ADMIN OPTION, or the not-properly maintained\npg_auth_members.grantor column, or even the SQL standard. I'm not\nsaying we should ignore any of those things and I don't think that we\nshould ... but at the same time, we can't lose sight of whether the feature\ndoes what people want it to do, either. If we do, this whole thing is\nreally a complete waste of time. 
If a patch achieves infinitely large\namounts of backward compatibility, standards compliance, and\nconformity with existing design but doesn't do the right stuff, forget\nit!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 15:49:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "I'm chiming in a little late here, but as someone who worked on a\nsystem to basically work around the lack of unprivileged CREATE ROLE\nfor a cloud provider (I worked on the Heroku Data team for several\nyears), I thought it might be useful to offer my perspective. This is,\nof course, not the only use case, but maybe it's useful to have\nsomething concrete. As a caveat, I don't know how current this still\nis (I no longer work there, though the docs [1] seem to still describe\nthe same system), or if there are better ways to achieve the goals of\na service provider.\n\nBroadly, the general use case is something like what Robert has\nsketched out in his e-mails. Heroku took care of setting up the\ndatabase, archiving, replication, and any other system-level config.\nIt would then keep the superuser credentials private, create a\ndatabase, and a user that owned that database and had all the\npermissions we could grant it without compromising the integrity of\nthe system. (We did not want customers to break their databases, both\nto ensure a better user experience and to avoid getting paged.)\nInitially, this meant customers got just the one database user because\nof CREATE ROLE's limitations.\n\nTo work around that, at some point, we added an API that would CREATE\nROLE for you, accessible through a dashboard and the Heroku CLI. This\nran CREATE ROLE (or DROP ROLE) for you, but otherwise it largely let\nyou configure the resulting roles as you pleased (using the original\nrole we create for you). 
We wanted to avoid reinventing the wheel as\nmuch as possible, and the customer database (including the role\nconfiguration) was mostly a black box for us (we did manage some\npredefined permissions configurations through our dashboard, but the\nPostgres catalogs were the source of truth for that).\n\nThinking about how this would fit into a potential non-superuser\nCREATE ROLE world, the sandbox superuser model discussed above covers\nthis pretty well, though I share some of Robert's concerns around how\nthis fits into existing systems.\n\nHope this is useful feedback. Thanks for working on this!\n\n[1]: https://devcenter.heroku.com/articles/heroku-postgresql-credentials\n\n\n", "msg_date": "Thu, 3 Feb 2022 21:38:39 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Jan 31, 2022 at 1:57 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n> This is precisely the use case I am trying to accomplish with this\n> patchset, roughly:\n>\n> - An automated bot that creates users and adds them to the employees role\n> - Bot cannot access any employee (or other roles) table data\n> - Bot cannot become any employee\n> - Bot can disable the login of any employee\n>\n> Yes there are attack surfaces around the fringes of login, etc but\n> those can be mitigated with certificate authentication. My pg_hba\n> would require any role in the employees role to use cert auth.\n>\n> This would adequately mitigate many threats while greatly enhancing\n> user management.\n\nSo, where do we go from here?\n\nI've been thinking about this comment a bit. On the one hand, I have\nsome reservations about the phrase \"the use case I am trying to\naccomplish with this patchset,\" because in the end, this is not your\npatch set. 
It's not reasonable to complain that a patch someone else\nwrote doesn't solve your problem; of course everyone writes patches to\nsolve their own problems, or those of their employer, not other\npeople's problems. And that's as it should be, else we will have few\ncontributors. On the other hand, to the extent that this patch set\nmakes things worse for a reasonable use case which you have in mind,\nthat's an entirely legitimate complaint.\n\nAfter a bit of testing, it seems to me that as things stand today,\nthings are nearly perfect for the use case that you have in mind. I\nwould be interested to know whether you agree. If I set up an account\nand give it CREATEROLE, it can create users, and it can put them into\nthe employees role, but it can't SET ROLE to any of those accounts. It\ncan also ALTER ROLE ... NOLOGIN on any of those accounts. The only gap\nI see is that there are certain role-based flags which the CREATEROLE\naccount cannot set: SUPERUSER, BYPASSRLS, REPLICATION. You might\nprefer a system where your bot account had the option to grant those\nprivileges also, and I think that's a reasonable thing to want.\n\nHowever, I *also* think it's reasonable to want an account that can\ncreate roles but can't give to those roles membership in roles that it\ndoes not itself possess. Likewise, I think it's reasonable to want an\naccount that can only drop roles which that account itself created.\nThese kinds of requirements stem from a different use case than what\nyou are talking about here, but they seem like fine things to want,\nand as far as I know we have pretty broad agreement that they are\nreasonable. It seems extremely difficult to make a convincing argument\nthat this is not a thing which anyone should want to block:\n\nrhaas=> create role bob role pg_execute_server_program;\nCREATE ROLE\n\nHonestly, that seems like a big yikes from here. How is it OK to block\n\"create role bob superuser\" yet allow that command? 
I'm inclined to\nthink that's just broken. Even if the role were pg_write_all_data\nrather than pg_execute_server_program, it's still a heck of a lot of\npower to be handing out, and I don't see how anyone could make a\nserious argument that we shouldn't have an option to restrict that.\n\nLet me separate the two features that I just mentioned and talk about\nthem individually:\n\n1. Don't allow a CREATEROLE user to give out membership in groups\nwhich that user does not possess. Leaving aside the details of any\npreviously-proposed patches and just speaking theoretically, how can\nthis be implemented? I can think of a few ideas. We could (1A) just\nchange CREATEROLE to work that way, but IIUC that would break the use\ncase you outline here, so I guess that's off the table unless I am\nmisunderstanding the situation. We could also (1B) add a second role\nattribute with a different name, like, err, CREATEROLEWEAKLY, that\nbehaves in that way, leaving the existing one untouched. But we could\nalso take it a lot further, because someone might want to let an\naccount hand out a set of privileges which corresponds neither to the\nprivileges of that account nor to the full set of available\nprivileges. That leads to another idea: (1C) implement an in-database\nsystem that lets you specify which privileges an account has, and,\nseparately, which ones it can assign to others. I am skeptical of that\nidea because it seems really, really complicated, not only from an\nimplementation standpoint but even just from a user-experience\nstandpoint. Suppose user 'userbot' has rights to grant a suitable set\nof groups to the new users that it creates -- but then someone creates\na new group. Should that also be added to the things 'userbot' can\ngrant or not? What if we have 'userbot1' through 'userbot6' and each\nof them can grant a different set of roles? 
I wouldn't mind (1D)\nproviding a hook that allows the system administrator to install a\nloadable module that can enforce any rules it likes, but it seems way\ntoo complicated to me to expose all of this configuration as SQL,\nespecially because for what I want to do, either (1A) or (1B) is\nadequate, and (1B) is a LOT simpler than (1C). It also caters to what\nI believe to be a common thing to want, without prejudice to the\npossibility that other people want other things.\n\nJoshua, what is your opinion on this point?\n\n2. Only allow a CREATEROLE user to drop users which that account\ncreated, and not just any role that isn't a superuser. Again leaving\naside previous proposals, this cannot be implemented without providing\nsome means by which we know which CREATEROLE user created which other\nuser. I believe there are a variety of words we could use to describe\nthat linkage, and I don't deeply care which ones we pick, although I\nhave my own preferences. We could speak of the CREATEROLE user being\nthe owner, manager, or administrator of the created role. We could\nspeak of a new kind of object, a TENANT, of which the CREATEROLE user\nis the administrator and to which the created user is linked. I\nproposed this previously and it's still my favorite idea. There are no\ndoubt other options as well. But it's axiomatic that we cannot\nrestrict the rights of a CREATEROLE user to drop other roles to a\nsubset of roles without having some way to define which subset is at\nissue.\n\nNow, my motivation for wanting this feature is pretty simple: I want\nto have something that feels like a superuser but isn't a full\nsuperuser, and can't interfere with accounts set up by the service\nprovider, but can do whatever they want to the other ones. 
But I think\nthis is potentially useful in the userbot case that you (Joshua)\nmention as well, because it seems like it could be pretty desirable to\nhave a certain list of users which the userbot can't remove, just for\nsafety, either to limit the damage if somebody gets into that account,\nor just to keep the bot from going nuts and doing something it\nshouldn't in the event of a programming error. Now, if you DON'T care\nabout the userbot being able to access this functionality, that's fine\nwith me, because then there's nothing left to do but argue about what\nto call the linkage between the CREATEROLE user and the created user.\nYour userbot need not participate in whatever system we decide on, and\nthings are no worse for that use case than they are today.\n\nBut if you DO want the userbot to be able to access that\nfunctionality, then things are more complicated, because now the\nlinkage has to be special-purpose. In that scenario, we can't say that\nthe right of a CREATEROLE user to drop a certain other role implies\nhaving the privileges of that other role, because in your use case,\nyou don't want that, whereas in mine, I do. What makes this\nparticularly ugly is that we can't, as things currently stand, use a\nrole as the grouping mechanism, because of the fact that a role can\nrevoke membership in itself from some other role. It will not do for\nroles to remove themselves from the set of roles that the CREATEROLE\nuser can drop. If we changed that behavior, then perhaps we could just\ndefine a way to say that role X can drop roles if they are members of\ngroup G. In my tenant scenario, G would be granted to X, and in your\nuserbot scenario, it wouldn't. Everybody wins, except for any people\nwho like the ability of roles to revoke themselves from any group\nwhatsoever.\n\nSo that leads to these questions: (2A) Do you care about restricting\nwhich roles the userbot can drop? 
(2B) If yes, do you endorse\nrestricting the ability of roles to revoke themselves from other\nroles?\n\nI think that we don't have any great problems here, at least as far as\nthis very specific issue is concerned, if either the answer to (2A) is\nno or the answer to (2B) is yes. However, if the answer to (2A) is yes\nand the answer to (2B) is no, there are difficulties. Evidently in\nthat case we need some new kind of thing that behaves mostly likes a\ngroup of roles but isn't actually a group of roles -- and that thing\nneeds to prohibit self-revocation. Given what I've written above, you\nmay be able to guess my preferred solution: let's call it a TENANT.\nThen, my pseudo-super-user can have permission to (i) create roles in\nthat tenant, (ii) drop roles in that tenant, and (iii) assume the\nprivileges of roles in that tenant -- and your userbot can have\nprivileges to do (i) and (ii) but not (iii). All we need do is add a\nroltenant column to pg_authid and find three bits someplace\ncorresponding to (i)-(iii), and we are home.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:40:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Thu, Feb 17, 2022 at 12:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 1:57 PM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> > This is precisely the use case I am trying to accomplish with this\n> > patchset, roughly:\n> >\n> > - An automated bot that creates users and adds them to the employees role\n> > - Bot cannot access any employee (or other roles) table data\n> > - Bot cannot become any employee\n> > - Bot can disable the login of any employee\n> >\n> > Yes there are attack surfaces around the fringes of login, etc but\n> > those can be mitigated with certificate authentication. 
My pg_hba\n> > would require any role in the employees role to use cert auth.\n> >\n> > This would adequately mitigate many threats while greatly enhancing\n> > user management.\n>\n> So, where do we go from here?\n>\n> I've been thinking about this comment a bit. On the one hand, I have\n> some reservations about the phrase \"the use case I am trying to\n> accomplish with this patchset,\" because in the end, this is not your\n> patch set. It's not reasonable to complain that a patch someone else\n> wrote doesn't solve your problem; of course everyone writes patches to\n> solve their own problems, or those of their employer, not other\n> people's problems. And that's as it should be, else we will have few\n> contributors. On the other hand, to the extent that this patch set\n> makes things worse for a reasonable use case which you have in mind,\n> that's an entirely legitimate complaint.\n\nYes, absolutely. It is my understanding that generally a community\nconsensus is attempted, I was throwing my (and Crunchy's) use case out\nthere as a possible goal, and I have spent time reviewing and testing\nthe patch, so I think that is fair. Obviously I am not in the position\nto stipulate hard requirements.\n\n> After a bit of testing, it seems to me that as things stand today,\n> things are nearly perfect for the use case that you have in mind. I\n> would be interested to know whether you agree. If I set up an account\n> and give it CREATEROLE, it can create users, and it can put them into\n> the employees role, but it can't SET ROLE to any of those accounts. It\n> can also ALTER ROLE ... NOLOGIN on any of those accounts. The only gap\n> I see is that there are certain role-based flags which the CREATEROLE\n> account cannot set: SUPERUSER, BYPASSRLS, REPLICATION. 
You might\n> prefer a system where your bot account had the option to grant those\n> privileges also, and I think that's a reasonable thing to want.\n\nI believe the only issue in the existing patchset was that membership\nin employees was required for the Bot, but I can apply\nthe current patchset and test it out more in a bit.\n\n> However, I *also* think it's reasonable to want an account that can\n> create roles but can't give to those roles membership in roles that it\n> does not itself possess. Likewise, I think it's reasonable to want an\n> account that can only drop roles which that account itself created.\n> These kinds of requirements stem from a different use case than what\n> you are talking about here, but they seem like fine things to want,\n> and as far as I know we have pretty broad agreement that they are\n> reasonable. It seems extremely difficult to make a convincing argument\n> that this is not a thing which anyone should want to block:\n>\n> rhaas=> create role bob role pg_execute_server_program;\n> CREATE ROLE\n>\n> Honestly, that seems like a big yikes from here. How is it OK to block\n> \"create role bob superuser\" yet allow that command? I'm inclined to\n> think that's just broken. Even if the role were pg_write_all_data\n> rather than pg_execute_server_program, it's still a heck of a lot of\n> power to be handing out, and I don't see how anyone could make a\n> serious argument that we shouldn't have an option to restrict that.\n\nYes, agreed 100%. To be clear, I do not want Bot in the above use case\nto be able to add any role other than employees to new roles it\ncreates. So we are in complete agreement there, the only difference is\nthat I do not want Bot to be able to become those roles (or use any\naccess granted via those roles), its only job is to manage roles, not\nlook at data.\n\n> Let me separate the two features that I just mentioned and talk about\n> them individually:\n>\n> 1. 
Don't allow a CREATEROLE user to give out membership in groups\n> which that user does not possess. Leaving aside the details of any\n> previously-proposed patches and just speaking theoretically, how can\n> this be implemented? I can think of a few ideas. We could (1A) just\n> change CREATEROLE to work that way, but IIUC that would break the use\n> case you outline here, so I guess that's off the table unless I am\n> misunderstanding the situation. We could also (1B) add a second role\n> attribute with a different name, like, err, CREATEROLEWEAKLY, that\n> behaves in that way, leaving the existing one untouched. But we could\n> also take it a lot further, because someone might want to let an\n> account hand out a set of privileges which corresponds neither to the\n> privileges of that account nor to the full set of available\n> privileges. That leads to another idea: (1C) implement an in-database\n> system that lets you specify which privileges an account has, and,\n> separately, which ones it can assign to others. I am skeptical of that\n> idea because it seems really, really complicated, not only from an\n> implementation standpoint but even just from a user-experience\n> standpoint. Suppose user 'userbot' has rights to grant a suitable set\n> of groups to the new users that it creates -- but then someone creates\n> a new group. Should that also be added to the things 'userbot' can\n> grant or not? What if we have 'userbot1' through 'userbot6' and each\n> of them can grant a different set of roles? I wouldn't mind (1D)\n> providing a hook that allows the system administrator to install a\n> loadable module that can enforce any rules it likes, but it seems way\n> too complicated to me to expose all of this configuration as SQL,\n> especially because for what I want to do, either (1A) or (1B) is\n> adequate, and (1B) is a LOT simpler than (1C). 
It also caters to what\n> I believe to be a common thing to want, without prejudice to the\n> possibility that other people want other things.\n\nif 1A worked for admins, or members I think it may work (i.e., Bot is\nadmin of employees but not a member of employees and therefore can\nmanage employees but not become them or read their tables)\n\nFor example, today this works (in master):\n\npostgres=# CREATE USER creator password 'a';\nCREATE ROLE\npostgres=# CREATE ROLE employees ADMIN creator NOLOGIN;\nCREATE ROLE\n\nas creator:\npostgres=> CREATE USER joshua IN ROLE employees PASSWORD 'a';\nERROR: permission denied to create role\n\nas superuser:\npostgres=# CREATE USER joshua LOGIN PASSWORD 'a';\nCREATE ROLE\n\nas creator:\npostgres=> GRANT employees TO joshua;\nGRANT ROLE\npostgres=> SET ROLE joshua;\nERROR: permission denied to set role \"joshua\"\npostgres=> SET ROLE employees;\nSET\n\nSo ADMIN of a role can add membership, but not create, and\nunfortunately can SET ROLE to employees.\n\nCan ADMIN mean \"can create and drop roles with membership of this role\nbut not implicitly be a member of the role\"?\n\nI think Stephen was advocating for this but wanted to look at the SQL\nspec to see if it conflicts.\n\nThe current (v8) patch conflates membership and admin:\n\npostgres=# CREATE USER user_creator CREATEROLE WITHOUT ADMIN OPTION\nPASSWORD 'a';\nCREATE ROLE\npostgres=# CREATE ROLE employees ADMIN user_creator NOLOGIN;\nCREATE ROLE\n\n(Note I never GRANTED employees to user_creator):\n\npostgres=# \\\\du\n                                   List of roles\n  Role name   |                         Attributes                         |  Member of\n--------------+------------------------------------------------------------+-------------\n employees    | Cannot login                                               | {}\n postgres     | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n user_creator | Create role                                                | {employees}\n\npostgres=# REVOKE employees FROM user_creator;\nREVOKE ROLE\n\nas user_creator:\npostgres=> CREATE USER joshua2 IN ROLE employees;\nERROR: must have admin option 
on role \"employees\"\n\nThis seems non-intuitive to me, employees was never granted, but after\nbeing revoked the admin option is gone.\n\n> Joshua, what is your opinion on this point?\n>\n> 2. Only allow a CREATEROLE user to drop users which that account\n> created, and not just any role that isn't a superuser. Again leaving\n> aside previous proposals, this cannot be implemented without providing\n> some means by which we know which CREATEROLE user created which other\n> user. I believe there are a variety of words we could use to describe\n> that linkage, and I don't deeply care which ones we pick, although I\n> have my own preferences. We could speak of the CREATEROLE user being\n> the owner, manager, or administrator of the created role. We could\n> speak of a new kind of object, a TENANT, of which the CREATEROLE user\n> is the administrator and to which the created user is linked. I\n> proposed this previously and it's still my favorite idea. There are no\n> doubt other options as well. But it's axiomatic that we cannot\n> restrict the rights of a CREATEROLE user to drop other roles to a\n> subset of roles without having some way to define which subset is at\n> issue.\n>\n> Now, my motivation for wanting this feature is pretty simple: I want\n> to have something that feels like a superuser but isn't a full\n> superuser, and can't interfere with accounts set up by the service\n> provider, but can do whatever they want to the other ones. But I think\n> this is potentially useful in the userbot case that you (Joshua)\n> mention as well, because it seems like it could be pretty desirable to\n> have a certain list of users which the userbot can't remove, just for\n> safety, either to limit the damage if somebody gets into that account,\n> or just to keep the bot from going nuts and doing something it\n> shouldn't in the event of a programming error. 
Now, if you DON'T care\n> about the userbot being able to access this functionality, that's fine\n> with me, because then there's nothing left to do but argue about what\n> to call the linkage between the CREATEROLE user and the created user.\n> Your userbot need not participate in whatever system we decide on, and\n> things are no worse for that use case than they are today.\n\nNot being able to drop roles that weren't created or managed by the\nBot is good. Being able to specify exactly what roles the Bot can drop\nis ideal, we may want no automated drops whatsoever (just automated\ndisabling, to constrain possible damage).\n\n> But if you DO want the userbot to be able to access that\n> functionality, then things are more complicated, because now the\n> linkage has to be special-purpose. In that scenario, we can't say that\n> the right of a CREATEROLE user to drop a certain other role implies\n> having the privileges of that other role, because in your use case,\n> you don't want that, whereas in mine, I do. What makes this\n> particularly ugly is that we can't, as things currently stand, use a\n> role as the grouping mechanism, because of the fact that a role can\n> revoke membership in itself from some other role. It will not do for\n> roles to remove themselves from the set of roles that the CREATEROLE\n> user can drop. If we changed that behavior, then perhaps we could just\n> define a way to say that role X can drop roles if they are members of\n> group G. In my tenant scenario, G would be granted to X, and in your\n> userbot scenario, it wouldn't. Everybody wins, except for any people\n> who like the ability of roles to revoke themselves from any group\n> whatsoever.\n>\n> So that leads to these questions: (2A) Do you care about restricting\n> which roles the userbot can drop? 
(2B) If yes, do you endorse\n> restricting the ability of roles to revoke themselves from other\n> roles?\n\n2A, yes\n2B, yes, and IIUC this already exists:\npostgres=> select current_user;\n current_user\n--------------\n joshua\n(1 row)\n\npostgres=> REVOKE employees FROM joshua;\nERROR: must have admin option on role \"employees\"\n\n> I think that we don't have any great problems here, at least as far as\n> this very specific issue is concerned, if either the answer to (2A) is\n> no or the answer to (2B) is yes. However, if the answer to (2A) is yes\n> and the answer to (2B) is no, there are difficulties. Evidently in\n> that case we need some new kind of thing that behaves mostly likes a\n> group of roles but isn't actually a group of roles -- and that thing\n> needs to prohibit self-revocation. Given what I've written above, you\n> may be able to guess my preferred solution: let's call it a TENANT.\n> Then, my pseudo-super-user can have permission to (i) create roles in\n> that tenant, (ii) drop roles in that tenant, and (iii) assume the\n> privileges of roles in that tenant -- and your userbot can have\n> privileges to do (i) and (ii) but not (iii). All we need do is add a\n> roltenant column to pg_authid and find three bits someplace\n> corresponding to (i)-(iii), and we are home.\n\nI believe this works.\n\n> Thoughts?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Feb 2022 10:54:04 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> 1. Don't allow a CREATEROLE user to give out membership in groups\n> which that user does not possess. Leaving aside the details of any\n> previously-proposed patches and just speaking theoretically, how can\n> this be implemented? I can think of a few ideas. 
We could (1A) just\n> change CREATEROLE to work that way, but IIUC that would break the use\n> case you outline here, so I guess that's off the table unless I am\n> misunderstanding the situation. We could also (1B) add a second role\n> attribute with a different name, like, err, CREATEROLEWEAKLY, that\n> behaves in that way, leaving the existing one untouched. But we could\n> also take it a lot further, because someone might want to let an\n> account hand out a set of privileges which corresponds neither to the\n> privileges of that account nor to the full set of available\n> privileges. That leads to another idea: (1C) implement an in-database\n> system that lets you specify which privileges an account has, and,\n> separately, which ones it can assign to others. I am skeptical of that\n> idea because it seems really, really complicated, not only from an\n> implementation standpoint but even just from a user-experience\n> standpoint. Suppose user 'userbot' has rights to grant a suitable set\n> of groups to the new users that it creates -- but then someone creates\n> a new group. Should that also be added to the things 'userbot' can\n> grant or not? What if we have 'userbot1' through 'userbot6' and each\n> of them can grant a different set of roles? I wouldn't mind (1D)\n> providing a hook that allows the system administrator to install a\n> loadable module that can enforce any rules it likes, but it seems way\n> too complicated to me to expose all of this configuration as SQL,\n> especially because for what I want to do, either (1A) or (1B) is\n> adequate, and (1B) is a LOT simpler than (1C). It also caters to what\n> I believe to be a common thing to want, without prejudice to the\n> possibility that other people want other things.\n\nI'm generally in support of changing CREATEROLE to only allow roles that\nthe role with CREATEROLE is an admin of to be allowed as part of the\ncommand (throwing an error in other cases). 
That doesn't solve other\nuse-cases which would certainly be nice to solve but it would at least\nreduce the shock people have when they discover how CREATEROLE actually\nworks (that is, the way we document it to work, but that's ultimately\nnot what people expect).\n\nIf that's all this was about then that would be one thing, but folks are\ninterested in doing more here and that's good because there's a lot here\nthat could be (and I'd say should be..) done.\n\nI'm not a fan of 1B. In general, I'm in support of 1C but I don't feel\nthat absolutely everything must be done for 1C right from the start-\nrather, I would argue that we'd be better off building a way for 1C to\nbe improved upon in the future, akin to our existing privilege system\nwhere we've added things like the ability to GRANT TRUNCATE rights which\ndidn't originally exist. I don't think 1D is a reasonable way to\naccomplish that though, particularly as this involves storing\ninformation about roles which needs to be cleaned up if those roles are\nremoved or modified. I also don't really agree with the statement that\nthis ends up being too complicated for SQL.\n\n> 2. Only allow a CREATEROLE user to drop users which that account\n> created, and not just any role that isn't a superuser. Again leaving\n> aside previous proposals, this cannot be implemented without providing\n> some means by which we know which CREATEROLE user created which other\n> user. I believe there are a variety of words we could use to describe\n> that linkage, and I don't deeply care which ones we pick, although I\n> have my own preferences. We could speak of the CREATEROLE user being\n> the owner, manager, or administrator of the created role. We could\n> speak of a new kind of object, a TENANT, of which the CREATEROLE user\n> is the administrator and to which the created user is linked. I\n> proposed this previously and it's still my favorite idea. There are no\n> doubt other options as well. 
But it's axiomatic that we cannot\n> restrict the rights of a CREATEROLE user to drop other roles to a\n> subset of roles without having some way to define which subset is at\n> issue.\n\nI don't think it's a great plan to limit who is allowed to DROP roles to\nbe just those that a given role created. I also don't like the idea of\nintroducing a single field for owner/manager/tenant/whatever to the role\nsystem- instead we should add other ways that roles can be associated to\neach other by extending the existing system that we have for that, which\nis role membership. Role membership today is pretty limited but I don't\nsee any reason why we couldn't improve on that in a way that's flexible\nand allows us to define new associations in the future. The biggest\ndifference between a 'tenant' or such as proposed vs. a role association\nis in where the information is tracked and what exactly it means.\nSaying \"I want an owner\" or such is easy because it's basically punting\non the complicated bit of asking the question: what does that *mean*\nwhen it comes to what rights that includes vs. doesn't? What if I only\nwant some of those rights to be given away but not all of them? We have\nthat system for tables/schemas/etc, and it hasn't been great as we've\nseen through the various requests to add things like GRANT TRUNCATE.\n\n> But if you DO want the userbot to be able to access that\n> functionality, then things are more complicated, because now the\n> linkage has to be special-purpose. In that scenario, we can't say that\n> the right of a CREATEROLE user to drop a certain other role implies\n> having the privileges of that other role, because in your use case,\n> you don't want that, whereas in mine, I do. What makes this\n> particularly ugly is that we can't, as things currently stand, use a\n> role as the grouping mechanism, because of the fact that a role can\n> revoke membership in itself from some other role. 
It will not do for\n> roles to remove themselves from the set of roles that the CREATEROLE\n> user can drop. If we changed that behavior, then perhaps we could just\n> define a way to say that role X can drop roles if they are members of\n> group G. In my tenant scenario, G would be granted to X, and in your\n> userbot scenario, it wouldn't. Everybody wins, except for any people\n> who like the ability of roles to revoke themselves from any group\n> whatsoever.\n\nThe ability of a role to revoke itself from some other role is just\nsomething we need to accept as being a change that needs to be made, and\nI do believe that such a change is supported by the standard, in that a\nREVOKE will only work if you have the right to make it as the user who\nperformed the GRANT in the first place.\n\n> So that leads to these questions: (2A) Do you care about restricting\n> which roles the userbot can drop? (2B) If yes, do you endorse\n> restricting the ability of roles to revoke themselves from other\n> roles?\n\nAs with Joshua, and as hopefully came across from the above discussion,\nI'm also a 'yes and yes' on these two.\n\n> I think that we don't have any great problems here, at least as far as\n> this very specific issue is concerned, if either the answer to (2A) is\n> no or the answer to (2B) is yes. However, if the answer to (2A) is yes\n> and the answer to (2B) is no, there are difficulties. Evidently in\n> that case we need some new kind of thing that behaves mostly likes a\n> group of roles but isn't actually a group of roles -- and that thing\n> needs to prohibit self-revocation. Given what I've written above, you\n> may be able to guess my preferred solution: let's call it a TENANT.\n> Then, my pseudo-super-user can have permission to (i) create roles in\n> that tenant, (ii) drop roles in that tenant, and (iii) assume the\n> privileges of roles in that tenant -- and your userbot can have\n> privileges to do (i) and (ii) but not (iii). 
All we need do is add a\n> roltenant column to pg_authid and find three bits someplace\n> corresponding to (i)-(iii), and we are home.\n\nWhere are those bits going to go though..? I don't think they should go\ninto pg_authid, nor do I feel that this 'tenant' or such should go there\neither because pg_authid is about describing individual roles, not about\nrole associations. Instead, I'd suggest that those bits go into\npg_auth_members in the form of additional columns to describe the role\nassociations. That is, instead of the existence of a row in\npg_auth_members meaning that one role has membership in another role, we\ngive users the choice of whether that's the case or not with a separate\ncolumn. That would then neatly give us a way for a role to have admin\nrights over another role but not membership in that role. We could then\nfurther extend this by adding other columns to pg_auth_members for other\nrights as users decide they need them- such as the ability for a role to\nDROP some set of roles.\n\n> 2A, yes\n> 2B, yes, and IIUC this already exists:\n> postgres=> select current_user;\n> current_user\n> --------------\n> joshua\n> (1 row)\n> \n> postgres=> REVOKE employees FROM joshua;\n> ERROR: must have admin option on role \"employees\"\n\nThat's not the right direction though, or, at least, might not be in the\ncase being discussed (though, I suppose, we could discuss that..). In\nwhat you're showing, employees doesn't have the rights of joshua, but\njoshua has the rights of employees. 
If, instead, joshua was GRANT'd to\nadmin and joshua decided that they didn't care for that, they can:\n\n=> select current_user;\n current_user \n--------------\n joshua\n(1 row)\n\n=> \\du\n List of roles\n Role name | Attributes | Member of \n-----------+------------------------------------------------------------+-------------\n admin | Cannot login | {joshua}\n employees | Cannot login | {}\n joshua | | {employees}\n sfrost | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n\n=> revoke joshua from admin;\nREVOKE ROLE\n\n=*> \\du\n List of roles\n Role name | Attributes | Member of \n-----------+------------------------------------------------------------+-------------\n admin | Cannot login | {}\n employees | Cannot login | {}\n joshua | | {employees}\n sfrost | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n\nEven though, in this case, it was 'sfrost' (a superuser) who GRANT'd\njoshua to admin.\n\nThanks,\n\nStephen", "msg_date": "Mon, 28 Feb 2022 14:09:23 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "[ Been away, catching up on email. ]\n\nOn Tue, Feb 22, 2022 at 10:54 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n> Yes, absolutely. It is my understanding that generally a community\n> consensus is attempted, I was throwing my (and Crunchy's) use case out\n> there as a possible goal, and I have spent time reviewing and testing\n> the patch, so I think that is fair. 
Obviously I am not in the position\n> to stipulate hard requirements.\n\nI agree with all of that -- and thanks for writing back.\n\n> if 1A worked for admins, or members I think it may work (i.e., Bot is\n> admin of employees but not a member of employees and therefore can\n> manage employees but not become them or read their tables)\n>\n> For example, today this works (in master):\n>\n> postgres=# CREATE USER creator password 'a';\n> CREATE ROLE\n> postgres=# CREATE ROLE employees ADMIN creator NOLOGIN;\n> CREATE ROLE\n>\n> as creator:\n> postgres=> CREATE USER joshua IN ROLE employees PASSWORD 'a';\n> ERROR: permission denied to create role\n>\n> as superuser:\n> postgres=# CREATE USER joshua LOGIN PASSWORD 'a';\n> CREATE ROLE\n>\n> as creator:\n> postgres=> GRANT employees TO joshua;\n> GRANT ROLE\n> postgres=> SET ROLE joshua;\n> ERROR: permission denied to set role \"joshua\"\n> postgres=> SET ROLE employees;\n> SET\n>\n> So ADMIN of a role can add membership, but not create, and\n> unfortunately can SET ROLE to employees.\n>\n> Can ADMIN mean \"can create and drop roles with membership of this role\n> but not implicitly be a member of the role\"?\n\nI foresee big problems trying to go in this direction. According to\nthe documentation, \"the ADMIN clause is like ROLE, but the named roles\nare added to the new role WITH ADMIN OPTION, giving them the right to\ngrant membership in this role to others.\" And for me, the name \"WITH\nADMIN OPTION\" is a huge red flag. You grant membership in a role, and\nyou may grant that membership with the admin option, or without the\nadmin option, but either way you are granting membership. And to me\nthat is just built into the phraseology. You may be able to buy the\ncar that you want with or without the all-wheel drive option, and you\nmay even be able to upgrade a car purchased without that option to\nhave it later, but you can't buy all-wheel drive in the abstract\nwithout an association to some particular car. 
That's what it means\nfor it to be an option.\n\nNow, I think there is a good argument to be made that in this case the\nfact that the administration privileges are an option associated with\nmembership is artificial. I expect we can all agree that it is\nconceptually easy to understand the idea of being able to administer a\nrole and the idea of having that role's privileges as two separate\nconcepts, neither dependent upon the other, and certainly the SQL\nsyntax could be written in a way that makes that very natural. But as\nit is, what is the equivalent of GRANT employees TO bot WITH ADMIN\nOPTION when you want to convey only administration rights and not\nmembership? GRANT employees TO bot WITH ADMIN OPTION BUT WITHOUT THE\nUNDERLYING MEMBERSHIP TO WHICH ADMIN IS AN OPTION? Maybe that sounds\nsarcastic, but to me it seems like a genuinely serious problem. People\nconstruct a mental model of how stuff works based to a significant\ndegree on the structure of the syntax, and I really don't see an\nobvious way of extending the grammar in a way that is actually going\nto make sense to people.\n\n> The current (v8) patch conflates membership and admin:\n>\n> postgres=# CREATE USER user_creator CREATEROLE WITHOUT ADMIN OPTION\n> PASSWORD 'a';\n> CREATE ROLE\n> postgres=# CREATE ROLE employees ADMIN user_creator NOLOGIN;\n> CREATE ROLE\n>\n> (Note I never GRANTED employees to user_creator):\n\nI think you did, because even right now without the patch \"ADMIN\nwhatever\" is documented to mean membership with admin option.\n\n> > So that leads to these questions: (2A) Do you care about restricting\n> > which roles the userbot can drop? 
(2B) If yes, do you endorse\n> > restricting the ability of roles to revoke themselves from other\n> > roles?\n>\n> 2A, yes\n> 2B, yes, and IIUC this already exists:\n> postgres=> select current_user;\n> current_user\n> --------------\n> joshua\n> (1 row)\n>\n> postgres=> REVOKE employees FROM joshua;\n> ERROR: must have admin option on role \"employees\"\n\nNo, because as Stephen correctly points out, you've got that REVOKE\ncommand backwards.\n\n> > I think that we don't have any great problems here, at least as far as\n> > this very specific issue is concerned, if either the answer to (2A) is\n> > no or the answer to (2B) is yes. However, if the answer to (2A) is yes\n> > and the answer to (2B) is no, there are difficulties. Evidently in\n> > that case we need some new kind of thing that behaves mostly likes a\n> > group of roles but isn't actually a group of roles -- and that thing\n> > needs to prohibit self-revocation. Given what I've written above, you\n> > may be able to guess my preferred solution: let's call it a TENANT.\n> > Then, my pseudo-super-user can have permission to (i) create roles in\n> > that tenant, (ii) drop roles in that tenant, and (iii) assume the\n> > privileges of roles in that tenant -- and your userbot can have\n> > privileges to do (i) and (ii) but not (iii). 
All we need do is add a\n> > roltenant column to pg_authid and find three bits someplace\n> > corresponding to (i)-(iii), and we are home.\n>\n> I believe this works.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 13:13:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Feb 28, 2022 at 2:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I'm generally in support of changing CREATEROLE to only allow roles that\n> the role with CREATEROLE is an admin of to be allowed as part of the\n> command (throwing an error in other cases). That doesn't solve other\n> use-cases which would certainly be nice to solve but it would at least\n> reduce the shock people have when they discover how CREATEROLE actually\n> works (that is, the way we document it to work, but that's ultimately\n> not what people expect).\n\nSo I'm 100% good with that because it does exactly what I want, but my\nunderstanding of the situation is that it breaks the userbot case that\nJoshua is talking about. Right now, with stock PostgreSQL in any\nreleased version, his userbot can have CREATEROLE and give out roles\nthat it doesn't itself possess. If we restrict CREATEROLE in the way\ndescribed here, that's no longer possible.\n\nNow it's possible that you and/or he would take the position that\nwe're still coming out ahead despite that functional regression,\nbecause as I now understand after reading Joshua's latest email, he\ndoesn't want the userbot to be able to grant ANY role, just the\n'employees' role - and today he can't get that. So in a modified\nuniverse where we restrict the privileges of CREATEROLE, then on the\none hand he GAINS the ability to have a userbot that can grant some\nroles but not others, but on the other hand, he's forced to give the\nuserbot the roles he wants it to be able to hand out. 
Is that better\noverall or worse?\n\nTo really give him EXACTLY what he wants, we need a way of specifying\nadministration without membership. See my last reply to the thread for\nmy concerns about that.\n\n> I don't think it's a great plan to limit who is allowed to DROP roles to\n> be just those that a given role created. I also don't like the idea of\n> introducing a single field for owner/manager/tenant/whatever to the role\n> system- instead we should add other ways that roles can be associated to\n> each other by extending the existing system that we have for that, which\n> is role membership. Role membership today is pretty limited but I don't\n> see any reason why we couldn't improve on that in a way that's flexible\n> and allows us to define new associations in the future. The biggest\n> difference between a 'tenant' or such as proposed vs. a role association\n> is in where the information is tracked and what exactly it means.\n> Saying \"I want a owner\" or such is easy because it's basically punting\n> on the complciated bit of asking the question: what does that *mean*\n> when it comes to what rights that includes vs. doesn't? What if I only\n> want some of those rights to be given away but not all of them? We have\n> that system for tables/schemas/etc, and it hasn't been great as we've\n> seen through the various requests to add things like GRANT TRUNCATE.\n\nWell, there's no accounting for taste, but I guess I see this pretty\nmuch opposite to the way you do. I think GRANT TRUNCATE is nice and\nsimple and clear. It does one thing and it's easy to understand what\nthat thing is, and it has very few surprising or undocumented side\neffects. On the other hand, role membership is a mess, and it's not at\nall clear how to sort that mess out. I guess I agree with you that it\nwould be nice if it could be done, but the list of problems is pretty\nsubstantial. 
Like, membership implies the right to SET ROLE, and also\nthe right to implicitly exercise the privileges of the role, and\nyou've complained about that fuzziness. And ADMIN OPTION implies\nmembership, and you don't like that either. And elsewhere it's been\nraised that nobody would expect to have a table end up owned by\n'pg_execute_server_programs', or a user logged in directly as\n'employees' rather than as some particular employee, but all that\nstuff can happen, and some of it can't even be effectively prevented\nwith good configuration. 'toe' can be a member of 'foot', which\nmakes sense to everybody, and at the same time, 'foot' can be a member\nof 'toe', which doesn't make any sense at all. And because both\ndirections are possible even experienced PostgreSQL users and hackers\nget confused, as demonstrated by Joshua's having just got the\nrevoke-from-role case backwards.\n\nOf those four problems, the last two are clearly the result of\nconflating users with groups - and really also with capabilities - and\nhaving a unified role concept that encompasses all of those things. I\nthink we would be better off if we had not done that, both in the\nsense that I think the system would be less confusing to understand,\nand also in the sense that we would likely have fewer security bugs.\nAnd similarly I agree with you that it would be better if the right to\nadminister a role were clearly separated from membership in a role,\nand if the right to use the privileges of a role were separated from\nthe ability to SET ROLE to it. However, unlike you, I see the whole\n'role membership' concept as the problem, not the solution. We\nconflate a bunch of different kinds of things together and call them\nall 'roles' and a bunch of other things together and call them\n'membership' and then we end up with an awkward mess. 
That's how I see\nit, anyway.\n\n> The ability of a role to revoke itself from some other role is just\n> something we need to accept as being a change that needs to be made, and\n> I do believe that such a change is supported by the standard, in that a\n> REVOKE will only work if you have the right to make it as the user who\n> performed the GRANT in the first place.\n\nGreat. I propose that we sever that issue and discuss it on a new\nthread to avoid confusion. I believe there is some debate to be had\nabout exactly what we want the behavior to be in this area, but if we\ncan reach consensus on that point, this shouldn't be too hard to knock\nout. I will take it as an action item to get that thread going, if\nthat works for you.\n\n> > So that leads to these questions: (2A) Do you care about restricting\n> > which roles the userbot can drop? (2B) If yes, do you endorse\n> > restricting the ability of roles to revoke themselves from other\n> > roles?\n>\n> As with Joshua, and as hopefully came across from the above discussion,\n> I'm also a 'yes and yes' on these two.\n\nGreat.\n\n> > I think that we don't have any great problems here, at least as far as\n> > this very specific issue is concerned, if either the answer to (2A) is\n> > no or the answer to (2B) is yes. However, if the answer to (2A) is yes\n> > and the answer to (2B) is no, there are difficulties. Evidently in\n> > that case we need some new kind of thing that behaves mostly likes a\n> > group of roles but isn't actually a group of roles -- and that thing\n> > needs to prohibit self-revocation. Given what I've written above, you\n> > may be able to guess my preferred solution: let's call it a TENANT.\n> > Then, my pseudo-super-user can have permission to (i) create roles in\n> > that tenant, (ii) drop roles in that tenant, and (iii) assume the\n> > privileges of roles in that tenant -- and your userbot can have\n> > privileges to do (i) and (ii) but not (iii). 
All we need do is add a\n> > roltenant column to pg_authid and find three bits someplace\n> > corresponding to (i)-(iii), and we are home.\n>\n> Where are those bits going to go though..? I don't think they should go\n> into pg_authid, nor do I feel that this 'tenant' or such should go there\n> either because pg_authid is about describing individual roles, not about\n> role associations. Instead, I'd suggest that those bits go into\n> pg_auth_members in the form of additional columns to describe the role\n> associations. That is, instead of the existance of a row in\n> pg_auth_members meaning that one role has membership in another role, we\n> give users the choice of if that's the case or not with a separate\n> column. That would then neatly give us a way for a role to have admin\n> rights over another role but not membership in that role. We could then\n> further extend this by adding other columns to pg_auth_members for other\n> rights as users decide they need them- such as the ability for a role to\n> DROP some set of roles.\n\nWhat I had in mind is to add a pg_tenant catalog (tenid, tenname) and\nadd some columns to the pg_authid catalog (roltenant, roltenantrights,\nor something like that). 
See above for why I am not excited about\npiggybacking more things onto role membership.\n\n> => revoke joshua from admin;\n> REVOKE ROLE\n>\n> =*> \\du\n> List of roles\n> Role name | Attributes | Member of\n> -----------+------------------------------------------------------------+-------------\n> admin | Cannot login | {}\n> employees | Cannot login | {}\n> joshua | | {employees}\n> sfrost | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n>\n> Even though, in this case, it was 'sfrost' (a superuser) who GRANT'd\n> joshua to admin.\n\nQuite so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 15:03:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Mon, Feb 28, 2022 at 2:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> The ability of a role to revoke itself from some other role is just\n> something we need to accept as being a change that needs to be made, and\n> I do believe that such a change is supported by the standard, in that a\n> REVOKE will only work if you have the right to make it as the user who\n> performed the GRANT in the first place.\n\nMoving this part of the discussion to a new thread to reduce confusion\nand hopefully get broader input on this topic. It seems like Stephen\nand I agree in principle that some change here is a good idea. If\nanyone else thinks it's a bad idea, then this would be a great time to\nmention that, ideally with reasons. If you agree that it's a good\nidea, then it would be great to have your views on the follow-up\nquestions which I shall pose below. To the extent that it is\nreasonably possible to do so, I would like to try to keep focused on\nspecific design questions rather than getting tangled up in general\ndiscussion of long-term direction. 
First, a quick overview of the\nissue for those who have not followed the earlier threads in their\ngrueling entirety:\n\nrhaas=# create user boss;\nCREATE ROLE\nrhaas=# create user peon;\nCREATE ROLE\nrhaas=# grant peon to boss;\nGRANT ROLE\nrhaas=# \\c - peon\nYou are now connected to database \"rhaas\" as user \"peon\".\nrhaas=> revoke peon from boss; -- i don't like being bossed around!\nREVOKE ROLE\n\nI argue (and Stephen seems to agree) that the peon shouldn't be able\nto undo the superuser's GRANT. Furthermore, we also seem to agree that\nyou don't necessarily have to be the exact user who performed the\ngrant. For example, it would be shocking if one superuser couldn't\nremove a grant made by another superuser, or for that matter if a\nsuperuser couldn't remove a grant made by a non-superuser. But there\nare a few open questions in my mind:\n\n1. What should be the exact rule for whether A can remove a grant made\nby B? Is it has_privs_of_role()? is_member_of_role()? Something else?\n\n2. What happens if the same GRANT is enacted by multiple users? For\nexample, suppose peon does \"GRANT peon to boss\" and then the superuser\ndoes the same thing afterwards, or vice versa? One design would be to\ntry to track those as two separate grants, but I'm not sure if we want\nto add that much complexity, since that's not how we do it now and it\nwould, for example, implicate the choice of PK on the pg_auth_members\ntable. An idea that occurs to me is to say that the first GRANT works\nand becomes the grantor of record, and any duplicate GRANT that\nhappens later issues a NOTICE without changing anything. If the user\nperforming the later GRANT has sufficient privileges and wishes to do\nso, s/he can REVOKE first and then re-GRANT. 
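To make that first-GRANT-wins idea concrete, here is a toy model of it in plain Python (the class and the 'postgres' superuser name are invented for illustration; this is a sketch of the proposed behavior, not anything resembling the actual catalog code):

```python
class RoleGrants:
    """Toy model of role membership with a maintained grantor of record."""

    def __init__(self, superusers):
        self.superusers = set(superusers)
        # (role, member) -> grantor of record; one row per membership,
        # as in pg_auth_members today, but with the grantor kept accurate.
        self.members = {}

    def grant(self, role, member, grantor):
        key = (role, member)
        if key in self.members:
            # Duplicate grant: the original grantor of record is kept.
            return 'NOTICE: role "%s" is already a member of role "%s"' % (member, role)
        self.members[key] = grantor
        return "GRANT ROLE"

    def revoke(self, role, member, revoker):
        key = (role, member)
        grantor = self.members.get(key)
        if grantor is None:
            return 'NOTICE: role "%s" is not a member of role "%s"' % (member, role)
        # Only the grantor of record (or a superuser) may undo the grant.
        if revoker != grantor and revoker not in self.superusers:
            return "ERROR: permission denied"
        del self.members[key]
        return "REVOKE ROLE"
```

Under this rule the 'peon' from the example above can no longer undo the superuser's grant, while a second, duplicate GRANT merely raises a notice instead of changing the grantor of record.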
On the other hand, for\nother types of grants, like table privileges, we do track multiple\ngrants by different users, so maybe we should do the same thing here:\n\nrhaas=# create table example (a int, b int);\nCREATE TABLE\nrhaas=# grant select on table example to foo with grant option;\nGRANT\nrhaas=# grant select on table example to bar with grant option;\nGRANT\nrhaas=# \\c - foo\nYou are now connected to database \"rhaas\" as user \"foo\".\nrhaas=> grant select on table example to exemplar;\nGRANT\nrhaas=> \\c - bar\nYou are now connected to database \"rhaas\" as user \"bar\".\nrhaas=> grant select on table example to exemplar;\nGRANT\nrhaas=> select relacl from pg_class where relname = 'example';\n relacl\n-------------------------------------------------------------------------------\n {rhaas=arwdDxt/rhaas,foo=r*/rhaas,bar=r*/rhaas,exemplar=r/foo,exemplar=r/bar}\n(1 row)\n\n3. What happens if a user is dropped after being recorded as a\ngrantor? We actually have a grantor column in pg_auth_members today,\nbut it's not properly maintained. If the grantor is dropped the OID\nremains in the table, and could eventually end up pointing to some\nother user if the OID counter wraps around and a new role is created\nwith the same OID. That's completely unacceptable for something we\nwant to use for any serious purpose. I suggest that what ought to\nhappen is the role should acquire a dependency on the grant, such that\nDROP fails and the GRANT is listed as something to be dropped, and\nDROP OWNED BY drops the GRANT. I think this would require adding an\nOID column to pg_auth_members so that a dependency can point to it,\nwhich sounds like a significant infrastructure change that would need\nto be carefully validated for adverse side effects, but not a huge\ncrazy problem that we can't get past.\n\n4. Should we apply this rule to other types of grants, rather than\njust to role membership? Why or why not? 
Consider this:\n\nrhaas=# create user accountant;\nCREATE ROLE\nrhaas=# create user auditor;\nCREATE ROLE\nrhaas=# create table money (a int, b text);\nCREATE TABLE\nrhaas=# alter table money owner to accountant;\nALTER TABLE\nrhaas=# grant select on table money to auditor;\nGRANT\nrhaas=# \\c - accountant\nYou are now connected to database \"rhaas\" as user \"accountant\".\nrhaas=> revoke select on table money from auditor;\nREVOKE\n\nI would argue that's exactly the same problem. The superuser has\ndecreed that the auditor gets to select from the money table owned by\nthe accountant. The fact that the accountant may not be in favor\nof the auditor seeing what the accountant is doing with the money is\nprecisely the reason why we have auditors. That said, if we apply this\nto all object types, it's a much bigger change. Unlike role\nmembership, we do record dependencies on table privileges, which makes\nany change here a bit simpler, and you can't drop a role without\nremoving the associated grants first. However, when the superuser\nperforms the GRANT as in the above example, the grantor is recorded as\nthe table owner, not the superuser! So if we really want role\nmembership and other kinds of grants to behave in the same way, we have\nour work cut out for us here.\n\nPlease note that it is not really my intention to try to shove\nanything into v15 here. If it so happens that we quickly agree on\nsomething that already exists in the patches Mark's already written,\nand we also agree that those patches are in good enough shape that we\ncan commit something in the next few weeks, fantastic, but I'm not\nnecessarily expecting that. What I do want to do is agree on a plan so\nthat, if somebody does the work to implement said plan, we do not then\nend up relitigating the whole thing and coming to a different\nconclusion the second time. 
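For what it's worth, the attribution quirk from the accountant/auditor example above can be sketched with a similar toy model (invented names again, nothing to do with the real ACL code): because a superuser's grant is recorded as if made by the table owner, a grantor-based revocation check would still let the owner undo the superuser's grant.

```python
class TableAcl:
    """Toy model of per-table grants with a grantor of record."""

    def __init__(self, owner, superusers):
        self.owner = owner
        self.superusers = set(superusers)
        self.entries = {}  # (grantee, privilege) -> grantor of record

    def grant(self, grantee, privilege, by):
        # Mirror of the current quirk: a superuser's grant is recorded
        # as if it had been made by the table owner.
        grantor = self.owner if by in self.superusers else by
        self.entries[(grantee, privilege)] = grantor

    def revoke(self, grantee, privilege, by):
        grantor = self.entries.get((grantee, privilege))
        if grantor is None:
            return "NOTICE: no such grant"
        # Grantor-based rule: only the grantor of record or a superuser
        # may revoke -- but here the recorded grantor is the owner.
        if by != grantor and by not in self.superusers:
            return "ERROR: permission denied"
        del self.entries[(grantee, privilege)]
        return "REVOKE"
```

In this model the superuser grants SELECT on 'money' to 'auditor', the grant is attributed to 'accountant', and 'accountant' can then revoke it, which is exactly the behavior being questioned.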
This being a community whose membership\nvaries from time to time and the opinions of whose members vary from\ntime to time, such misadventure can never be entirely ruled out.\nHowever, I would like to minimize the chances of such an outcome as\nmuch as we can.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Mar 2022 15:50:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 1. What should be the exact rule for whether A can remove a grant made\n> by B? Is it has_privs_of_role()? is_member_of_role()? Something else?\n\nNo strong opinion here, but I'd lean slightly to the more restrictive\noption.\n\n> 2. What happens if the same GRANT is enacted by multiple users? For\n> example, suppose peon does \"GRANT peon to boss\" and then the superuser\n> does the same thing afterwards, or vice versa? One design would be to\n> try to track those as two separate grants, but I'm not sure if we want\n> to add that much complexity, since that's not how we do it now and it\n> would, for example, implicate the choice of PK on the pg_auth_members\n> table.\n\nAs you note later, we *do* track such grants separately in ordinary\nACLs, and I believe this is clearly required by the SQL spec.\nIt says (for privileges on objects):\n\n Each privilege is represented by a privilege descriptor.\n A privilege descriptor contains:\n — The identification of the object on which the privilege is granted.\n — The <authorization identifier> of the grantor of the privilege.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n — The <authorization identifier> of the grantee of the privilege.\n — Identification of the action that the privilege allows.\n — An indication of whether or not the privilege is grantable.\n — An indication of whether or not the privilege has the WITH HIERARCHY OPTION 
specified.\n\nFurther down (4.42.3 in SQL:2021), the granting of roles is described,\nand that says:\n\n Each role authorization is described by a role authorization descriptor.\n A role authorization descriptor includes:\n — The role name of the role.\n — The authorization identifier of the grantor.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n — The authorization identifier of the grantee.\n — An indication of whether or not the role authorization is grantable.\n\nIf we are not tracking the grantors of role authorizations,\nthen we are doing it wrong and we ought to fix that.\n\n> 3. What happens if a user is dropped after being recorded as a\n> grantor?\n\nShould work the same as it does now for ordinary ACLs, ie, you\ngotta drop the grant first.\n\n> 4. Should we apply this rule to other types of grants, rather than\n> just to role membership?\n\nI am not sure about the reasoning behind the existing rule that\nsuperuser-granted privileges are recorded as being granted by the\nobject owner. It does feel more like a wart than something we want.\nIt might have been a hack to deal with the lack of GRANTED BY\noptions in GRANT/REVOKE back in the day.\n\nChanging it could have some bad compatibility consequences though.\nIn particular, I believe it would break existing pg_dump files,\nin that after restore all privileges would be attributed to the\nrestoring superuser, and there'd be no very easy way to clean that\nup.\n\n> Please note that it is not really my intention to try to shove\n> anything into v15 here.\n\nAgreed, this is not something to move on quickly. 
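\n(For reference, pg_auth_members does already carry a grantor column; the\nquestion above is whether GRANT/REVOKE maintain and honor it. A quick\nsketch of how to inspect what is recorded -- output omitted, since it\nvaries per cluster:\n\n    select r.rolname as role, m.rolname as member,\n           g.rolname as grantor, am.admin_option\n    from pg_auth_members am\n    join pg_roles r on r.oid = am.roleid\n    join pg_roles m on m.oid = am.member\n    join pg_roles g on g.oid = am.grantor;\n)\n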
We might want\nto think about adjusting pg_dump to use explicit GRANTED BY\noptions in GRANT/REVOKE a release or two before making incompatible\nchanges.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Mar 2022 16:34:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 4, 2022 at 1:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 28, 2022 at 2:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > The ability of a role to revoke itself from some other role is just\n> > something we need to accept as being a change that needs to be made, and\n> > I do believe that such a change is supported by the standard, in that a\n> > REVOKE will only work if you have the right to make it as the user who\n> > performed the GRANT in the first place.\n>\n> First, a quick overview of the\n> issue for those who have not followed the earlier threads in their\n> grueling entirety:\n>\n> rhaas=# create user boss;\n> CREATE ROLE\n> rhaas=# create user peon;\n> CREATE ROLE\n> rhaas=# grant peon to boss;\n> GRANT ROLE\n> rhaas=# \\c - peon\n> You are now connected to database \"rhaas\" as user \"peon\".\n> rhaas=> revoke peon from boss; -- i don't like being bossed around!\n> REVOKE ROLE\n>\n>\nThe wording for this example is hurting my brain.\nGRANT admin TO joe;\n\\c admin\nREVOKE admin FROM joe;\n\n> I argue (and Stephen seems to agree) that the peon shouldn't be able\n> to undo the superuser's GRANT.\n\n\nI think I disagree. 
Or, at least, the superuser has full control of\ndictating how role membership is modified and that seems sufficient.\n\nThe example above works because of:\n\n\"A role is not considered to hold WITH ADMIN OPTION on itself, but it may\ngrant or revoke membership in itself from a database session where the\nsession user matches the role.\"\n\nIf a superuser doesn't want \"admin\" to modify its own membership then they\ncan prevent anyone but a superuser from being able to have a session_user\nof \"admin\". If that happens then the only way a non-superuser can modify\ngroup membership is by being added to said group WITH ADMIN OPTION.\n\nNow, if two people and a superuser are all doing membership management on\nthe same group, and we want to add permission checks and multiple grants as\ntools, instead of having them just communicate with each other, then by all\nmeans let us do so. In that case, in answer to questions 2 and 3, we\nshould indeed track which session_user made the grant and only allow the\nsame session_user or the superuser to revoke it (we'd want to stop\n\"ignoring\" the GRANTED BY clause of REVOKE ROLE FROM so the superuser at\nleast could remove grants made via WITH ADMIN OPTION).\n\n4. Should we apply this rule to other types of grants, rather than\n> just to role membership? Why or why not? Consider this:\n>\n\n\n> The fact that the accountant may not be not in favor\n> of the auditor seeing what the accountant is doing with the money is\n> precisely the reason why we have auditors.\n\n[...]\n\n> However, when the superuser\n> performs the GRANT as in the above example, the grantor is recorded as\n> the table owner, not the superuser! So if we really want role\n> membersip and other kinds of grants to behave in the same way, we have\n> our work cut out for us here.\n>\n\nYes, this particular choice seems unfortunate, but also not something that\nI think it is necessarily mandatory for us to improve. 
If the accountant\nis the owner then yes they get to decide permissions. In the presence of\nan auditor role either you trust the accountant role to keep the\npermissions in place or you define a superior authority to both the auditor\nand accountant to be the owner. Or let the superuser manage everything by\nwithholding login and WITH ADMIN OPTION privileges from the ownership role.\n\n\nIf we do extend role membership tracking I suppose the design question is\nwhether the new role grantor dependency tracking will have a superuser be\nthe recorded grantor instead of some owner. Given that roles don't\npresently have an owner concept, consistency with existing permissions in\nthis manner would be trickier. Because of this, I would probably leave\nrole grantor tracking at the session_user level while database objects\ncontinue to emanate from the object owner. The global vs database\ndifferences seem like a sufficient theoretical justification for the\ndifference in implementation.\n\nDavid J.\n\n", "msg_date": "Fri, 4 Mar 2022 15:20:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 4, 2022 at 5:20 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I think I disagree. Or, at least, the superuser has full control of dictating how role membership is modified and that seems sufficient.\n\nThe point is that the superuser DOES NOT have full control. The\nsuperuser cannot prevent relatively low-privileged users from undoing\nthings that the superuser did intentionally and doesn't want reversed.\n\nThe choice of names in my example wasn't accidental.
If the granted\nrole is a login role, then the superuser's intention was to vest the\nprivileges of that role in some other role, and it is surely not right\nfor that role to be able to decide that it doesn't want it's\nprivileges to be so granted. That's why I chose the name \"peon\". In\nyour example, where you chose the name \"admin\", the situation is less\nclear. If we imagine the granted role as a container for a bundle of\nprivileges, giving it the ability to administer itself feels more\nreasonable. However, I am very much unconvinced that it's correct even\nthere. Suppose the superuser grants \"admin\" to both \"joe\" and \"sally\".\nNow \"joe\" can SET ROLE to \"admin\" and revoke it from \"sally\", and the\nsuperuser has no tool to prevent this.\n\nNow you can imagine a situation where the superuser is totally OK with\neither \"joe\" or \"sally\" having the ability to lock the other one out,\nbut I don't think it's right to say that this will be true in all\ncases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 6 Mar 2022 10:19:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 4, 2022 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we are not tracking the grantors of role authorizations,\n> then we are doing it wrong and we ought to fix that.\n\nHmm, so maybe that's the place to start. We are tracking it in the\nsense that we record an OID in the catalog, but nothing that happens\nafter that makes a lot of sense.\n\n> > 3. 
What happens if a user is dropped after being recorded as a\n> > grantor?\n>\n> Should work the same as it does now for ordinary ACLs, ie, you\n> gotta drop the grant first.\n\nOK, that makes sense to me.\n\n> Changing it could have some bad compatibility consequences though.\n> In particular, I believe it would break existing pg_dump files,\n> in that after restore all privileges would be attributed to the\n> restoring superuser, and there'd be no very easy way to clean that\n> up.\n\nI kind of wonder whether we ought to attribute all privileges granted\nby any superuser to the bootstrap superuser. That doesn't seem to have\nany meaningful downside, and it could avoid a lot of annoying\ndependencies that serve no real purpose.\n\n> Agreed, this is not something to move on quickly. We might want\n> to think about adjusting pg_dump to use explicit GRANTED BY\n> options in GRANT/REVOKE a release or two before making incompatible\n> changes.\n\nUggh. I really want to make some meaningful progress here before the\nheat death of the universe, and I'm not sure that this manner of\nproceeding is really going in that direction. That said, I do entirely\nsee your point. Are you thinking we'd actually add a GRANTED BY clause\nto GRANT/REVOKE, vs. just wrapping it in SET ROLE incantations of some\nsort?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 6 Mar 2022 10:27:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 4, 2022 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Agreed, this is not something to move on quickly. We might want\n>> to think about adjusting pg_dump to use explicit GRANTED BY\n>> options in GRANT/REVOKE a release or two before making incompatible\n>> changes.\n\n> Uggh. 
I really want to make some meaningful progress here before the\n> heat death of the universe, and I'm not sure that this manner of\n> proceeding is really going in that direction. That said, I do entirely\n> see your point. Are you thinking we'd actually add a GRANTED BY clause\n> to GRANT/REVOKE, vs. just wrapping it in SET ROLE incantations of some\n> sort?\n\nI was thinking the former ... however, after a bit of experimentation\nI see that we accept \"grant foo to bar granted by baz\" a VERY long\nway back, but the \"granted by\" option for object privileges is\n(a) pretty new and (b) apparently restrictively implemented:\n\nregression=# grant delete on alices_table to bob granted by alice;\nERROR: grantor must be current user\n\nThat's ... surprising. I guess whoever put that in was only\ninterested in pro-forma SQL syntax compliance and not in making\na usable feature.\n\nSo if we decide to extend this change into object privileges\nit would be advisable to use SET ROLE, else we'd be giving up\nan awful lot of backwards compatibility in dump scripts.\nBut if we're only talking about role grants then I think\nGRANTED BY would work fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Mar 2022 11:34:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 10:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Mar 4, 2022 at 5:20 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > I think I disagree. Or, at least, the superuser has full control of dictating how role membership is modified and that seems sufficient.\n>\n> The point is that the superuser DOES NOT have full control. The\n> superuser cannot prevent relatively low-privileged users from undoing\n> things that the superuser did intentionally and doesn't want reversed.\n>\n> The choice of names in my example wasn't accidental. 
If the granted\n> role is a login role, then the superuser's intention was to vest the\n> privileges of that role in some other role, and it is surely not right\n> for that role to be able to decide that it doesn't want it's\n> privileges to be so granted. That's why I chose the name \"peon\". In\n> your example, where you chose the name \"admin\", the situation is less\n> clear. If we imagine the granted role as a container for a bundle of\n> privileges, giving it the ability to administer itself feels more\n> reasonable. However, I am very much unconvinced that it's correct even\n> there. Suppose the superuser grants \"admin\" to both \"joe\" and \"sally\".\n> Now \"joe\" can SET ROLE to \"admin\" and revoke it from \"sally\", and the\n> superuser has no tool to prevent this.\n>\n> Now you can imagine a situation where the superuser is totally OK with\n> either \"joe\" or \"sally\" having the ability to lock the other one out,\n> but I don't think it's right to say that this will be true in all\n> cases.\n>\n\nAnother example here is usage of groups in pg_hba.conf, if the admin\nhas a group of users with stronger authentication requirements: e.g.,\n\nhostssl all +certonlyusers all cert map=certmap clientcert=1\n\nand one can remove their membership, they can change their\nauthentication requirements.\n\n\n", "msg_date": "Sun, 6 Mar 2022 11:40:11 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... 
Suppose the superuser grants \"admin\" to both \"joe\" and \"sally\".\n> Now \"joe\" can SET ROLE to \"admin\" and revoke it from \"sally\", and the\n> superuser has no tool to prevent this.\n\nReally?\n\nregression=# grant admin to joe;\nGRANT ROLE\nregression=# grant admin to sally;\nGRANT ROLE\nregression=# \\c - joe\nYou are now connected to database \"regression\" as user \"joe\".\nregression=> revoke admin from sally;\nERROR: must have admin option on role \"admin\"\nregression=> set role admin;\nSET\nregression=> revoke admin from sally;\nERROR: must have admin option on role \"admin\"\n\nI think there is an issue here around exactly what the admin option\nmeans, but if it doesn't grant you the ability to remove grants\nmade by other people, it's pretty hard to see what it's for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Mar 2022 11:53:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 9:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... 
Suppose the superuser grants \"admin\" to both \"joe\" and \"sally\".\n> > Now \"joe\" can SET ROLE to \"admin\" and revoke it from \"sally\", and the\n> > superuser has no tool to prevent this.\n>\n> Really?\n>\n> regression=# grant admin to joe;\n> GRANT ROLE\n> regression=# grant admin to sally;\n> GRANT ROLE\n> regression=# \\c - joe\n> You are now connected to database \"regression\" as user \"joe\".\n> regression=> revoke admin from sally;\n> ERROR: must have admin option on role \"admin\"\n> regression=> set role admin;\n> SET\n> regression=> revoke admin from sally;\n> ERROR: must have admin option on role \"admin\"\n>\n> I think there is an issue here around exactly what the admin option\n> means, but if it doesn't grant you the ability to remove grants\n> made by other people, it's pretty hard to see what it's for.\n>\n>\nPrecisely.\n\nThe current system, with the session_user exception, basically guides a\nsuperuser to define two kinds of roles.\n\nGroups: No login, permission grants\nUsers: Login, inherits permissions from groups, can manage group membership\nif given WITH ADMIN OPTION.\n\nThe original example using only users is not all that compelling to me.\nIMO, DBAs should not be setting up their system that way.\n\nTwo questions remain:\n\n1. Are we willing to get rid of the session_user exception?\n\n2. Do we want to track who the grantor is for role membership grants and\ninstitute a requirement that non-superusers can only revoke the grants that\nthey personally made?\n\nI'm personally in favor of getting rid of the session_user exception, which\nnicely prevents the problem at the beginning of this thread and further\nencourages the DBA to define groups and roles with a greater\nseparation-of-concerns design. 
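\n(Sketching that two-kinds-of-roles pattern with hypothetical names:\n\n    create role accounting nologin;          -- group: holds the grants\n    create role alice login;                 -- user: inherits via membership\n    grant accounting to alice;\n    create role team_lead login;\n    grant accounting to team_lead with admin option;  -- manages membership\n)\n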
WITH ADMIN OPTION is sufficient.\n\nI think tracking grantor information for role membership would allow for\ngreater auditing capabilities and a better degree of control in the\npermissions system.\n\nIn short, I am in favor of both options. The grantor tracking seems to be\nheaded for acceptance.\n\nSo, do we really want to treat every single login role as a potential group\nby keeping the session_user exception?\n\nDavid J.\n\n", "msg_date": "Sun, 6 Mar 2022 12:08:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 8:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> The choice of names in my example wasn't accidental. If the granted\n> role is a login role, then the superuser's intention was to vest the\n> privileges of that role in some other role, and it is surely not right\n> for that role to be able to decide that it doesn't want it's\n> privileges to be so granted.
That's why I chose the name \"peon\".\n\n\n >> rhaas [as peon] => revoke peon from boss; -- i don't like being bossed\naround!\n\nWell, the peon is not getting bossed around, the boss is getting peoned\naround and the peon has decided that they like boss too much and don't need\nto do that anymore.\n\nWhen you grant a group \"to\" a role you place the role under the group - and\ninheritance flows downward.\n\nIn the original thread Stephen wrote:\n\n\"This is because we allow 'self administration' of roles, meaning that\nthey can decide what other roles they are a member of.\"\n\nThe example, which you moved here, then attempts to demonstrate this \"fact\"\nbut gets it wrong. Boss became a member of peon so if you want to\ndemonstrate self-administration of a role's membership in a different group\nyou have to login as boss, not peon. Doing that, and then revoking peon\nfrom boss, yields \"ERROR: must have admin option on role \"peon\"\".\n\nSo no, without \"WITH ADMIN OPTION\" a role cannot decide what other roles\nthey are a member of.\n\nI don't necessarily have an issue changing self-administration but if the\nmotivating concern is that all these new pg_* roles we are creating are\nsomething a normal user can opt-out of/revoke that simply isn't the case\ntoday, unless they are added to the pg_* role WITH ADMIN OPTION.\n\nThat all said, permissions SHOULD BE strictly additive. If boss doesn't\nwant to be a member of pg_read_all_files allowing them to revoke themself\nfrom that role seems like it should be acceptable. If there is fear in\nallowing someone to revoke (not add) themselves as a member of a different\nrole that suggests we have a design issue in another feature of the\nsystem. Today, they neither grant nor revoke, and the self-revocation\ndoesn't seem that important to add.\n\nDavid J.\n\n", "msg_date": "Sun, 6 Mar 2022 21:01:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I was thinking the former ... however, after a bit of experimentation\n> I see that we accept \"grant foo to bar granted by baz\" a VERY long\n> way back, but the \"granted by\" option for object privileges is\n> (a) pretty new and (b) apparently restrictively implemented:\n>\n> regression=# grant delete on alices_table to bob granted by alice;\n> ERROR: grantor must be current user\n>\n> That's ... surprising. I guess whoever put that in was only\n> interested in pro-forma SQL syntax compliance and not in making\n> a usable feature.\n\nIt appears so: https://www.postgresql.org/message-id/2073b6a9-7f79-5a00-5f26-cd19589a52c7%402ndquadrant.com\n\nIt doesn't seem like that would be hard to fix.
Maybe we should just do that.\n\n> So if we decide to extend this change into object privileges\n> it would be advisable to use SET ROLE, else we'd be giving up\n> an awful lot of backwards compatibility in dump scripts.\n> But if we're only talking about role grants then I think\n> GRANTED BY would work fine.\n\nOK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 10:12:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 11:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Really?\n>\n> regression=# grant admin to joe;\n> GRANT ROLE\n> regression=# grant admin to sally;\n> GRANT ROLE\n> regression=# \\c - joe\n> You are now connected to database \"regression\" as user \"joe\".\n> regression=> revoke admin from sally;\n> ERROR: must have admin option on role \"admin\"\n> regression=> set role admin;\n> SET\n> regression=> revoke admin from sally;\n> ERROR: must have admin option on role \"admin\"\n\nOops. I stand corrected.\n\n> I think there is an issue here around exactly what the admin option\n> means, but if it doesn't grant you the ability to remove grants\n> made by other people, it's pretty hard to see what it's for.\n\nHmm. I think the real issue is what David Johnston calls the session\nuser exception. I hadn't quite understood how that played into this.\nAccording to the documentation: \"If WITH ADMIN OPTION is specified,\nthe member can in turn grant membership in the role to others, and\nrevoke membership in the role as well. Without the admin option,\nordinary users cannot do that. A role is not considered to hold WITH\nADMIN OPTION on itself, but it may grant or revoke membership in\nitself from a database session where the session user matches the\nrole.\"\n\nIs there some use case for the behavior described in that last\nsentence?
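\n(Concretely, a sketch of the behavior that last sentence describes, with\nhypothetical role names. As superuser:\n\n    create role alice login;\n    create role bob login;\n    grant alice to bob;\n\nThen, in a session where session_user is alice, with no admin option held\nanywhere:\n\n    revoke alice from bob;   -- allowed solely by the session user exception\n)\n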
If that exception is the only case in which an unprivileged\nuser can revoke a grant made by someone else, then getting rid of it\nseems pretty appealing from where I sit. I can't speak to the\nstandards compliance end of things, but it doesn't intrinsically seem\nbothersome that having \"WITH ADMIN OPTION\" on a role lets you control\nwho has membership in said role. And certainly it's not bothersome\nthat the superuser can change whatever they want. The problem here is\njust that a user with NO special privileges on any role, including\ntheir own, can make changes that more privileged users might not like.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 10:37:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 2:09 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> So, do we really want to treat every single login role as a potential group by keeping the session_user exception?\n\nI think that we DO want to continue to treat login roles as\npotentially grantable privileges. That feels fundamentally useful to\nme. The superuser is essentially granted the privileges of all users\non the system, and has all the rights they have, including the right\nto drop tables owned by those users as if they were the owner of those\ntables. If it's useful for the superuser to implicitly have the rights\nof all users on the system, why should it not be useful for some\nnon-superuser to implicitly have the rights of some other users on the\nsystem? I think it pretty clearly is. If one of my colleagues leaves\nthe company, the DBA can say \"grant jdoe to rhaas\" and let me mess\naround with this stuff. Or, the DBA can grant me the privileges of all\nmy direct reports even when they're not leaving so that I can sort out\nanything I need to do without superuser involvement. 
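\n(A sketch of that delegation pattern, reusing the names above:\n\n    grant jdoe to rhaas;   -- rhaas inherits jdoe's privileges\n    set role jdoe;         -- or rhaas becomes jdoe outright\n)\n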
That all seems\ncool and OK to me.\n\nNow I think it is fair to say that we could have chosen a different\ndesign, and MAYBE that would have been better. Nobody forced us to\nconflate users and groups into a unified thing called roles, and I\nthink there's pretty good evidence that it's confusing and\ncounterintuitive in some ways. There's also no intrinsic reason why\nthe superuser has to be able to directly exercise the privileges of\nevery role rather than, say, having a way to become any given role.\nBut at this point, those design decisions are pretty well baked into\nthe system design, and I don't really think it's likely that we want\nto change them. To put that another way, just because you don't like\nthe idea of granting one login role to another login role, that\ndoesn't mean that the feature doesn't exist, and as long as that\nfeature does exist, trying to make it work better or differently is\nfair game.\n\nBut I think that's separate from your other question about whether we\nshould remove the session user exception. That looks tempting to me at\nfirst glance, because we have exchanged several hundred, and it really\nfeels more like several million, emails on this list about how much of\na problem it is that an unprivileged user can just log in and run a\nREVOKE. It breaks the idea that the people WITH ADMIN OPTION on a role\nare the ones who control membership in that role. Joshua Brindle's\nnote upthread about the interaction of this with pg_hba.conf is\nanother example of that, and I think there are more. 
Any idea that a\nrole is a general-purpose way of designating a group of users for some\nsecurity critical purpose is threatened if people can make changes to\nthe membership of that group without being specifically authorized to\ndo so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 10:58:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 8:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> A role is not considered to hold WITH\n> ADMIN OPTION on itself, but it may grant or revoke membership in\n> itself from a database session where the session user matches the\n> role.\"\n>\n> Is there some use case for the behavior described in that last\n> sentence?\n\n\nI can imagine, in particular combined with CREATEROLE, that this allows for\nany user to delegate their personal permissions to a separate newly created\nuser. Like an assistant. I'm not all that sure whether CREATEROLE is\npresently safe enough to give to a normal user in order to make this use\ncase work but it seems reasonable.\n\nI would be concerned about changing the behavior at this point. But I\nwould be in favor of at least removing the hard-coded exception and linking\nit to a role attribute. That attribute can default to \"SELFADMIN\" to match\nthe existing behavior but then \"NOSELFADMIN\" would exist to disable that\nbehavior on the per-role basis. Still tied to session_user as opposed to\ncurrent_user.\n\nDavid J.\n\nP.S.\n\ncreate role selfadmin admin selfadmin; -- ERROR: role \"selfadmin\" is a\nmember of role \"selfadmin\"\n\ncreate role selfadmin;\ngrant selfadmin to selfadmin with admin option; -- ERROR: role \"selfadmin\"\nis a member of role \"selfadmin\"\n\nThe error message seems odd. 
I tried this because instead of a \"SELFADMIN\"\nattribute adding a role to itself WITH ADMIN OPTION could be defined to\nhave the same effect. You cannot change WITH ADMIN OPTION independently of\nthe adding of the role to the group.", "msg_date": "Mon, 7 Mar 2022 09:02:16 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Hmm. I think the real issue is what David Johnson calls the session\n> user exception. I hadn't quite understood how that played into this.\n> According to the documentation: \"If WITH ADMIN OPTION is specified,\n> the member can in turn grant membership in the role to others, and\n> revoke membership in the role as well. Without the admin option,\n> ordinary users cannot do that. A role is not considered to hold WITH\n> ADMIN OPTION on itself, but it may grant or revoke membership in\n> itself from a database session where the session user matches the\n> role.\"\n\n> Is there some use case for the behavior described in that last\n> sentence?\n\nGood question. You might try figuring out when that text was added\nand then see if there's relevant discussion in the archives.\n\nJust looking at it now, without having done any historical research,\nI wonder why it is that we don't attach significance to WITH ADMIN\nOPTION being granted to the role itself. It seems like the second\npart of that sentence is effectively saying that a role DOES have\nadmin option on itself, contradicting the first part.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Mar 2022 11:04:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 9:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Just looking at it now, without having done any historical research,\n>\nI wonder why it is that we don't attach significance to WITH ADMIN\n> OPTION being granted to the role itself. 
It seems like the second\n> part of that sentence is effectively saying that a role DOES have\n> admin option on itself, contradicting the first part.\n>\n>\nWITH ADMIN OPTION is inheritable which is really bad if the group has WITH\nADMIN OPTION on itself. The session_user exception temporarily grants WITH\nADMIN OPTION to the group but it is done in such a way so that it is not\ninheritable.\n\nThere is no possible way to even assign WITH ADMIN OPTION on a role to\nitself since pg_auth_members doesn't record a self-relationship and\nadmin_option only exists there.\n\nDavid J.\n\nP.S. Feature request; modify \\du+ to show which \"Member of\" roles a given\nrole has the WITH ADMIN OPTION privilege on.", "msg_date": "Mon, 7 Mar 2022 09:58:22 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Is there some use case for the behavior described in that last\n> > sentence?\n>\n> Good question. You might try figuring out when that text was added\n> and then see if there's relevant discussion in the archives.\n\nApparently the permission used to be broader, and commit\nfea164a72a7bfd50d77ba5fb418d357f8f2bb7d0 (February 2014, Noah,\nCVE-2014-0060) restricted it by requiring that (a) the user had to be\nthe logged-in user, rather than an identity assumed via SET ROLE (so\nmaybe my bogus example from before would have worked in 2013) and (b)\nthat we're not in a security-restricted operation at the time.\n\nInterestingly, it appears to me that the behavior wasn't documented\nprior to that commit. The previous text read simply:\n\n If <literal>WITH ADMIN OPTION</literal> is specified, the member can\n in turn grant membership in the role to others, and revoke membership\n in the role as well. Without the admin option, ordinary users cannot do\n that.\n\nThat doesn't give any hint that self-administration is a special case.\n\nI reviewed the (private) discussion of this vulnerability on the\npgsql-security mailing list where various approaches were considered.\nI think it's safe to share a few broad details about that conversation\npublicly now, since it was many years ago and the fix has long since\nbeen published. There was discussion of making this\nself-administration behavior something that could be turned off, but\nsuch a change was deemed too large for the back-branches. 
There was no\ndiscussion that I could find about removing the behavior altogether.\nIt was noted that having a special case for this was different than\ngranting WITH ADMIN OPTION because WITH ADMIN OPTION is inherited and\nbeing logged in as a certain user is not.\n\nIt appears to me that the actual behavior of having is_admin_of_role()\nreturn true when member == role dates to\nf9fd1764615ed5d85fab703b0ffb0c323fe7dfd5 (Tom Lane, 2005). If I'm not\nreading this code wrong, prior to that commit, it seems to me that we\nonly searched the roles that were members of that role, directly or\nindirectly, and you had to have admin_option on the last hop of the\nmembership chain in order to get a \"true\" result. But that commit,\namong other changes, made member == role a special case, but the\ncomment just says /* Fast path for simple case */ which makes it\nappear that it wasn't thought to be a behavior change at all, but it\nlooks to me like it was. Am I confused?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 12:51:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Sun, Mar 6, 2022 at 11:01 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> The example, which you moved here, then attempts to demonstrate this \"fact\" but gets it wrong. Boss became a member of peon so if you want to demonstrate self-administration of a role's membership in a different group you have to login as boss, not peon. Doing that, and then revoking peon from boss, yields \"ERROR: must have admin option on role \"peon\"\".\n\nThis doesn't seem to me to be making a constructive argument. I showed\nan example with certain names demonstrating a certain behavior that I\nfind problematic. You don't have to think it's problematic, and you\ncan show other examples that demonstrate things you want to show. 
But\nplease don't tell me that when I literally cut and paste the output\nfrom my terminal into an email window, what I'm showing is somehow\ncounterfactual. The behavior as it exists today is surely a fact, and\nan easily demonstrable one at that. It's not a \"fact'\" in quotes, and\nit doesn't \"get it wrong\". It is the actual behavior and the example\nwith the names I picked demonstrates precisely what I want to\ndemonstrate. When you say that I should have chosen a different\nexample or used different identifier names or talked about it in\ndifferent way, *that* is an opinion. I believe that you are wholly\nentitled to that opinion, even if (as in this case) I disagree, but I\nbelieve that it is not right at all to make it sound as if I don't\nhave the right to pick the examples I care about, or as if terminal\noutput is not a factual representation of how things work today.\n\n> So no, without \"WITH ADMIN OPTION\" a role cannot decide what other roles they are a member of.\n\nIt clearly can in some limited cases, because I showed an example\ndemonstrating *exactly that thing*.\n\n> I don't necessarily have an issue changing self-administration but if the motivating concern is that all these new pg_* roles we are creating are something a normal user can opt-out of/revoke that simply isn't the case today, unless they are added to the pg_* role WITH ADMIN OPTION.\n\nI agree with this, but that's not my concern, because that's a\ndifferent use case from the one that I complained about. Since the\nsession user exception only applies to login roles, the problem that\nI'm talking about only occurs when a login role is granted to some\nother role.\n\n> That all said, permissions SHOULD BE strictly additive. If boss doesn't want to be a member of pg_read_all_files allowing them to revoke themself from that role seems like it should be acceptable. 
If there is fear in allowing someone to revoke (not add) themselves as a member of a different role that suggests we have a design issue in another feature of the system. Today, they neither grant nor revoke, and the self-revocation doesn't seem that important to add.\n\nI disagree with this on principle, and I also think that's not how it\nworks today. On the general principle, I do not see a compelling\nreason why we should have two systems for maintaining groups of users,\none of which is used for additive things and one of which is used for\nsubtractive things. That is a lot of extra machinery for little gain,\nespecially given how close we are to having it sorted out so that the\nsame mechanism can serve both purposes. It presently appears to me\nthat if we either remove the session user exception OR do the\ngrantor-tracking thing discussed earlier, we can get to a place where\nthe same facility can be used for either purpose. That would, I think,\nbe a significant step forward over the status quo. In terms of how\nthings work today, see Joshua Brindle's email about the use of groups\nin pg_hba.conf. That is an excellent example of how removing oneself\nfrom a group could enable one to bypass security restrictions intended\nby the DBA.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:18:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It appears to me that the actual behavior of having is_admin_of_role()\n> return true when member == role dates to\n> f9fd1764615ed5d85fab703b0ffb0c323fe7dfd5 (Tom Lane, 2005). 
If I'm not\n> reading this code wrong, prior to that commit, it seems to me that we\n> only searched the roles that were members of that role, directly or\n> indirectly, and you had to have admin_option on the last hop of the\n> membership chain in order to get a \"true\" result. But that commit,\n> among other changes, made member == role a special case, but the\n> comment just says /* Fast path for simple case */ which makes it\n> appear that it wasn't thought to be a behavior change at all, but it\n> looks to me like it was. Am I confused?\n\nUgh, I think you are right. It's been a long time of course, but it sure\nlooks like that was copied-and-pasted without recognizing that it was\nwrong in this function because of the need to check the admin_option flag.\nAnd then in the later security discussion we didn't realize that the\nproblematic behavior was a flat-out thinko, so we narrowed it as much as\nwe could instead of just taking it out.\n\nDoes anything interesting break if you do just take it out?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Mar 2022 13:28:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 1:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ugh, I think you are right. 
It's been a long time of course, but it sure\n> looks like that was copied-and-pasted without recognizing that it was\n> wrong in this function because of the need to check the admin_option flag.\n> And then in the later security discussion we didn't realize that the\n> problematic behavior was a flat-out thinko, so we narrowed it as much as\n> we could instead of just taking it out.\n>\n> Does anything interesting break if you do just take it out?\n\nThat is an excellent question, but I haven't had time yet to\ninvestigate the matter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:33:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > 1. What should be the exact rule for whether A can remove a grant made\n> > by B? Is it has_privs_of_role()? is_member_of_role()? Something else?\n> \n> No strong opinion here, but I'd lean slightly to the more restrictive\n> option.\n> \n> > 2. What happens if the same GRANT is enacted by multiple users? For\n> > example, suppose peon does \"GRANT peon to boss\" and then the superuser\n> > does the same thing afterwards, or vice versa? 
One design would be to\n> > try to track those as two separate grants, but I'm not sure if we want\n> > to add that much complexity, since that's not how we do it now and it\n> > would, for example, implicate the choice of PK on the pg_auth_members\n> > table.\n> \n> As you note later, we *do* track such grants separately in ordinary\n> ACLs, and I believe this is clearly required by the SQL spec.\n\nAgreed.\n\n> It says (for privileges on objects):\n> \n> Each privilege is represented by a privilege descriptor.\n> A privilege descriptor contains:\n> — The identification of the object on which the privilege is granted.\n> — The <authorization identifier> of the grantor of the privilege.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> — The <authorization identifier> of the grantee of the privilege.\n> — Identification of the action that the privilege allows.\n> — An indication of whether or not the privilege is grantable.\n> — An indication of whether or not the privilege has the WITH HIERARCHY OPTION specified.\n> \n> Further down (4.42.3 in SQL:2021), the granting of roles is described,\n> and that says:\n> \n> Each role authorization is described by a role authorization descriptor.\n> A role authorization descriptor includes:\n> — The role name of the role.\n> — The authorization identifier of the grantor.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> — The authorization identifier of the grantee.\n> — An indication of whether or not the role authorization is grantable.\n> \n> If we are not tracking the grantors of role authorizations,\n> then we are doing it wrong and we ought to fix that.\n\nYup, and as noted elsewhere, we are tracking it but not properly dealing\nwith dependencies nor are we considering the grantor when REVOKE is run.\n\nLooking at the spec for REVOKE is quite useful when trying to understand\nhow this is all supposed to work (and, admittedly, isn't something I did\nenough of when I did the original work on roles... 
sorry about that, was\nearly on). In particular, a REVOKE only works when it finds something\nto revoke/remove, and part of that search includes basically \"was it the\ncurrent role who was the grantor?\"\n\nThe specific language here being: A role authorization descriptor is\nsaid to be identified if it defines the grant of any of the specified\nroles revoked to grantee with grantor A.\n\nBasically, a role authorization descriptor isn't identified unless it's\none that this user/role had previously granted.\n\n> > 3. What happens if a user is dropped after being recorded as a\n> > grantor?\n> \n> Should work the same as it does now for ordinary ACLs, ie, you\n> gotta drop the grant first.\n\nAgreed.\n\n> > 4. Should we apply this rule to other types of grants, rather than\n> > just to role membership?\n> \n> I am not sure about the reasoning behind the existing rule that\n> superuser-granted privileges are recorded as being granted by the\n> object owner. It does feel more like a wart than something we want.\n> It might have been a hack to deal with the lack of GRANTED BY\n> options in GRANT/REVOKE back in the day.\n\nYeah, that doesn't seem right and isn't great.\n\n> Changing it could have some bad compatibility consequences though.\n> In particular, I believe it would break existing pg_dump files,\n> in that after restore all privileges would be attributed to the\n> restoring superuser, and there'd be no very easy way to clean that\n> up.\n\nUgh, that's pretty grotty, certainly.\n\n> > Please note that it is not really my intention to try to shove\n> > anything into v15 here.\n> \n> Agreed, this is not something to move on quickly. 
We might want\n> to think about adjusting pg_dump to use explicit GRANTED BY\n> options in GRANT/REVOKE a release or two before making incompatible\n> changes.\n\nI'm with Robert on this though- folks should know already that they need\nto use the pg_dump of the version of PG that they want to move to and\nnot try to re-use older pg_dump output with newer versions, for a number\nof reasons and this is just another.\n\nThanks,\n\nStephen", "msg_date": "Mon, 7 Mar 2022 13:45:12 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Mar 6, 2022 at 11:01 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > The example, which you moved here, then attempts to demonstrate this\n> \"fact\" but gets it wrong. Boss became a member of peon so if you want to\n> demonstrate self-administration of a role's membership in a different group\n> you have to login as boss, not peon. Doing that, and then revoking peon\n> from boss, yields \"ERROR: must have admin option on role \"peon\"\".\n>\n> This doesn't seem to me to be making a constructive argument. 
I showed\n> an example with certain names demonstrating a certain behavior that I\n> find problematic.\n\nWhether you choose the wording of the original thread:\n\n\"This is because we allow 'self administration' of roles, meaning that\nthey can decide what other roles they are a member of.\"\n\nhttps://www.postgresql.org/message-id/flat/20211005025746.GN20998%40tamriel.snowman.net\n\nOr you quote at the top of this one:\n\n> The ability of a role to revoke itself from some other role is just\n> something we need to accept as being a change that needs to be made,\n\nThis example:\n\nrhaas=# create user boss;\nCREATE ROLE\nrhaas=# create user peon;\nCREATE ROLE\nrhaas=# grant peon to boss;\nGRANT ROLE\nrhaas=# \\c - peon\nYou are now connected to database \"rhaas\" as user \"peon\".\nrhaas=> revoke peon from boss; -- i don't like being bossed around!\nREVOKE ROLE\n\nFails to demonstrate the boss \"can revoke itself from peon\" / \"boss can\ndecide what other roles they are a member of.\"\n\nYou are logged in as peon when you do the revoke, not boss, so the extent\nof what \"boss\" can or cannot do has not been shown.\n\nboss is a member of peon, not the other way around. That the wording\n\"grant peon to boss\" makes you think otherwise is unfortunate.\n\nDavid J.
", "msg_date": "Mon, 7 Mar 2022 11:47:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sun, Mar 6, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I was thinking the former ... however, after a bit of experimentation\n> > I see that we accept \"grant foo to bar granted by baz\" a VERY long\n> > way back, but the \"granted by\" option for object privileges is\n> > (a) pretty new and (b) apparently restrictively implemented:\n> >\n> > regression=# grant delete on alices_table to bob granted by alice;\n> > ERROR: grantor must be current user\n> >\n> > That's ... surprising. 
I guess whoever put that in was only\n> > interested in pro-forma SQL syntax compliance and not in making\n> > a usable feature.\n> \n> It appears so: https://www.postgresql.org/message-id/2073b6a9-7f79-5a00-5f26-cd19589a52c7%402ndquadrant.com\n> \n> It doesn't seem like that would be hard to fix. Maybe we should just do that.\n\nYeah, that seems like something that should be fixed. Superusers should\nbe allowed to set GRANTED BY to whatever they feel like, and I'd argue\nthat a role who wants a GRANT to actually be GRANTED BY some other role\nthey're a member of should also be allowed to (as they could anyway by\ndoing a SET ROLE), provided that role also has the privileges to do the\nGRANT itself, of course.\n\n> > So if we decide to extend this change into object privileges\n> > it would be advisable to use SET ROLE, else we'd be giving up\n> > an awful lot of backwards compatibility in dump scripts.\n> > But if we're only talking about role grants then I think\n> > GRANTED BY would work fine.\n> \n> OK.\n\nI'm not quite following this bit. Where would SET ROLE come into play\nwhen we're talking about old dump scripts and how the commands in those\nscripts might be interpreted by newer versions of PG..?\n\nThanks,\n\nStephen", "msg_date": "Mon, 7 Mar 2022 13:49:43 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Agreed, this is not something to move on quickly. 
We might want\n>> to think about adjusting pg_dump to use explicit GRANTED BY\n>> options in GRANT/REVOKE a release or two before making incompatible\n>> changes.\n\n> I'm with Robert on this though- folks should know already that they need\n> to use the pg_dump of the version of PG that they want to move to and\n> not try to re-use older pg_dump output with newer versions, for a number\n> of reasons and this is just another.\n\nYeah, in an ideal world you'd do that, but our users don't always have\nthe luxury of living in an ideal world. Sometimes all you've got is\nan old pg_dump file. Perhaps this behavior wouldn't mess things up\nenough to make the restored database unusable, but we need to think\nabout (and test) that case while we're considering changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Mar 2022 13:52:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I'm not quite following this bit. Where would SET ROLE come into play\n> when we're talking about old dump scripts and how the commands in those\n> scripts might be interpreted by newer versions of PG..?\n\nNo, the concern there is the other way around: what if you take a\nscript made by newer pg_dump and try to load it into an older server\nthat doesn't have the GRANTED BY option?\n\nWe're accustomed to saying that that doesn't work if you use a\ndatabase feature that didn't exist in the old server, but\nprivilege grants are hardly that. 
I don't want us to change the\npg_dump output in such a way that the grants can't be restored at all\nto an older server, just because of a syntax choice that we could\nmake backwards-compatibly instead of not-backwards-compatibly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Mar 2022 13:58:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Agreed, this is not something to move on quickly. We might want\n> >> to think about adjusting pg_dump to use explicit GRANTED BY\n> >> options in GRANT/REVOKE a release or two before making incompatible\n> >> changes.\n> \n> > I'm with Robert on this though- folks should know already that they need\n> > to use the pg_dump of the version of PG that they want to move to and\n> > not try to re-use older pg_dump output with newer versions, for a number\n> > of reasons and this is just another.\n> \n> Yeah, in an ideal world you'd do that, but our users don't always have\n> the luxury of living in an ideal world. Sometimes all you've got is\n> an old pg_dump file. Perhaps this behavior wouldn't mess things up\n> enough to make the restored database unusable, but we need to think\n> about (and test) that case while we're considering changes.\n\nI agree it's something to consider and deal with if we're able to do so\nsanely, but I disagree that we should be beholden to old dump files when\nconsidering how to move the project forward. Further, they can surely\nbuild and install the version of PG that goes with that dump file in a\ngreat many cases and then dump the data out using a newer version of\npg_dump. 
For 5 years they could do that with a completely supported\nversion of PG, but we've recently agreed to make an effort to do more\nhere by supporting the building of even older versions on modern\nsystems.\n\nThanks,\n\nStephen", "msg_date": "Mon, 7 Mar 2022 14:17:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 1:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm not quite following this bit. Where would SET ROLE come into play\n> > when we're talking about old dump scripts and how the commands in those\n> > scripts might be interpreted by newer versions of PG..?\n>\n> No, the concern there is the other way around: what if you take a\n> script made by newer pg_dump and try to load it into an older server\n> that doesn't have the GRANTED BY option?\n>\n> We're accustomed to saying that that doesn't work if you use a\n> database feature that didn't exist in the old server, but\n> privilege grants are hardly that. I don't want us to change the\n> pg_dump output in such a way that the grants can't be restored at all\n> to an older server, just because of a syntax choice that we could\n> make backwards-compatibly instead of not-backwards-compatibly.\n\nAre you absolutely positive that it's that simple? I mean, what if the\nSET ROLE command has other side effects, or if the GRANT command\nbehaves differently in some way as a result of the SET ROLE having\nbeen done? I feel like a solution that involves explicitly specifying\nthe behavior that we want (i.e. GRANTED BY) is likely to be more\nreliable and more secure than a solution which involves absorbing a\nkey value from a session property (i.e. the role established by SET\nROLE). 
Even if we decide that SET ROLE is the way to go for\ncompatibility reasons, I would personally say that it's an inferior\nhack only worth accepting for that reason than a truly desirable\ndesign.\n\nSee CVE-2018-1058 for an example of what I'm talking about. The\nprevailing search_path turned out to affect not only the creation\nschema, as intended, but also the resolution of references to other\nobjects mentioned in the CREATE COMMAND, as not intended. I don't see\na similar hazard here, but I'm worried that there might be one.\nDeclarative syntax is a very powerful tool for avoiding those kinds of\nmishaps, and I think we should make as much use of it as we can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 14:18:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm not quite following this bit. Where would SET ROLE come into play\n> > when we're talking about old dump scripts and how the commands in those\n> > scripts might be interpreted by newer versions of PG..?\n> \n> No, the concern there is the other way around: what if you take a\n> script made by newer pg_dump and try to load it into an older server\n> that doesn't have the GRANTED BY option?\n\nWow. No, I really don't think I can agree that we need to care about\nthis.\n\n> We're accustomed to saying that that doesn't work if you use a\n> database feature that didn't exist in the old server, but\n> privilege grants are hardly that. 
I don't want us to change the\n> pg_dump output in such a way that the grants can't be restored at all\n> to an older server, just because of a syntax choice that we could\n> make backwards-compatibly instead of not-backwards-compatibly.\n\nGRANTED BY is clearly such a feature that exists in the newer version\nand doesn't exist in the older and I can't agree that we should\ncomplicate things for ourselves and bend over backwards to try and make\nit work to take a dump from a newer version of PG and make it work on\nrandom older versions.\n\nFolks are also able to exclude privileges from dumps if they want to.\n\nWhere do we document that we are going to put in effort to make these\nkinds of things work? What other guarantees are we supposed to be\nproviding regarding using output from a newer pg_dump against older\nservers? What about newer custom format dumps? Surely you're not\nsuggesting that we need to back-patch support for them to released\nversions of pg_restore.\n\nThanks,\n\nStephen", "msg_date": "Mon, 7 Mar 2022 14:22:06 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> In terms of how\n>\nthings work today, see Joshua Brindle's email about the use of groups\n> in pg_hba.conf. 
That is an excellent example of how removing oneself\n> from a group could enable one to bypass security restrictions intended\n> by the DBA.\n>\n>\nYou mean the one that was based upon your \"ooops\"...I discounted that\nimmediately because members cannot revoke their own membership in a group\nunless they were given WITH ADMIN OPTION on that group.\n\nThe mere fact that the pg_hba.conf concern raised there hasn't been\nreported as a live issue suggests the lack of any meaningful design flaw\nhere.\n\nThat isn't to say that having a LOGIN role get an automatic temporary WITH\nADMIN OPTION on itself is a good thing - but there isn't any privilege\nescalation vector here to be squashed. There is just a \"DBAs should treat\nLOGIN roles as leaf nodes\" expectation in which case there would be no\nsuperuser granted memberships to be removed.\n\nDavid J.", "msg_date": "Mon, 7 Mar 2022 12:29:31 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 2:29 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> You mean the one that was based upon your \"ooops\"...I discounted that immediately because members cannot revoke their own membership in a group unless they were given WITH ADMIN OPTION on that group.\n\nOh, hmm. That example might be backwards from the case I'm talking about.\n\n> The mere fact that the pg_hba.conf concern raised there hasn't been reported as a live issue suggests the lack of any meaningful design flaw here.\n\nNot really. The system is full of old bugs, just as all software\nsystem are, and the particular role self-administration behavior that\nis at issue here appears to be something that was accidentally\nintroduced 16 years years ago in a commit that did something else and\nnever scrutinized from a design perspective since then.\n\nPersonally, I've been shocked by the degree to which this entire area\nseems to be full of design flaws and half-baked code. I mean, just the\nfact that the pg_auth_members.grantor can be left pointing to a role\nOID that no longer exists is pretty crazy, right? I don't think anyone\ntoday would consider something with that kind of wart committable.\n\n> That isn't to say that having a LOGIN role get an automatic temporary WITH ADMIN OPTION on itself is a good thing - but there isn't any privilege escalation vector here to be squashed. There is just a \"DBAs should treat LOGIN roles as leaf nodes\" expectation in which case there would be no superuser granted memberships to be removed.\n\nWell, we may not have found one yet, but that doesn't prove none\nexists. In any case, if we can agree that it's not necessarily a\ndesirable behavior, that's good enough for me.\n\n(I still disagree with the idea that LOGIN roles have to be leaf\nnodes. 
We could have a system where that's true, but that's not how\nthe system we actually have is designed.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 14:46:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 7, 2022, at 10:28 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Does anything interesting break if you do just take it out?\n\n SET SESSION AUTHORIZATION regress_priv_group2;\n GRANT regress_priv_group2 TO regress_priv_user5; -- ok: a role can self-admin\n-NOTICE: role \"regress_priv_user5\" is already a member of role \"regress_priv_group2\"\n+ERROR: must have admin option on role \"regress_priv_group2\"\n\nThis test failure is just a manifestation of the intended change, but assuming we make no other changes, the error message would clearly need to be updated, because it suggests the role should have admin_option on itself, a situation which is not currently supported.\n\nPerhaps we should support that, though, by adding a reflexive aclitem[] to pg_authid (meaning it tracks which privileges a role has on itself) with tracking of who granted it, so that revocation can be handled properly. The aclitem could start out null, meaning the role has by default the traditional limited self-admin which the code comments discuss:\n\n /*\n * A role can admin itself when it matches the session user and we're\n * outside any security-restricted operation, SECURITY DEFINER or\n * similar context. SQL-standard roles cannot self-admin. However,\n * SQL-standard users are distinct from roles, and they are not\n * grantable like roles: PostgreSQL's role-user duality extends the\n * standard. Checking for a session user match has the effect of\n * letting a role self-admin only when it's conspicuously behaving\n * like a user. 
Note that allowing self-admin under a mere SET ROLE\n * would make WITH ADMIN OPTION largely irrelevant; any member could\n * SET ROLE to issue the otherwise-forbidden command.\n *\n * Withholding self-admin in a security-restricted operation prevents\n * object owners from harnessing the session user identity during\n * administrative maintenance. Suppose Alice owns a database, has\n * issued \"GRANT alice TO bob\", and runs a daily ANALYZE. Bob creates\n * an alice-owned SECURITY DEFINER function that issues \"REVOKE alice\n * FROM carol\". If he creates an expression index calling that\n * function, Alice will attempt the REVOKE during each ANALYZE.\n * Checking InSecurityRestrictedOperation() thwarts that attack.\n *\n * Withholding self-admin in SECURITY DEFINER functions makes their\n * behavior independent of the calling user. There's no security or\n * SQL-standard-conformance need for that restriction, though.\n *\n * A role cannot have actual WITH ADMIN OPTION on itself, because that\n * would imply a membership loop. 
Therefore, we're done either way.\n */\n\nFor non-null aclitem[], we could support REVOKE ADMIN OPTION FOR joe FROM joe, and for explicit re-grants, we could track who granted it, such that further revocations could properly refuse if the revoker doesn't have sufficient privileges vis-a-vis the role that granted it in the first place.\n\nI have not yet tried to implement this, and might quickly hit problems with the idea, but will take a stab at a proof-of-concept patch unless you suggest a better approach.\n\nThoughts?\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 7 Mar 2022 11:59:54 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 2:59 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> This test failure is just a manifestation of the intended change, but assuming we make no other changes, the error message would clearly need to be updated, because it suggests the role should have admin_option on itself, a situation which is not currently supported.\n\nIt's been pointed out upthread that this would have undesirable\nsecurity implications, because the admin option would be inherited,\nand the implicit permission isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Mar 2022 15:01:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 7, 2022, at 12:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> It's been pointed out upthread that this would have undesirable\n> security implications, because the admin option would be inherited,\n> and the implicit permission isn't.\n\nRight, but with a reflexive self-admin-option, we could document that it works in a non-inherited way. 
We'd just be saying the current hard-coded behavior is an option which can be revoked rather than something you're stuck with.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 7 Mar 2022 12:03:55 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 7, 2022, at 12:03 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Right, but with a reflexive self-admin-option, we could document that it works in a non-inherited way. We'd just be saying the current hard-coded behavior is an option which can be revoked rather than something you're stuck with.\n\nWe could also say that the default is to not have admin option on yourself, with that being something grantable, but that is a larger change from the historical behavior and might have more consequences for dump/restore, etc.\n\nMy concern about just nuking self-admin is that there may be sites which use self-admin and we'd be leaving them without a simple work-around after upgrade, because they couldn't restore the behavior by executing a grant. They'd have to more fundamentally restructure their role relationships to not depend on self-admin, something which might be harder for them to do. 
Perhaps nobody is using self-admin, or very few people are using it, and I'm being overly concerned.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 7 Mar 2022 12:09:48 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 7, 2022, at 12:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> It's been pointed out upthread that this would have undesirable\n>> security implications, because the admin option would be inherited,\n>> and the implicit permission isn't.\n\n> Right, but with a reflexive self-admin-option, we could document that it works in a non-inherited way. We'd just be saying the current hard-coded behavior is an option which can be revoked rather than something you're stuck with.\n\nAfter reflection, I think that role self-admin is probably a bad idea that\nwe should stay away from. It could perhaps be reasonable given some other\nsystem design and/or syntax than what SQL gives us, but we're dealing in\nSQL. It doesn't make sense to GRANT a role to itself, and therefore it\nlikewise doesn't make sense to GRANT WITH ADMIN OPTION.\n\nBased on Robert's archaeological dig, it now seems that the fact that\nwe have any such behavior at all was just a mistake. What would be\nlost if we drop it?\n\nHaving said that, one thing that I find fishy is that it's not clear\nwhere the admin privilege for a role originates. After \"CREATE ROLE\nalice\", alice has no members, therefore none that have admin privilege,\ntherefore the only way that the first member could be added is via\nsuperuser deus ex machina. This does not seem clean. If we recorded\nwhich user created the role, we could act as though that user has\nadmin privilege (whether or not it's a member). 
Perhaps I'm\nreinventing something that was already discussed upthread. I wonder\nwhat the SQL spec has to say on this point, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Mar 2022 15:16:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Based on Robert's archaeological dig, it now seems that the fact that\n> we have any such behavior at all was just a mistake. What would be\n> lost if we drop it?\n>\n\nProbably nothing that couldn't be replaced, and with a better model, but I\ndo have a concern that there are setups in the wild inadvertently using\nthis behavior. Enough so that I would vote to change it but include a\nmigration GUC to restore the current behavior, probably with a deprecation\nwarning. Kinda depends on the post-change dump/restore mechanics. But\njust tearing it out wouldn't seem extraordinary for us.\n\n\n>\n> Having said that, one thing that I find fishy is that it's not clear\n> where the admin privilege for a role originates.\n\n\nI do not see a problem with there being no inherent admin privilege for a\nrole. A superuser or CREATEROLE user holds admin privilege on all roles in\nthe cluster. They can delegate the privilege to administer a role to yet\nanother role in the system. That necessitates creating two roles - the one\nbeing administered and the one being delegated to. I don't see a benefit\nto saving which specific superuser or CREATEROLE user \"owns\" the role that\nis to be administered. Not unless non-owner CREATEROLE users are prevented\nfrom exercising admin privileges on the role. That all said, I'd accept\nthe choice to include such ownership information as a requirement for\nmeeting the auditing needs of DBAs. 
But I would argue that such auditing\nprobably needs to be external to the working system - the fact that\nownership can be changed reduces the benefit of an in-database value.\n\n> If we recorded\n\nwhich user created the role, we could act as though that user has\n> admin privilege (whether or not it's a member).\n\n\nI suppose we could record the current owner of a role but that seems\nunnecessary. I dislike using the \"created\" concept by virtue of the fact\nthat, for routines, \"security definer\" implies creator but it actually\nmeans \"security owner\".\n\nDavid J.
", "msg_date": "Mon, 7 Mar 2022 14:34:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "> On Mar 7, 2022, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> What would be\n> lost if we drop it?\n\nI looked into this a bit. Removing that bit of code, the only regression test changes for \"check-world\" are the expected ones, with nothing else breaking. Running installcheck+pg_upgrade to the patched version of HEAD from each of versions 11, 12, 13 and 14 doesn't turn up anything untoward. The change I used (for reference) is attached:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 7 Mar 2022 20:14:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On 07.03.22 19:18, Robert Haas wrote:\n>> That all said, permissions SHOULD BE strictly additive. If boss doesn't want to be a member of pg_read_all_files allowing them to revoke themself from that role seems like it should be acceptable. 
If there is fear in allowing someone to revoke (not add) themselves as a member of a different role that suggests we have a design issue in another feature of the system. Today, they neither grant nor revoke, and the self-revocation doesn't seem that important to add.\n> I disagree with this on principle, and I also think that's not how it\n> works today. On the general principle, I do not see a compelling\n> reason why we should have two systems for maintaining groups of users,\n> one of which is used for additive things and one of which is used for\n> subtractive things.\n\nDo we have subtractive permissions today?\n\n\n\n", "msg_date": "Wed, 9 Mar 2022 13:55:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Wed, Mar 9, 2022 at 7:55 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Do we have subtractive permissions today?\n\nNot in the GRANT/REVOKE sense, I think, but you can put a user in a\ngroup and then mention that group in pg_hba.conf. And that line might\nbe \"reject\" or whatever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 08:02:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:14 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Mar 7, 2022, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What would be\n> > lost if we drop it?\n>\n> I looked into this a bit. Removing that bit of code, the only regression test changes for \"check-world\" are the expected ones, with nothing else breaking. Running installcheck+pg_upgrade to the patched version of HEAD from each of versions 11, 12, 13 and 14 doesn't turn up anything untoward.\n\nI looked into this a bit, too. 
I attach a draft patch for removing the\nself-admin exception.\n\nI found that having is_admin_of_role() return true matters in three\nways: (1) It lets you grant membership in the role to some other role.\n(2) It lets you revoke membership in the role from some other role.\n(3) It changes the return value of pg_role_aclcheck(), which is used\nin the implementation of various SQL-callable functions all invoked\nvia the name pg_has_role(). We've mostly been discussing (2) as an\nissue, but (1) and (3) are pretty interesting too. Regarding (3),\nthere is a comment in the code indicating that Noah considered the\nself-admin exception something of a wart as far as pg_has_role() is\nconcerned. As to (1), I discovered that today you can do this:\n\nrhaas=# create user foo;\nCREATE ROLE\nrhaas=# create user bar;\nCREATE ROLE\nrhaas=# \\q\n[rhaas ~]$ psql -U foo rhaas\npsql (15devel)\nType \"help\" for help.\n\nrhaas=> grant foo to bar with admin option;\nGRANT ROLE\n\nI don't know why I didn't realize that before. It's a natural result\nof treating the logged-in user as if they had admin option. But it's\nweird that you can't even be granted WITH ADMIN OPTION on your own\nlogin role, but at the same time without having it you can grant it to\nsomeone else!\n\nI believe there are three other points worth some consideration here.\n\nFirst, in the course of my investigation I re-discovered what Tom\nalready did a good job articulating:\n\ntgl> Having said that, one thing that I find fishy is that it's not clear\ntgl> where the admin privilege for a role originates. After \"CREATE ROLE\ntgl> alice\", alice has no members, therefore none that have admin privilege,\ntgl> therefore the only way that the first member could be added is via\ntgl> superuser deus ex machina. This does not seem clean.\n\nI agree with that, but I don't think it's a sufficient reason for\nkeeping the self-admin exception, because the same problem exists for\nnon-login roles. 
I don't even think it's the right idea conceptually\nto suppose that the power to administer a role originates from the\nrole itself. If that were so, then it would be inherited by all\nmembers of the role along with all the rest of the role's privileges,\nwhich is so clearly not right that we've already prohibited a role\nfrom having WITH ADMIN OPTION on itself. In my opinion, the right to\nadminister a role - regardless of whether or not it is a login role -\nmost naturally vests in the role that created it, or something in that\ndirection at least, if not that exact thing. Today, that means the\nsuperuser or a CREATEROLE user who could hack superuser if they\nwished. In the future, I hope for other alternatives, as recently\nargued on other threads. But we need not resolve the question of how\nthat should work exactly in order to agree (as I hope we do) that\ndoubling down on the self-administration exception is not the answer.\n\nSecond, it occurred to me to wonder what implications a change like\nthis might have for dump and restore. If privilege restoration somehow\nrelied on this behavior, then we'd have a problem. But I don't think\nit does, because (a) pg_dump can SET ROLE but can't change the session\nuser without reconnecting, so it's unclear how we could be relying on\nit; (b) it wouldn't work for non-login roles, and it's unlikely that\nwe would treat login and non-login roles differently in terms of\nrestoring privileges, and (c) when I execute the example shown above\nand then run pg_dump, there's no attempt to change the current user,\nit just dumps \"GRANT foo TO bar WITH ADMIN OPTION GRANTED BY foo\".\n\nThird, it occurred to me to wonder whether some users might be using\nand relying upon this behavior. 
It's certainly possible, and it does\nsuck that we'd be removing it without providing a workable substitute.\nBut it's probably not a LOT of users because most people who have\ncommented on this topic on this mailing list seem to find granting\nmembership in a login role a super-weird thing to do, because a lot of\npeople really seem to want every role to be a user or a group, and a\nlogin role with members feels like it's blurring that line. I'm\ninclined to think that the small number of people who may be unhappy\nis an acceptable price to pay for removing this wart, but it's a\njudgement call and if someone has information to suggest that I'm\nwrong, it'd be good to hear about that.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Mar 2022 15:51:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mar 7, 2022, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> tgl> Having said that, one thing that I find fishy is that it's not clear\n> tgl> where the admin privilege for a role originates. After \"CREATE ROLE\n> tgl> alice\", alice has no members, therefore none that have admin privilege,\n> tgl> therefore the only way that the first member could be added is via\n> tgl> superuser deus ex machina. This does not seem clean.\n\n> I agree with that, but I don't think it's a sufficient reason for\n> keeping the self-admin exception, because the same problem exists for\n> non-login roles. I don't even think it's the right idea conceptually\n> to suppose that the power to administer a role originates from the\n> role itself.\n\nActually, that's the same thing I was trying to say. 
But if it doesn't\noriginate from the role itself, where does it originate from?\n\n> In my opinion, the right to\n> administer a role - regardless of whether or not it is a login role -\n> most naturally vests in the role that created it, or something in that\n> direction at least, if not that exact thing.\n\nThis seems like a reasonable answer to me too: the creating role has admin\noption implicitly, and can then choose to grant that to other roles.\nObviously some work needs to be done to make that happen (and we should\nsee whether the SQL spec has some different idea).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Mar 2022 16:01:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Wed, Mar 9, 2022 at 4:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > In my opinion, the right to\n> > administer a role - regardless of whether or not it is a login role -\n> > most naturally vests in the role that created it, or something in that\n> > direction at least, if not that exact thing.\n>\n> This seems like a reasonable answer to me too: the creating role has admin\n> option implicitly, and can then choose to grant that to other roles.\n> Obviously some work needs to be done to make that happen (and we should\n> see whether the SQL spec has some different idea).\n\nWell, the problem is that as far as I can see, the admin option is an\noptional feature of membership. You can grant someone membership\nwithout admin option, or with admin option, but you can't grant them\nthe admin option without membership, just like you can't purchase an\nupgrade to first class without the underlying plane ticket. What would\nthe syntax look even like for this? GRANT foo TO bar WITH ADMIN OPTION\nBUT WITHOUT MEMBERSHIP? Yikes.\n\nBut do we really have to solve this problem before we can clean up\nthis session exception? 
I hope not, because I think that's a much\nbigger can of worms than this is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 16:15:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mar 7, 2022, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > tgl> Having said that, one thing that I find fishy is that it's not clear\n> > tgl> where the admin privilege for a role originates. After \"CREATE ROLE\n> > tgl> alice\", alice has no members, therefore none that have admin privilege,\n> > tgl> therefore the only way that the first member could be added is via\n> > tgl> superuser deus ex machina. This does not seem clean.\n> \n> > I agree with that, but I don't think it's a sufficient reason for\n> > keeping the self-admin exception, because the same problem exists for\n> > non-login roles. I don't even think it's the right idea conceptually\n> > to suppose that the power to administer a role originates from the\n> > role itself.\n> \n> Actually, that's the same thing I was trying to say. 
But if it doesn't\n> originate from the role itself, where does it originate from?\n> \n> > In my opinion, the right to\n> > administer a role - regardless of whether or not it is a login role -\n> > most naturally vests in the role that created it, or something in that\n> > direction at least, if not that exact thing.\n> \n> This seems like a reasonable answer to me too: the creating role has admin\n> option implicitly, and can then choose to grant that to other roles.\n\nI agree that this has some appeal, but it's not desirable in all cases\nand so I wouldn't want it to be fully baked into the system ala the role\n'owner' concept.\n\n> Obviously some work needs to be done to make that happen (and we should\n> see whether the SQL spec has some different idea).\n\nAgreed on this, though I don't recall it having much to say on it.\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Mar 2022 16:20:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Mar 9, 2022 at 4:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > In my opinion, the right to\n> > > administer a role - regardless of whether or not it is a login role -\n> > > most naturally vests in the role that created it, or something in that\n> > > direction at least, if not that exact thing.\n> >\n> > This seems like a reasonable answer to me too: the creating role has admin\n> > option implicitly, and can then choose to grant that to other roles.\n> > Obviously some work needs to be done to make that happen (and we should\n> > see whether the SQL spec has some different idea).\n> \n> Well, the problem is that as far as I can see, the admin option is an\n> optional feature of membership. 
You can grant someone membership\n> without admin option, or with admin option, but you can't grant them\n> the admin option without membership, just like you can't purchase an\n> upgrade to first class without the underlying plane ticket. What would\n> the syntax look even like for this? GRANT foo TO bar WITH ADMIN OPTION\n> BUT WITHOUT MEMBERSHIP? Yikes.\n\nI've been meaning to reply to your other email regarding this, but I\ndon't really agree that the syntax ends up being so terrible or\ndifficult to deal with, considering we have these same general things\nfor ALTER ROLE already and there hasn't been all that much complaining.\nThat is, we have LOGIN and NOLOGIN, CREATEROLE and NOCREATEROLE, and we\ncould have MEMBERSHIP and NOMEMBERSHIP pretty easily here if we wanted\nto.\n\n> But do we really have to solve this problem before we can clean up\n> this session exception? I hope not, because I think that's a much\n> bigger can of worms than this is.\n\nI do believe we can deal with the above independently and at a later\ntime and go ahead and clean up the session excepton bit without dealing\nwith the above at the same time.\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Mar 2022 16:23:48 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "I wrote:\n> This seems like a reasonable answer to me too: the creating role has admin\n> option implicitly, and can then choose to grant that to other roles.\n> Obviously some work needs to be done to make that happen (and we should\n> see whether the SQL spec has some different idea).\n\nAh, here we go: it's buried under CREATE ROLE. 
SQL:2021 12.4 <role\ndefinition> saith that when role A executes CREATE ROLE <role name>,\nthen\n\n1) A grantable role authorization descriptor is created whose role name\nis <role name>, whose grantor is \"_SYSTEM\", and whose grantee is A.\n\nSince nobody is _SYSTEM, this grant can't be deleted except by dropping\nthe new role (or, maybe, dropping A?). So that has nearly the same\nend result as \"the creating role has admin option implicitly\". The main\ndifference I can see is that it also means the creating role is a *member*\nimplicitly, which is something I'd argue we don't want to enforce. This\nis analogous to the way we let an object owner revoke her own ordinary\npermissions, which the SQL model doesn't allow since those permissions\nwere granted to her by _SYSTEM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Mar 2022 16:24:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, the problem is that as far as I can see, the admin option is an\n> optional feature of membership. You can grant someone membership\n> without admin option, or with admin option, but you can't grant them\n> the admin option without membership, just like you can't purchase an\n> upgrade to first class without the underlying plane ticket. What would\n> the syntax look even like for this? GRANT foo TO bar WITH ADMIN OPTION\n> BUT WITHOUT MEMBERSHIP? Yikes.\n\nI don't think we need syntax to describe it. As I just said in my\nother reply, we have a perfectly good precedent for this already\nin ordinary object permissions. That is: an object owner always,\nimplicitly, has GRANT OPTION for all the object's privileges, even\nif she revoked the corresponding plain privilege from herself.\n\nYeah, this does mean that we're effectively deciding that the creator\nof a role is its owner. 
What's the problem with that?\n\n> But do we really have to solve this problem before we can clean up\n> this session exception?\n\nI think we need a plan for where we're going. I don't see \"clean up\nthe session exception\" as an end in itself; it's part of re-examining\nhow all of this ought to work. I don't say that we have to have a\ncomplete patch right away, only that we need a coherent end goal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Mar 2022 16:31:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Wed, Mar 9, 2022 at 2:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Well, the problem is that as far as I can see, the admin option is an\n> > optional feature of membership. You can grant someone membership\n> > without admin option, or with admin option, but you can't grant them\n> > the admin option without membership, just like you can't purchase an\n> > upgrade to first class without the underlying plane ticket. What would\n> > the syntax look even like for this? GRANT foo TO bar WITH ADMIN OPTION\n> > BUT WITHOUT MEMBERSHIP? Yikes.\n>\n> I don't think we need syntax to describe it. As I just said in my\n> other reply, we have a perfectly good precedent for this already\n> in ordinary object permissions. That is: an object owner always,\n> implicitly, has GRANT OPTION for all the object's privileges, even\n> if she revoked the corresponding plain privilege from herself.\n>\n\nSo CREATE ROLE will assign ownership of AND membership in the newly created\nrole to the session_user UNLESS the OWNER clause is present in which case\nthe named role, so long as the session_user can SET ROLE to the named role,\nbecomes the owner & member. 
Subsequent to that the owner can issue: REVOKE\nnew_role FROM role_name where role_name is again the session_user role or\none that can be SET ROLE to.\n\n\n> Yeah, this does mean that we're effectively deciding that the creator\n> of a role is its owner. What's the problem with that?\n>\n\nI'm fine with this. It does introduce an OWNER concept to roles and so at\nminimum we need to add:\n\nALTER ROLE foo OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER |\nSESSION_USER }\n\nAnd similar for CREATE ROLE\nAnd keep the USER alias commands in sync.\nGROUP commands are only present for backward compatibility and so don't get\nupdated with new features by design.\n\nObviously a superuser can change ownership.\n\nPlaying with table ownership I find this behavior:\n-- superuser\nCREATE ROLE tblowner;\nCREATE TABLE tblowner_test (id serial primary key);\nALTER TABLE tblowner_test OWNER TO tblowner;\n\nCREATE ROLE boss;\nGRANT boss TO tblowner;\n\nSET SESSION AUTHORIZATION tblowner;\nALTER TABLE tblowner_test OWNER TO boss; --works\n\nSo tblowner can push their ownership attribute to any group they are a\nmember of. Is that the behavior we want for roles as well?\n\nDavid J.\n", "msg_date": "Wed, 9 Mar 2022 15:00:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> So CREATE ROLE will assign ownership of AND membership in the newly created\n> role to the session_user\n\nI would NOT have it automatically assign membership in the new role,\neven though the SQL spec says so. We've not done that historically\nand it doesn't seem desirable. In particular, it's *really* not\ndesirable for a user (role with LOGIN).\n\n> I'm fine with this. It does introduce an OWNER concept to roles and so at\n> minimum we need to add:\n> ALTER ROLE foo OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER |\n> SESSION_USER }\n\nAgreed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Mar 2022 17:35:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Wed, Mar 9, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think we need syntax to describe it. As I just said in my\n> other reply, we have a perfectly good precedent for this already\n> in ordinary object permissions. That is: an object owner always,\n> implicitly, has GRANT OPTION for all the object's privileges, even\n> if she revoked the corresponding plain privilege from herself.\n>\n> Yeah, this does mean that we're effectively deciding that the creator\n> of a role is its owner. What's the problem with that?\n\nI don't think that's entirely the wrong concept, but it doesn't make a\nlot of sense in a world where the creator has to be a superuser. If\nalice, bob, and charlie are superusers who take turns creating new\nusers, and then we let charlie go due to budget cuts, forcing alice\nand bob to change the owner of all the users he created to some other\nsuperuser as a condition of dropping his account is a waste of\neveryone's time. They can do exactly the same things to every account\non the system after we change the role owner as before.\n\nBut wait, I hear you cry, what about CREATEROLE? 
Well, CREATEROLE is\ngenerally agreed to be broken right now, and if you don't agree with\nthat, consider that it can grant pg_execute_server_programs to a\nnewly-created account and then explain to me how it's functionally\ndifferent from superuser. The whole area needs a rethink. I believe\neveryone involved in the discussion on the other threads agrees that\nsome reform of CREATEROLE is necessary, and more generally with the\nidea that it's useful for non-superusers to be able to create roles.\nBut the reasons why people want that vary.\n\nI want that because I want mini-superusers, where alice can administer\nthe users that alice creates just as if she were a superuser,\nincluding having their permissions implicitly and dropping them when\nshe wants them gone, but where alice cannot break out to the operating\nsystem as a true superuser could do. I want this because the lack of\nmeaningful privilege separation that led to CVE-2019-9193 being filed\nspuriously is a very real problem. It's a thing a lot of people want,\nand I want to give it to them. David Steele, on the other hand, wants\nto build a user-creating bot that can create accounts but otherwise\nconforms to the principle of least privilege: the bot can stand up\naccounts, can grant them membership in a defined set of groups, but\ncannot exercise the privileges of those accounts (or hack superuser\neither). Other people may well want other things.\n\nAnd that's why I'm not sure it's really the right idea to say that we\ndon't need syntax for this admin-without-member concept. If we just\nwant to bolt role ownership onto the existing framework without really\nchanging anything else, we can do that without extra syntax and, as\nyou say here, make it an implicit property of role ownership. But I\ndon't see that as has having much value; we just end up with a bunch\nof superuser owners. Whatever. 
Now Stephen made the argument that we\nought to actually have admin-without-member as a first class concept,\nsomething that could be assigned to arbitrary users. Actually, I think\nhe wanted it even more fine grained with that. And I think that could\nmake the concept a lot more useful, but then it needs some kind of\nunderstandable syntax.\n\nThere's a lot of moving parts here. It's not just about coming up with\nsomething that sounds generally logical, but about creating a system\nthat has some real-world utility.\n\n> > But do we really have to solve this problem before we can clean up\n> > this session exception?\n>\n> I think we need a plan for where we're going. I don't see \"clean up\n> the session exception\" as an end in itself; it's part of re-examining\n> how all of this ought to work. I don't say that we have to have a\n> complete patch right away, only that we need a coherent end goal.\n\nI'd like to have a plan, too, but if this behavior is accidental, I\nstill think we can remove it without making big decisions about future\ndirection. The perfect is the enemy of the good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 09:46:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Mar 9, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think we need syntax to describe it. As I just said in my\n> > other reply, we have a perfectly good precedent for this already\n> > in ordinary object permissions. That is: an object owner always,\n> > implicitly, has GRANT OPTION for all the object's privileges, even\n> > if she revoked the corresponding plain privilege from herself.\n> >\n> > Yeah, this does mean that we're effectively deciding that the creator\n> > of a role is its owner. 
What's the problem with that?\n>\n> I don't think that's entirely the wrong concept, but it doesn't make a\n> lot of sense in a world where the creator has to be a superuser. If\n> alice, bob, and charlie are superusers who take turns creating new\n> users, and then we let charlie go due to budget cuts, forcing alice\n> and bob to change the owner of all the users he created to some other\n> superuser as a condition of dropping his account is a waste of\n> everyone's time. They can do exactly the same things to every account\n> on the system after we change the role owner as before.\n>\n\nThen maybe we should just implement the idea that if a superuser would\nbecome the owner we instead substitute in the bootstrap user. Or give the\nDBA the choice whether they want to retain knowledge of specific roles -\nand thus are willing to accept the \"waste of time\".\n\n\n> But wait, I hear you cry, what about CREATEROLE? Well, CREATEROLE is\n> generally agreed to be broken right now, and if you don't agree with\n> that, consider that it can grant pg_execute_server_programs to a\n> newly-created account and then explain to me how it's functionally\n> different from superuser.\n\n\nCREATEROLE has long been defined as basically having \"with admin option\" on\nevery role in the system. The failure to special-case the roles that grant\ndifferent aspects of superuser-ness to its members doesn't make CREATEROLE\nitself broken, it makes the implementation of pg_execute_server_programs\nbroken. Only superusers should be considered to have with admin option on\nthese roles. They can delegate through the usual membership+admin mechanism\nto a CREATEROLE role if they desire.\n\n\n> The whole area needs a rethink. 
I believe\n> everyone involved in the discussion on the other threads agrees that\n> some reform of CREATEROLE is necessary, and more generally with the\n> idea that it's useful for non-superusers to be able to create roles.\n>\n\nAs the documentation says, using SUPERUSER for day-to-day administration is\ncontrary to good security practices. Role management is considered to be a\nday-to-day administration activity. I agree with this principle. It was\ndesigned to neither be a superuser nor grant superuser, so removing the\nability to grant the pg_* role memberships remains consistent with its\noriginal intent.\n\n\n> I want that because I want mini-superusers, where alice can administer\n> the users that alice creates just as if she were a superuser,\n> including having their permissions implicitly and dropping them when\n> she wants them gone, but where alice cannot break out to the operating\n> system as a true superuser could do.\n\n\nCREATEROLE (once the pg_* with admin rules are fixed) + Ownership and rules\nrestricting interfering with another role's objects (unless superuser)\nseems to handle this.\n\n\n> the bot can stand up\n> accounts, can grant them membership in a defined set of groups, but\n> cannot exercise the privileges of those accounts (or hack superuser\n> either).\n\n\nThe bot should be provided a security definer procedure that encapsulates\nall of this rather than us trying to hack the permission system. This\nisn't a user permission concern, it is an unauthorized privilege escalation\nconcern. 
Anyone with the bot's credentials can trivially overcome the\nthird restriction by creating a role with the desired membership and then\nlogging in as that role - and there is nothing the system can do to prevent\nthat while also allowing the other two permissions.\n\n\n> And that's why I'm not sure it's really the right idea to say that we\n> don't need syntax for this admin-without-member concept.\n\n\nWe already have this syntax in the form of CREATEROLE. But we do need a\nfix, just on the group side. We need a way to define a group as having no\nADMINS.\n\nALTER ROLE pg_superuser WITH [NO] ADMIN;\n\nThen adding a role membership including the WITH ADMIN OPTION can be\nrejected, as can the non-superuser situation. Setting WITH NO ADMIN should\nfail if any existing members have admin. You must be a superuser to\nexecute WITH ADMIN (maybe WITH NO ADMIN as well...). And possibly even a\nnew pg_* role that grants this ability (and maybe some others) for use by a\nbackup/restore user.\n\nOr just special-case pg_* roles.\n\nThe advantage of exposing this to the DBA is that they can then package\npg_* roles into a custom group and still have the benefit of superuser only\nadministration. In the special-case implementation the presence of a pg_*\nrole in a group hierarchy would then preclude a non-superuser from having\nadmin on the entire tree (the pg_* roles are all roots, or in the case of\npg_monitor, directly emanate from a root role).\n\nDavid J.\n", "msg_date": "Thu, 10 Mar 2022 08:56:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Mar 9, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I don't think we need syntax to describe it. As I just said in my\n> > > other reply, we have a perfectly good precedent for this already\n> > > in ordinary object permissions. That is: an object owner always,\n> > > implicitly, has GRANT OPTION for all the object's privileges, even\n> > > if she revoked the corresponding plain privilege from herself.\n> > >\n> > > Yeah, this does mean that we're effectively deciding that the creator\n> > > of a role is its owner. 
What's the problem with that?\n> >\n> > I don't think that's entirely the wrong concept, but it doesn't make a\n> > lot of sense in a world where the creator has to be a superuser. If\n> > alice, bob, and charlie are superusers who take turns creating new\n> > users, and then we let charlie go due to budget cuts, forcing alice\n> > and bob to change the owner of all the users he created to some other\n> > superuser as a condition of dropping his account is a waste of\n> > everyone's time. They can do exactly the same things to every account\n> > on the system after we change the role owner as before.\n> \n> Then maybe we should just implement the idea that if a superuser would\n> become the owner we instead substitute in the bootstrap user. Or give the\n> DBA the choice whether they want to retain knowledge of specific roles -\n> and thus are willing to accept the \"waste of time\".\n\nThis doesn't strike me as going in the right direction. Falling back to\nthe bootstrap superuser is generally a hack and not a great one. I'll\nalso point out that the SQL spec hasn't got a concept of role ownership\neither.\n\n> > But wait, I hear you cry, what about CREATEROLE? Well, CREATEROLE is\n> > generally agreed to be broken right now, and if you don't agree with\n> > that, consider that it can grant pg_execute_server_programs to a\n> > newly-created account and then explain to me how it's functionally\n> > different from superuser.\n> \n> CREATEROLE has long been defined as basically having \"with admin option\" on\n> every role in the system. The failure to special-case the roles that grant\n> different aspects of superuser-ness to its members doesn't make CREATEROLE\n> itself broken, it makes the implementation of pg_execute_server_programs\n> broken. Only superusers should be considered to have with admin option on\n> these roles. 
They can delegate through the usual membership+admin mechanism\n> to a CREATEROLE role if they desire.\n\nNo, CREATEROLE having admin option on every role in the system is broken\nand always has been. It's not just an issue for predefined roles like\npg_execute_server_program, it's an issue for any role that could become\na superuser either directly or indirectly and that extends beyond the\npredefined ones. As this issue with CREATEROLE existed way before\npredefined roles were added to PG, claiming that it's an issue with\npredefined roles doesn't make a bit of sense.\n\n> > The whole area needs a rethink. I believe\n> > everyone involved in the discussion on the other threads agrees that\n> > some reform of CREATEROLE is necessary, and more generally with the\n> > idea that it's useful for non-superusers to be able to create roles.\n> \n> As the documentation says, using SUPERUSER for day-to-day administration is\n> contrary to good security practices. Role management is considered to be a\n> day-to-day administration activity. I agree with this principle. It was\n> designed to neither be a superuser nor grant superuser, so removing the\n> ability to grant the pg_* role memberships remains consistent with its\n> original intent.\n\nThat would not be sufficient to make CREATEROLE safe. 
Far, far from it.\n\n> > I want that because I want mini-superusers, where alice can administer\n> > the users that alice creates just as if she were a superuser,\n> > including having their permissions implicitly and dropping them when\n> > she wants them gone, but where alice cannot break out to the operating\n> > system as a true superuser could do.\n> \n> CREATEROLE (once the pg_* with admin rules are fixed) + Ownership and rules\n> restricting interfering with another role's objects (unless superuser)\n> seems to handle this.\n\nThis is not sufficient- roles can be not-superuser themselves but have\nthe ability to become superuser if GRANT'd a superuser role and\ntherefore we can't have a system where CREATEROLE allows arbitrary\nGRANT'ing of roles to each other. I'm a bit confused too as anything\nwhere we are curtailing what CREATEROLE roles are able to do in a manner\nthat means they're only able to modify some subset of roles should\nequally apply to predefined roles too- that is, CREATEROLE shouldn't be\nthe determining factor in the question of if a role can GRANT a\npredefined (or any other role) to some other role- that should be\ngoverned by the admin option on that role, and that should work exactly\nthe same for predefined roles as it does for any other.\n\nI disagree that ownership is needed; that's not what the spec calls for\neither. What we need is more flexibility when it comes to the\nrelationships which are allowed to be created between roles and what\nprivileges come with them. To that end, I'd argue that we should be\nextending pg_auth_members, first by separating out membership itself\ninto an explicitly tracked attribute (instead of being implicit in the\nexistence of a row in the table) and then adding on what other\nprivileges we see fit to add, such as the ability to DROP a role. 
We\ndo need to remove the ability for a role who hasn't been explicitly\ngiven the admin right on another role to modify that role's membership\ntoo, as was originally proposed here. This also seems to more closely\nfollow the spec's expectation, something that role ownership doesn't.\n\n> > the bot can stand up\n> > accounts, can grant them membership in a defined set of groups, but\n> > cannot exercise the privileges of those accounts (or hack superuser\n> > either).\n> \n> The bot should be provided a security definer procedure that encapsulates\n> all of this rather than us trying to hack the permission system. This\n> isn't a user permission concern, it is an unauthorized privilege escalation\n> concern. Anyone with the bot's credentials can trivially overcome the\n> third restriction by creating a role with the desired membership and then\n> logging in as that role - and there is nothing the system can do to prevent\n> that while also allowing the other two permissions.\n\nFalling back to security definer functions may be one approach but it's\nnot a great one and it only works if it's possible to end up with the\ncatalogs having what is actually desired- for example, ADMIN option\nwithout membership isn't something the catalogs today can understand\nbecause existance in pg_auth_members implies membership and you can't\nhave ADMIN without having that row. The same issue would exist with\nownership if ownership implied the same- that's not improving things.\n\n> > And that's why I'm not sure it's really the right idea to say that we\n> > don't need syntax for this admin-without-member concept.\n> \n> We already have this syntax in the form of CREATEROLE. But we do need a\n> fix, just on the group side. We need a way to define a group as having no\n> ADMINS.\n\nWe don't have this syntax today nor do we have a way to store such a\nconcept in the catalogs either, so I'm pretty baffled by this. 
Defining\na group without admins is, in fact, what we actually have support for\ntoday in the catalogs- it's just a case where there aren't any rows in\npg_auth_members which have 'admin_option' as true. The opposite is what\nwe're talking about here- rows which have 'admin_option' as true but\ndon't have membership, and that can't be the case today because\nexistence in the table itself implies membership.\n\n> ALTER ROLE pg_superuser WITH [NO] ADMIN;\n> \n> Then adding a role membership including the WITH ADMIN OPTION can be\n> rejected, as can the non-superuser situation. Setting WITH NO ADMIN should\n> fail if any existing members have admin. You must be a superuser to\n> execute WITH ADMIN (maybe WITH NO ADMIN as well...). And possibly even a\n> new pg_* role that grants this ability (and maybe some others) for use by a\n> backup/restore user.\n\nI'm not following this in general or how it helps. Surely we don't want\nto limit WITH ADMIN to superusers. As for if we should migrate\nCREATEROLE to a new predefined role, maybe, but that seems like a\ndifferent question.\n\n> Or just special-case pg_* roles.\n\nAs I hopefully made clear above, this isn't actually a solution, nor do\npg_* roles need to be treated somehow differently in this aspect.\n\n> The advantage of exposing this to the DBA is that they can then package\n> pg_* roles into a custom group and still have the benefit of superuser only\n> administration.
In the special-case implementation the presence of a pg_*\n> role in a group hierarchy would then preclude a non-superuser from having\n> admin on the entire tree (the pg_* roles are all roots, or in the case of\n> pg_monitor, directly emanate from a root role).\n\nWe are very much trying to move away from 'superuser only\nadministration'.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 11:19:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 10, 2022, at 7:56 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> \n> I want that because I want mini-superusers, where alice can administer\n> the users that alice creates just as if she were a superuser,\n> including having their permissions implicitly and dropping them when\n> she wants them gone, but where alice cannot break out to the operating\n> system as a true superuser could do.\n> \n> CREATEROLE (once the pg_* with admin rules are fixed) + Ownership and rules restricting interfering with another role's objects (unless superuser) seems to handle this.\n\nWhat if one of alice's subordinates also owns roles? Can alice interfere with *that* role's objects? I don't see that a simple rule restricting roles from interfering with another role's objects is quite enough. That raises the question of whether role ownership is transitive, and whether we need a concept similar to inherit/noinherit for ownership.\n\nThere is also the problem that CREATEROLE currently allows a set of privileges to be granted to created roles, and that set of privileges is hard-coded. You've suggested changing the hard-coded rules to remove pg_* roles from the list of grantable privileges, but that's still an inflexible set of hardcoded privileges. 
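To make the hazard being debated concrete, a sketch of the escalation path under the CREATEROLE rules current at the time of this thread (role names here are made up for illustration):

```sql
-- alice has CREATEROLE but is not a superuser:
CREATE ROLE alice CREATEROLE LOGIN;

-- Under the hard-coded rules, alice can nonetheless do this:
SET ROLE alice;
CREATE ROLE helper LOGIN;
GRANT pg_execute_server_program TO helper;
-- helper can now run COPY ... PROGRAM, i.e. execute arbitrary commands
-- as the server's OS user (effectively superuser access).
```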
Wouldn't it make more sense for the grantor to need GRANT OPTION on any privilege they give to roles they create?\n\n> the bot can stand up\n> accounts, can grant them membership in a defined set of groups, but\n> cannot exercise the privileges of those accounts (or hack superuser\n> either).\n> \n> The bot should be provided a security definer procedure that encapsulates all of this rather than us trying to hack the permission system. This isn't a user permission concern, it is an unauthorized privilege escalation concern. Anyone with the bot's credentials can trivially overcome the third restriction by creating a role with the desired membership and then logging in as that role - and there is nothing the system can do to prevent that while also allowing the other two permissions.\n\nDoesn't this assume password authentication? If the server uses ldap authentication, for example, wouldn't the bot need valid ldap credentials for at least one user for this attack to work? And if CREATEROLE has been made more configurable, wouldn't the bot only be able to grant that ldap user the limited set of privileges that the bot's database user has been granted ADMIN OPTION for?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 08:26:42 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 11:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I disagree that ownership is needed that's not what the spec calls for\n> either. What we need is more flexibility when it comes to the\n> relationships which are allowed to be created between roles and what\n> privileges come with them. 
To that end, I'd argue that we should be\n> extending pg_auth_members, first by separating out membership itself\n> into an explicitly tracked attribute (instead of being implicit in the\n> existance of a row in the table) and then adding on what other\n> privileges we see fit to add, such as the ability to DROP a role. We\n> do need to remove the ability for a role who hasn't been explicitly\n> given the admin right on another role to modify that role's membership\n> too, as was originally proposed here. This also seems to more closely\n> follow the spec's expectation, something that role ownership doesn't.\n\nI do not have a problem with more fine-grained kinds of authorization\neven though I think there are syntactic issues to work out, but I\nstrongly disagree with the idea that we can't or shouldn't also have\nrole ownership. Marc invented it. Now Tom has invented it\nindependently. All sorts of other objects have it already. Trying to\nmake it out like this is some kind of kooky idea is not believable.\nYeah, it's not the most sophisticated or elegant model and that's why\nit's good for us to also have other things, but for simple cases it is\neasy to understand and works great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 12:11:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 9:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > > On Wed, Mar 9, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > I don't think we need syntax to describe it. As I just said in my\n> > > > other reply, we have a perfectly good precedent for this already\n> > > > in ordinary object permissions. 
That is: an object owner always,\n> > > > implicitly, has GRANT OPTION for all the object's privileges, even\n> > > > if she revoked the corresponding plain privilege from herself.\n> > > >\n> > > > Yeah, this does mean that we're effectively deciding that the creator\n> > > > of a role is its owner. What's the problem with that?\n> > >\n> > > I don't think that's entirely the wrong concept, but it doesn't make a\n> > > lot of sense in a world where the creator has to be a superuser. If\n> > > alice, bob, and charlie are superusers who take turns creating new\n> > > users, and then we let charlie go due to budget cuts, forcing alice\n> > > and bob to change the owner of all the users he created to some other\n> > > superuser as a condition of dropping his account is a waste of\n> > > everyone's time. They can do exactly the same things to every account\n> > > on the system after we change the role owner as before.\n> >\n> > Then maybe we should just implement the idea that if a superuser would\n> > become the owner we instead substitute in the bootstrap user. Or give\n> the\n> > DBA the choice whether they want to retain knowledge of specific roles -\n> > and thus are willing to accept the \"waste of time\".\n>\n> This doesn't strike me as going in the right direction. Falling back to\n> the bootstrap superuser is generally a hack and not a great one. I'll\n> also point out that the SQL spec hasn't got a concept of role ownership\n> either.\n>\n> > > But wait, I hear you cry, what about CREATEROLE? Well, CREATEROLE is\n> > > generally agreed to be broken right now, and if you don't agree with\n> > > that, consider that it can grant pg_execute_server_programs to a\n> > > newly-created account and then explain to me how it's functionally\n> > > different from superuser.\n> >\n> > CREATEROLE has long been defined as basically having \"with admin option\"\n> on\n> > every role in the system. 
The failure to special-case the roles that\n> grant\n> > different aspects of superuser-ness to its members doesn't make\n> CREATEROLE\n> > itself broken, it makes the implementation of pg_execute_server_programs\n> > broken. Only superusers should be considered to have with admin option\n> on\n> > these roles. They can delegate through the usual membership+admin\n> mechanism\n> > to a CREATEROLE role if they desire.\n>\n> No, CREATEROLE having admin option on every role in the system is broken\n> and always has been. It's not just an issue for predefined roles like\n> pg_execute_server_program,\n\n\n\n> it's an issue for any role that could become\n> a superuser either directly or indirectly and that extends beyond the\n> predefined ones.\n\n\nThe only indirect way for a role to become superuser is to have been\ngranted membership in a superuser group, then SET ROLE. Non-superusers\ncannot do this. If a superuser does this I consider the outcome to be no\ndifferent than if they go and do:\n\nSET allow_system_table_mods TO true;\nDROP pg_catalog.pg_class;\n\nIn short, having a CREATEROLE user issuing:\nGRANT pg_read_all_stats TO davidj;\nshould result in the same outcome as them issuing:\nGRANT postgres TO davidj;\n-- ERROR: must be superuser to alter superusers\n\nSuperusers can break their system and we don't go to great effort to stop\nthem. 
I see no difference here, so arguments of this nature aren't all\nthat compelling to me.\n\nCREATEROLE shouldn't be\n> the determining factor in the question of if a role can GRANT a\n> predefined (or any other role) to some other role- that should be\n> governed by the admin option on that role, and that should work exactly\n> the same for predefined roles as it does for any other.\n>\n\nNever granting the CREATEROLE attribute to anyone will give you this\noutcome today.\n\n\n\n> ADMIN option\n> without membership isn't something the catalogs today can understand\n>\n\nToday, they don't need to in order for the system to function within its\nexisting design specs.\n\n\n> > ALTER ROLE pg_superuser WITH [NO] ADMIN;\n> >\n> > Then adding a role membership including the WITH ADMIN OPTION can be\n> > rejected, as can the non-superuser situation. Setting WITH NO ADMIN\n> should\n> > fail if any existing members have admin. You must be a superuser to\n> > execute WITH ADMIN (maybe WITH NO ADMIN as well...). And possibly even a\n> > new pg_* role that grants this ability (and maybe some others) for use\n> by a\n> > backup/restore user.\n>\n> I'm not following this in general or how it helps. Surely we don't want\n> to limit WITH ADMIN to superusers.\n\n\nToday a non-superuser cannot \"grant postgres to someuser;\"\n\nThe point of this attribute is to allow the superuser to apply that rule to\nother roles that aren't superuser. In particular, the predefined pg_*\nroles. But it could extend to any other role the superuser would like to\nlimit. 
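A sketch of the intended effect of this attribute (the syntax and error text are hypothetical; no such feature exists, and role names are made up):

```sql
-- Superuser marks a role as administered only by superusers:
ALTER ROLE pg_read_all_stats WITH NO ADMIN;   -- hypothetical syntax

-- A non-superuser CREATEROLE role then could not hand it out:
SET ROLE bob;                                 -- bob has CREATEROLE
GRANT pg_read_all_stats TO davidj;
-- ERROR (hypothetical): role "pg_read_all_stats" can only be
-- administered by a superuser
```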
It means, for that for named role, ADMIN privileges cannot be\ndelegated to other roles - thus all administration of that role's\nmembership roster must happen by a superuser.\n\nIn particular, this means CREATEROLE roles cannot assign membership in the\nmarked roles; just like they cannot assign membership in superuser roles\ntoday.\n\nFor me, because the SUPERUSER cannot have its role become a group without a\nsuperuser making that choice, and by default the default pg_* roles will\nall have this property as well, and any newly superuser created roles that\nmay be members of either superuser or pg_* can have the property defined as\nwell, gives full control to the superuser as to how superuser abilities are\ndoled out and so the design itself allows for what many of you are\nconsidering to be \"safe usage\". That \"unsafe configurations\" are possible\nis due to the policy that superusers are unrestricted in what they can do,\nincluding making unsafe and destructive choices.\n\nIn short, removing the self-administration rule solves the \"login roles\nshould not be automatically considered groups administered by themselves\"\nproblem - or at least a feature we really don't need.\nAnd defining a \"superuser administration only\" attribute to a role solves\nthe indirect superuser privileges and assignment thereof by non-superusers\nproblem.\n\nI can see value in adding a feature whereby we allow the DBA to define a\ngroup as a schema-like container and then assign roles to that group with a\nfine-grained permissions model. My take is this proposal is a new feature\nwhile the two problems noted above can be solved more readily and with less\nrisk with the two suggested changes.\n\nDavid J.", "msg_date": "Thu, 10 Mar 2022 10:26:08 -0700", "msg_from": "\"David G.
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 12:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 11:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I disagree that ownership is needed that's not what the spec calls for\n> > either. What we need is more flexibility when it comes to the\n> > relationships which are allowed to be created between roles and what\n> > privileges come with them. To that end, I'd argue that we should be\n> > extending pg_auth_members, first by separating out membership itself\n> > into an explicitly tracked attribute (instead of being implicit in the\n> > existance of a row in the table) and then adding on what other\n> > privileges we see fit to add, such as the ability to DROP a role. We\n> > do need to remove the ability for a role who hasn't been explicitly\n> > given the admin right on another role to modify that role's membership\n> > too, as was originally proposed here. This also seems to more closely\n> > follow the spec's expectation, something that role ownership doesn't.\n>\n> I do not have a problem with more fine-grained kinds of authorization\n> even though I think there are syntactic issues to work out, but I\n> strongly disagree with the idea that we can't or shouldn't also have\n> role ownership. Marc invented it. Now Tom has invented it\n> independently. All sorts of other objects have it already. Trying to\n> make it out like this is some kind of kooky idea is not believable.\n> Yeah, it's not the most sophisticated or elegant model and that's why\n> it's good for us to also have other things, but for simple cases it is\n> easy to understand and works great.\n\nOwnership implies DAC, the ability to grant others rights to an\nobject. It's not \"kooky\" to see roles as owned objects, but it isn't\nrequired either. 
For example most objects on a UNIX system are owned\nand subject to DAC but users aren't.\n\nStephen's, and now my, issue with ownership is that, since it implies\nDAC, most checks will be bypassed for the owner. We would both prefer\nfor everyone to be subject to the grants, including whoever created\nthe role.\n\nRather, we'd like to see a \"creators of roles get this set of grants\nagainst the role by default\" and \"as a superuser I can revoke grants\nfrom creators against roles they created\"\n\n\n", "msg_date": "Thu, 10 Mar 2022 12:26:42 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 9:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > > On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com>\n> > wrote:\n> > > > On Wed, Mar 9, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > > I don't think we need syntax to describe it. As I just said in my\n> > > > > other reply, we have a perfectly good precedent for this already\n> > > > > in ordinary object permissions. That is: an object owner always,\n> > > > > implicitly, has GRANT OPTION for all the object's privileges, even\n> > > > > if she revoked the corresponding plain privilege from herself.\n> > > > >\n> > > > > Yeah, this does mean that we're effectively deciding that the creator\n> > > > > of a role is its owner. What's the problem with that?\n> > > >\n> > > > I don't think that's entirely the wrong concept, but it doesn't make a\n> > > > lot of sense in a world where the creator has to be a superuser. 
If\n> > > > alice, bob, and charlie are superusers who take turns creating new\n> > > > users, and then we let charlie go due to budget cuts, forcing alice\n> > > > and bob to change the owner of all the users he created to some other\n> > > > superuser as a condition of dropping his account is a waste of\n> > > > everyone's time. They can do exactly the same things to every account\n> > > > on the system after we change the role owner as before.\n> > >\n> > > Then maybe we should just implement the idea that if a superuser would\n> > > become the owner we instead substitute in the bootstrap user. Or give\n> > the\n> > > DBA the choice whether they want to retain knowledge of specific roles -\n> > > and thus are willing to accept the \"waste of time\".\n> >\n> > This doesn't strike me as going in the right direction. Falling back to\n> > the bootstrap superuser is generally a hack and not a great one. I'll\n> > also point out that the SQL spec hasn't got a concept of role ownership\n> > either.\n> >\n> > > > But wait, I hear you cry, what about CREATEROLE? Well, CREATEROLE is\n> > > > generally agreed to be broken right now, and if you don't agree with\n> > > > that, consider that it can grant pg_execute_server_programs to a\n> > > > newly-created account and then explain to me how it's functionally\n> > > > different from superuser.\n> > >\n> > > CREATEROLE has long been defined as basically having \"with admin option\"\n> > on\n> > > every role in the system. The failure to special-case the roles that\n> > grant\n> > > different aspects of superuser-ness to its members doesn't make\n> > CREATEROLE\n> > > itself broken, it makes the implementation of pg_execute_server_programs\n> > > broken. Only superusers should be considered to have with admin option\n> > on\n> > > these roles. 
They can delegate through the usual membership+admin\n> > mechanism\n> > > to a CREATEROLE role if they desire.\n> >\n> > No, CREATEROLE having admin option on every role in the system is broken\n> > and always has been. It's not just an issue for predefined roles like\n> > pg_execute_server_program,\n> \n> \n> \n> > it's an issue for any role that could become\n> > a superuser either directly or indirectly and that extends beyond the\n> > predefined ones.\n> \n> \n> The only indirect way for a role to become superuser is to have been\n> granted membership in a superuser group, then SET ROLE. Non-superusers\n> cannot do this. If a superuser does this I consider the outcome to be no\n> different than if they go and do:\n\nA non-superuser absolutely can be GRANT'd membership in a superuser role\nand then SET ROLE to that user thus becoming a superuser. Giving users\na regular role to log in as and then membership in a role that can\nbecome a superuser is akin to having a sudoers group in Unix and is good\npractice, not something that everyone should have to be super-dooper\ncareful to not do, lest a CREATEROLE user be able to leverage that.\n\n> SET allow_system_table_mods TO true;\n> DROP pg_catalog.pg_class;\n\nI don't equate these in the least.\n\n> In short, having a CREATEROLE user issuing:\n> GRANT pg_read_all_stats TO davidj;\n> should result in the same outcome as them issuing:\n> GRANT postgres TO davidj;\n> -- ERROR: must be superuser to alter superusers\n\nNo, what should matter is if the role doing the GRANT has admin rights\non pg_read_all_stats, or on the postgres role. That also happens to be\nwhat the spec says.\n\n> Superusers can break their system and we don't go to great effort to stop\n> them. 
I see no difference here, so arguments of this nature aren't all\n> that compelling to me.\n\nThat you don't feel they're compelling doesn't make them somehow not real,\nnor even particularly uncommon, nor do I view ignoring that possibility\nas somehow creating a strong authentication system.\n\n> CREATEROLE shouldn't be\n> > the determining factor in the question of if a role can GRANT a\n> > predefined (or any other role) to some other role- that should be\n> > governed by the admin option on that role, and that should work exactly\n> > the same for predefined roles as it does for any other.\n> >\n> \n> Never granting the CREATEROLE attribute to anyone will give you this\n> outcome today.\n\n... which is why CREATEROLE is broken.\n\n> > ADMIN option\n> > without membership isn't something the catalogs today can understand\n> \n> Today, they don't need to in order for the system to function within its\n> existing design specs.\n\nEh? Your argument here is \"don't use CREATEROLE\"? While I agree with\nthat being a generally good idea today, it hardly makes sense to suggest\nit in a thread where we're talking about how to make CREATEROLE, or\nsomething like it, be useful.\n\n> > > ALTER ROLE pg_superuser WITH [NO] ADMIN;\n> > >\n> > > Then adding a role membership including the WITH ADMIN OPTION can be\n> > > rejected, as can the non-superuser situation. Setting WITH NO ADMIN\n> > should\n> > > fail if any existing members have admin. You must be a superuser to\n> > > execute WITH ADMIN (maybe WITH NO ADMIN as well...). And possibly even a\n> > > new pg_* role that grants this ability (and maybe some others) for use\n> > by a\n> > > backup/restore user.\n> >\n> > I'm not following this in general or how it helps. Surely we don't want
Surely we don't want\n> > to limit WITH ADMIN to superusers.\n> \n> Today a non-superuser cannot \"grant postgres to someuser;\"\n\nNo, but a role can be created like 'admin', which a superuser GRANT's\n'postgres' to and then that role can be GRANT'd to anyone by anyone who\nhas CREATEROLE rights. That's not sane.\n\n> The point of this attribute is to allow the superuser to apply that rule to\n> other roles that aren't superuser. In particular, the predefined pg_*\n> roles. But it could extend to any other role the superuser would like to\n> limit. It means, for that for named role, ADMIN privileges cannot be\n> delegated to other roles - thus all administration of that role's\n> membership roster must happen by a superuser.\n\nThe whole \"X can't modify a superuser role without being a superuser\"\nconcept is just broken and was a poor choice when it was originally\ndone specifically because it only looks at individual roles and their\nspecific rolsuper bit, completely ignoring the fact that role membership\nexists as a thing that we should handle sanely, including a\nnon-superuser role being grant'd a superuser role. Predefined roles\nhaven't got anything to do with any of this, they only make it more\nobvious to people who didn't understand how the system worked before\nthey came along.\n\nI disagree entirely with the idea that we must have some roles who can\nonly ever be administered by a superuser. If anything, we should be\nmoving away (as we have, in fact, been doing), from anything being the\nexclusive purview of the superuser.\n\n> In particular, this means CREATEROLE roles cannot assign membership in the\n> marked roles; just like they cannot assign membership in superuser roles\n> today.\n\nI disagree with the idea that we need to mark some roles as only being\nable to be modified by the superuser- why invent this? We have the\nADMIN option already and that can be applied to allow any role X to have\nthe ability to modify the members of role Y. 
That's a whole lot better\nthan some explicit flag that says \"only superusers can modify this\nrole\".  If an admin wants that, they can set things up that way already\ntoday, as long as they don't use the current CREATEROLE attribute.\nIdeally, we'd modify CREATEROLE, or remove it and replace it with\nsomething better, which still maintains that same flexibility.  What you\nseem to be arguing for here is to rip out the ADMIN functionality, which\nis defined by spec and not even exclusively by PG, and replace it with a\nsingle per-role flag that says if that role can only be modified by\nsuperusers.  That seems entirely backwards to me.\n\n> For me, because the SUPERUSER cannot have its role become a group without a\n> superuser making that choice, and by default the default pg_* roles will\n> all have this property as well, and any newly superuser created roles that\n> may be members of either superuser or pg_* can have the property defined as\n> well, gives full control to the superuser as to how superuser abilities are\n> doled out and so the design itself allows for what many of you are\n> considering to be \"safe usage\".  That \"unsafe configurations\" are possible\n> is due to the policy that superusers are unrestricted in what they can do,\n> including making unsafe and destructive choices.\n\nI disagree that it's an 'unsafe configuration' for there to ever exist a\nnon-superuser role that has been granted a superuser role.  The only\nthing that makes this unsafe is the existence of CREATEROLE.\n\nWhy are we making this all about superusers though?  In what you're\nproposing, you're suggesting that it's perfectly fine for any role which\nhas CREATEROLE to be able to take over any other role in the entire\nsystem, excluding predefined roles and superusers.  How is that sane, or\ntruly much less than what the superuser has in terms of ability?  The\nshort answer is that it's not- which is why we have documented\nCREATEROLE as being 'superuser light'. 
The goal here is to get rid of\nthat.\n\n> In short, removing the self-administration rule solves the \"login roles\n> should not be automatically considered groups administered by themselves\"\n> problem - or at least a feature we really don't need.\n> And defining a \"superuser administration only\" attribute to a role solves\n> the indirect superuser privileges and assignment thereof by non-superusers\n> problem.\n\nBut it doesn't *actually* make CREATEROLE something that you can give\nout to folks on a general basis because anyone with CREATEROLE would\nstill be able to take over every single non-superuser and non-predefined\nrole in the system. We do *not* want that.\n\n> I can see value in adding a feature whereby we allow the DBA to define a\n> group as a schema-like container and then assign roles to that group with a\n> fine-grained permissions model. My take is this proposal is a new feature\n> while the two problems noted above can be solved more readily and with less\n> risk with the two suggested changes.\n\nYes, we're talking about a new feature- one intended to replace the\nbroken way that CREATEROLE works, which your proposal doesn't.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 13:05:54 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 12:26 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n> Ownership implies DAC, the ability to grant others rights to an\n> object. It's not \"kooky\" to see roles as owned objects, but it isn't\n> required either. For example most objects on a UNIX system are owned\n> and subject to DAC but users aren't.\n\nI have no issue with anything you write in this paragraph.\n\n> Stephen's, and now my, issue with ownership is that, since it implies\n> DAC, most checks will be bypassed for the owner. 
We would both prefer\n> for everyone to be subject to the grants, including whoever created\n> the role.\n\nThat sounds like MAC, which is usually something that sits on top of\nDAC and is enforced in addition to DAC, not a reason for DAC to not\nexist.\n\n> Rather, we'd like to see a \"creators of roles get this set of grants\n> against the role by default\" and \"as a superuser I can revoke grants\n> from creators against roles they created\"\n\nIf you create a table, you own it. You get a set of default\npermissions on the table which can be revoked either by you or by\nsomeone else, and you also have certain intrinsic rights over the\nobject as owner which cannot be revoked - including the ability to\nre-grant yourself any previously-revoked permissions. I am not against\nthe idea of trying to clean things up so that everything you can do\nwith a table is a revocable privilege and you can be the owner without\nhaving any rights at all, including the right to give yourself other\nrights back, but I cannot believe that the idea of removing table\nownership as a concept would ever gain consensus on this list.\nTherefore, I also do not think it is reasonable to say that we\nshouldn't introduce a similar concept for object types that don't have\nit yet, such as roles.\n\nBut that's not to say that we couldn't decide to do something else\ninstead, and that other thing might well be better. Do you want to\nsketch out a full proposal, even just what the syntax would look like,\nand share that here? 
And if you could explain how I could use it to\ncreate the mini-superusers that I'm trying to get out of this thing,\neven better.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 13:11:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On 09.03.22 14:02, Robert Haas wrote:\n> On Wed, Mar 9, 2022 at 7:55 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Do we have subtractive permissions today?\n> \n> Not in the GRANT/REVOKE sense, I think, but you can put a user in a\n> group and then mention that group in pg_hba.conf. And that line might\n> be \"reject\" or whatever.\n\nWell, you can always build an external system that looks at roles and \ndoes nonsensical things with it. But the privilege system itself seems \nto be additive only. Personally, I agree with the argument that there \nshould not be any subtractive permissions. The mental model where \npermissions are sort of keys to doors or boxes just doesn't work for that.\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 20:05:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 2:05 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 09.03.22 14:02, Robert Haas wrote:\n> > On Wed, Mar 9, 2022 at 7:55 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >> Do we have subtractive permissions today?\n> >\n> > Not in the GRANT/REVOKE sense, I think, but you can put a user in a\n> > group and then mention that group in pg_hba.conf. And that line might\n> > be \"reject\" or whatever.\n>\n> Well, you can always build an external system that looks at roles and\n> does nonsensical things with it. But the privilege system itself seems\n> to be additive only. 
Personally, I agree with the argument that there\n> should not be any subtractive permissions.  The mental model where\n> permissions are sort of keys to doors or boxes just doesn't work for that.\n\nI mean, I didn't design pg_hba.conf, but I think it's part of the\ndatabase doing a reasonable thing, not an external system doing a\nnonsensical thing.\n\nI am not sure that I (or anyone) would endorse a system where you can\nsay something like GRANT NOT SELECT ON TABLE foo TO bar, essentially\nputting a negative ACL into the system dictating that, regardless of\nany other grants that may exist, bar should not be able to SELECT from\nthat table.  But I think it's reasonable to use groups as a way of\nreferencing a defined collection of users for some purpose.  The\npg_hba.conf thing is an example of that.  You put all the users that\nyou want to be treated in a certain way for authentication purposes\ninto a group, and then you mention the group in the file, and it just\nworks.  I don't find that an unreasonable design at all.  We could've\ncreated some other kind of grouping mechanism for such purposes that\nis separate from the role system, but we didn't choose to do that.  I\ndon't know if that was the absolute best possible decision or not, but\nit doesn't seem like an especially bad choice.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 14:22:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 11:05 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > On Thu, Mar 10, 2022 at 9:19 AM Stephen Frost <sfrost@snowman.net>\n> wrote:\n> > > * David G. 
Johnston (david.g.johnston@gmail.com) wrote:\n> > > > On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com>\n> > > wrote:\n>\n> > The only indirect way for a role to become superuser is to have been\n> > granted membership in a superuser group, then SET ROLE. Non-superusers\n> > cannot do this. If a superuser does this I consider the outcome to be no\n> > different than if they go and do:\n>\n> A non-superuser absolutely can be GRANT'd membership in a superuser role\n> and then SET ROLE to that user thus becoming a superuser.\n\n\nA non-superuser cannot grant a non-superuser membership in a superuser\nrole. A superuser granting a user membership in a superuser role makes\nthat user a superuser. This seems sane.\n\nIf a superuser grants a non-superuser membership in a superuser role then\ntoday a non-superuser can grant a user membership in that intermediate\nrole, thus having a non-superuser make another user a superuser. This is\narguably a bug that needs to be fixed.\n\nMy desired fix is to just require the superuser to mark (or have it marked\nby default ideally) the role inheriting superuser and put the\nresponsibility on the superuser. I agree this is not ideal, but it is\nprobably quick and low risk.\n\nI'll let someone else describe the details of the alternative option. I\nsuspect it will end up being a better option in terms of design. But\ndepending on time and risk even knowing that we want the better design\neventually doesn't preclude getting the easier fix in now.\n\n\n> No, what should matter is if the role doing the GRANT has admin rights\n> on pg_read_all_stats, or on the postgres role. That also happens to be\n> what the spec says.\n>\n\nYes, and superusers implicitly have that right, while CREATEROLE users\nimplicitly have that right on the pg_* role but not on superuser roles. 
I\njust want to plug that hole and include the pg_* roles (or any role for\nthat matter) in being able to be denied implied ADMIN rights for\nnon-superusers.\n\n\n> Today a non-superuser cannot \"grant postgres to someuser;\"\n>\n> No, but a role can be created like 'admin', which a superuser GRANT's\n> 'postgres' to and then that role can be GRANT'd to anyone by anyone who\n> has CREATEROLE rights. That's not sane.\n>\n\nI agree. And I've suggested a minimal fix, adding an attribute to the role\nthat prohibits non-superusers from granting it to others, that removes the\ninsane behavior.\n\nI'm on board for a hard-coded fix as well - if a superuser is in the\nmembership chain of a role then non-superusers cannot grant membership in\nthat role to others.\n\nNeither of those really solves the pg_* roles problem. We still need to\nindicate that they are somehow special. Whether it is a nice matrix or\nroles and permissions or a simple attribute that makes them behave like\nthey are superuser roles.\n\n\n>\n> I disagree entirely with the idea that we must have some roles who can\n> only ever be administered by a superuser.\n\n\nI don't think this is a must have. I think that since we do have it today\nthat fixes that leverage the status quo in order to be done more easily are\nperfectly valid solutions.\n\n\n\n> If anything, we should be\n> moving away (as we have, in fact, been doing), from anything being the\n> exclusive purview of the superuser.\n>\n\nI totally agree.\n\n\n>\n> > In particular, this means CREATEROLE roles cannot assign membership in\n> the\n> > marked roles; just like they cannot assign membership in superuser roles\n> > today.\n>\n> I disagree with the idea that we need to mark some roles as only being\n> able to be modified by the superuser- why invent this?\n\n\nBecause CREATEUSER is a thing and people want to prevent roles with that\nattribute from assigning membership to the predefined superuser-aspect\nroles. 
If I've misunderstood that desire and the scope of delegation given\nby the superuser to CREATEUSER roles is acceptable, then no change here is\nneeded.\n\nWhat you\n> seem to be arguing for here is to rip out the ADMIN functionality, which\n> is defined by spec and not even exclusively by PG, and replace it with a\n> single per-role flag that says if that role can only be modified by\n> superusers.\n\nI made the observation that being able to manage the membership of a group\nwithout having the ability to create new users seems like a half a loaf of\na feature.  That's it.  I would presume that any redesign of the\npermissions system here would address this adequately.\n\n The\n>\nshort answer is that it's not- which is why we have documented\n> CREATEROLE as being 'superuser light'.  The goal here is to get rid of\n> that.\n>\n\nNow you tell me.  Robert should have led with that goal upfront.\n\n>\n> Yes, we're talking about a new feature- one intended to replace the\n> broken way that CREATEROLE works, which your proposal doesn't.\n>\n>\nThat is correct, I was trying to figure out minimally invasive fixes to\nwhat are arguably being called bugs.\n\nDavid J.", "msg_date": "Thu, 10 Mar 2022 12:31:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 11:05 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > > On Thu, Mar 10, 2022 at 9:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > > > > On Thu, Mar 10, 2022 at 7:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > The only indirect way for a role to become superuser is to have been\n> > > granted membership in a superuser group, then SET ROLE.  Non-superusers\n> > > cannot do this.  
If a superuser does this I consider the outcome to be no\n> > > different than if they go and do:\n> >\n> > A non-superuser absolutely can be GRANT'd membership in a superuser role\n> > and then SET ROLE to that user thus becoming a superuser.\n> \n> A non-superuser cannot grant a non-superuser membership in a superuser\n> role. A superuser granting a user membership in a superuser role makes\n> that user a superuser. This seems sane.\n> \n> If a superuser grants a non-superuser membership in a superuser role then\n> today a non-superuser can grant a user membership in that intermediate\n> role, thus having a non-superuser make another user a superuser. This is\n> arguably a bug that needs to be fixed.\n> \n> My desired fix is to just require the superuser to mark (or have it marked\n> by default ideally) the role inheriting superuser and put the\n> responsibility on the superuser. I agree this is not ideal, but it is\n> probably quick and low risk.\n> \n> I'll let someone else describe the details of the alternative option. I\n> suspect it will end up being a better option in terms of design. But\n> depending on time and risk even knowing that we want the better design\n> eventually doesn't preclude getting the easier fix in now.\n> \n> > No, what should matter is if the role doing the GRANT has admin rights\n> > on pg_read_all_stats, or on the postgres role. That also happens to be\n> > what the spec says.\n> \n> Yes, and superusers implicitly have that right, while CREATEROLE users\n> implicitly have that right on the pg_* role but not on superuser roles. I\n> just want to plug that hole and include the pg_* roles (or any role for\n> that matter) in being able to be denied implied ADMIN rights for\n> non-superusers.\n\nCREATEROLE users implicitly have that right on *all non-superuser\nroles*. 
Not just the pg_* ones, which is why the pg_* ones aren't any\ndifferent in this regard.\n\n> > Today a non-superuser cannot \"grant postgres to someuser;\"\n> >\n> > No, but a role can be created like 'admin', which a superuser GRANT's\n> > 'postgres' to and then that role can be GRANT'd to anyone by anyone who\n> > has CREATEROLE rights. That's not sane.\n> \n> I agree. And I've suggested a minimal fix, adding an attribute to the role\n> that prohibits non-superusers from granting it to others, that removes the\n> insane behavior.\n\nI disagree that this is a minimal fix as I don't see it as a fix to the\nactual issue, which is the ability for CREATEROLE users to GRANT role\nmembership to all non-superuser roles on the system. CREATEROLE\nshouldn't be allowing that.\n\n> I'm on board for a hard-coded fix as well - if a superuser is in the\n> membership chain of a role then non-superusers cannot grant membership in\n> that role to others.\n\nWhy not just look at the admin_option field of pg_auth_members...? I\ndon't get why that isn't an even more minimal fix than this idea you\nhave of adding a column to pg_authid and then propagating around \"this\nuser could become a superuser\" or writing code that has to go check \"is\nthere some way for this role to become a superuser, either directly or\nthrough some subset of pg_* roles?\"\n\n> Neither of those really solves the pg_* roles problem. We still need to\n> indicate that they are somehow special. Whether it is a nice matrix or\n> roles and permissions or a simple attribute that makes them behave like\n> they are superuser roles.\n\nI disagree that they should be considered special when it comes to role\nmembership and management. They're just roles, like any other.\n\n> > I disagree entirely with the idea that we must have some roles who can\n> > only ever be administered by a superuser.\n> \n> I don't think this is a must have. 
I think that since we do have it today\n> that fixes that leverage the status quo in order to be done more easily are\n> perfectly valid solutions.\n\nWe have a half-way-implemented attempt at this, not something that's\nactually effective, and therefore I don't agree that we really have it\ntoday or that we should keep it. I'd much prefer to throw out nearly\neverything in the system that's doing an explicit check of \"does this\nrole have a superuser bit set on it?\"\n\n> > If anything, we should be\n> > moving away (as we have, in fact, been doing), from anything being the\n> > exclusive purview of the superuser.\n> >\n> \n> I totally agree.\n\nGreat.\n\n> > > In particular, this means CREATEROLE roles cannot assign membership in\n> > the\n> > > marked roles; just like they cannot assign membership in superuser roles\n> > > today.\n> >\n> > I disagree with the idea that we need to mark some roles as only being\n> > able to be modified by the superuser- why invent this?\n> \n> Because CREATEUSER is a thing and people want to prevent roles with that\n> attribute from assigning membership to the predefined superuser-aspect\n> roles. If I've misunderstood that desire and the scope of delegation given\n> by the superuser to CREATEUSER roles is acceptable, then no change here is\n> needed.\n\nWe can do that by using the admin_option in pg_auth_members instead\nthough and limiting everyone to using that.\n\n> What you\n> > seem to be arguing for here is to rip out the ADMIN functionality, which\n> > is defined by spec and not even exclusively by PG, and replace it with a\n> > single per-role flag that says if that role can only be modified by\n> > superusers.\n> \n> I made the observation that being able to manage the membership of a group\n> without having the ability to create new users seems like a half a loaf of\n> a feature. That's it. 
I would presume that any redesign of the\n> permissions system here would address this adequately.\n\nIf the new design ideas that are being thrown around don't address what\nyou're thinking they should, it'd be great to point that out.\n\n> The\n> >\n> short answer is that it's not- which is why we have documented\n> > CREATEROLE as being 'superuser light'. The goal here is to get rid of\n> > that.\n> \n> Now you tell me. Robert should have led with that goal upfront.\n\n... blink.\n\n> > Yes, we're talking about a new feature- one intended to replace the\n> > broken way that CREATEROLE works, which your proposal doesn't.\n>\n> That is correct, I was trying to figure out minimally invasive fixes to\n> what are arguably being called bugs.\n\nWhat's been proposed here doesn't strike me as minimally invasive,\nthough I suppose I'm looking at it more from the database system\nperspective and less from the end-user side of things for people who\nactually use CREATEROLE, but in this particular case, that's the side\nI'm on.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 14:45:52 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> But that's not to say that we couldn't decide to do something else\n> instead, and that other thing might well be better. Do you want to\n> sketch out a full proposal, even just what the syntax would look like,\n> and share that here? 
And if you could explain how I could use it to\n> create the mini-superusers that I'm trying to get out of this thing,\n> even better.\n\nIt'd be useful to have a better definition of exactly what a\n'mini-superuser' is, but at least for the moment when it comes to roles,\nlet's look at what the spec says:\n\nCREATE ROLE\n - Who is allowed to run CREATE ROLE is implementation-defined\n - After creation, this is effectively run:\n   GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n\nDROP ROLE\n - Any user who has been GRANT'd a role with ADMIN option is able to\n   DROP that role.\n\nGRANT ROLE\n - No cycles allowed\n - A role must have ADMIN rights on the role to be able to GRANT it to\n   another role.\n\nALTER ROLE\n - Doesn't exist\n\nThis actually looks to me like more-or-less what you're looking for, it\njust isn't what we have today because CREATEROLE brings along with it a\nbunch of other stuff, some of which we want and some that we don't, and\nsome things that the SQL spec says ADMIN should be allowed to do (DROP\nROLE) we don't allow today.\n\nIt's also not quite what I want because it requires that membership and\nADMIN go together where I'd like to be able to have those be\nindependently GRANT'able- and then some.\n\nI don't think we're that far from having all of these though.  To start\nwith, we remove from CREATEROLE the random things that it does which go\nbeyond what folks tend to expect- remove the whole 'grant any role to\nany other' stuff, remove the 'drop role' exception, remove the\n'alter role' stuff.  Do make it so that when you create a role, however,\nthe above GRANT is effectively done.  Now, for the items above where we\nremoved the checks against have_createrole_privilege() we go back and\nadd in checks using is_admin_of_role(). 
Of course, also remove the role\nself-administration bug.\n\nThat's step #1, but it gets us more-or-less what you're looking for, I\nthink, and brings us a lot closer to what the spec has.\n\nStep #2 is also in-line with the spec: track GRANTORs and care about\nthem, for everything. We really should have been doing this all along.\nNote that I'm not saying that an owner of a table can't REVOKE some\nright that was GRANT'd on that table, but rather that a user who was\nGRANT'd ADMIN rights on a table and then GRANT'd that right to some\nother user shouldn't have some other user who only has ADMIN rights on\nthe table be able to remove that GRANT. Same goes for roles, meaning\nthat you could GRANT rights in a role with ADMIN option and not have to\nbe afraid that the role you just gave that to will be able to remove\n*your* ADMIN rights on that role. In general, I don't think this\nwould actually have a very large impact on users because most users\ndon't, today, use the ADMIN option much.\n\nStep #3 starts going in the direction of what I'd like to see, which\nwould be to break out membership in a role as a separate thing from\nadmin rights on that role. This is also what would help with the 'bot'\nuse-case that Joshua (not David Steele, btw) brought up.\n\nStep #4 then breaks the 'admin' option on roles into pieces- a 'drop\nrole' right, a 'reset password' right, maybe separate rights for\ndifferent role attributes, etc. 
We would likely still keep the\n'admin_option' column in pg_auth_members and just check that first\nand then check the individual rights (similar to table-level vs.\ncolumn-level privileges) so that we stay in line with the spec's\nexpectation here and with what users are used to.\n\nIn some hypothetical world, there's even a later step #5 which allows\nus to define user profiles and then grant the ability for a user to\ncreate a role with a certain profile (but not any arbitrary profile),\nthus making things like the 'bot' even more constrained in terms of\nwhat it's able to do (maybe it can then create a role that's a member of\na role without itself being a member of that role or explicitly having\nadmin rights in that role, as an example).\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 14:58:49 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 12:45 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > On Thu, Mar 10, 2022 at 11:05 AM Stephen Frost <sfrost@snowman.net>\n> wrote:\n>\nWhy not just look at the admin_option field of pg_auth_members...?  I\n> don't get why that isn't an even more minimal fix than this idea you\n> have of adding a column to pg_authid and then propagating around \"this\n> user could become a superuser\" or writing code that has to go check \"is\n> there some way for this role to become a superuser, either directly or\n> through some subset of pg_* roles?\"\n>\n\n
But at least for the\nexplicit attribute it should be no more difficult than changing:\n\nif (grouprole_is_superuser and current_role_is_not_superuser) then error:\nto be\nif ((grouprole_is_superuser OR !groupuser_has_adminattr) AND\ncurrent_role_is_not_superuser) then error;\n\nI have to imagine that given how fundamental inheritance is to our\npermissions system that doing a similar check up the tree wouldn't be\ndifficult, but I truly don't know with a strong degree of certainty.\n\nAssuming we don't actually rip out CREATEROLE when this change goes in...do\nyou propose to prohibit a CREATEROLE user from altering the membership\nroster of any group which it itself is not a member of and also those which it\nis a member of but where admin_option is false?\n\nI don't personally have a problem with the current state where CREATEROLE\nis an admin for, but not a member of, every non-superuser(-related) role in\nthe system.  If the consensus is to change that then I suppose this becomes\nthe minimally invasive fix that accomplishes that goal as well.  It seems\nincomplete though, since you still need superuser to create a group and add\nthe initial WITH ADMIN member to it.  So this seems to work in the \"avoid\nusing superuser\" sense if you've also added something that has what\nCREATEROLE provides today - admin without membership - but that would have\nthe benefit of not carrying around all the baggage that CREATEROLE has.\n\n\n> I made the observation that being able to manage the membership of a\n> group\n> without having the ability to create new users seems like a half a loaf\n> of\n> a feature.  That's it.
I would presume that any redesign of the\n> > permissions system here would address this adequately.\n>\n> If the new design ideas that are being thrown around don't address what\n> you're thinking they should, it'd be great to point that out.\n>\n\nI mean, you need a Create Role permission in some form, even if it's\ndeprecating the attribute and making it a predefined role.  I picked this\nthread up because it seemed like a limited scope that I could get my head\naround with the time I have, with the main goal to try to understand this\naspect of the system better.  I haven't gone and looked into the main\nthread yet.\n\nDavid J.", "msg_date": "Thu, 10 Mar 2022 13:21:31 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 2:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> It'd be useful to have a better definition of exactly what a\n> 'mini-superuser' is, but at least for the moment when it comes to roles,\n> let's look at what the spec says:\n\nGosh, I feel like I've spelled that out approximately 463,121 times\nalready. That estimate might be slightly off though; I've been known\nto make mistakes from time to time....\n\n> CREATE ROLE\n> - Who is allowed to run CREATE ROLE is implementation-defined\n> - After creation, this is effictively run:\n> GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n>\n> DROP ROLE\n> - Any user who has been GRANT'd a role with ADMIN option is able to\n> DROP that role.\n>\n> GRANT ROLE\n> - No cycles allowed\n> - A role must have ADMIN rights on the role to be able to GRANT it to\n> another role.\n>\n> ALTER ROLE\n> - Doesn't exist\n>\n> This actually looks to me like more-or-less what you're looking for, it\n> just isn't what we have today because CREATEROLE brings along with it a\n> bunch of other stuff, some of which we want and some that we don't, and\n> some things that the SQL spec says ADMIN should be allowed to do (DROP\n> ROLE) we don't allow today.\n\nThe above is mostly fine with me, except for the part about ALTER ROLE\nnot existing.
I think it's always good to be able to change your mind\npost-CREATE.\n\nBasically, in this sketch, ADMIN OPTION on a role involves the ability\nto DROP it, which means we don't need a separate role owner concept.\nIt also involves membership, meaning that you can freely exercise the\nprivileges of the role without SET ROLE. While I'm totally down with\nhaving other possible behaviors as options, that particular behavior\nseems very useful to me, so, sounds great.\n\n> It's also not quite what I want because it requires that membership and\n> ADMIN go together where I'd like to be able to have those be\n> independently GRANT'able- and then some.\n>\n> I don't think we're that far from having all of these though. To start\n> with, we remove from CREATEROLE the random things that it does which go\n> beyond what folks tend to expect- remove the whole 'grant any role to\n> any other' stuff, remove the 'drop role' exception, remove the\n> 'alter role' stuff. Do make it so that when you create a role, however,\n> the above GRANT is effectively done. Now, for the items above where we\n> removed the checks against have_createrole_privilege() we go back and\n> add in checks using is_admin_of_role(). Of course, also remove the role\n> self-administration bug.\n\nWhat do you mean by the 'drop role' exception?\n\nI don't like removing 'alter role'.\n\nThe rest sounds good.\n\n> That's step #1, but it gets us more-or-less what you're looking for, I\n> think, and brings us a lot closer to what the spec has.\n\nGreat.\n\n> Step #2 is also in-line with the spec: track GRANTORs and care about\n> them, for everything. 
We really should have been doing this all along.\n> Note that I'm not saying that an owner of a table can't REVOKE some\n> right that was GRANT'd on that table, but rather that a user who was\n> GRANT'd ADMIN rights on a table and then GRANT'd that right to some\n> other user shouldn't have some other user who only has ADMIN rights on\n> the table be able to remove that GRANT. Same goes for roles, meaning\n> that you could GRANT rights in a role with ADMIN option and not have to\n> be afraid that the role you just gave that to will be able to remove\n> *your* ADMIN rights on that role. In general, I don't think this\n> would actually have a very large impact on users because most users\n> don't, today, use the ADMIN option much.\n\nThere are details to work out here, but in general, I like it.\n\n> Step #3 starts going in the direction of what I'd like to see, which\n> would be to break out membership in a role as a separate thing from\n> admin rights on that role. This is also what would help with the 'bot'\n> use-case that Joshua (not David Steele, btw) brought up.\n\nWoops, apologies for getting the name wrong. I also said Marc earlier\nwhen I meant Mark, because I work with people named Mark, Marc, and\nMarc, and Mark's spelling got outvoted by some distant corner of my\nbrain.\n\nI think this is a fine long-term direction, with the caveat that\nyou've not provided enough specifics here for me to really understand\nhow it would work. I fear the specifics might be hard to get right,\nboth in terms of making it understandable to users and in terms of\npreserving as much backward-compatibility as we can. However, I am not\nopposed to the concept.\n\n> Step #4 then breaks the 'admin' option on roles into pieces- a 'drop\n> role' right, a 'reset password' right, maybe separate rights for\n> different role attributes, etc. 
We would likely still keep the\n> 'admin_option' column in pg_auth_members and just check that first\n> and then check the individual rights (similar to table-level vs.\n> column-level privileges) so that we stay in line with the spec's\n> expectation here and with what users are used to.\n\nSame comments as #3, plus I wonder whether it really makes sense to\nseparate #3 and #4. But we can decide that when there's a fleshed-out\ndesign for this.\n\n> In some hyptothetical world, there's even a later step #5 which allows\n> us to define user profiles and then grant the ability for a user to\n> create a role with a certain profile (but not any arbitrary profile),\n> thus making things like the 'bot' even more constrained in terms of\n> what it's able to do (maybe it can then create a role that's a member of\n> a role without itself being a member of that role or explicitly having\n> admin rights in that role, as an example).\n\nRight. I don't object to this either, hypothetically, but I think\nwe're a long way from understanding how to get there, and I don't want\nstep #1 to get blocked behind all the rest of this. Particularly the\npart where we remove the role self-administration thing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 15:22:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 2:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > It'd be useful to have a better definition of exactly what a\n> > 'mini-superuser' is, but at least for the moment when it comes to roles,\n> > let's look at what the spec says:\n> \n> Gosh, I feel like I've spelled that out approximately 463,121 times\n> already. 
That estimate might be slightly off though; I've been known\n> to make mistakes from time to time....\n\nIf there's a specific message that details it closely on the lists\nsomewhere, I'm happy to go review it. I admit that I didn't go back and\nlook for such.\n\n> > CREATE ROLE\n> > - Who is allowed to run CREATE ROLE is implementation-defined\n> > - After creation, this is effictively run:\n> > GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n> >\n> > DROP ROLE\n> > - Any user who has been GRANT'd a role with ADMIN option is able to\n> > DROP that role.\n> >\n> > GRANT ROLE\n> > - No cycles allowed\n> > - A role must have ADMIN rights on the role to be able to GRANT it to\n> > another role.\n> >\n> > ALTER ROLE\n> > - Doesn't exist\n> >\n> > This actually looks to me like more-or-less what you're looking for, it\n> > just isn't what we have today because CREATEROLE brings along with it a\n> > bunch of other stuff, some of which we want and some that we don't, and\n> > some things that the SQL spec says ADMIN should be allowed to do (DROP\n> > ROLE) we don't allow today.\n> \n> The above is mostly fine with me, except for the part about ALTER ROLE\n> not existing. I think it's always good to be able to change your mind\n> post-CREATE.\n\nErrr, just to be clear, ALTER ROLE doesn't exist *in the spec*. I\nwasn't suggesting that we get rid of it, just that it doesn't exist in\nthe spec and therefore the spec doesn't have anything to say about it.\n\n> Basically, in this sketch, ADMIN OPTION on a role involves the ability\n> to DROP it, which means we don't need a separate role owner concept.\n\nRight. The above doesn't include any specifics about what to do with\nALTER ROLE, but my thought would be to have it also be under ADMIN\nOPTION rather than under CREATEROLE, as I tried to outline (though not\nvery well, I'll admit) below.\n\n> It also involves membership, meaning that you can freely exercise the\n> privileges of the role without SET ROLE. 
While I'm totally down with\n> having other possible behaviors as options, that particular behavior\n> seems very useful to me, so, sounds great.\n\nWell, yes and no- by default you're right, presuming everything is set\nas inheirited, but I'd wish for us to keep the option of creating roles\nwhich are noinherit and having that work just as it does today.\n\n> > It's also not quite what I want because it requires that membership and\n> > ADMIN go together where I'd like to be able to have those be\n> > independently GRANT'able- and then some.\n> >\n> > I don't think we're that far from having all of these though. To start\n> > with, we remove from CREATEROLE the random things that it does which go\n> > beyond what folks tend to expect- remove the whole 'grant any role to\n> > any other' stuff, remove the 'drop role' exception, remove the\n> > 'alter role' stuff. Do make it so that when you create a role, however,\n> > the above GRANT is effectively done. Now, for the items above where we\n> > removed the checks against have_createrole_privilege() we go back and\n> > add in checks using is_admin_of_role(). Of course, also remove the role\n> > self-administration bug.\n> \n> What do you mean by the 'drop role' exception?\n\n'ability' was probably a better word there. What I'm talking about is\nchanging in DropRole:\n\n if (!have_createrole_privilege())\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"permission denied to drop role\")));\n\nto be, more or less:\n\n if (!is_admin_of_role(role))\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"permission denied to drop role\")));\n\n> I don't like removing 'alter role'.\n\nDitto above but for AlterRole. Taking it away from users with\nCREATEROLE being able to run those commands on $anyrole and instead\nmaking it so that the role running DROP ROLE or ALTER ROLE needs to have\nADMIN on the role they're messing with. 
I do think we may also need to\nmake some adjustments in terms of what a regular user WITH ADMIN on a\ngiven role is able to do when it comes to ALTER ROLE, in particular, I\ndon't think we'll want to remove the existing is-superuser checks\nagainst a user settings bypassrls or replication or superuser on some\nother role. Maybe we can provide a way for a non-superuser to be given\nthe ability to set those attributes for roles they create, but that\nwould be a separate thing.\n\n> The rest sounds good.\n\nGreat.\n\n> > That's step #1, but it gets us more-or-less what you're looking for, I\n> > think, and brings us a lot closer to what the spec has.\n> \n> Great.\n\nAwesome.\n\n> > Step #2 is also in-line with the spec: track GRANTORs and care about\n> > them, for everything. We really should have been doing this all along.\n> > Note that I'm not saying that an owner of a table can't REVOKE some\n> > right that was GRANT'd on that table, but rather that a user who was\n> > GRANT'd ADMIN rights on a table and then GRANT'd that right to some\n> > other user shouldn't have some other user who only has ADMIN rights on\n> > the table be able to remove that GRANT. Same goes for roles, meaning\n> > that you could GRANT rights in a role with ADMIN option and not have to\n> > be afraid that the role you just gave that to will be able to remove\n> > *your* ADMIN rights on that role. In general, I don't think this\n> > would actually have a very large impact on users because most users\n> > don't, today, use the ADMIN option much.\n> \n> There are details to work out here, but in general, I like it.\n\nCool. Note that superusers would still be able to do $anything,\nincluding removing someone's ADMIN rights on a role even if that\nsuperuser didn't GRANT it (at least, that's my thinking on this).\n\n> > Step #3 starts going in the direction of what I'd like to see, which\n> > would be to break out membership in a role as a separate thing from\n> > admin rights on that role. 
This is also what would help with the 'bot'\n> > use-case that Joshua (not David Steele, btw) brought up.\n> \n> Woops, apologies for getting the name wrong. I also said Marc earlier\n> when I meant Mark, because I work with people named Mark, Marc, and\n> Marc, and Mark's spelling got outvoted by some distant corner of my\n> brain.\n\nHah, no worries.\n\n> I think this is a fine long-term direction, with the caveat that\n> you've not provided enough specifics here for me to really understand\n> how it would work. I fear the specifics might be hard to get right,\n> both in terms of making it understandable to users and in terms of\n> preserving as much backward-compatibility as we can. However, I am not\n> opposed to the concept.\n\nWe can perhaps debate the specifics around this later.\n\n> > Step #4 then breaks the 'admin' option on roles into pieces- a 'drop\n> > role' right, a 'reset password' right, maybe separate rights for\n> > different role attributes, etc. We would likely still keep the\n> > 'admin_option' column in pg_auth_members and just check that first\n> > and then check the individual rights (similar to table-level vs.\n> > column-level privileges) so that we stay in line with the spec's\n> > expectation here and with what users are used to.\n> \n> Same comments as #3, plus I wonder whether it really makes sense to\n> separate #3 and #4. But we can decide that when there's a fleshed-out\n> design for this.\n\nDitto. 
I don't know that they need to be independent either.\n\n> > In some hyptothetical world, there's even a later step #5 which allows\n> > us to define user profiles and then grant the ability for a user to\n> > create a role with a certain profile (but not any arbitrary profile),\n> > thus making things like the 'bot' even more constrained in terms of\n> > what it's able to do (maybe it can then create a role that's a member of\n> > a role without itself being a member of that role or explicitly having\n> > admin rights in that role, as an example).\n> \n> Right. I don't object to this either, hypothetically, but I think\n> we're a long way from understanding how to get there, and I don't want\n> step #1 to get blocked behind all the rest of this. Particularly the\n> part where we remove the role self-administration thing.\n\nSure.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 15:41:57 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 12:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> I don't think we're that far from having all of these though. To start\n> with, we remove from CREATEROLE the random things that it does which go\n> beyond what folks tend to expect- remove the whole 'grant any role to\n> any other' stuff, remove the 'drop role' exception, remove the\n> 'alter role' stuff. Do make it so that when you create a role, however,\n> the above GRANT is effectively done. Now, for the items above where we\n> removed the checks against have_createrole_privilege() we go back and\n> add in checks using is_admin_of_role(). Of course, also remove the role\n> self-administration bug.\n>\n> That's step #1, but it gets us more-or-less what you're looking for, I\n> think, and brings us a lot closer to what the spec has.\n>\n\nThat still leaves attribute specification in place: e.g., REPLICATION,\nCREATEROLE, CREATEDB, etc... 
(I see BYPASSRLS already is SUPERUSER only)\n\nI dislike changing the documented behavior of CREATEROLE to the degree\nsuggested here.  However, there are three choices here, only one of which\ncan be chosen:\n\n1. Leave CREATEROLE alone entirely\n2. Make it so CREATEROLE cannot assign membership to the predefined roles\nor superuser (inheritance included), but leave the rest alone.  This would\nbe the hard-coded version, not the role attribute one.\n3. Make it so CREATEROLE can only assign membership to roles for which it\nhas been made an admin; as well as the other things mentioned\n\nMoving forward I'd prefer options 1 or 2, leaving the ability to\ncreate/alter/drop a role to be vested via predefined roles.\n\nThe rest seems fine at an initial glance.\n\nDavid J.", "msg_date": "Thu, 10 Mar 2022 14:00:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 3:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Gosh, I feel like I've spelled that out approximately 463,121 times\n> > already. That estimate might be slightly off though; I've been known\n> > to make mistakes from time to time....\n>\n> If there's a specific message that details it closely on the lists\n> somewhere, I'm happy to go review it. I admit that I didn't go back and\n> look for such.\n\nProbably easier to just say it again: I want to have users that can\ncreate roles and then have superuser-like powers with respect to those\nroles. They can freely exercise the privileges of those roles, and\nthey can do all the things that a superuser can do but only with\nrespect to those roles. They cannot break out to the OS. I think it's\npretty similar to what you are describing, with a couple of possible\nexceptions. For example, would you imagine that being an admin of a\nlogin role would let you change that user's password? Because that\nwould be desirable behavior from where I sit.\n\n> Errr, just to be clear, ALTER ROLE doesn't exist *in the spec*.
I\n> wasn't suggesting that we get rid of it, just that it doesn't exist in\n> the spec and therefore the spec doesn't have anything to say about it.\n\nOh, OK.\n\n> > Basically, in this sketch, ADMIN OPTION on a role involves the ability\n> > to DROP it, which means we don't need a separate role owner concept.\n>\n> Right. The above doesn't include any specifics about what to do with\n> ALTER ROLE, but my thought would be to have it also be under ADMIN\n> OPTION rather than under CREATEROLE, as I tried to outline (though not\n> very well, I'll admit) below.\n\nThis sentence really confused me at first, but I think you're saying\nthat the right to alter a role would be dependent on having ADMIN\nOPTION on the role rather than on having the CREATEROLE attribute.\nThat seems like a reasonable idea to me.\n\n> > It also involves membership, meaning that you can freely exercise the\n> > privileges of the role without SET ROLE. While I'm totally down with\n> > having other possible behaviors as options, that particular behavior\n> > seems very useful to me, so, sounds great.\n>\n> Well, yes and no- by default you're right, presuming everything is set\n> as inheirited, but I'd wish for us to keep the option of creating roles\n> which are noinherit and having that work just as it does today.\n\nHmm, so if I have membership WITH ADMIN OPTION in a role, but my role\nis marked NOINHERIT, that means I can't exercise the privileges of\nthat role without SET ROLE. But, can I still do other things to that\nrole, such as dropping it? Given the current coding of\nroles_is_member_of(), it seems like I can't. I don't like that, but\nthen I don't like much of anything about NOINHERIT. Do you have any\nsuggestions for how this could be improved?\n\nTo make this more concrete, suppose the superuser does \"CREATE USER\nalice CREATEROLE\". Alice will have INHERIT, so she'll have control\nover any roles she creates. 
But if she does \"CREATE USER bob\nCREATEROLE NOINHERIT\" then neither she nor Bob will be able to control\nthe roles bob creates. I'd like to have a way to make it so that\nneither Alice nor any other CREATEROLE users she spins up can create\nroles over which they no longer have control. Because otherwise people\nwill do dumb stuff like that and then have to call the superuser to\nsort it out, and the superuser won't like that because s/he is a super\nbusy person.\n\n> > What do you mean by the 'drop role' exception?\n>\n> 'ability' was probably a better word there. What I'm talking about is\n> changing in DropRole:\n>\n> to be, more or less:\n>\n> if (!is_admin_of_role(role))\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"permission denied to drop role\")));\n\nSounds good.\n\n> > I don't like removing 'alter role'.\n>\n> Ditto above but for AlterRole. Taking it away from users with\n> CREATEROLE being able to run those commands on $anyrole and instead\n> making it so that the role running DROP ROLE or ALTER ROLE needs to have\n> ADMIN on the role they're messing with. I do think we may also need to\n> make some adjustments in terms of what a regular user WITH ADMIN on a\n> given role is able to do when it comes to ALTER ROLE, in particular, I\n> don't think we'll want to remove the existing is-superuser checks\n> against a user settings bypassrls or replication or superuser on some\n> other role. Maybe we can provide a way for a non-superuser to be given\n> the ability to set those attributes for roles they create, but that\n> would be a separate thing.\n\nThis too.\n\n> > > Step #2 is also in-line with the spec: track GRANTORs and care about\n> > > them, for everything. We really should have been doing this all along.\n> >\n> > There are details to work out here, but in general, I like it.\n>\n> Cool. 
Note that superusers would still be able to do $anything,\n> including removing someone's ADMIN rights on a role even if that\n> superuser didn't GRANT it (at least, that's my thinking on this).\n\nAgree. I also think that it would be a good idea to attribute grants\nperformed by any superuser to the bootstrap superuser, or leave them\nunattributed somehow. Because otherwise dropping superusers becomes a\npain in the tail for no good reason.\n\nWe might also need to think carefully about what happens if for\nexample the table owner is changed. If bob owns the table and we\nchange the owner to mary, but bob's previous grants are still\nattributed to bob, I'm not sure that's going to be very convenient.\nPossibly if the table owner changes we also change the owner of all\ngrants attributed to the old table owner to be attributed to the new\ntable owner?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 16:51:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 02:22:05PM -0500, Robert Haas wrote:\n> I mean, I didn't design pg_hba.conf, but I think it's part of the\n> database doing a reasonable thing, not an external system doing a\n> nonsensical thing.\n\nFYI, I think pg_hba.conf gets away with having negative/reject\npermissions only because it is strictly ordered.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:00:11 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 4:00 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I dislike changing the documented behavior of CREATEROLE to the degree suggested here. 
However, there are three choices here, only one of which can be chosen:\n>\n> 1. Leave CREATEROLE alone entirely\n> 2. Make it so CREATEROLE cannot assign membership to the predefined roles or superuser (inheritance included), but leave the rest alone. This would be the hard-coded version, not the role attribute one.\n> 3. Make it so CREATEROLE can only assign membership to roles for which it has been made an admin; as well as the other things mentioned\n>\n> Moving forward I'd prefer options 1 or 2, leaving the ability to create/alter/drop a role to be vested via predefined roles.\n\nIt sounds like you prefer a behavior where CREATEROLE gives power over\nall non-superusers, but that seems pretty limiting to me. Why can't\nsomeone want to create a user with power over some users but not\nothers? For example, the superuser might want to give alice the\nability to set up new users in the accounting department, but NOT give\nalice the right to tinker with the backup user (who is not a\nsuperuser, but doesn't have the replication privilege). 
How would they\naccomplish that in your view?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:01:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 5:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, Mar 10, 2022 at 02:22:05PM -0500, Robert Haas wrote:\n> > I mean, I didn't design pg_hba.conf, but I think it's part of the\n> > database doing a reasonable thing, not an external system doing a\n> > nonsensical thing.\n>\n> FYI, I think pg_hba.conf gets away with having negative/reject\n> permissions only because it is strictly ordered.\n\nI agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:02:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 3:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 10, 2022 at 4:00 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > I dislike changing the documented behavior of CREATEROLE to the degree\n> suggested here. However, there are three choices here, only one of which\n> can be chosen:\n> >\n> > 1. Leave CREATEROLE alone entirely\n> > 2. Make it so CREATEROLE cannot assign membership to the predefined\n> roles or superuser (inheritance included), but leave the rest alone. This\n> would be the hard-coded version, not the role attribute one.\n> > 3. 
Make it so CREATEROLE can only assign membership to roles for which\n> it has been made an admin; as well as the other things mentioned\n> >\n> > Moving forward I'd prefer options 1 or 2, leaving the ability to\n> create/alter/drop a role to be vested via predefined roles.\n>\n> It sounds like you prefer a behavior where CREATEROLE gives power over\n> all non-superusers, but that seems pretty limiting to me.\n>\n\nDoh!  I edited out the part where I made clear I considered options 1 and 2\nas basically being done for a limited period of time while deprecating the\nCREATEROLE attribute altogether in favor of the fine-grained and predefined\nrole based permission granting.  I don't want to nerf CREATEROLE as part of\nadding this new feature, instead leave it as close to status quo as\nreasonable so as not to mess up existing setups that make use of it.  We\ncan note in the release notes and documentation that we consider CREATEROLE\nto be deprecated and that the new predefined role should be used to give a\nuser the ability to create/alter/drop roles, etc...  DBAs should consider\nrevoking CREATEROLE from their users and granting them proper memberships\nin the predefined roles and the groups those roles should be administering.\n\nDavid J.\n\n
", "msg_date": "Thu, 10 Mar 2022 15:12:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Probably easier to just say it again: I want to have users that can\n> create roles and then have superuser-like powers with respect to those\n> roles. They can freely exercise the privileges of those roles, and\n> they can do all the things that a superuser can do but only with\n> respect to those roles.\n\nThis seems reasonable in isolation, but\n\n(1) it implies a persistent relationship between creating and created\nroles. 
Whether you want to call that ownership or not, it sure walks\nand quacks like ownership.\n\n(2) it seems exactly contradictory to your later point that\n\n> Agree. I also think that it would be a good idea to attribute grants\n> performed by any superuser to the bootstrap superuser, or leave them\n> unattributed somehow. Because otherwise dropping superusers becomes a\n> pain in the tail for no good reason.\n\nEither there's a persistent relationship or there's not. I don't\nthink it's sensible to treat superusers differently here.\n\nI think that this argument about the difficulty of dropping superusers\nmay in fact be the motivation for the existing behavior that object-\npermissions GRANTs done by superusers are attributed to the object\nowner; something you were unhappy about upthread.\n\nIn the end these requirements seem mutually contradictory. Either\nwe can have a persistent ownership relationship or not, but I don't\nthink we can have it apply in some cases and not others without\ncreating worse problems than we solve. I'm inclined to toss overboard\nthe requirement that superusers need to be an easy thing to drop.\nWhy is that important, anyway?\n\n> We might also need to think carefully about what happens if for\n> example the table owner is changed. 
If bob owns the table and we\n> change the owner to mary, but bob's previous grants are still\n> attributed to bob, I'm not sure that's going to be very convenient.\n\nThat's already handled, is it not?\n\nregression=# create user alice;\nCREATE ROLE\nregression=# create user bob;\nCREATE ROLE\nregression=# create user charlie;\nCREATE ROLE\nregression=# \\c - alice\nYou are now connected to database \"regression\" as user \"alice\".\nregression=> create table alices_table (f1 int);\nCREATE TABLE\nregression=> grant select on alices_table to bob;\nGRANT\nregression=> \\c - postgres\nYou are now connected to database \"regression\" as user \"postgres\".\nregression=# alter table alices_table owner to charlie;\nALTER TABLE\nregression=# \\dp alices_table\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies \n--------+--------------+-------+-------------------------+-------------------+----------\n public | alices_table | table | charlie=arwdDxt/charlie+| | \n | | | bob=r/charlie | | \n(1 row)\n\nI'm a bit disturbed that parts of this discussion seem to be getting\nconducted with little understanding of the system's existing behaviors.\nWe should not be reinventing things we already have perfectly good\nsolutions for in the object-privileges domain.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:14:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 10, 2022, at 2:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> It sounds like you prefer a behavior where CREATEROLE gives power over\n> all non-superusers, but that seems pretty limiting to me. Why can't\n> someone want to create a user with power over some users but not\n> others?\n\nI agree with Robert on this.\n\nOver at [1], I introduced a patch series to (a) change CREATEROLE and (b) introduce role ownership. Part (a) wasn't that controversial. 
The patch series failed to make it for postgres 15 on account of (b). The patch didn't go quite far enough, but with it applied, this is an example of a min-superuser \"lord\" operating within database \"fiefdom\":\n\nfiefdom=# -- mini-superuser who can create roles and write all data\nfiefdom=# CREATE ROLE lord\nfiefdom-# WITH CREATEROLE\nfiefdom-# IN ROLE pg_write_all_data;\nCREATE ROLE\nfiefdom=# \nfiefdom=# -- group which \"lord\" belongs to\nfiefdom=# CREATE GROUP squire\nfiefdom-# ROLE lord;\nCREATE ROLE\nfiefdom=# \nfiefdom=# -- group which \"lord\" has no connection to\nfiefdom=# CREATE GROUP paladin;\nCREATE ROLE\nfiefdom=# \nfiefdom=# SET SESSION AUTHORIZATION lord;\nSET\nfiefdom=> \nfiefdom=> -- fail, merely a member of \"squire\"\nfiefdom=> CREATE ROLE peon IN ROLE squire;\nERROR: must have admin option on role \"squire\"\nfiefdom=> \nfiefdom=> -- fail, no privilege to grant CREATEDB \nfiefdom=> CREATE ROLE peon CREATEDB;\nERROR: must have createdb privilege to create createdb users\nfiefdom=> \nfiefdom=> RESET SESSION AUTHORIZATION;\nRESET\nfiefdom=# \nfiefdom=# -- grant admin over \"squire\" to \"lord\"\nfiefdom=# GRANT squire\nfiefdom-# TO lord\nfiefdom-# WITH ADMIN OPTION;\nGRANT ROLE\nfiefdom=# \nfiefdom=# SET SESSION AUTHORIZATION lord;\nSET\nfiefdom=> \nfiefdom=> -- ok, have both \"CREATEROLE\" and admin option for \"squire\"\nfiefdom=> CREATE ROLE peon IN ROLE squire;\nCREATE ROLE\nfiefdom=> \nfiefdom=> -- fail, no privilege to grant CREATEDB\nfiefdom=> CREATE ROLE peasant CREATEDB IN ROLE squire;\nERROR: must have createdb privilege to create createdb users\nfiefdom=> \nfiefdom=> RESET SESSION AUTHORIZATION;\nRESET\nfiefdom=# \nfiefdom=# -- Give lord the missing privilege\nfiefdom=# GRANT CREATEDB TO lord;\nERROR: role \"createdb\" does not exist\nfiefdom=# \nfiefdom=# RESET SESSION AUTHORIZATION;\nRESET\nfiefdom=# \nfiefdom=# -- ok, have \"CREATEROLE\", \"CREATEDB\", and admin option for \"squire\"\nfiefdom=# CREATE ROLE peasant CREATEDB IN 
ROLE squire;\nCREATE ROLE\n\nThe problem with this is that \"lord\" needs CREATEDB to grant CREATEDB, but really it should need something like grant option on \"CREATEDB\". But that's hard to do with the existing system, given the way these privilege bits are represented. If we added a few more built-in pg_* roles, such as pg_create_db, it would just work. CREATEROLE itself could be reimagined as pg_create_role, and then users could be granted into this role with or without admin option, meaning they could/couldn't further give it away. I think that would be a necessary component to Joshua's \"bot\" use-case, since the bot must itself have the privilege to create roles, but shouldn't necessarily be trusted with the privilege to create additional roles who have it.\n\n[1] https://www.postgresql.org/message-id/53C7DF4C-8463-4647-9DFD-779B5E1861C4@amazon.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 14:17:08 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 10, 2022 at 5:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This seems reasonable in isolation, but\n>\n> (1) it implies a persistent relationship between creating and created\n> roles. Whether you want to call that ownership or not, it sure walks\n> and quacks like ownership.\n\nI agree. It's been obvious to me from the beginning that we needed\nsuch a persistent relationship, and also that it needed to be a\nrelationship from which the created role couldn't simply walk away.\nYet, more than six months after the first discussions of this topic,\nwe still don't have any kind of agreement on what that thing should be\ncalled. I like my TENANT idea best, but I'm perfectly willing to call\nit ownership as you seem to prefer or WITH ADMIN OPTION as Stephen\nseems to prefer if one of those ideas gains consensus. 
But we've\nmanaged to waste all hope of making any significant progress here for\nan entire release cycle for lack of ability to agree on spelling. I\nthink that's unfair to Mark, who put a lot of work into this area and\ngot nothing out of it, and I think it sucks for users of PostgreSQL,\ntoo.\n\n> (2) it seems exactly contradictory to your later point that\n>\n> > Agree. I also think that it would be a good idea to attribute grants\n> > performed by any superuser to the bootstrap superuser, or leave them\n> > unattributed somehow. Because otherwise dropping superusers becomes a\n> > pain in the tail for no good reason.\n>\n> Either there's a persistent relationship or there's not. I don't\n> think it's sensible to treat superusers differently here.\n>\n> I think that this argument about the difficulty of dropping superusers\n> may in fact be the motivation for the existing behavior that object-\n> permissions GRANTs done by superusers are attributed to the object\n> owner; something you were unhappy about upthread.\n>\n> In the end these requirements seem mutually contradictory. Either\n> we can have a persistent ownership relationship or not, but I don't\n> think we can have it apply in some cases and not others without\n> creating worse problems than we solve. I'm inclined to toss overboard\n> the requirement that superusers need to be an easy thing to drop.\n> Why is that important, anyway?\n\nWell, I think you're looking at it the wrong way. Compared to getting\nuseful functionality, the relative ease of dropping users is\ncompletely unimportant. I'm happy to surrender it in exchange for\nsomething else. I just don't see why we should give it up for nothing.\nIf Alice creates non-superusers Bob and Charlie, and Charlie creates\nDoug, we need the persistent relationship to know that Charlie is\nallowed to drop Doug and Bob is not. But if Charlie is a superuser\nanyway, then the persistent relationship is of no use. 
I don't see the\npoint of cluttering up the system with such dependencies. Will I do it\nthat way, if that's what it takes to get the patch accepted? Sure. But\nI can't imagine any end-user actually liking it.\n\n> I'm a bit disturbed that parts of this discussion seem to be getting\n> conducted with little understanding of the system's existing behaviors.\n> We should not be reinventing things we already have perfectly good\n> solutions for in the object-privileges domain.\n\nI did wonder whether that might be the existing behavior, but stopping\nto check right at that moment didn't seem that important to me. Maybe\nI should have taken the time, but it's not like we're writing the\nfinal patch for commit next Tuesday at this point. It's more important\nat this point to get agreement on the principles. That said, I do\nagree that there have been times when we haven't thought hard enough\nabout the existing behavior in proposing new behavior. On the third\nhand, though, part of the problem here is that neither Stephen nor I\nare entirely happy with the existing behavior, if for somewhat\ndifferent reasons. It really isn't \"perfectly good.\" On the one hand,\nfrom a purely technical standpoint, a lot of the behavior around roles\nin particular seems well below the standard that anyone would consider\ncommittable today. 
On the other hand, even the parts of the code that\nare in reasonable shape from a code quality point of view don't\nactually do the things that we think users want done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 08:55:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 5:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This seems reasonable in isolation, but\n> >\n> > (1) it implies a persistent relationship between creating and created\n> > roles. Whether you want to call that ownership or not, it sure walks\n> > and quacks like ownership.\n\nI agree that there would be a recorded relationship (that is, one that\nwe write into the catalog and keep around until and unless it's removed\nby an admin) between creating and created roles and that's probably the\ndefault when CREATE ROLE is run but, unlike tables or such objects in\nthe system, I don't agree that we should require this to exist at\nabsolutely all times for every role (what would it be for the bootstrap\nsuperuser..?). At least today, that's distinct from how ownership in\nthe system works. I also don't believe that this is necessarily an\nissue for Robert's use-case, as long as there are appropriate\nrestrictions around who is allowed to remove or modify these\nrelationships.\n\n> I agree. It's been obvious to me from the beginning that we needed\n> such a persistent relationship, and also that it needed to be a\n> relationship from which the created role couldn't simply walk away.\n> Yet, more than six months after the first discussions of this topic,\n> we still don't have any kind of agreement on what that thing should be\n> called. 
I like my TENANT idea best, but I'm perfectly willing to call\n> it ownership as you seem to prefer or WITH ADMIN OPTION as Stephen\n> seems to prefer if one of those ideas gains consensus. But we've\n> managed to waste all hope of making any significant progress here for\n> an entire release cycle for lack of ability to agree on spelling. I\n> think that's unfair to Mark, who put a lot of work into this area and\n> got nothing out of it, and I think it sucks for users of PostgreSQL,\n> too.\n\nWell ... one of those actually already exists and also happens to be in\nthe SQL spec. I don't necessarily agree that we should absolutely\nrequire that the system always enforce that this relationship exist (I'd\nlike a superuser to be able to get rid of it and to be able to change it\ntoo if they want) and that seems a bit saner than having the bootstrap\nsuperuser be special in some way here as would seem to otherwise be\nrequired. I also feel that it would be generally useful to have more\nthan one of these relationships, if the user wishes, and that's\nsomething that ownership doesn't (directly) support today. Further,\nthat's supported and expected by the SQL spec too. Even if we invented\nsome concept of ownership of roles, it seems like we should make most of\nthe other changes discussed here to bring us closer to what the spec\nsays about CREATE ROLE, DROP ROLE, GRANT, REVOKE, etc. At that point\nthough, what's the point of having ownership?\n\n> > (2) it seems exactly contradictory to your later point that\n> >\n> > > Agree. I also think that it would be a good idea to attribute grants\n> > > performed by any superuser to the bootstrap superuser, or leave them\n> > > unattributed somehow. Because otherwise dropping superusers becomes a\n> > > pain in the tail for no good reason.\n> >\n> > Either there's a persistent relationship or there's not. 
I don't\n> > think it's sensible to treat superusers differently here.\n> >\n> > I think that this argument about the difficulty of dropping superusers\n> > may in fact be the motivation for the existing behavior that object-\n> > permissions GRANTs done by superusers are attributed to the object\n> > owner; something you were unhappy about upthread.\n> >\n> > In the end these requirements seem mutually contradictory. Either\n> > we can have a persistent ownership relationship or not, but I don't\n> > think we can have it apply in some cases and not others without\n> > creating worse problems than we solve. I'm inclined to toss overboard\n> > the requirement that superusers need to be an easy thing to drop.\n> > Why is that important, anyway?\n> \n> Well, I think you're looking at it the wrong way. Compared to getting\n> useful functionality, the relative ease of dropping users is\n> completely unimportant. I'm happy to surrender it in exchange for\n> something else. I just don't see why we should give it up for nothing.\n> If Alice creates non-superusers Bob and Charlie, and Charlie creates\n> Doug, we need the persistent relationship to know that Charlie is\n> allowed to drop Doug and Bob is not. But if Charlie is a superuser\n> anyway, then the persistent relationship is of no use. I don't see the\n> point of cluttering up the system with such dependencies. Will I do it\n> that way, if that's what it takes to get the patch accepted? Sure. But\n> I can't imagine any end-user actually liking it.\n\nWe need to know that Charlie is allowed to drop Doug and Bob isn't but\nthat doesn't make it absolutely required that this be tracked\npermanently or that Alice can't decide later to make it such that Doug\ncan't be dropped by Charlie for whatever reason she has. Also, I don't\nthink it would be such an issue to have a CASCADE for DROP ROLE which\nwould handle this case if we want it (and pg_auth_members is shared, so\nthere isn't an issue with multi-database concerns). 
We could also call\nit something else if people feel CASCADE would be confusing since it\nwouldn't cascade to owned objects. Or we could consider extending GRANT\nto make this situation something that could be handled more easily.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 10:27:52 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 6:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 10, 2022 at 5:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This seems reasonable in isolation, but\n> >\n> > (1) it implies a persistent relationship between creating and created\n> > roles. Whether you want to call that ownership or not, it sure walks\n> > and quacks like ownership.\n>\n\n\n> I like my TENANT idea best, but I'm perfectly willing to call\n> it ownership as you seem to prefer or WITH ADMIN OPTION as Stephen\n> seems to prefer if one of those ideas gains consensus.\n\n\nIf WITH ADMIN OPTION is sufficient to meet our immediate goals I do not see\nthe benefit of adding an ownership concept where there is not one today.\nIf added, I'd much rather have it be ownership as to fit in with the rest\nof the existing system rather than introduce an entirely new term.\n\n\n> If Alice creates non-superusers Bob and Charlie, and Charlie creates\n> Doug, we need the persistent relationship to know that Charlie is\n> allowed to drop Doug and Bob is not\n>\n\nThe interesting question seems to be whether Alice can drop Doug, not\nwhether Bob can.\n\n> It's more important\n> at this point to get agreement on the principles.\n>\n\nWhat are the principles you want to get agreement on and how do they differ\nfrom what we have in place today? What are the proposed changes you would\nmake to enforce the new principles. 
Which principles are now obsolete and\nwhat do you want to do about the features that were built to enforce them\n(including backward compatibility concerns)?\n\nDavid J.\n\n", "msg_date": "Fri, 11 Mar 2022 08:27:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* David G. 
Johnston (david.g.johnston@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 3:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Thu, Mar 10, 2022 at 4:00 PM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> > > I dislike changing the documented behavior of CREATEROLE to the degree\n> > suggested here. However, there are three choices here, only one of which\n> > can be chosen:\n> > >\n> > > 1. Leave CREATEROLE alone entirely\n> > > 2. Make it so CREATEROLE cannot assign membership to the predefined\n> > roles or superuser (inheritance included), but leave the rest alone. This\n> > would be the hard-coded version, not the role attribute one.\n> > > 3. Make it so CREATEROLE can only assign membership to roles for which\n> > it has been made an admin; as well as the other things mentioned\n> > >\n> > > Moving forward I'd prefer options 1 or 2, leaving the ability to\n> > create/alter/drop a role to be vested via predefined roles.\n> >\n> > It sounds like you prefer a behavior where CREATEROLE gives power over\n> > all non-superusers, but that seems pretty limiting to me.\n> \n> Doh! I edited out the part where I made clear I considered options 1 and 2\n> as basically being done for a limited period of time while deprecating the\n> CREATEROLE attribute altogether in favor of the fine-grained and predefined\n> role based permission granting. I don't want to nerf CREATEROLE as part of\n> adding this new feature, instead leave it as close to status quo as\n> reasonable so as not to mess up existing setups that make use of it. We\n> can note in the release notes and documentation that we consider CREATEROLE\n> to be deprecated and that the new predefined role should be used to give a\n> user the ability to create/alter/drop roles, etc... 
DBAs should consider\n> revoking CREATEROLE from their users and granting them proper memberships\n> in the predefined roles and the groups those roles should be administering.\n\nI disagree entirely with the idea that we should push this out however\nmany years it'd take to get through some deprecation period.  We are\nabsolutely terrible when it comes to that and what we're talking about\nhere, at this point anyway, is making changes that get us closer to what\nthe spec says.  I agree that we can't back-patch these changes, but I\ndon't think we need a deprecation period.  If we were just getting rid\nof CREATEROLE, I don't think we'd have a deprecation period.  If we need\nto get rid of CREATEROLE and introduce something new that more-or-less\nmeans the same thing, to make it so that people's scripts break in a\nmore obvious way, maybe we can consider that, but I don't really think\nthat's actually the case here.  Such scripts as will break will still\nbreak in a pretty clear way with a clear answer as to how to fix them\nand I don't think there's some kind of data corruption or something that\nwould happen.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 10:32:07 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:32 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> Such scripts as will break will still\n> break in a pretty clear way with a clear answer as to how to fix them\n> and I don't think there's some kind of data corruption or something that\n> would happen.\n>\n>\nI largely agree and am perfectly fine with going with the majority on this\npoint.  My vote would just fall on the conservative side. 
But as so far no\none else seems to be overly concerned, nerfing CREATEROLE seems to be the\npath forward.\n\nDavid J.\n\n", "msg_date": "Fri, 11 Mar 2022 08:36:44 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 10:27 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I agree that there would be a recorded relationship (that is, one that\n> we write into the catalog and keep around until and unless it's removed\n> by an admin) between creating and created roles and that's probably the\n> default when CREATE ROLE is run but, unlike tables or such objects in\n> the system, I don't agree that we should require this to exist at\n> absolutely all times for every role (what would it be for the bootstrap\n> superuser..?). At least today, that's distinct from how ownership in\n> the system works. I also don't believe that this is necessarily an\n> issue for Robert's use-case, as long as there are appropriate\n> restrictions around who is allowed to remove or modify these\n> relationships.\n\nI agree.\n\n> > I agree. [ but we need to get consensus ]\n>\n> Well ... [ how about we do it my way? ]\n\nRepeating the same argument over again isn't necessarily going to help\nanything here. I read your argument and I can believe there could be a\nsolution along those lines, although you haven't addressed my concern\nabout NOINHERIT. 
Tom is apparently less convinced, and you know, I\nthink that's OK. Not everybody has to agree with the way you want to\ndo it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:41:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 11, 2022 at 10:27 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> I agree that there would be a recorded relationship (that is, one that\n>> we write into the catalog and keep around until and unless it's removed\n>> by an admin) between creating and created roles and that's probably the\n>> default when CREATE ROLE is run but, unlike tables or such objects in\n>> the system, I don't agree that we should require this to exist at\n>> absolutely all times for every role (what would it be for the bootstrap\n>> superuser..?). At least today, that's distinct from how ownership in\n>> the system works. I also don't believe that this is necessarily an\n>> issue for Robert's use-case, as long as there are appropriate\n>> restrictions around who is allowed to remove or modify these\n>> relationships.\n\n> I agree.\n\nThe bootstrap superuser clearly must be a special case in some way.\nI'm not convinced that that means there should be other special\ncases. Maybe there is a use-case for other \"unowned\" roles, but in\nexactly what way would that be different from deeming such roles\nto be owned by the bootstrap superuser?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:46:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 10:37 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I largely agree and am perfectly fine with going with the majority on this point. My vote would just fall on the conservative side. 
But as so far no one else seems to be overly concerned, nerfing CREATEROLE seems to be the path forward.\n\nThis kind of thing is always a judgement call. If we were talking\nabout breaking 'SELECT * from table', I'm sure it would be hard to\nconvince anybody to agree to do that at all, let alone with no\ndeprecation period. Fortunately, CREATEROLE is less used, so breaking\nit will inconvenience fewer people. Moreover, unlike 'SELECT * FROM\ntable', CREATEROLE is kinda broken, and it's less scary to make\nchanges to behavior that sucks in the first place than it is to make\nchanges to the behavior of things that are working well. For all of\nthat, there's no hard-and-fast rule that we couldn't keep the existing\nbehavior around, introduce a substitute, and eventually drop the old\nthing. I'm just not clear that it's really worth it in this case. It'd\ncertainly be interesting to hear from anyone who is finding some\nutility in the current system. It looks pretty crap to me, but it's\neasy to bring too much of one's own bias to such judgements.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:58:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 11, 2022, at 7:58 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> This kind of thing is always a judgement call. If we were talking\n> about breaking 'SELECT * from table', I'm sure it would be hard to\n> convince anybody to agree to do that at all, let alone with no\n> deprecation period. Fortunately, CREATEROLE is less used, so breaking\n> it will inconvenience fewer people.\n\nThis issue of how much backwards compatibility breakage we're willing to tolerate is just as important as questions about how we would want roles to work in a green-field development project. 
The sense I got a year ago, on this list, was that changing CREATEROLE was acceptable, but changing other parts of the system, such as how ADMIN OPTION works, would go too far.\n\nRole ownership did not yet exist, and that was a big motivation in introducing that concept, because you couldn't credibly say it broke other existing features. It introduces the new notion that when a superuser creates a role, the superuser owns it, which is identical to how things implicitly work today; and when a CREATEROLE non-superuser creates a role, that role owns the new role, which is different from how it works today, arguably breaking CREATEROLE's prior behavior. *But it doesn't break anything else*.\n\nIf we're going to change how ADMIN OPTION works, or how role membership works, or how inherit/noinherit works, let's first be clear that we are willing to accept whatever backwards incompatibility that entails. This is not a green-field development project. The constant spinning around with regard to how much compatibility we need to preserve is giving me vertigo.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Mar 2022 08:12:03 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 10:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The bootstrap superuser clearly must be a special case in some way.\n> I'm not convinced that that means there should be other special\n> cases. 
Maybe there is a use-case for other \"unowned\" roles, but in\n> exactly what way would that be different from deeming such roles\n> to be owned by the bootstrap superuser?\n\nI think that just boils down to how many useless catalog entries you\nwant to make.\n\nIf we implement the link between the creating role and the created\nrole as role ownership, then we are surely just going to add a\nrolowner column to pg_authid, and when the role is owned by nobody, I\nthink we should always just store a valid OID in it, rather than\nsometimes storing 0. It just seems simpler. Any time we would store 0,\nstore the bootstrap superuser's pg_authid.oid value instead. That way\nthe OID is always valid, which probably lets us get by with fewer\nspecial cases in the code.\n\nIf we implement the link between the creating role and the created\nrole as an automatically-granted WITH ADMIN OPTION, then we could\nchoose to put (CREATOR_OID, CREATED_OID, whatever, TRUE) into\npg_auth_members for the creating superuser or, indeed, every superuser\nin the system. Or we can leave it out. The result will be exactly the\nsame. Here, I would favor leaving it out, because extra catalog\nentries that don't do anything are usually a thing that we do not\nwant. See a49d081235997c67e8aab7a523b17e8d1cb93184, for example.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:13:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If we implement the link between the creating role and the created\n> role as role ownership, then we are surely just going to add a\n> rolowner column to pg_authid, and when the role is owned by nobody, I\n> think we should always just store a valid OID in it, rather than\n> sometimes storing 0. It just seems simpler. 
Any time we would store 0,\n> store the bootstrap superuser's pg_authid.oid value instead. That way\n> the OID is always valid, which probably lets us get by with fewer\n> special cases in the code.\n\n+1.\n\nNote that either case would also involve making entries in pg_shdepend;\nalthough for the case of roles owned by/granted to the bootstrap\nsuperuser, we could omit those on the usual grounds that we don't need\nto record dependencies on pinned objects.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:34:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:12 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> This issue of how much backwards compatibility breakage we're willing to tolerate is just as important as questions about how we would want roles to work in a green-field development project. The sense I got a year ago, on this list, was that changing CREATEROLE was acceptable, but changing other parts of the system, such as how ADMIN OPTION works, would go too far.\n>\n> Role ownership did not yet exist, and that was a big motivation in introducing that concept, because you couldn't credibly say it broke other existing features. It introduces the new notion that when a superuser creates a role, the superuser owns it, which is identical to how things implicitly work today; and when a CREATEROLE non-superuser creates a role, that role owns the new role, which is different from how it works today, arguably breaking CREATEROLE's prior behavior. *But it doesn't break anything else*.\n>\n> If we're going to change how ADMIN OPTION works, or how role membership works, or how inherit/noinherit works, let's first be clear that we are willing to accept whatever backwards incompatibility that entails. This is not a green-field development project. 
The constant spinning around with regard to how much compatibility we need to preserve is giving me vertigo.\n\nI mean, I agree that the backward compatibility ramifications of every\nidea need to be considered, but I agree even more that the amount of\nspinning around here is pretty insane. My feeling is that neither role\nowners nor tenants introduce any real concerns about\nbackward-compatibility or, for that matter, SQL standards compliance,\nnotwithstanding Stephen's argument to the contrary. Every vendor\nextends the standard with their own stuff, and we've done that as\nwell, and we can do it in more places.\n\nOn the other hand, changing ADMIN OPTION does have compatibility and\nspec-compliance ramifications. I think Stephen is arguing that we can\nsolve this problem while coming closer to the spec, and I think we\nusually consider getting closer to the spec to be a sufficient reason\nfor breaking backward compatibility (cf. standard_conforming_strings).\nBut I don't know whether he is correct when he argues that the spec\nmakes admin option on a role sufficient to drop the role. I've never\nhad any luck understanding what the SQL specification is saying about\nany topic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:36:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > If we implement the link between the creating role and the created\n> > role as role ownership, then we are surely just going to add a\n> > rolowner column to pg_authid, and when the role is owned by nobody, I\n> > think we should always just store a valid OID in it, rather than\n> > sometimes storing 0. It just seems simpler. Any time we would store 0,\n> > store the bootstrap superuser's pg_authid.oid value instead. 
That way\n> > the OID is always valid, which probably lets us get by with fewer\n> > special cases in the code.\n\nWe haven't got any particularly special cases in the code today for what\nhappens if we run up the role hierarchy to a point that it ends and so\nI'm not sure why adding in a whole new concept around role ownership,\nwhich doesn't exist in the spec, would somehow leave us with fewer such\nspecial cases.\n\n> +1.\n> \n> Note that either case would also involve making entries in pg_shdepend;\n> although for the case of roles owned by/granted to the bootstrap\n> superuser, we could omit those on the usual grounds that we don't need\n> to record dependencies on pinned objects.\n\nThat we aren't discussing the issues with the current GRANT ... WITH\nADMIN OPTION and how we deviate from what the spec calls for when it\ncomes to DROP ROLE, which seems to be the largest thing that's\n'solved' with this ownership concept, is concerning to me.\n\nIf we go down the route of adding role ownership, are we going to\ndocument that we explicitly go against the SQL standard when it comes to\nhow DROP ROLE works? Or are we going to fix DROP ROLE? I'd much prefer\nthe latter, but doing so then largely negates the point of this role\nownership concept. I don't see how it makes sense to do both.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 11:41:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Mar 11, 2022 at 11:12 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > This issue of how much backwards compatibility breakage we're willing to tolerate is just as important as questions about how we would want roles to work in a green-field development project. 
The sense I got a year ago, on this list, was that changing CREATEROLE was acceptable, but changing other parts of the system, such as how ADMIN OPTION works, would go too far.\n\nThat we deviate as far as we do when it comes to the SQL spec is\nsomething that I don't feel like I had a good handle on when discussing\nthis previously (that the spec doesn't talk about 'admin option' really\nbut rather 'grantable authorization identifiers' or whatever it is\ndoesn't help... but still, that's on me, sorry about that).\n\n> > Role ownership did not yet exist, and that was a big motivation in introducing that concept, because you couldn't credibly say it broke other existing features. It introduces the new notion that when a superuser creates a role, the superuser owns it, which is identical to how things implicitly work today; and when a CREATEROLE non-superuser creates a role, that role owns the new role, which is different from how it works today, arguably breaking CREATEROLE's prior behavior. *But it doesn't break anything else*.\n> >\n> > If we're going to change how ADMIN OPTION works, or how role membership works, or how inherit/noinherit works, let's first be clear that we are willing to accept whatever backwards incompatibility that entails. This is not a green-field development project. The constant spinning around with regard to how much compatibility we need to preserve is giving me vertigo.\n\nI agree that it would have an impact on backwards compatibility to\nchange how WITH ADMIN works- but it would also get us more in line with\nwhat the SQL standard says for how WITH ADMIN is supposed to work and\nthat seems worth the change to me.\n\n> On the other hand, changing ADMIN OPTION does have compatibility and\n> spec-compliance ramifications. 
I think Stephen is arguing that we can\n> solve this problem while coming closer to the spec, and I think we\n> usually consider getting closer to the spec to be a sufficient reason\n> for breaking backward compatibility (cf. standard_conforming_strings).\n\nIndeed.\n\n> But I don't know whether he is correct when he argues that the spec\n> makes admin option on a role sufficient to drop the role. I've never\n> had any luck understanding what the SQL specification is saying about\n> any topic.\n\nI'm happy to point you to what the spec says and to discuss it further\nif that would be helpful, or to get other folks to comment on it. I\nagree that it's definitely hard to grok at times. In this particular\ncase what I'm looking at is, under DROP ROLE / Access Rules, there's\nonly one sentence:\n\nThere shall exist at least one grantable role authorization descriptor\nwhose role name is R and whose grantee is an enabled authorization\nidentifier.\n\nA bit of decoding: 'grantable role authorization descriptor' is a GRANT\nof a role WITH ADMIN OPTION. 
The role name 'R' is the role specified.\nThe 'grantee' is who that role R was GRANT'd to, and 'enabled\nauthorization identifier' is basically \"has_privs_of_role()\" (note that\nyou can in the spec have roles that you're a member of but which are\n*not* currently enabled).\n\nHopefully that helps.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 11:48:08 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Note that either case would also involve making entries in pg_shdepend;\n> although for the case of roles owned by/granted to the bootstrap\n> superuser, we could omit those on the usual grounds that we don't need\n> to record dependencies on pinned objects.\n\nThat makes sense to me, but it still doesn't solve the problem of\nagreeing on role ownership vs. WITH ADMIN OPTION vs. something else.\n\nI find it ironic (and frustrating) that Mark implemented what I think\nis basically what you're arguing for, it got stuck because Stephen\ndidn't like it, we then said OK so let's try to find out what Stephen\nwould like, only to have you show up and say that it's right the way\nhe already had it. I'm not saying that you're wrong, or for that\nmatter that he's wrong. I'm just saying that if both of you are\nabsolutely bent on having it the way you want it, either one of you is\ngoing to be sad, or we're not going to make any progress.\n\nNever mind the fact that neither of you seem interested in even giving\na hearing to my preferred way of doing it. 
:-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:51:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 11, 2022, at 8:48 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I agree that it would have an impact on backwards compatibility to\n> change how WITH ADMIN works- but it would also get us more in line with\n> what the SQL standard says for how WITH ADMIN is supposed to work and\n> that seems worth the change to me.\n\nI'm fine with giving up some backwards compatibility to get some SQL standard compatibility, as long as we're clear that is what we're doing. What you say about the SQL spec isn't great, though, because too much power is vested in \"ADMIN\". I see \"ADMIN\" as at least three separate privileges together. Maybe it would be spec compliant to implement \"ADMIN\" as a synonym for a set of separate privileges? \n\n> On Mar 11, 2022, at 8:41 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> That we aren't discussing the issues with the current GRANT ... WITH\n> ADMIN OPTION and how we deviate from what the spec calls for when it\n> comes to DROP ROLE, which seems to be the largest thing that's\n> 'solved' with this ownership concept, is concerning to me.\n\nSure, let's discuss that a bit more. Here is my best interpretation of your post about the spec, when applied to postgres with an eye towards not doing any more damage than necessary:\n\n> On Mar 10, 2022, at 11:58 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> let's look at what the spec says:\n> \n> CREATE ROLE\n> - Who is allowed to run CREATE ROLE is implementation-defined\n\nThis should be anyone with membership in pg_create_role. 
\n\n> - After creation, this is effectively run:\n> GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n\nThis should internally be implemented as three separate privileges, one which means you can grant the role, another which means you can drop the role, and a third that means you're a member of the role. That way, they can be independently granted and revoked. We could make \"WITH ADMIN\" a short-hand for \"WITH G, D, M\" where G, D, and M are whatever we name the independent privileges Grant, Drop, and Member-of.\n\nSplitting G and D helps with backwards compatibility, because it gives people who want the traditional postgres \"admin\" a way to get there, by granting \"G+M\". Splitting M from G and D makes it simpler to implement the \"bot\" idea, since the bot shouldn't have M. But it does raise a question about always granting G+D+M to the creator, since the bot is the creator and we don't want the bot to have M. This isn't a problem I've invented from thin air, mind you, as G+D+M is just the definition of ADMIN per the SQL spec, if I've understood you correctly. So we need to think a bit more about the pg_create_role built-in role and whether that needs to be further refined to distinguish those who can get membership in roles they create vs. those who cannot. This line of reasoning takes me in the direction of what I think you were calling #5 upthread, but you'd have to elaborate on that, and how it interacts with the spec, for us to have a useful conversation about it.\n\n> DROP ROLE\n> - Any user who has been GRANT'd a role with ADMIN option is able to\n> DROP that role.\n\nChange this to \"Any role who has D on the role\". That's spec compliant, because anyone granted ADMIN necessarily has D.\n\n> GRANT ROLE\n> - No cycles allowed\n> - A role must have ADMIN rights on the role to be able to GRANT it to\n> another role.\n\nChange this to \"Any role who has G on the role\". 
That's spec compliant, because anyone granted ADMIN necessarily has G.\n\nWe should also fix the CREATE ROLE command to require the grantor have G on a role in order to give it to the new role as part of the command. Changing the CREATEROLE, CREATEDB, REPLICATION, and BYPASSRLS attributes into pg_create_role, pg_create_db, pg_replication, and pg_bypassrls, the creator could only give them to the created role if the creator has G on the roles. If we do this, we could keep the historical privilege bits and their syntax support for backward compatibility, or we could rip them out, but the decision between those two options is independent of the rest of the design.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Mar 2022 09:31:58 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\nOn Fri, Mar 11, 2022 at 12:32 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Mar 11, 2022, at 8:48 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > I agree that it would have an impact on backwards compatibility to\n> > change how WITH ADMIN works- but it would also get us more in line with\n> > what the SQL standard says for how WITH ADMIN is supposed to work and\n> > that seems worth the change to me.\n>\n> I'm fine with giving up some backwards compatibility to get some SQL\n> standard compatibility, as long as we're clear that is what we're doing.\n> What you say about the SQL spec isn't great, though, because too much power\n> is vested in \"ADMIN\". I see \"ADMIN\" as at least three separate privileges\n> together. Maybe it would be spec compliant to implement \"ADMIN\" as a\n> synonym for a set of separate privileges?\n\n\nI do think that’s reasonable … and believe I suggested it about 3 messages\nago in this thread. ;) (step #3 I think it was? 
Or maybe 4).\n\n> On Mar 11, 2022, at 8:41 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > That we aren't discussing the issues with the current GRANT ... WITH\n> > ADMIN OPTION and how we deviate from what the spec calls for when it\n> > comes to DROP ROLE, which seems to be the largest thing that's\n> > 'solved' with this ownership concept, is concerning to me.\n>\n> Sure, let's discuss that a bit more. Here is my best interpretation of\n> your post about the spec, when applied to postgres with an eye towards not\n> doing any more damage than necessary:\n>\n> > On Mar 10, 2022, at 11:58 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > let's look at what the spec says:\n> >\n> > CREATE ROLE\n> > - Who is allowed to run CREATE ROLE is implementation-defined\n>\n> This should be anyone with membership in pg_create_role.\n\n\nThat could be the case if we wished to go that route. I’d think in such\ncase we’d then also remove CREATEROLE as otherwise the documentation feels\nlike it’d be quite confusing.\n\n> - After creation, this is effictively run:\n> > GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n>\n> This should internally be implemented as three separate privileges, one\n> which means you can grant the role, another which means you can drop the\n> role, and a third that means you're a member of the role. That way, they\n> can be independently granted and revoked. We could make \"WITH ADMIN\" a\n> short-hand for \"WITH G, D, M\" where G, D, and M are whatever we name the\n> independent privileges Grant, Drop, and Member-of.\n\n\nI mean, sure, we can get there, and possibly add more like if you’re\nallowed to change or reset that role’s password and other things, but I\ndon’t see that this piece is required as part of the very first change in\nthis area. 
Further, WITH ADMIN already gives grant and member today, so\nyou’re saying the only thing this change does that makes “WITH ADMIN” too\npowerful is adding DROP to it, yet that’s explicitly what the spec calls\nfor. In short, I disagree that moving the DROP ROLE right from CREATEROLE\nroles having that across the entire system (excluding superusers) to WITH\nADMIN where the role who has that right can:\n\nA) already become that role and drop all their objects\nB) already GRANT that role to some other role\n\nis a big issue.\n\nSplitting G and D helps with backwards compatibility, because it gives\n> people who want the traditional postgres \"admin\" a way to get there, by\n> granting \"G+M\". Splitting M from G and D makes it simpler to implement the\n> \"bot\" idea, since the bot shouldn't have M. But it does raise a question\n> about always granting G+D+M to the creator, since the bot is the creator\n> and we don't want the bot to have M. This isn't a problem I've invented\n> from thin air, mind you, as G+D+M is just the definition of ADMIN per the\n> SQL spec, if I've understood you correctly. So we need to think a bit more\n> about the pg_create_role built-in role and whether that needs to be further\n> refined to distinguish those who can get membership in roles they create\n> vs. those who cannot. This line of reasoning takes me in the direction of\n> what I think you were calling #5 upthread, but you'd have to elaborate on\n> that, and how it interacts with the spec, for us to have a useful\n> conversation about it.\n\n\nAll that said, as I said before, I’m in favor of splitting things up and so\nif you want to do that as part of this initial work, sure. Idk that it’s\nabsolutely required as part of this but I’m not going to complain if it’s\nincluded either. 
I agree that would allow folks to get something similar\nto what they could get today if they want.\n\nI agree that the split up helps with the “bot” idea, as we could at least\nthen create a security definer function that the bot runs and which creates\nroles that the bot then has G for but not M or D. Even better would be to\nalso provide a way for the “bot” to be able to create roles without the\nneed for a security definer function where it doesn’t automatically get all\nthree, and that was indeed what I was thinking about with the template\nidea. The general thought there being that an admin could define a template\nalong the lines of:\n\nCREATE TEMPLATE employee_template\nCREATOR WITH ADMIN, NOMEMBERSHIP\nROLE IN employee;\n\nAnd then provide a way for the bot to be given the right to use this\ntemplate. Thinking on it a bit further, I’m guessing that we wouldn’t\nactually give the bot pg_create_role in this case and instead would leave\nthat to mean “able to create arbitrary roles and have all privs in that”\nsimilar to what we are talking about where ADMIN implies the full set of\nrights.\n\n> DROP ROLE\n> > - Any user who has been GRANT'd a role with ADMIN option is able to\n> > DROP that role.\n>\n> Change this to \"Any role who has D on the role\". That's spec compliant,\n> because anyone granted ADMIN necessarily has D.\n\n\nYeah.\n\n> GRANT ROLE\n> > - No cycles allowed\n> > - A role must have ADMIN rights on the role to be able to GRANT it to\n> > another role.\n>\n> Change this to \"Any role who has G on the role\". 
That's spec compliant,\n> because anyone grant ADMIN necessarily has G.\n\n\nSure.\n\nWe should also fix the CREATE ROLE command to require the grantor have G on\n> a role in order to give it to the new role as part of the command.\n\n\n… or just get rid of it, which seems saner to me.\n\nChanging the CREATEROLE, CREATEDB, REPLICATION, and BYPASSRLS attributes\n> into pg_create_role, pg_create_db, pg_replication, and pg_bypassrls, the\n> creator could only give them to the created role if the creator has G on\n> the roles. If we do this, we could keep the historical privilege bits and\n> their syntax support for backward compatibility, or we could rip them out,\n> but the decision between those two options is independent of the rest of\n> the design.\n\n\nYeah, turning those into predefined roles which an admin can then decide to\ngive out (and to allow ADMIN on them to be given to folks who could then\npass that along if they wanted) is another thought I’ve had though one\nthat’s somewhat independent of the rest of this, but also shows how we\ncould make those be things that a superuser could choose to give out, or\nnot, to some set of roles who would then be able to create roles of their\nown with those privileges.\n\nOn the whole, using predefined roles as the source of certain capabilities,\nand the options discussed here which would allow an admin to grant those\ncapabilities out with or without the ability to grant them further, plus\nthe splitting out of the individual role-relationship rights (membership,\ngrantable, drop, etc) strikes me as being quite flexible and extendable and\ngenerally in the direction that we’ve been trending and which seems to be\nreasonably successful so far.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Fri, Mar 11, 2022 at 12:32 Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> On Mar 11, 2022, at 8:48 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I agree that it would have an impact on backwards compatibility to\n> 
change how WITH ADMIN works- but it would also get us more in line with\n> what the SQL standard says for how WITH ADMIN is supposed to work and\n> that seems worth the change to me.\n\nI'm fine with giving up some backwards compatibility to get some SQL standard compatibility, as long as we're clear that is what we're doing.  What you say about the SQL spec isn't great, though, because too much power is vested in \"ADMIN\".  I see \"ADMIN\" as at least three separate privileges together.  Maybe it would be spec compliant to implement \"ADMIN\" as a synonym for a set of separate privileges?I do think that’s reasonable … and believe I suggested it about 3 messages ago in this thread. ;)  (step #3 I think it was?  Or maybe 4).\n> On Mar 11, 2022, at 8:41 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> That we aren't discussing the issues with the current GRANT ... WITH\n> ADMIN OPTION and how we deviate from what the spec calls for when it\n> comes to DROP ROLE, which seems to be the largest thing that's\n> 'solved' with this ownership concept, is concerning to me.\n\nSure, let's discuss that a bit more.  Here is my best interpretation of your post about the spec, when applied to postgres with an eye towards not doing any more damage than necessary:\n\n> On Mar 10, 2022, at 11:58 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> let's look at what the spec says:\n> \n> CREATE ROLE\n>  - Who is allowed to run CREATE ROLE is implementation-defined\n\nThis should be anyone with membership in pg_create_role.That could be the case if we wished to go that route. 
I’d think in such case we’d then also remove CREATEROLE as otherwise the documentation feels like it’d be quite confusing.\n>  - After creation, this is effictively run:\n>    GRANT new_role TO creator_role WITH ADMIN, GRANTOR \"_SYSTEM\"\n\nThis should internally be implemented as three separate privileges, one which means you can grant the role, another which means you can drop the role, and a third that means you're a member of the role.  That way, they can be independently granted and revoked.  We could make \"WITH ADMIN\" a short-hand for \"WITH G, D, M\" where G, D, and M are whatever we name the independent privileges Grant, Drop, and Member-of.I mean, sure, we can get there, and possibly add more like if you’re allowed to change or reset that role’s password and other things, but I don’t see that this piece is required as part of the very first change in this area.  Further, WITH ADMIN already gives grant and member today, so you’re saying the only thing this change does that makes “WITH ADMIN” too powerful is adding DROP to it, yet that’s explicitly what the spec calls for.  In short, I disagree that moving the DROP ROLE right from CREATEROLE roles having that across the entire system (excluding superusers) to WITH ADMIN where the role who has that right can:A) already become that role and drop all their objectsB) already GRANT that role to some other roleis a big issue.\nSplitting G and D helps with backwards compatibility, because it gives people who want the traditional postgres \"admin\" a way to get there, by granting \"G+M\".  Splitting M from G and D makes it simpler to implement the \"bot\" idea, since the bot shouldn't have M.  But it does raise a question about always granting G+D+M to the creator, since the bot is the creator and we don't want the bot to have M.  This isn't a problem I've invented from thin air, mind you, as G+D+M is just the definition of ADMIN per the SQL spec, if I've understood you correctly.  
So we need to think a bit more about the pg_create_role built-in role and whether that needs to be further refined to distinguish those who can get membership in roles they create vs. those who cannot.  This line of reasoning takes me in the direction of what I think you were calling #5 upthread, but you'd have to elaborate on that, and how it interacts with the spec, for us to have a useful conversation about it.All that said, as I said before, I’m in favor of splitting things up and so if you want to do that as part of this initial work, sure. Idk that it’s absolutely required as part of this but I’m not going to complain if it’s included either.  I agree that would allow folks to get something similar to what they could get today if they want.I agree that the split up helps with the “bot” idea, as we could at least then create a security definer function that the bot runs and which creates roles that the bot then has G for but not M or D.  Even better would be to also provide a way for the “bot” to be able to create roles without the need for a security definer function where it doesn’t automatically get all three, and that was indeed what I was thinking about with the template idea. The general thought there being that an admin could define a template along the lines of:CREATE TEMPLATE employee_templateCREATOR WITH ADMIN, NOMEMBERSHIPROLE IN employee;And then provide a way for the bot to be given the right to use this template.  Thinking on it a bit further, I’m guessing that we wouldn’t actually give the bot pg_create_role in this case and instead would leave that to mean “able to create arbitrary roles and have all privs in that” similar to what we are talking about where ADMIN implies the full set of rights.\n> DROP ROLE\n>  - Any user who has been GRANT'd a role with ADMIN option is able to\n>    DROP that role.\n\nChange this to \"Any role who has D on the role\".  
That's spec compliant, because anyone granted ADMIN necessarily has D.\n\nYeah.\n\n> GRANT ROLE\n>  - No cycles allowed\n>  - A role must have ADMIN rights on the role to be able to GRANT it to\n>    another role.\n\nChange this to \"Any role who has G on the role\".  That's spec compliant, because anyone granted ADMIN necessarily has G.\n\nSure.\n\nWe should also fix the CREATE ROLE command to require the grantor have G on a role in order to give it to the new role as part of the command. … or just get rid of it, which seems saner to me.\n\nChanging the CREATEROLE, CREATEDB, REPLICATION, and BYPASSRLS attributes into pg_create_role, pg_create_db, pg_replication, and pg_bypassrls, the creator could only give them to the created role if the creator has G on the roles.  If we do this, we could keep the historical privilege bits and their syntax support for backward compatibility, or we could rip them out, but the decision between those two options is independent of the rest of the design.\n\nYeah, turning those into predefined roles which an admin can then decide to give out (and to allow ADMIN on them to be given to folks who could then pass that along if they wanted) is another thought I’ve had though one that’s somewhat independent of the rest of this, but also shows how we could make those be things that a superuser could choose to give out, or not, to some set of roles who would then be able to create roles of their own with those privileges.\n\nOn the whole, using predefined roles as the source of certain capabilities, and the options discussed here which would allow an admin to grant those capabilities out with or without the ability to grant them further, plus the splitting out of the individual role-relationship rights (membership, grantable, drop, etc) strikes me as being quite flexible and extendable and generally in the direction that we’ve been trending and which seems to be reasonably successful so far.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 17:46:59 -0500", "msg_from": 
"Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 11, 2022, at 2:46 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I do think that’s reasonable … and believe I suggested it about 3 messages ago in this thread. ;) (step #3 I think it was? Or maybe 4).\n\nYes, and you mentioned it to me off-list.\n\nI'm soliciting a more concrete specification for what you are proposing. To me, that means understanding how the SQL spec behavior that you champion translates into specific changes. You specified some of this in steps #1 through #5, but I'd like a clearer indication of how many of those (#1 alone, both #1 and #2, or what?) constitute a competing idea to the idea of role ownership, and greater detail about how each of those steps translate into specific behavior changes in postgres. Your initial five-step email seems to be claiming that #1 by itself is competitive, but to me it seems at least #1 and #2 would be required.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Mar 2022 16:03:06 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\nOn Fri, Mar 11, 2022 at 19:03 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Mar 11, 2022, at 2:46 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > I do think that’s reasonable … and believe I suggested it about 3\n> messages ago in this thread. ;) (step #3 I think it was? Or maybe 4).\n>\n> Yes, and you mentioned it to me off-list.\n\n\nIndeed.\n\nI'm soliciting a more concrete specification for what you are proposing.\n> To me, that means understanding how the SQL spec behavior that you champion\n> translates into specific changes. 
You specified some of this in steps #1\n> through #5, but I'd like a clearer indication of how many of those (#1\n> alone, both #1 and #2, or what?) constitute a competing idea to the idea of\n> role ownership, and greater detail about how each of those steps translate\n> into specific behavior changes in postgres. Your initial five-step email\n> seems to be claiming that #1 by itself is competitive, but to me it seems\n> at least #1 and #2 would be required.\n\n\nFirst … I outlined a fair bit of further description in the message you\njust responded to but neglected to include in your response, which strikes\nme as odd that you’re now asking for further explanation. When it comes to\ncompleting the idea of role ownership- I didn’t come up with that idea nor\nchampion it and therefore I’m not really sure how many of the steps are\nrequired to fully support that concept..? For my part, I would think that\nthose steps necessary to satisfy the spec would get us pretty darn close to\nwhat true folks advocating for role ownership are asking for, but that\ndoesn’t include the superuser-only alter role attributes piece. Is that\nincluded in role ownership? I wouldn’t think so, but some might argue\notherwise, and I don’t know that it is actually useful to divert into a\ndiscussion about what is or isn’t in that.\n\nIf we agree that the role attribute bits are independent then I think I\nagree that we need 1 and 2 to get the capabilities that the folks asking\nfor role ownership want, as 2 is where we make sure that one admin of a\nrole can’t revoke another admin’s rights over that role. Perhaps 2 isn’t\nstrictly necessary in a carefully managed environment where no one else is\ngiven admin rights over the mini-superuser roles, but I’d rather not have\nfolks depending on that. I’d still push back though and ask those\nadvocating for this if it meets what they’re asking for. 
I got the\nimpression that it did but maybe I misunderstood.\n\nIn terms of exactly how things would work with these changes… I thought I\nexplained it pretty clearly, so it’s kind of hard to answer that further\nwithout a specific question to answer.  Did you have something specific in\nmind?  Perhaps I could answer a specific question and provide more clarity\nthat way.\n\nThanks,\n\nStephen", "msg_date": "Fri, 11 Mar 2022 19:56:19 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 11, 2022, at 4:56 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> First … I outlined a fair bit of further description in the message you just responded to but neglected to include in your response, which strikes me as odd that you’re now asking for further explanation.\n\n\n\n> When it comes to completing the idea of role ownership- I didn’t come up with that idea nor champion it\n\nSorry, not \"completing\", but \"competing\". It seems we're discussing different ways to fix how roles and CREATEROLE work, and we have several ideas competing against each other. (This differs from *people* competing against each other, as I don't necessarily like the patch I wrote better than I like your idea.)\n\n> and therefore I’m not really sure how many of the steps are required to fully support that concept..?\n\nThere are problems that the ownership concepts solve. I strongly suspect that your proposal could also solve those same problems, and just trying to identify the specific portions of your proposal necessary to do so.\n\n> For my part, I would think that those steps necessary to satisfy the spec would get us pretty darn close to what true folks advocating for role ownership are asking for\n\nI have little idea what \"true folks\" means in this context. As for \"advocating for role ownership\", I'm not in that group. Whether role ownership or something else, I just want some solution to a set of problems, mostly to do with needing superuser to do role management tasks.\n\n> , but that doesn’t include the superuser-only alter role attributes piece. Is that included in role ownership? 
I wouldn’t think so, but some might argue otherwise, and I don’t know that it is actually useful to divert into a discussion about what is or isn’t in that.\n\nIntroducing the idea of role ownership doesn't fix that. But a patch which introduces role ownership is useless unless CREATEROLE is also fixed. There isn't any point having non-superusers create and own roles if, to do so, they need a privilege which can break into superuser. But that argument is no different with a patch along the lines of what you are proposing. CREATEROLE needs fixing either way.\n\n> If we agree that the role attribute bits are independent\n\nYes.\n\n> then I think I agree that we need 1 and 2 to get the capabilities that the folks asking for role ownership want\n\nYes.\n\n> as 2 is where we make sure that one admin of a role can’t revoke another admin’s rights over that role.\n\nExactly, so #2 is part of the competing proposal. (I get the sense you might not see these as competing proposals, but I find that framing useful for deciding which approach to pursue.)\n\n> Perhaps 2 isn’t strictly necessary in a carefully managed environment where no one else is given admin rights over the mini-superuser roles, but I’d rather not have folks depending on that.\n\nI think it is necessary, and for the reason you say.\n\n> I’d still push back though and ask those advocating for this if it meets what they’re asking for. I got the impression that it did but maybe I misunderstood.\n> \n> In terms of exactly how things would work with these changes… I thought I explained it pretty clearly, so it’s kind of hard to answer that further without a specific question to answer. Did you have something specific in mind? Perhaps I could answer a specific question and provide more clarity that way.\n\nYour emails contained a lot of \"we could do this or that depending on what people want, and maybe this other thing, but that isn't really necessary, and ....\" which left me unclear on the proposal. 
I don't mean to disparage your communication style; it's just that when trying to distill technical details, high level conversation can be hard to grok.\n\nI have the sense that you aren't going to submit a patch, so I wanted this thread to contain enough detail for somebody else to do so. Thanks.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Mar 2022 18:08:36 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Mar 11, 2022, at 4:56 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > First … I outlined a fair bit of further description in the message you just responded to but neglected to include in your response, which strikes me as odd that you’re now asking for further explanation.\n> \n> > When it comes to completing the idea of role ownership- I didn’t come up with that idea nor champion it\n> \n> Sorry, not \"completing\", but \"competing\". It seems we're discussing different ways to fix how roles and CREATEROLE work, and we have several ideas competing against each other. (This differs from *people* competing against each other, as I don't necessarily like the patch I wrote better than I like your idea.)\n> \n> > and therefore I’m not really sure how many of the steps are required to fully support that concept..?\n> \n> There are problems that the ownership concepts solve. I strongly suspect that your proposal could also solve those same problems, and just trying to identify the specific portions of your proposal necessary to do so.\n\nI'm happy to help try to identify those, but it seems we'd need to have\nthe exact problems that ownership solves defined first. 
Robert defined\nwhat he's looking for as:\n\nRobert Haas <robertmhaas@gmail.com> wrote:\n> Probably easier to just say it again: I want to have users that can\n> create roles and then have superuser-like powers with respect to those\n> roles. They can freely exercise the privileges of those roles, and\n> they can do all the things that a superuser can do but only with\n> respect to those roles. They cannot break out to the OS. I think it's\n> pretty similar to what you are describing, with a couple of possible\n> exceptions. For example, would you imagine that being an admin of a\n> login role would let you change that user's password? Because that\n> would be desirable behavior from where I sit.\n\nWhich sure sounds like it's just about covered in step #1 of what I\noutlined before, except that the above description implies that one\ncan't \"get away\" from the user who created their role, in which case we\ndo need step #2 also.\n\n> > For my part, I would think that those steps necessary to satisfy the spec would get us pretty darn close to what true folks advocating for role ownership are asking for\n> \n> I have little idea what \"true folks\" means in this context. As for \"advocating for role ownership\", I'm not in that group. Whether role ownership or something else, I just want some solution to a set of problems, mostly to do with needing superuser to do role management tasks.\n\n... I'm not entirely sure what I meant there either, my hunch is that\n'true' was actually just a leftover word from some other framing of that\nsentence and I had meant to remove it. Apologies for that. What I was\ntrying to get at there is that steps #1 & #2 are the ones that I view as\ngetting us closer to spec compliance and that doing so would get us to\nwhere Robert's ask above would be answered.\n\n> > , but that doesn’t include the superuser-only alter role attributes piece. Is that included in role ownership? 
I wouldn’t think so, but some might argue otherwise, and I don’t know that it is actually useful to divert into a discussion about what is or isn’t in that.\n> \n> Introducing the idea of role ownership doesn't fix that. But a patch which introduces role ownership is useless unless CREATEROLE is also fixed. There isn't any point having non-superusers create and own roles if, to do so, they need a privilege which can break into superuser. But that argument is no different with a patch along the lines of what you are proposing. CREATEROLE needs fixing either way.\n\nThere's a few ways to have the 'CREATEROLE' role attribute be fixed-\n\n- Remove it entirely (replacing with pg_create_role or such)\n- Remove its ability to GRANT out rights that the role running it\n doesn't have\n- Make it superfluous (leave it as-is, but add in pg_create_role which\n allows a role to create another role but doesn't include the magic\n GRANT whatever-role TO whatever-role that CREATEROLE has)\n\nI agree that we need to do something here to allow roles to create other\nroles while not having or being able to trivially get superuser\nthemselves.\n\n> > If we agree that the role attribute bits are independent\n> \n> Yes.\n\nGreat.\n\n> > then I think I agree that we need 1 and 2 to get the capabilities that the folks asking for role ownership want\n> \n> Yes.\n\nOk.\n\n> > as 2 is where we make sure that one admin of a role can’t revoke another admin’s rights over that role.\n> \n> Exactly, so #2 is part of the competing proposal. (I get the sense you might not see these as competing proposals, but I find that framing useful for deciding which approach to pursue.)\n\n... 
and is also part of getting us closer to the spec.\n\n> > Perhaps 2 isn’t strictly necessary in a carefully managed environment where no one else is given admin rights over the mini-superuser roles, but I’d rather not have folks depending on that.\n> \n> I think it is necessary, and for the reason you say.\n\nGreat.\n\n> > I’d still push back though and ask those advocating for this if it meets what they’re asking for. I got the impression that it did but maybe I misunderstood.\n> > \n> > In terms of exactly how things would work with these changes… I thought I explained it pretty clearly, so it’s kind of hard to answer that further without a specific question to answer. Did you have something specific in mind? Perhaps I could answer a specific question and provide more clarity that way.\n> \n> Your emails contained a lot of \"we could do this or that depending on what people want, and maybe this other thing, but that isn't really necessary, and ....\" which left me unclear on the proposal. I don't mean to disparage your communication style; it's just that when trying to distill technical details, high level conversation can be hard to grok.\n\nFeel free to quote me explicitly in such places that you're looking for\nclarification and I'd be happy to drill down on those.\n\n> I have the sense that you aren't going to submit a patch, so I wanted this thread to contain enough detail for somebody else to do so. Thanks.\n\nSo ... do you feel like that's now the case? Or were you looking for\nmore?\n\nThanks,\n\nStephen", "msg_date": "Mon, 14 Mar 2022 10:38:10 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "\n\n> On Mar 14, 2022, at 7:38 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> So ... do you feel like that's now the case? Or were you looking for\n> more?\n\nI don't have any more questions at the moment. 
Thanks!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 14 Mar 2022 08:45:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Mar 11, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Note that either case would also involve making entries in pg_shdepend;\n> > although for the case of roles owned by/granted to the bootstrap\n> > superuser, we could omit those on the usual grounds that we don't need\n> > to record dependencies on pinned objects.\n>\n> That makes sense to me, but it still doesn't solve the problem of\n> agreeing on role ownership vs. WITH ADMIN OPTION vs. something else.\n\nNotwithstanding the lack of agreement on that point, I believe that\nwhat we should do for v15 is remove the session user\nself-administration exception. We have pretty much established that it\nwas originally introduced in error. It later was found to be a\nsecurity vulnerability, and that resulted in the exception being\nnarrowed without removing it altogether. While there are differences\nof opinion on what the larger plan here ought to be, nobody's proposal\ninvolves retaining that exception. Neither has anyone offered a\nplausible use case for the current behavior, so there's no reason to\nthink that removing it would break anything.\n\nHowever, it might. And if it does, I think it would be best if\nremoving that exception were the *only* change in this area made by\nthat release. If for v16 or v17 or v23 we implement Plan Tom or Plan\nStephen or Plan Robert or something else, and along the way we remove\nthat self-administration exception, we're going to have a real fire\ndrill if it turns out that the self-administration exception was\nimportant for some reason we're not seeing right now. 
If, on the other\nhand, we remove that exception in v15, then if anything breaks, it'll\nbe a lot easier to deal with. Worst case scenario we just revert the\nremoval of that exception, which will be a very localized change if\nnothing else has been done that depends heavily on its having been\nremoved.\n\nSo I propose to commit something like what I posted here:\n\nhttp://postgr.es/m/CA+TgmobgeK0JraOwQVPqhSXcfBdFitXSomoebHMMMhmJ4gLonw@mail.gmail.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 12:46:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Notwithstanding the lack of agreement on that point, I believe that\n> what we should do for v15 is remove the session user\n> self-administration exception. We have pretty much established that it\n> was originally introduced in error.\n\nAgreed.\n\n> However, it might. And if it does, I think it would be best if\n> removing that exception were the *only* change in this area made by\n> that release.\n\nGood idea, especially since it's getting to be too late to consider\nanything more invasive anyway.\n\n> So I propose to commit something like what I posted here:\n> http://postgr.es/m/CA+TgmobgeK0JraOwQVPqhSXcfBdFitXSomoebHMMMhmJ4gLonw@mail.gmail.com\n\n+1, although the comments might need some more work. In particular,\nI'm not sure that this bit is well stated:\n\n+\t * A role cannot have WITH ADMIN OPTION on itself, because that would\n+\t * imply a membership loop.\n\nWe already do consider a role to be a member of itself:\n\nregression=# create role r;\nCREATE ROLE\nregression=# grant r to r;\nERROR: role \"r\" is a member of role \"r\"\nregression=# grant r to r with admin option;\nERROR: role \"r\" is a member of role \"r\"\n\nIt might be better to just say \"By policy, a role cannot have WITH ADMIN\nOPTION on itself\". 
But if you want to write a defense of that policy,\nthis isn't a very good one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:10:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Thu, Mar 24, 2022 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > However, it might. And if it does, I think it would be best if\n> > removing that exception were the *only* change in this area made by\n> > that release.\n>\n> Good idea, especially since it's getting to be too late to consider\n> anything more invasive anyway.\n\nI'd say it's definitely too late at this point.\n\n> > So I propose to commit something like what I posted here:\n> > http://postgr.es/m/CA+TgmobgeK0JraOwQVPqhSXcfBdFitXSomoebHMMMhmJ4gLonw@mail.gmail.com\n>\n> +1, although the comments might need some more work. In particular,\n> I'm not sure that this bit is well stated:\n>\n> + * A role cannot have WITH ADMIN OPTION on itself, because that would\n> + * imply a membership loop.\n>\n> We already do consider a role to be a member of itself:\n>\n> regression=# create role r;\n> CREATE ROLE\n> regression=# grant r to r;\n> ERROR: role \"r\" is a member of role \"r\"\n> regression=# grant r to r with admin option;\n> ERROR: role \"r\" is a member of role \"r\"\n>\n> It might be better to just say \"By policy, a role cannot have WITH ADMIN\n> OPTION on itself\". But if you want to write a defense of that policy,\n> this isn't a very good one.\n\nThat sentence is present in the current code, along with a bunch of\nother sentences, which the patch renders irrelevant. So I just deleted\nall of the other stuff and kept the sentence that is still relevant to\nthe revised code. 
I think your proposed replacement is an improvement,\nbut let's be careful not to get sucked into too much of a wordsmithing\nexercise in a patch that's here to make a functional change.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:26:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 24, 2022 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > However, it might. And if it does, I think it would be best if\n> > > removing that exception were the *only* change in this area made by\n> > > that release.\n> >\n> > Good idea, especially since it's getting to be too late to consider\n> > anything more invasive anyway.\n> \n> I'd say it's definitely too late at this point.\n\nAgreed.\n\n> > > So I propose to commit something like what I posted here:\n> > > http://postgr.es/m/CA+TgmobgeK0JraOwQVPqhSXcfBdFitXSomoebHMMMhmJ4gLonw@mail.gmail.com\n> >\n> > +1, although the comments might need some more work. In particular,\n> > I'm not sure that this bit is well stated:\n\nAlso +1 on this.\n\nThanks,\n\nStephen", "msg_date": "Mon, 28 Mar 2022 10:51:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "On Mon, Mar 28, 2022 at 10:51 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > So I propose to commit something like what I posted here:\n> > > > http://postgr.es/m/CA+TgmobgeK0JraOwQVPqhSXcfBdFitXSomoebHMMMhmJ4gLonw@mail.gmail.com\n> > >\n> > > +1, although the comments might need some more work. 
In particular,\n> > > I'm not sure that this bit is well stated:\n>\n> Also +1 on this.\n\nOK, done using Tom's proposed wording.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 14:31:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: role self-revocation" }, { "msg_contents": "The cfbot is testing the last patch posted to this thread which is the\nremove-self-own patch which was already committed. I gather that\nthere's still (at least one) patch under discussion.\n\nCould I suggest reposting the last version of the main patch, perhaps\nrebasing it. That way the cfbot would at least continue to test for\nconflicts.\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:46:07 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Fri, Apr 1, 2022 at 10:46 AM Greg Stark <stark@mit.edu> wrote:\n> The cfbot is testing the last patch posted to this thread which is the\n> remove-self-own patch which was already committed. I gather that\n> there's still (at least one) patch under discussion.\n>\n> Could I suggest reposting the last version of the main patch, perhaps\n> rebasing it. That way the cfbot would at least continue to test for\n> conflicts.\n\nWe should move this patch to the next CF or maybe even mark it\nreturned with feedback. 
We're not going to get anything else done here\nfor v15, and I'm not sure whether what we do beyond that will take\nthis form or not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:56:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE and role ownership hierarchies" }, { "msg_contents": "On Fri, Mar 4, 2022 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we are not tracking the grantors of role authorizations,\n> then we are doing it wrong and we ought to fix that.\n\nWe are definitely doing it wrong. It's not that we aren't doing it at\nall, but we are doing it incorrectly. If user foo executes \"GRANT foo\nTO bar GRANTED BY quux\", pg_auth_members.grantor gets the OID of role\n\"quux\". Without the \"GRANTED BY\" clause, it gets the OID of role\n\"foo\". But no dependency is created; therefore, the OID of that column\ncan point to a role that no longer exists, or potentially to a role\nthat did exist at one point, was dropped, and was later replaced by\nsome other role that happens to get the same OID. pg_dump handles this\nby dumping \"GRANTED BY whoever\" if pg_auth_members.grantor is a\nstill-extant role and omitting the clause if not. This would be a\nsecurity vulnerability if there were any logic in the backend that\nactually did anything with pg_auth_members.grantor, because if the\noriginal grantor is removed and replaced by another role with the same\nOID, a dump/restore could change the notional grantor. Since there is\nno such logic, I don't think it's insecure, but it's still really\nlame.\n\nSo let's talk about how we could fix this. In a vacuum I'd say this is\njust a feature that never got finished and we should rip the whole\nthing out. 
That is, remove pg_auth_members.grantor entirely and at\nmost keep some do-nothing syntax around for backward compatibility.\nHowever, what Tom is saying in the text quoted above is that we ought\nto have something that actually works, which is more challenging.\nApparently, the desired behavior here is for this to work like grants\non non-role objects, where executing \"GRANT SELECT ON TABLE s1 TO foo\"\nunder two different user accounts bar and baz that both have\npermissions to grant that privilege creates two independent grants\nthat can be independently revoked. To get there, we'll have to change\na good few things -- not only will we need a dependency to prevent a\ngrantor from being dropped without revoking the grant, but we need to\nchange the primary key of pg_auth_members from (roleid, member) to\n(roleid, member, grantor). Then we'll also have to change the behavior\nof the GRANT and REVOKE commands at the SQL level, and also the\nbehavior of pg_dump, which will need to dump and restore all grants.\n\nI'm open to other proposals, but my thought is that it might be\nsimplest to try to clean this up in two steps. In step one, the only\ngoal would be to make pg_auth_members.grantor reliably sane. In other\nwords, we'd add a dependency on the grantor when a role is granted to\nanother role. You could still only have one grant of role A to role B,\nbut the notional grantor C would always be a user that actually\nexists. I suspect it would be a really good idea to also patch pg_dump\nto not ever dump the grantor when working from an older release,\nbecause the information is not necessarily reliable and I fear that\npropagating it forward could lead to broken stuff or maybe even\nsecurity hazards as noted above. Then, in step two, we change things\naround to allow multiple grants of the same role to the same other\nrole, one per grantor. 
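As a sketch of that per-grantor end state at the SQL level (hypothetical at the time of writing; alice, carol, bob, and staff are made-up roles, and alice and carol are both assumed to hold ADMIN OPTION on staff):

```sql
-- With (roleid, member, grantor) as the key, the same membership can
-- exist once per grantor, mirroring ordinary ACL behavior:
SET ROLE alice;
GRANT staff TO bob;                      -- grant recorded with grantor = alice
RESET ROLE;

SET ROLE carol;
GRANT staff TO bob;                      -- a second, independent grant
RESET ROLE;

REVOKE staff FROM bob GRANTED BY alice;  -- removes only alice's grant;
                                         -- bob stays a member via carol's
```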
Now you've achieved parity between the behavior\nwe have for roles and the behavior we have for permissions on other\nkinds of SQL objects.\n\nThere may be other improvements we want to make in this area -\nprevious discussions have suggested various ideas - but it seems to me\nthat making the behavior sane and consistent with other types of\nobjects would be a good start. That way, if we decide we do want to\nchange anything else, we will be starting from a firm foundation,\nrather than building on sand.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Jun 2022 14:31:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "pg_auth_members.grantor is bunk" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 4, 2022 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we are not tracking the grantors of role authorizations,\n>> then we are doing it wrong and we ought to fix that.\n\n> So let's talk about how we could fix this. In a vacuum I'd say this is\n> just a feature that never got finished and we should rip the whole\n> thing out. That is, remove pg_auth_members.grantor entirely and at\n> most keep some do-nothing syntax around for backward compatibility.\n> However, what Tom is saying in the text quoted above is that we ought\n> to have something that actually works, which is more challenging.\n> Apparently, the desired behavior here is for this to work like grants\n> on non-role objects, where executing \"GRANT SELECT ON TABLE s1 TO foo\"\n> under two different user accounts bar and baz that both have\n> permissions to grant that privilege creates two independent grants\n> that can be independently revoked.\n\nMaybe. 
What I was pointing out is that this is SQL-standard syntax\nand there are SQL-standard semantics that it ought to be implementing.\nProbably those semantics match what you describe here, but we ought\nto dive into the spec and make sure before we spend a lot of effort.\nIt's not quite clear to me whether the spec defines any particular\nunique key (identity) for the set of role authorizations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 15:15:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Thu, Jun 2, 2022 at 3:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe. What I was pointing out is that this is SQL-standard syntax\n> and there are SQL-standard semantics that it ought to be implementing.\n> Probably those semantics match what you describe here, but we ought\n> to dive into the spec and make sure before we spend a lot of effort.\n> It's not quite clear to me whether the spec defines any particular\n> unique key (identity) for the set of role authorizations.\n\nI sort of thought http://postgr.es/m/3981966.1646429663@sss.pgh.pa.us\nconstituted a completed investigation of this sort. No?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Jun 2022 15:40:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 2, 2022 at 3:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe. 
What I was pointing out is that this is SQL-standard syntax\n>> and there are SQL-standard semantics that it ought to be implementing.\n>> Probably those semantics match what you describe here, but we ought\n>> to dive into the spec and make sure before we spend a lot of effort.\n>> It's not quite clear to me whether the spec defines any particular\n>> unique key (identity) for the set of role authorizations.\n\n> I sort of thought http://postgr.es/m/3981966.1646429663@sss.pgh.pa.us\n> constituted a completed investigation of this sort. No?\n\nI didn't think so. It's clear that the spec expects us to track the\ngrantor, but I didn't chase down what it expects us to *do* with that\ninformation, nor what it thinks the rules are for merging multiple\nauthorizations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jun 2022 15:50:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Thu, Jun 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I sort of thought http://postgr.es/m/3981966.1646429663@sss.pgh.pa.us\n> > constituted a completed investigation of this sort. No?\n>\n> I didn't think so. It's clear that the spec expects us to track the\n> grantor, but I didn't chase down what it expects us to *do* with that\n> information, nor what it thinks the rules are for merging multiple\n> authorizations.\n\nHmm, OK. Well, one problem is that I've never had any luck\ninterpreting what the spec says about anything, and I've sort of given\nup. But even if that were not so, I'm a little unclear what other\nconclusion is possible here. The spec either wants the same behavior\nthat we already have for other object types, which is what I am here\nproposing that we do, or it wants something different. If it wants\nsomething different, it probably wants that for all object types, not\njust roles. 
Since I doubt we would want the behavior for roles to be\ninconsistent with what we do for all other object types, in that case\nwe would probably either change the behavior for all other object\ntypes to something new, and then clean up the role stuff afterwards,\nor else first do what I proposed here and then later change it all at\nonce. In which case the proposal that I've made is as good a way to\nstart as any.\n\nNow, if it happens to be the case that the spec proposes a different\nbehavior for roles than for non-role objects, and if the behavior for\nroles is something other than the one we currently have for non-role\nobjects, then I'd agree that the plan I propose here needs revision. I\nsuspect that's unlikely but I can't make anything of the spec so ....\nmaybe?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Jun 2022 16:44:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Jun 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I sort of thought http://postgr.es/m/3981966.1646429663@sss.pgh.pa.us\n> > > constituted a completed investigation of this sort. No?\n> >\n> > I didn't think so. It's clear that the spec expects us to track the\n> > grantor, but I didn't chase down what it expects us to *do* with that\n> > information, nor what it thinks the rules are for merging multiple\n> > authorizations.\n> \n> Hmm, OK. Well, one problem is that I've never had any luck\n> interpreting what the spec says about anything, and I've sort of given\n> up. But even if that were not so, I'm a little unclear what other\n> conclusion is possible here. The spec either wants the same behavior\n> that we already have for other object types, which is what I am here\n> proposing that we do, or it wants something different. 
If it wants\n> something different, it probably wants that for all object types, not\n> just roles. Since I doubt we would want the behavior for roles to be\n> inconsistent with what we do for all other object types, in that case\n> we would probably either change the behavior for all other object\n> types to something new, and then clean up the role stuff afterwards,\n> or else first do what I proposed here and then later change it all at\n> once. In which case the proposal that I've made is as good a way to\n> start as any.\n> \n> Now, if it happens to be the case that the spec proposes a different\n> behavior for roles than for non-role objects, and if the behavior for\n> roles is something other than the one we currently have for non-role\n> objects, then I'd agree that the plan I propose here needs revision. I\n> suspect that's unlikely but I can't make anything of the spec so ....\n> maybe?\n\nThankfully, at least from my reading, the spec isn't all that\ncomplicated on this particular point. The spec talks about \"role\nauthorization descriptor\"s and those are \"created with role name,\ngrantee, and grantor\" and then further says \"redundant duplicate role\nauthorization descriptors are destroyed\", presumably meaning that the\nentire thing has to be identical. In other words, yeah, the PK should\ninclude the grantor. 
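Concretely, that reading of the spec would imply behavior along these lines (a sketch of the spec semantics, not what PostgreSQL currently does):

```sql
GRANT foo TO bar GRANTED BY alice;  -- creates descriptor (foo, bar, alice)
GRANT foo TO bar GRANTED BY bob;    -- distinct descriptor (foo, bar, bob); both coexist
GRANT foo TO bar GRANTED BY alice;  -- redundant duplicate of the first; destroyed (a no-op)
```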
There's a further comment that the 'set of\ninvolved grantees' is the union of all the 'grantees', clearly\nindicating that you can have multiple GRANT 'foo' to 'bar's with\ndistinct grantees.\n\nIn terms of how that's then used, yeah, it's during REVOKE because a\nREVOKE is only able to 'find' role authorization descriptors which match\nthe triple of role revoked, grantee, grantor (though there's a caveat in\nthat the 'grantor' role could be the current role, or the current user).\n\nInterestingly, at least in my looking it over today, it doesn't seem\nthat the 'grantor' could be 'any applicable role' (which is what's\nusually used to indicate that it could be any role that the current role\ninherits), meaning you have to include the GRANTED BY in the REVOKE\nstatement or do a SET ROLE first when doing a REVOKE if it's for a role\nthat you aren't currently running as (but which you are a member of).\n\nAnyhow, in other words, I do think Robert's got it right here. Happy to\ndiscuss further though if there are doubts.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Jun 2022 19:41:02 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Mon, Jun 6, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Thankfully, at least from my reading, the spec isn't all that\n> complicated on this particular point. The spec talks about \"role\n> authorization descriptor\"s and those are \"created with role name,\n> grantee, and grantor\" and then further says \"redundant duplicate role\n> authorization descriptors are destroyed\", presumably meaning that the\n> entire thing has to be identical. In other words, yeah, the PK should\n> include the grantor. 
There's a further comment that the 'set of\n> involved grantees' is the union of all the 'grantees', clearly\n> indicating that you can have multiple GRANT 'foo' to 'bar's with\n> distinct grantees.\n>\n> In terms of how that's then used, yeah, it's during REVOKE because a\n> REVOKE is only able to 'find' role authorization descriptors which match\n> the triple of role revoked, grantee, grantor (though there's a caveat in\n> that the 'grantor' role could be the current role, or the current user).\n\nWhat is supposed to happen if someone tries to execute DROP ROLE on a\nrole that has previously been used as a grantor?\n\nConsider:\n\ncreate role foo;\ncreate role bar;\ncreate role baz;\ngrant foo to bar granted by baz;\ndrop role baz;\n\nUpthread, I proposed that \"drop role baz\" should fail here, but\nthere's at least one other option: it could silently remove the grant,\nas we would do if either foo or bar were dropped. The situation is not\nquite comparable, though: a grant from foo to bar makes no logical\nsense if either of those roles cease to exist, but it does make at\nleast some sense if baz ceases to exist. 
Therefore I think someone\ncould argue either for an error or for removing the grant -- or\npossibly even for some other behavior, though the other behaviors that\nI can think of don't make much sense in a world where the primary key\nof pg_auth_members is (roleid, member, grantor).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Jun 2022 16:19:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, Jun 24, 2022 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jun 6, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > In terms of how that's then used, yeah, it's during REVOKE because a\n> > REVOKE is only able to 'find' role authorization descriptors which match\n> > the triple of role revoked, grantee, grantor (though there's a caveat in\n> > that the 'grantor' role could be the current role, or the current user).\n>\n> What is supposed to happen if someone tries to execute DROP ROLE on a\n> role that has previously been used as a grantor?\n>\n> Upthread, I proposed that \"drop role baz\" should fail here\n>\n\nI concur with this.\n\nI think that the grantor owns the grant, and that REASSIGN OWNED should\nbe able to move those grants to someone else.\n\nBy extension, DROP OWNED should remove them.\n\nDavid J.", "msg_date": "Fri, 24 Jun 2022 13:29:47 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, Jun 24, 2022 at 4:30 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> Upthread, I proposed that \"drop role baz\" should fail here\n>\n> I concur with this.\n>\n> I think that the grantor owns the grant, and that REASSIGN OWNED should be able to move those grants to someone else.\n>\n> By extension, DROP OWNED should remove them.\n\nInteresting. I hadn't thought about changing the behavior of DROP\nOWNED BY and REASSIGN OWNED BY. A quick experiment supports your\ninterpretation:\n\nrhaas=# grant select on table foo to bar;\nGRANT\nrhaas=# revoke select on table foo from bar;\nREVOKE\nrhaas=# grant select on table foo to bar with grant option;\nGRANT\nrhaas=# set role bar;\nSET\nrhaas=> grant select on table foo to baz;\nGRANT\nrhaas=> reset role;\nRESET\nrhaas=# drop role bar;\nERROR: role \"bar\" cannot be dropped because some objects depend on it\nDETAIL: privileges for table foo\nrhaas=# drop owned by bar;\nDROP OWNED\nrhaas=# drop role bar;\nDROP ROLE\n\nSo, privileges on tables (and presumably all other SQL objects)\nalready work the way that you propose here. If we choose to make role\nmemberships work in some other way then the two will be inconsistent.\nProbably we shouldn't do that. There is still the question of what the\nSQL specification says about this, but I would guess that it mandates\nthe same behavior for all kinds of privileges rather than treating\nrole memberships and table permissions in different ways. 
I could be\nwrong, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Jun 2022 16:46:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, Jun 24, 2022 at 4:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Interesting. I hadn't thought about changing the behavior of DROP\n> OWNED BY and REASSIGN OWNED BY. A quick experiment supports your\n> interpretation:\n\nHere is a minimal patch fixing exactly $SUBJECT. Granting a role to\nanother role now creates a dependency on the grantor, so if you try to\ndrop the grantor you get an ERROR. You can resolve that by revoking\nthe grant, or by using DROP OWNED BY or REASSIGN OWNED BY. To make\nthis work, I had to make role memberships participate in the\ndependency system, which means pg_auth_members gains an OID column.\nThe tricky part is that removing either of the two roles directly\ninvolved in a grant currently does, and should still, silently remove\nthe grant. So, if you do \"GRANT foo TO bar GRANTED BY baz\", and then\ntry to \"DROP ROLE baz\", that should fail, but if you instead try to\n\"DROP ROLE baz, bar\", that should work, because when bar is removed,\nthe grant is silently removed, and then it's OK to drop baz. If these\nwere database-local objects I think this could have all been sorted\nout quite easily by creating dependencies on all three roles involved\nin the GRANT and using the right deptype for each, but shared objects\nhave their own set of deptypes which seemed to present no easy\nsolution to this problem. I resolved the issue by having DropRole()\nmake two loops over the list of roles to be dropped rather than one;\nsee patch for details.\n\nThere are several things that I think ought to be changed which this\npatch does not change. 
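To illustrate the dependency behavior the patch aims for (a sketch; it assumes the patched server, and the failure annotations paraphrase rather than quote the actual error output):

```sql
CREATE ROLE foo;
CREATE ROLE bar;
CREATE ROLE baz CREATEROLE;
GRANT foo TO bar GRANTED BY baz;

DROP ROLE baz;        -- fails: baz is recorded as grantor of an extant grant
DROP OWNED BY baz;    -- removes the grant, clearing the shared dependency
DROP ROLE baz;        -- now succeeds

-- Alternatively, dropping member and grantor together works, because
-- removing bar silently removes the grant before baz is dropped:
--   DROP ROLE baz, bar;
```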
Most likely, I'll try to write separate patches\nfor those things rather than making this one bigger.\n\nFirst, as discussed upthread, I think we ought to change things so\nthat you can have multiple simultaneous grants of role A to role B\neach with a different grantor. That is what we do for other types of\ngrants and Stephen at least thinks it's what the SQL standard\nspecifies.\n\nSecond, I think we ought to enforce that the grantor has to be a role\nwhich has the ability to perform the grant, just as we do for other\nobject types. This is a little thorny, though, because we play some\ntricks with other types of objects that don't work for roles. If\nsuperuser alice executes \"GRANT SELECT ON bobs_table TO fred\" we\nrecord the owner of the grant as being the table owner and update the\nownership of the grant each time the table owner is changed. That way,\neven if alice ceases to be a superuser, we maintain the invariant that\nthe grantor of record must have privileges to perform the grant. But\nif superuser alice executes \"GRANT accounting TO fred\", we can't use\nthe same trick, because the \"accounting\" role doesn't have an owner.\nIf we attribute the grant to alice and she ceases to be a superuser\n(and also doesn't have CREATEROLE) then the invariant is violated.\nAttributing the grant to the bootstrap superuser doesn't help, as that\nuser can also be made not a superuser. Attributing the grant to\naccounting is no good, as accounting doesn't and can't have ADMIN\nOPTION on itself; and fred doesn't have to have ADMIN OPTION on\naccounting either.\n\nOne way to fix this problem would be to prohibit the removal of\nsuperuser privileges from the bootstrap superuser. Then, we could\nattribute grants made by users who lack ADMIN OPTION on the granted\nrole to the bootstrap superuser. 
Grants made by users who do possess\nADMIN OPTION would be attributed to the actual grantor (unless GRANTED\nBY was used) and removing ADMIN OPTION from such a user could be made\nto fail if they had outstanding role grants. I think that's probably\nthe nearest analogue of what we do for other object types, but if\nyou've got another idea what to do here, I'd love to hear it.\n\nThoughts on this patch would be great, too.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Jun 2022 11:23:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, Jun 24, 2022 at 4:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jun 24, 2022 at 4:30 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >> Upthread, I proposed that \"drop role baz\" should fail here\n> >\n> > I concur with this.\n> >\n> > I think that the grantor owns the grant, and that REASSIGNED OWNED should be able to move those grants to someone else.\n> >\n> > By extension, DROP OWNED should remove them.\n>\n> Interesting. I hadn't thought about changing the behavior of DROP\n> OWNED BY and REASSIGN OWNED BY. A quick experiment supports your\n> interpretation:\n\nThis experiment was insufficiently thorough. I see now that, for other\nobject types, DROP OWNED BY does work in the way that you propose, but\nREASSIGN OWNED BY does not. 
Here's a better test:\n\nrhaas=# create table foo();\nCREATE TABLE\nrhaas=# create role bar;\nCREATE ROLE\nrhaas=# create role baz;\nCREATE ROLE\nrhaas=# grant select on table foo to bar with grant option;\nGRANT\nrhaas=# set role bar;\nSET\nrhaas=> grant select on table foo to baz;\nGRANT\nrhaas=> reset role;\nRESET\nrhaas=# drop role bar;\nERROR: role \"bar\" cannot be dropped because some objects depend on it\nDETAIL: privileges for table foo\nrhaas=# create role quux;\nCREATE ROLE\nrhaas=# reassign owned by bar to quux;\nREASSIGN OWNED\nrhaas=# drop role bar;\nERROR: role \"bar\" cannot be dropped because some objects depend on it\nDETAIL: privileges for table foo\nrhaas=# drop owned by bar;\nDROP OWNED\nrhaas=# drop role bar;\nDROP ROLE\n\nThis behavior might look somewhat bizarre, but there's actually a good\nreason for it: the system guarantees that whoever is listed as the\ngrantor of a privilege has the *current* right to grant that\nprivilege. It can't categorically change the grantor of every\nprivilege given by bar to quux because quux might not and in fact does\nnot have the right to grant select on table foo to baz. Now, you might\nbe thinking, ah, but what if the superuser performed the grant? They\ncould cease to be the superuser later, and then the rule would be\nviolated! But actually not, because a grant by the superuser is\nimputed to the table owner, who always has the right to grant all\nrights on the table, and if the table owner is ever changed, all the\ngrants imputed to the old table owner are changed to have their\ngrantor as the new table owner. Similarly, trying to revoke select, or\nthe grant option on it, from bar would fail. So it looks pretty\nintentional, and pretty tightly-enforced, that every role listed as a\ngrantor must be one which is currently able to grant that privilege.\n\nAnd that means that REASSIGN OWNED can't just do a blanket change to\nthe recorded grantor. 
It could try to do so, I suppose, and just throw\nan error if it doesn't work out, but that might make REASSIGN OWNED\nfail a lot more often, which could suck. In any event, the implemented\nbehavior is that REASSIGN OWNED does nothing about permissions, but\nDROP OWNED cascades to grantors. This is SORT OF documented, although\nthe documentation only mentions that DROP OWNED cascades to privileges\ngranted *to* the target role, and does not mention that it also\ncascades to privileges granted *by* the target role.\n\nThe previous version of the patch makes both DROP OWNED and REASSIGN\nOWNED cascade to grantors, but I now think that, for consistency, I'd\nbetter look into changing it so that only DROP OWNED cascades. I think\nperhaps I should be using SHARED_DEPENDENCY_ACL instead of\nSHARED_DEPENDENCY_OWNER.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jul 2022 15:11:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Wed, Jul 20, 2022 at 3:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> The previous version of the patch makes both DROP OWNED and REASSIGN\n> OWNED cascade to grantors, but I now think that, for consistency, I'd\n> better look into changing it so that only DROP OWNED cascades. I think\n> perhaps I should be using SHARED_DEPENDENCY_ACL instead of\n> SHARED_DEPENDENCY_OWNER.\n\nAll right, here's a new patch set, now with a second patch added to the series.\n\n0001, as before, is a minimal fix for $SUBJECT, but it now uses\nSHARED_DEPENDENCY_ACL instead of SHARED_DEPENDENCY_OWNER, because that\ngives behavior which is more like what we do for other object types.\nHowever, it confines itself to making sure that\npg_auth_members.grantor is a valid user, and that's it.\n\n0002 then revises the behavior substantially further to make role\ngrants work like other grants. 
The grantor of record is required to be\na user with ADMIN OPTION on the grant, or the bootstrap superuser,\njust as for other object types the grantor of record must have GRANT\nOPTION or be the object owner (but roles don't have owners). Dependent\ngrants are tracked and must be revoked before the grants upon which\nthey depend, but REVOKE .. CASCADE now works. Dependent grants must be\nacyclic: you can't have alice getting ADMIN OPTION from bob and bob\ngetting it from alice; somebody's got to get it from the bootstrap\nsuperuser. This is all just by analogy with what we do for grants on\nobject types, and making role grants do something similar instead of\nthe completely random treatment we have at present.\n\nI believe that these patches are mostly complete, but I think that\ndumpRoleMembership() probably needs some more work. I don't know what\nexactly, but there's nothing to cause it to dump the role grants in an\norder that will create dependent grants after the things that they\ndepend on, which seems essential.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 26 Jul 2022 12:46:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I believe that these patches are mostly complete, but I think that\n> dumpRoleMembership() probably needs some more work. I don't know what\n> exactly, but there's nothing to cause it to dump the role grants in an\n> order that will create dependent grants after the things that they\n> depend on, which seems essential.\n\nOK, so I fixed that, and also updated the documentation a bit more. I\nthink these patches are basically done, and I'd like to get them\ncommitted before too much more time goes by, because I have other\nthings that depend on this which I also want to get done for this\nrelease. 
Anybody object?\n\nI'm hoping not, because, while this is a behavior change, the current\nstate of play in this area is just terrible. To my knowledge, this is\nthe only place in the system where we allow a dangling OID reference\nin a catalog table to persist after the object to which it refers has\nbeen dropped. I believe it's also the object type where multiple\ngrants by different grantors aren't tracked separately, and where the\ngrantor need not themselves have the permission being granted. It\ndoesn't really look like any of these things were intentional behavior\nso much as just ... nobody ever bothered to write the code to make it\nwork properly. I'm hoping the fact that I have now done that will be\nviewed as a good thing, but maybe that won't turn out to be the case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 28 Jul 2022 15:09:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Thu, Jul 28, 2022 at 12:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > I believe that these patches are mostly complete, but I think that\n> > dumpRoleMembership() probably needs some more work. I don't know what\n> > exactly, but there's nothing to cause it to dump the role grants in an\n> > order that will create dependent grants after the things that they\n> > depend on, which seems essential.\n>\n> OK, so I fixed that, and also updated the documentation a bit more. I\n> think these patches are basically done, and I'd like to get them\n> committed before too much more time goes by, because I have other\n> things that depend on this which I also want to get done for this\n> release. Anybody object?\n>\n> I'm hoping not, because, while this is a behavior change, the current\n> state of play in this area is just terrible. 
To my knowledge, this is\n> the only place in the system where we allow a dangling OID reference\n> in a catalog table to persist after the object to which it refers has\n> been dropped. I believe it's also the object type where multiple\n> grants by different grantors aren't tracked separately, and where the\n> grantor need not themselves have the permission being granted. It\n> doesn't really look like any of these things were intentional behavior\n> so much as just ... nobody ever bothered to write the code to make it\n> work properly. I'm hoping the fact that I have now done that will be\n> viewed as a good thing, but maybe that won't turn out to be the case.\n>\n>\nI suggest changing \\du memberof to output something like this:\n\nselect r.rolname,\narray(\n select format('%s:%s/%s',\n b.rolname,\n case when m.admin_option then 'admin' else 'member' end,\n g.rolname)\n from pg_catalog.pg_auth_members m\n join pg_catalog.pg_roles b on (m.roleid = b.oid)\n join pg_catalog.pg_roles g on (m.grantor = g.oid)\n where m.member = r.oid\n) as memberof\nfrom pg_catalog.pg_roles r where r.rolname !~ '^pg_';\n\n rolname | memberof\n---------+------------------------------------\n vagrant | {}\n o | {}\n a | {o:admin/p,o:admin/vagrant}\n x | {o:admin/a,p:member/vagrant}\n b | {o:admin/a}\n p | {o:admin/vagrant}\n y | {x:member/vagrant}\n q | {}\n r | {q:admin/vagrant}\n s | {}\n t | {q:admin/vagrant,s:member/vagrant}\n\n\n(needs sorting, tried to model it after ACL - column privileges\nspecifically)\n\n=> \\dp mytable\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges |\nPolicies\n--------+---------+-------+-----------------------+-----------------------+----------\n public | mytable | table | miriam=arwdDxt/miriam+| col1: +|\n | | | =r/miriam +| miriam_rw=rw/miriam |\n | | | admin=arw/miriam | |\n(1 row)\n\nIf we aren't dead set on having \\du and \\dg be aliases for each other I'd\nrather redesign \\dg (or add a new meta-command) to be a 
group-centric view\nof this exact same data instead of user-centric one. Namely it has a\n\"members\" column instead of \"memberof\" and have it output, one line per\nmember:\n\nuser=[admin|member]/grantor\n\nI looked over the rest of the patch and played with the circularity a bit,\nwhich motivated the expanded info in \\du, and the confirmation that two\nseparate admin grants that are not circular can exist.\n\nI don't have any meaningful insight as to breaking things with these\nchanges but I am strongly in favor of tightening this up and formalizing it.\n\nDavid J.", "msg_date": "Thu, 28 Jul 2022 14:17:06 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Thu, Jul 28, 2022 at 5:17 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I suggest changing \\du memberof to output something like this:\n>\n> rolname | memberof\n> ---------+------------------------------------\n> vagrant | {}\n> r | {q:admin/vagrant}\n> t | {q:admin/vagrant,s:member/vagrant}\n>\n> (needs sorting, tried to model it after ACL - column privileges specifically)\n\nI don't know. I agree with you that we should probably think about\nchanging the \\du output, but I'm not sure if I like this particular\nidea about how to do it. I mean, the ACL format that we use for tables\nand other objects is basically an internal format which we throw at\nthe user, hoping they'll know how to interpret it. I don't know if\nit's what we should pick when we don't have that kind of internal\nformat already. On the other hand, consistency is worth something, and\nI'm not sure that I have a better idea.\n\nhttps://commitfest.postgresql.org/38/3744/ might affect what we want\nto do here, too.\n\n> If we aren't dead set on having \\du and \\dg be aliases for each other I'd rather redesign \\dg (or add a new meta-command) to be a group-centric view of this exact same data instead of user-centric one. 
Namely it has a \"members\" column instead of \"memberof\" and have it output, one line per member:\n>\n> user=[admin|member]/grantor\n\nThat seems like a topic for a separate thread, but I agree that a\nflipped view of this data would be more useful than using two letters\nof the alphabet for exactly the same thing, especially given that\nwe're pretty short on unused letters.\n\n> I don't have any meaningful insight as to breaking things with these changes but I am strongly in favor of tightening this up and formalizing it.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 08:46:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I believe that these patches are mostly complete, but I think that\n> > dumpRoleMembership() probably needs some more work. I don't know what\n> > exactly, but there's nothing to cause it to dump the role grants in an\n> > order that will create dependent grants after the things that they\n> > depend on, which seems essential.\n> \n> OK, so I fixed that, and also updated the documentation a bit more. I\n> think these patches are basically done, and I'd like to get them\n> committed before too much more time goes by, because I have other\n> things that depend on this which I also want to get done for this\n> release. 
Anybody object?\n\nThanks for working on this.\n\nSubject: [PATCH v3 1/2] Ensure that pg_auth_members.grantor is always valid.\n\ndiff --git a/src/backend/commands/user.c b/src/backend/commands/user.c\nindex 94135fdd6b..258943094a 100644\n--- a/src/backend/commands/user.c\n+++ b/src/backend/commands/user.c\n@@ -919,7 +920,7 @@ DropRole(DropRoleStmt *stmt)\n \n \t/*\n \t * Scan the pg_authid relation to find the Oid of the role(s) to be\n-\t * deleted.\n+\t * deleted and perform prleliminary permissions and sanity checks.\n\nShould be preliminary, I'm guessing.\n\nOverall, this looks like a solid improvement.\n\nSubject: [PATCH v3 2/2] Make role grant system more consistent with other\n privileges.\n\n> Previously, only the superuser could specify GRANTED BY with a user\n> other than the current user. Relax that rule to allow the grantor\n> to be any role whose privileges the current user posseses. This\n> doesn't improve compatibility with what we do for other object types,\n> where support for GRANTED BY is entirely vestigial, but it makes this\n> feature more usable and seems to make sense to change at the same time\n> we're changing related behaviors.\n\nPresumably the GRANTED BY user in this case still has to have the\nability to have performed the GRANT themselves? Looks that way below\nand it's just the commit message, but was the first question that came\nto mind when I read through this.\n\ndiff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml\nindex f744b05b55..1f828d386a 100644\n--- a/doc/src/sgml/ref/grant.sgml\n+++ b/doc/src/sgml/ref/grant.sgml\n@@ -267,8 +267,14 @@ GRANT <replaceable class=\"parameter\">role_name</replaceable> [, ...] TO <replace\n \n <para>\n If <literal>GRANTED BY</literal> is specified, the grant is recorded as\n- having been done by the specified role. Only database superusers may\n- use this option, except when it names the same role executing the command.\n+ having been done by the specified role. 
A user can only attribute a grant\n+ to another role if they possess the privileges of that role. A role can\n+ only be recorded as a grantor if has <literal>ADMIN OPTION</literal> on\n\nShould be: if they have\n\n+ a role or is the bootstrap superuser. When a grant is recorded as having\n\non *that* role seems like it'd be better. And maybe 'or if they are the\nbootstrap superuser'?\n\ndiff --git a/src/backend/commands/user.c b/src/backend/commands/user.c\nindex 258943094a..8ab2fecf3a 100644\n--- a/src/backend/commands/user.c\n+++ b/src/backend/commands/user.c\n@@ -805,11 +842,12 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n \t\tif (stmt->action == +1) /* add members to role */\n \t\t\tAddRoleMems(rolename, roleid,\n \t\t\t\t\t\trolemembers, roleSpecsToIds(rolemembers),\n-\t\t\t\t\t\tGetUserId(), false);\n+\t\t\t\t\t\tInvalidOid, false);\n \t\telse if (stmt->action == -1)\t/* drop members from role */\n \t\t\tDelRoleMems(rolename, roleid,\n \t\t\t\t\t\trolemembers, roleSpecsToIds(rolemembers),\n-\t\t\t\t\t\tfalse);\n+\t\t\t\t\t\tInvalidOid, false, DROP_RESTRICT);\t/* XXX sketchy - hint\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t * may mislead */\n \t}\n\nThis comment seems a little concerning..? 
Also isn't very clear.\n\n@@ -1027,7 +1065,7 @@ DropRole(DropRoleStmt *stmt)\n \n \t\twhile (HeapTupleIsValid(tmp_tuple = systable_getnext(sscan)))\n \t\t{\n-\t\t\tForm_pg_auth_members\tauthmem_form;\n+\t\t\tForm_pg_auth_members authmem_form;\n \n \t\t\tauthmem_form = (Form_pg_auth_members) GETSTRUCT(tmp_tuple);\n \t\t\tdeleteSharedDependencyRecordsFor(AuthMemRelationId,\n\nSome random whitespace changes that seems a bit odd given that they\nshould have been already correct thanks to pgindent- will these end up\njust getting undone again?\n\n@@ -1543,14 +1578,94 @@ AddRoleMems(const char *rolename, Oid roleid,\n \t\t\t\t\t(errcode(ERRCODE_INVALID_GRANT_OPERATION),\n \t\t\t\t\t errmsg(\"role \\\"%s\\\" is a member of role \\\"%s\\\"\",\n \t\t\t\t\t\t\trolename, get_rolespec_name(memberRole))));\n+\t}\n+\n+\t/*\n+\t * Disallow attempts to grant ADMIN OPTION back to a user who granted it\n+\t * to you, similar to what check_circularity does for ACLs. We want the\n+\t * chains of grants to remain acyclic, so that it's always possible to use\n+\t * REVOKE .. CASCADE to clean up all grants that depend on the one being\n+\t * revoked.\n+\t *\n+\t * NB: This check might look redundant with the check for membership\n+\t * loops above, but it isn't. That's checking for role-member loop (e.g.\n+\t * A is a member of B and B is a member of A) while this is checking for\n+\t * a member-grantor loop (e.g. A gave ADMIN OPTION to X to B and now B, who\n+\t * has no other source of ADMIN OPTION on X, tries to give ADMIN OPTION\n+\t * on X back to A).\n+\t */\n\nWith this exact scenario, wouldn't it just be a no-op as A must have\nADMIN OPTION already on X? The spec says that no cycles of role\nauthorizations are allowed. Presumably we'd continue this for other\nGRANT'able things which can be further GRANT'd (should we add them) in\nthe future? Just trying to think ahead a bit here in case it's\nworthwhile. 
Those would likely be ABC WITH GRANT OPTION too, right?\n\n+\tif (admin_opt && grantorId != BOOTSTRAP_SUPERUSERID)\n+\t{\n+\t\tCatCList *memlist;\n+\t\tRevokeRoleGrantAction *actions;\n+\t\tint\t\t\ti;\n+\n+\t\t/* Get the list of members for this role. */\n+\t\tmemlist = SearchSysCacheList1(AUTHMEMROLEMEM,\n+\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(roleid));\n+\n+\t\t/*\n+\t\t * Figure out what would happen if we removed all existing grants to\n+\t\t * every role to which we've been asked to make a new grant.\n+\t\t */\n+\t\tactions = initialize_revoke_actions(memlist);\n+\t\tforeach(iditem, memberIds)\n+\t\t{\n+\t\t\tOid\t\t\tmemberid = lfirst_oid(iditem);\n+\n+\t\t\tif (memberid == BOOTSTRAP_SUPERUSERID)\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t(errcode(ERRCODE_INVALID_GRANT_OPERATION),\n+\t\t\t\t\t\t errmsg(\"grants with admin options cannot be circular\")));\n+\t\t\tplan_member_revoke(memlist, actions, memberid);\n+\t\t}\n\nI don't see a regression test added which produces the above error\nmessage. The memberid == BOOTSTRAP_SUPERUSERID seems odd too?\n\n+\t\t/*\n+\t\t * If the result would be that the grantor role would no longer have\n+\t\t * the ability to perform the grant, then the proposed grant would\n+\t\t * create a circularity.\n+\t\t */\n+\t\tfor (i = 0; i < memlist->n_members; ++i)\n+\t\t{\n+\t\t\tHeapTuple\tauthmem_tuple;\n+\t\t\tForm_pg_auth_members authmem_form;\n+\n+\t\t\tauthmem_tuple = &memlist->members[i]->tuple;\n+\t\t\tauthmem_form = (Form_pg_auth_members) GETSTRUCT(authmem_tuple);\n+\n+\t\t\tif (actions[i] == RRG_NOOP &&\n+\t\t\t\tauthmem_form->member == grantorId &&\n+\t\t\t\tauthmem_form->admin_option)\n+\t\t\t\tbreak;\n+\t\t}\n+\t\tif (i >= memlist->n_members)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_INVALID_GRANT_OPERATION),\n+\t\t\t\t\t errmsg(\"admin options cannot be granted back to your own grantor\")));\n\nI do see this in the regression tests. 
There though, the GRANTs are\nbeing performed by someone else, so saying 'your' isn't quite right.\nI'm trying to get this review out sooner than later and so I might be\nmissing something, but looking at the regression test for this and these\nerror messages, feels like the 'circular' error message makes more sense\nthan the 'your own grantor' message that actually ends up being\nreturned in that regression test.\n\n@@ -1637,17 +1737,22 @@ AddRoleMems(const char *rolename, Oid roleid,\n * roleid: OID of role to del from\n * memberSpecs: list of RoleSpec of roles to del (used only for error messages)\n * memberIds: OIDs of roles to del\n+ * grantorId: who is revoking the membership\n * admin_opt: remove admin option only?\n */\n static void\n DelRoleMems(const char *rolename, Oid roleid,\n \t\t\tList *memberSpecs, List *memberIds,\n-\t\t\tbool admin_opt)\n+\t\t\tOid grantorId, bool admin_opt, DropBehavior behavior)\n\n\nThe comment above DropRoleMems missed adding a description for\nthe 'behavior' parameter.\n\n@@ -1669,40 +1774,69 @@ DelRoleMems(const char *rolename, Oid roleid,\n+\t/*\n+\t * We may need to recurse to dependent privileges if DROP_CASCADE was\n+\t * specified, or refuse to perform the operation if dependent privileges\n+\t * exist and DROP_RECURSE was specified. plan_single_revoke() will\n+\t * figure out what to do with each catalog tuple.\n+\t */\n\nPretty sure that should be DROP_RESTRICT, not DROP_RECURSE.\n\n+/*\n+ * Sanity-check, or infer, the grantor for a GRANT or REVOKE statement\n+ * targeting a role.\n+ *\n+ * The grantor must always be either a role with ADMIN OPTION on the role in\n+ * which membership is being granted, or the bootstrap superuser. 
This is\n+ * similar to the restriction enforced by select_best_grantor, except that\n+ * roles don't have owners, so we regard the bootstrap superuser as the\n+ * implicit owner.\n+ *\n+ * The return value is the OID to be regarded as the grantor when executing\n+ * the operation.\n+ */\n+static Oid\n+check_role_grantor(Oid currentUserId, Oid roleid, Oid grantorId, bool is_grant)\n\nAs this also does some permission checks, it seems like it'd be good to\nmention that in the function description. Maybe also in the places that\ncall into this function with the expectation that the privilege check\nwill be taken care of here.\n\nIndeed, I wonder if maybe we should really split this function into two\nas the \"give me what the best grantor is\" is a fair bit different from\n\"check if this user has permission to grant as this role\". As noted in\nthe comments, the current function also only does privilege checking in\nsome case- when InvalidOid is passed in we've also already done\npermissions checks to make sure that the GRANT will succeed.\n\n+/*\n+ * Initialize an array of RevokeRoleGrantAction objects.\n+ *\n+ * 'memlist' should be a list of all grants for the target role.\n+ *\n+ * We here construct an array indicating that no actions are to be performed;\n+ * that is, every element is intiially RRG_NOOP.\n+ */\n\n\"We here construct\" seems odd wording to me. Maybe \"Here we construct\"?\n\nThanks,\n\nStephen", "msg_date": "Sun, 31 Jul 2022 14:18:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> OK, so I fixed that, and also updated the documentation a bit more. 
I\n>> think these patches are basically done, and I'd like to get them\n>> committed before too much more time goes by, because I have other\n>> things that depend on this which I also want to get done for this\n>> release. Anybody object?\n\n> Thanks for working on this.\n\nIndeed. I've not read the patch, but I just wanted to mention that\nthe cfbot shows it as failing regression tests on all platforms.\nPossibly a conflict with some recent commit?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Jul 2022 14:33:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Sun, Jul 31, 2022 at 11:18 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n>\n> + }\n> +\n> + /*\n> + * Disallow attempts to grant ADMIN OPTION back to a user who\n> granted it\n> + * to you, similar to what check_circularity does for ACLs. We\n> want the\n> + * chains of grants to remain acyclic, so that it's always\n> possible to use\n> + * REVOKE .. CASCADE to clean up all grants that depend on the one\n> being\n> + * revoked.\n> + *\n> + * NB: This check might look redundant with the check for\n> membership\n> + * loops above, but it isn't. That's checking for role-member loop\n> (e.g.\n> + * A is a member of B and B is a member of A) while this is\n> checking for\n> + * a member-grantor loop (e.g. A gave ADMIN OPTION to X to B and\n> now B, who\n> + * has no other source of ADMIN OPTION on X, tries to give ADMIN\n> OPTION\n> + * on X back to A).\n> + */\n>\n> With this exact scenario, wouldn't it just be a no-op as A must have\n> ADMIN OPTION already on X? The spec says that no cycles of role\n> authorizations are allowed.\n\n\nRole A must have admin option for X to grant membership in X (with or\nwithout admin option) to B. 
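The member-grantor acyclicity rule described in the quoted comment can be sketched independently of the server code. The following Python model is purely illustrative — `creates_admin_cycle` and its grant-tuple layout are invented for this sketch, not taken from the patch — treating each ADMIN OPTION grant as an edge from grantor to member and rejecting any new grant that would let a role reach its own (direct or transitive) grantor:

```python
from collections import defaultdict

def creates_admin_cycle(grants, role, new_grantor, new_member):
    """grants: set of (role, grantor, member) ADMIN OPTION grants.

    A proposed grant of ADMIN OPTION on `role` from new_grantor to
    new_member closes a cycle if new_member already sits somewhere in
    new_grantor's chain of grantors for that role."""
    # member -> set of grantors who gave that member ADMIN OPTION on `role`
    edges = defaultdict(set)
    for r, grantor, member in grants:
        if r == role:
            edges[member].add(grantor)
    # Walk up the grantor chains starting from the proposed grantor; if the
    # proposed member is reachable, the new edge would make the graph cyclic.
    seen, stack = set(), [new_grantor]
    while stack:
        cur = stack.pop()
        if cur == new_member:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(edges[cur])
    return False

# As in the quoted comment: A gave ADMIN OPTION on X to B, so B may not
# give ADMIN OPTION on X back to A...
grants = {("X", "A", "B")}
print(creates_admin_cycle(grants, "X", "B", "A"))  # True: would be cyclic
# ...but a sibling grantor P, whose ADMIN OPTION did not come via A, may.
print(creates_admin_cycle(grants, "X", "P", "A"))  # False: no loop
```

Under this model, B cannot hand ADMIN OPTION on X back to A, while an unrelated sibling grantor P still can — matching the "chains of grants remain acyclic" goal stated in the comment, so that REVOKE .. CASCADE can always unwind dependent grants.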
But that doesn't preclude A from getting\nanother admin option from someone else. That someone else cannot be\nsomeone to whom they gave admin option to however. So B cannot grant admin\noption back to A but role P could if it was basically a sibling of A (i.e.,\nboth getting their initial admin option from someone else).\n\nIf they do have admin option twice it should be possible to drop one of\nthem, the prohibition should be on dropping the only admin option\npermission a role has for some other role. The commit message for 2\ncontemplates this though I haven't gone through the revocation code in\ndetail.\n\n\n> I'm trying to get this review out sooner than later and so I might be\n> missing something, but looking at the regression test for this and these\n> error messages, feels like the 'circular' error message makes more sense\n> than the 'your own grantor' message that actually ends up being\n> returned in that regression test.\n>\n\nHaving a more specific error seems reasonable, faster to track down what\nthe problem is.\n\nI think that the whole graph dynamic of this might need some presentation\nwork (messages and/or psql and/or functions) ; but assuming the errors are\nhandled improved messages and/or presentation of graphs can be a separate\nenhancement.\n\nDavid J.\n\nOn Sun, Jul 31, 2022 at 11:18 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n+       }\n+\n+       /*\n+        * Disallow attempts to grant ADMIN OPTION back to a user who granted it\n+        * to you, similar to what check_circularity does for ACLs. We want the\n+        * chains of grants to remain acyclic, so that it's always possible to use\n+        * REVOKE .. 
CASCADE to clean up all grants that depend on the one being\n+        * revoked.\n+        *\n+        * NB: This check might look redundant with the check for membership\n+        * loops above, but it isn't. That's checking for role-member loop (e.g.\n+        * A is a member of B and B is a member of A) while this is checking for\n+        * a member-grantor loop (e.g. A gave ADMIN OPTION to X to B and now B, who\n+        * has no other source of ADMIN OPTION on X, tries to give ADMIN OPTION\n+        * on X back to A).\n+        */\n\nWith this exact scenario, wouldn't it just be a no-op as A must have\nADMIN OPTION already on X?  The spec says that no cycles of role\nauthorizations are allowed.Role A must have admin option for X to grant membership in X (with or without admin option) to B.  But that doesn't preclude A from getting another admin option from someone else.  That someone else cannot be someone to whom they gave admin option to however. So B cannot grant admin option back to A but role P could if it was basically a sibling of A (i.e., both getting their initial admin option from someone else).If they do have admin option twice it should be possible to drop one of them, the prohibition should be on dropping the only admin option permission a role has for some other role.  The commit message for 2 contemplates this though I haven't gone through the revocation code in detail. 
\nI'm trying to get this review out sooner than later and so I might be\nmissing something, but looking at the regression test for this and these\nerror messages, feels like the 'circular' error message makes more sense\nthan the 'your own grantor' message that actually ends up being\nreturned in that regression test.Having a more specific error seems reasonable, faster to track down what the problem is.I think that the whole graph dynamic of this might need some presentation work (messages and/or psql and/or functions) ; but assuming the errors are handled improved messages and/or presentation of graphs can be a separate enhancement.David J.", "msg_date": "Sun, 31 Jul 2022 11:43:48 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Greetings,\n\nOn Sun, Jul 31, 2022 at 11:44 David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Sun, Jul 31, 2022 at 11:18 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> Greetings,\n>>\n>> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> > On Tue, Jul 26, 2022 at 12:46 PM Robert Haas <robertmhaas@gmail.com>\n>> wrote:\n>>\n>> + }\n>> +\n>> + /*\n>> + * Disallow attempts to grant ADMIN OPTION back to a user who\n>> granted it\n>> + * to you, similar to what check_circularity does for ACLs. We\n>> want the\n>> + * chains of grants to remain acyclic, so that it's always\n>> possible to use\n>> + * REVOKE .. CASCADE to clean up all grants that depend on the\n>> one being\n>> + * revoked.\n>> + *\n>> + * NB: This check might look redundant with the check for\n>> membership\n>> + * loops above, but it isn't. That's checking for role-member\n>> loop (e.g.\n>> + * A is a member of B and B is a member of A) while this is\n>> checking for\n>> + * a member-grantor loop (e.g. 
A gave ADMIN OPTION to X to B and\n>> now B, who\n>> + * has no other source of ADMIN OPTION on X, tries to give ADMIN\n>> OPTION\n>> + * on X back to A).\n>> + */\n>>\n>> With this exact scenario, wouldn't it just be a no-op as A must have\n>> ADMIN OPTION already on X? The spec says that no cycles of role\n>> authorizations are allowed.\n>\n>\nI’ve realized that what I hadn’t been contemplating here is actually that\nthe GRANT from B to A for X wouldn’t be redundant because grantor is part\nof the key (A got the right from someone else, but this would be giving it\nto A from B and therefore would be distinct and would also create a loop\nwhich is no good). Haven’t got a good idea on how to improve on the\ncomment based off of that though it still feels like it could be clearer.\nIf I think of something, I’ll share it.\n\nRole A must have admin option for X to grant membership in X (with or\n> without admin option) to B. But that doesn't preclude A from getting\n> another admin option from someone else. That someone else cannot be\n> someone to whom they gave admin option to however. So B cannot grant admin\n> option back to A but role P could if it was basically a sibling of A (i.e.,\n> both getting their initial admin option from someone else).\n>\n\nRight but that wasn’t what I had been trying to get at above.\n\nIf they do have admin option twice it should be possible to drop one of\n> them, the prohibition should be on dropping the only admin option\n> permission a role has for some other role. The commit message for 2\n> contemplates this though I haven't gone through the revocation code in\n> detail.\n>\n\nYes, think I agree with this also- if A has been given the WITH ADMIN right\nfrom Q and P to GRANT X to other roles, and uses that to GRANT X to B, then\nthe GRANT of X to B should be retained even if Q decides to revoke their\nGRANT as A still has the right from P. 
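That retained-until-the-last-source-is-revoked behavior can be modeled in miniature. This Python sketch is an invented illustration — the `revoke_admin` helper and its tuple layout are this sketch's own, not the patch's DelRoleMems/plan_single_revoke code: a membership grant made by A survives as long as A still holds ADMIN OPTION on X from some grantor, and revoking A's last source either errors (RESTRICT) or takes the dependent grant with it (CASCADE):

```python
def revoke_admin(grants, role, grantor, holder, cascade=False):
    """grants: set of ("admin"|"member", role, grantor, holder) tuples.

    Remove one ADMIN OPTION grant. If the holder thereby loses their last
    source of ADMIN OPTION on `role`, the grants they made for that role
    become dependent: with cascade=True they are removed too, otherwise
    the revoke fails (RESTRICT behavior)."""
    g = set(grants)
    g.discard(("admin", role, grantor, holder))
    still_admin = any(k == "admin" and r == role and h == holder
                      for k, r, _, h in g)
    if still_admin:
        return g  # holder keeps ADMIN via another grantor; nothing cascades
    dependents = {t for t in g if t[1] == role and t[2] == holder}
    if dependents and not cascade:
        raise ValueError("dependent privileges exist")  # RESTRICT
    return g - dependents

# A holds ADMIN OPTION on X from both Q and P, and used it to grant X to B.
grants = {("admin", "X", "Q", "A"), ("admin", "X", "P", "A"),
          ("member", "X", "A", "B")}
# Revoking Q's grant leaves B's membership intact: A still has ADMIN via P.
after_q = revoke_admin(grants, "X", "Q", "A")
# Revoking the last source with CASCADE removes B's dependent grant too.
both = revoke_admin(after_q, "X", "P", "A", cascade=True)
```

Without `cascade=True`, the second revoke raises, mirroring the "dependent grant exists and CASCADE wasn't given" error described above.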
If both remove the right, however,\neither B should lose the right (if CASCADE was passed in) or an error\nshould be returned saying that there’s a dependent GRANT and CASCADE wasn’t\ngiven.\n\nI'm trying to get this review out sooner than later and so I might be\n>> missing something, but looking at the regression test for this and these\n>> error messages, feels like the 'circular' error message makes more sense\n>> than the 'your own grantor' message that actually ends up being\n>> returned in that regression test.\n>>\n>\n> Having a more specific error seems reasonable, faster to track down what\n> the problem is.\n>\n\nYeah, but also making sure that all the error messages we have in this area\nare in the regression test output would be good.\n\nMakes me wonder if we might try to figure out a way to globally check for\nthat. I suppose one could review coverage.p.o for any ereport() calls that\naren’t ever called. I wonder what that would turn up.\n\nI think that the whole graph dynamic of this might need some presentation\n> work (messages and/or psql and/or functions) ; but assuming the errors are\n> handled improved messages and/or presentation of graphs can be a separate\n> enhancement.\n>\n\nYes, we can further improve this later too but that doesn’t mean we should\njust commit this as-is when some deficiencies have been pointed out. If the\nonly comments were “would be good to improve this error message but I\nhaven’t got a great idea how”, then sure, but there were other items\npointed out which were clear corrections and we should make sure to cover\nin the regression tests all these scenarios that we are checking for and\nerroring on, lest we end up breaking them unintentionally later.\n\nThanks,\n\nStephen", "msg_date": "Sun, 31 Jul 2022 14:19:34 -0700", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Sun, Jul 31, 2022 at 2:18 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Thanks for working on this.\n\nThanks for the review.\n\n> > Previously, only the superuser could specify GRANTED BY with a user\n> > other than the current user. Relax that rule to allow the grantor\n> > to be any role whose privileges the current user posseses. This\n> > doesn't improve compatibility with what we do for other object types,\n> > where support for GRANTED BY is entirely vestigial, but it makes this\n> > feature more usable and seems to make sense to change at the same time\n> > we're changing related behaviors.\n>\n> Presumably the GRANTED BY user in this case still has to have the\n> ability to have performed the GRANT themselves? Looks that way below\n> and it's just the commit message, but was the first question that came\n> to mind when I read through this.\n\nYes. 
The previous paragraph in this commit message seems to cover this\npoint pretty thoroughly.\n\n> <para>\n> If <literal>GRANTED BY</literal> is specified, the grant is recorded as\n> - having been done by the specified role. Only database superusers may\n> - use this option, except when it names the same role executing the command.\n> + having been done by the specified role. A user can only attribute a grant\n> + to another role if they possess the privileges of that role. A role can\n> + only be recorded as a grantor if has <literal>ADMIN OPTION</literal> on\n>\n> Should be: if they have\n>\n> + a role or is the bootstrap superuser. When a grant is recorded as having\n>\n> on *that* role seems like it'd be better. And maybe 'or if they are the\n> bootstrap superuser'?\n\nWill fix.\n\n> diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c\n> index 258943094a..8ab2fecf3a 100644\n> --- a/src/backend/commands/user.c\n> +++ b/src/backend/commands/user.c\n> @@ -805,11 +842,12 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n> if (stmt->action == +1) /* add members to role */\n> AddRoleMems(rolename, roleid,\n> rolemembers, roleSpecsToIds(rolemembers),\n> - GetUserId(), false);\n> + InvalidOid, false);\n> else if (stmt->action == -1) /* drop members from role */\n> DelRoleMems(rolename, roleid,\n> rolemembers, roleSpecsToIds(rolemembers),\n> - false);\n> + InvalidOid, false, DROP_RESTRICT); /* XXX sketchy - hint\n> + * may mislead */\n> }\n>\n> This comment seems a little concerning..? Also isn't very clear.\n\nOh right. That was a note to myself to look into that more. And then I\ndidn't. 
I'll look into that more and report back.\n\n> @@ -1027,7 +1065,7 @@ DropRole(DropRoleStmt *stmt)\n>\n> while (HeapTupleIsValid(tmp_tuple = systable_getnext(sscan)))\n> {\n> - Form_pg_auth_members authmem_form;\n> + Form_pg_auth_members authmem_form;\n>\n> authmem_form = (Form_pg_auth_members) GETSTRUCT(tmp_tuple);\n> deleteSharedDependencyRecordsFor(AuthMemRelationId,\n>\n> Some random whitespace changes that seems a bit odd given that they\n> should have been already correct thanks to pgindent- will these end up\n> just getting undone again?\n\nWill fix.\n\n> @@ -1543,14 +1578,94 @@ AddRoleMems(const char *rolename, Oid roleid,\n> (errcode(ERRCODE_INVALID_GRANT_OPERATION),\n> errmsg(\"role \\\"%s\\\" is a member of role \\\"%s\\\"\",\n> rolename, get_rolespec_name(memberRole))));\n> + }\n> +\n> + /*\n> + * Disallow attempts to grant ADMIN OPTION back to a user who granted it\n> + * to you, similar to what check_circularity does for ACLs. We want the\n> + * chains of grants to remain acyclic, so that it's always possible to use\n> + * REVOKE .. CASCADE to clean up all grants that depend on the one being\n> + * revoked.\n> + *\n> + * NB: This check might look redundant with the check for membership\n> + * loops above, but it isn't. That's checking for role-member loop (e.g.\n> + * A is a member of B and B is a member of A) while this is checking for\n> + * a member-grantor loop (e.g. A gave ADMIN OPTION to X to B and now B, who\n> + * has no other source of ADMIN OPTION on X, tries to give ADMIN OPTION\n> + * on X back to A).\n> + */\n>\n> With this exact scenario, wouldn't it just be a no-op as A must have\n> ADMIN OPTION already on X? The spec says that no cycles of role\n> authorizations are allowed. Presumably we'd continue this for other\n> GRANT'able things which can be further GRANT'd (should we add them) in\n> the future? Just trying to think ahead a bit here in case it's\n> worthwhile. 
Those would likely be ABC WITH GRANT OPTION too, right?\n\nI don't believe there's anything novel here - at least there isn't\nsupposed to be. Here's the equivalent with table privileges:\n\nrhaas=# create table t1();\nCREATE TABLE\nrhaas=# create role foo;\nCREATE ROLE\nrhaas=# create role bar;\nCREATE ROLE\nrhaas=# grant select on t1 to foo with grant option;\nGRANT\nrhaas=# set role foo;\nSET\nrhaas=> grant select on t1 to bar with grant option;\nGRANT\nrhaas=> set role bar;\nSET\nrhaas=> grant select on t1 to foo with grant option;\nERROR: grant options cannot be granted back to your own grantor\n\n\n> + if (admin_opt && grantorId != BOOTSTRAP_SUPERUSERID)\n> + {\n> + CatCList *memlist;\n> + RevokeRoleGrantAction *actions;\n> + int i;\n> +\n> + /* Get the list of members for this role. */\n> + memlist = SearchSysCacheList1(AUTHMEMROLEMEM,\n> + ObjectIdGetDatum(roleid));\n> +\n> + /*\n> + * Figure out what would happen if we removed all existing grants to\n> + * every role to which we've been asked to make a new grant.\n> + */\n> + actions = initialize_revoke_actions(memlist);\n> + foreach(iditem, memberIds)\n> + {\n> + Oid memberid = lfirst_oid(iditem);\n> +\n> + if (memberid == BOOTSTRAP_SUPERUSERID)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_GRANT_OPERATION),\n> + errmsg(\"grants with admin options cannot be circular\")));\n> + plan_member_revoke(memlist, actions, memberid);\n> + }\n>\n> I don't see a regression test added which produces the above error\n> message. The memberid == BOOTSTRAP_SUPERUSERID seems odd too?\n\nDo we guarantee that the regression tests are running as the bootstrap\nsuperuser, or just as some superuser? I am a bit reluctant to add a\nregression test that assumes the former unless we're assuming it\nalready. 
For 'make check' it doesn't matter but 'make installcheck' is\nanother story.\n\nThe memberid == BOOTSTRAP_SUPERUSERID case is very much intentional.\nThe code will detect any loops in the catalog, but the implicit grant\nto the bootstrap superuser doesn't exist in the catalog\nrepresentation, so it needs a separate check. I think I should sync\nthe two error messages though, i.e. this should say \"admin options\ncannot be granted back to your own grantor\" like the other one just\nbelow.\n\n> I do see this in the regression tests. There though, the GRANTs are\n> being performed by someone else, so saying 'your' isn't quite right.\n> I'm trying to get this review out sooner than later and so I might be\n> missing something, but looking at the regression test for this and these\n> error messages, feels like the 'circular' error message makes more sense\n> than the 'your own grantor' message that actually ends up being\n> returned in that regression test.\n\nI think it's a bit off too, but I didn't invent it. See check_circularity():\n\n if ((ACLITEM_GET_GOPTIONS(*mod_aip) & ~own_privs) != 0)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_GRANT_OPERATION),\n errmsg(\"grant options cannot be granted back to your\nown grantor\")));\n\nLooks like Tom Lane, vintage 2004, 4b2dafcc0b1a579ef5daaa2728223006d1ff98e9.\n\n> The comment above DropRoleMems missed adding a description for\n> the 'behavior' parameter.\n\nWill fix.\n\n> @@ -1669,40 +1774,69 @@ DelRoleMems(const char *rolename, Oid roleid,\n> + /*\n> + * We may need to recurse to dependent privileges if DROP_CASCADE was\n> + * specified, or refuse to perform the operation if dependent privileges\n> + * exist and DROP_RECURSE was specified. 
plan_single_revoke() will\n> + * figure out what to do with each catalog tuple.\n> + */\n>\n> Pretty sure that should be DROP_RESTRICT, not DROP_RECURSE.\n\nI'm pretty sure you are right.\n\n> +/*\n> + * Sanity-check, or infer, the grantor for a GRANT or REVOKE statement\n> + * targeting a role.\n> + *\n> + * The grantor must always be either a role with ADMIN OPTION on the role in\n> + * which membership is being granted, or the bootstrap superuser. This is\n> + * similar to the restriction enforced by select_best_grantor, except that\n> + * roles don't have owners, so we regard the bootstrap superuser as the\n> + * implicit owner.\n> + *\n> + * The return value is the OID to be regarded as the grantor when executing\n> + * the operation.\n> + */\n> +static Oid\n> +check_role_grantor(Oid currentUserId, Oid roleid, Oid grantorId, bool is_grant)\n>\n> As this also does some permission checks, it seems like it'd be good to\n> mention that in the function description. Maybe also in the places that\n> call into this function with the expectation that the privilege check\n> will be taken care of here.\n>\n> Indeed, I wonder if maybe we should really split this function into two\n> as the \"give me what the best grantor is\" is a fair bit different from\n> \"check if this user has permission to grant as this role\". As noted in\n> the comments, the current function also only does privilege checking in\n> some case- when InvalidOid is passed in we've also already done\n> permissions checks to make sure that the GRANT will succeed.\n\nI'll think about this some more, but I don't want to commit to\nchanging it very much. IMHO, the whole split into AddRoleMems() and\nDelRoleMems() for what is basically the same operation seems pretty\ndubious, but this commit's intended purpose is to clean up the\nbehavior rather than to rewrite the code. 
So I left the existing logic\nin AddRoleMems() and DelRoleMems() alone, and when I realized I needed\nsomething else that was mostly common to both, I made this function\ninstead of duplicating the logic in two places. I realize there are\nother ways that it could be split up, and maybe some of those are\nbetter in theory, but they'd likely also expand the scope of the patch\nto things that it doesn't quite need to touch. I'm not real keen to go\nthere. That can be done later, in a separate patch, or never, and I\ndon't think we'll really be any the worse for it.\n\n> +/*\n> + * Initialize an array of RevokeRoleGrantAction objects.\n> + *\n> + * 'memlist' should be a list of all grants for the target role.\n> + *\n> + * We here construct an array indicating that no actions are to be performed;\n> + * that is, every element is intiially RRG_NOOP.\n> + */\n>\n> \"We here construct\" seems odd wording to me. Maybe \"Here we construct\"?\n\nIt seems completely fine to me, but I'll change it somehow to avoid\nannoying you. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:00:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Sun, Jul 31, 2022 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Indeed. 
I've not read the patch, but I just wanted to mention that\n> the cfbot shows it as failing regression tests on all platforms.\n> Possibly a conflict with some recent commit?\n\nI can't see this on cfbot - either I don't know how to use it\nproperly, which is quite possible, or the results aren't showing up\nbecause of the close of the July CommitFest.\n\nI tried a rebase locally and it didn't seem to change anything\nmaterial, not even context lines.\n\nCan you provide a link or something that I can look at?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:33:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Jul 31, 2022 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Indeed. I've not read the patch, but I just wanted to mention that\n>> the cfbot shows it as failing regression tests on all platforms.\n>> Possibly a conflict with some recent commit?\n\n> I can't see this on cfbot - either I don't know how to use it\n> properly, which is quite possible, or the results aren't showing up\n> because of the close of the July CommitFest.\n\nI think the latter --- the cfbot thinks the July CF is no longer relevant,\nbut Jacob hasn't yet moved your patches forward. You could wait for\nhim to do that, or do it yourself.\n\n(Probably our nonexistent SOP manual for CFMs ought to say \"don't\nclose the old CF till you've moved everything forward\".)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Aug 2022 13:38:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Mon, Aug 1, 2022 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think the latter --- the cfbot thinks the July CF is no longer relevant,\n> but Jacob hasn't yet moved your patches forward. 
You could wait for\n> him to do that, or do it yourself.\n\nDone. New patches attached.\n\nChanges in v4, for 0001:\n\n- Typo fix.\n- Whitespace fixes.\n\nChanges in v4, for 0002:\n\n- Remove \"XXX sketchy\" comment because the thing in question turns out\nnot to be sketchy. It has to do with the behavior of ALTER GROUP ..\nDROP USER and, having investigated the situation, I think the\nmessaging is clear enough.\n- But just to be sure, add a note to the ALTER GROUP documentation to\ntry to make things more clear.\n- Wording fixes to the \"If <literal>GRANTED BY</literal> is\nspecified...\" paragraph of the GRANT documentation. I reworded this a\nbit more extensively than what Stephen proposed. Hopefully this is\nclearer now, or at least no longer missing any words.\n- Change message to \"admin option cannot be granted back to your own\ngrantor\". The choice of message is intended to be consistent with the\nexisting message \"grant options cannot be granted back to your own\ngrantor,\" but while there's one grant option per privilege, there's\nonly one admin option. Stephen suggested adopting a message that I had\nmeant to take out of the version I posted, but which ended up\nsurviving in one place, \"grants with admin options cannot be\ncircular\". 
And we could still decide to do something like that, but my\nenthusiasm for that direction was considerably reduced when I realized\nthat \"circular\" is not very clear at all, because there are multiple\nkinds of circularities (role-member, member-grantor).\n- Fix comment to say DROP_RESTRICT instead of DROP_RECURSE.\n- Make the comment for check_role_grantor() longer so that it can\nbetter explain itself.\n- Rephrase part of the header comment for initialize_revoke_actions()\nbecause Stephen found it awkward.\n- Whitespace fixes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 1 Aug 2022 15:51:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Mon, Aug 1, 2022 at 10:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I can't see this on cfbot - either I don't know how to use it\n> > properly, which is quite possible, or the results aren't showing up\n> > because of the close of the July CommitFest.\n>\n> I think the latter --- the cfbot thinks the July CF is no longer relevant,\n> but Jacob hasn't yet moved your patches forward. You could wait for\n> him to do that, or do it yourself.\n>\n> (Probably our nonexistent SOP manual for CFMs ought to say \"don't\n> close the old CF till you've moved everything forward\".)\n\nSorry about that. I've made a note to add this to the manual later.\n\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 10:48:53 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Mon, Aug 1, 2022 at 3:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Aug 1, 2022 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think the latter --- the cfbot thinks the July CF is no longer relevant,\n> > but Jacob hasn't yet moved your patches forward. 
You could wait for\n> > him to do that, or do it yourself.\n>\n> Done. New patches attached.\n\nWell, CI isn't happy with this, and for good reason:\n\n ALTER GROUP regress_priv_group2 ADD USER regress_priv_user2; -- duplicate\n-NOTICE: role \"regress_priv_user2\" has already been granted\nmembership in role \"regress_priv_group2\" by role \"rhaas\"\n+NOTICE: role \"regress_priv_user2\" has already been granted\nmembership in role \"regress_priv_group2\" by role \"postgres\"\n\nThe problem here is that I revised the error message to include the\nname of the grantor, since that's now a part of the identity of the\ngrant. It would be misleading to say, as we did previously...\n\nNOTICE: role \"regress_priv_user2\" is already a member of role\n\"regress_priv_group2\"\n\n...because them being in the group isn't relevant so much as them\nbeing in the group by means of the same grantor. However, I suspect\nthat I can't persuade all of you that we should hard-code the name of\nthe bootstrap superuser as \"rhaas\", so this test case needs some\nalteration. I found, however, that the original intent of the test\ncase couldn't be preserved with the patch as written, because when you\ngrant membership in one role to another role as the superuser or a\nCREATEROLE user, the grant is attributed to the bootstrap superuser,\nwhose name is variable, as this test failure shows. Therefore, to fix\nthe test, I needed to use ALTER GROUP as a non-CREATEROLE user, some\nuser created as part of the test, for the results to be stable. 
But\nthat was impossible, because even though \"GRANT user_name TO\ngroup_name\" requires *either* CREATEROLE *or* ADMIN OPTION on the\ngroup, the equivalent command \"ALTER GROUP group_name ADD USER\nuser_name\" requires specifically CREATEROLE.\n\nI debated whether to fix that inconsistency or just remove this test\ncase and eventually came down on the side of fixing the inconsistency,\nso the attached version does it that way.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Aug 2022 16:28:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Wed, Aug 10, 2022 at 4:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, CI isn't happy with this, and for good reason:\n\nCI is happier with this version, so I've committed 0001. If no major\nproblems emerge, I'll proceed with 0002 as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Aug 2022 13:26:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Thu, Aug 18, 2022 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Aug 10, 2022 at 4:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Well, CI isn't happy with this, and for good reason:\n>\n> CI is happier with this version, so I've committed 0001. 
If no major\n> problems emerge, I'll proceed with 0002 as well.\n\nDone.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Aug 2022 11:47:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Mon, 2022-08-22 at 11:47 -0400, Robert Haas wrote:\n> On Thu, Aug 18, 2022 at 1:26 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > On Wed, Aug 10, 2022 at 4:28 PM Robert Haas <robertmhaas@gmail.com>\n> > wrote:\n> > > Well, CI isn't happy with this, and for good reason:\n> > \n> > CI is happier with this version, so I've committed 0001. If no\n> > major\n> > problems emerge, I'll proceed with 0002 as well.\n> \n> Done.\n\nIt's still on the CF, so I took a look.\n\nThere's still some weirdness around superusers:\n\n1. \"GRANTED BY current_user\" differs from not specifying \"GRANTED BY\"\nat all.\n\n a. With GRANTED BY current_user, weird because current_user is a\nsuperuser:\n\n CREATE USER su1 SUPERUSER;\n CREATE ROLE u1;\n CREATE ROLE u2;\n \\c - su1\n GRANT u2 TO u1 GRANTED BY current_user;\n ERROR: grantor must have ADMIN OPTION on \"u2\"\n\n b. Without GRANTED BY:\n\n CREATE USER su1 SUPERUSER;\n CREATE ROLE u1;\n CREATE ROLE u2;\n \\c - su1\n GRANT u2 TO u1;\n -- grantor is bootstrap superuser\n\n2. Grantor can depend on the path to get there:\n\n a. Already superuser:\n\n CREATE USER su1 SUPERUSER;\n CREATE ROLE u1;\n CREATE ROLE u2;\n GRANT u2 TO su1 WITH ADMIN OPTION;\n \\c - su1\n GRANT u2 TO u1;\n -- grantor is bootstrap superuser\n\n b. Becomes superuser after GRANT:\n\n CREATE USER su1;\n CREATE ROLE u1;\n CREATE ROLE u2;\n GRANT u2 TO su1 WITH ADMIN OPTION;\n \\c - su1\n GRANT u2 TO u1;\n \\c - bootstrap_superuser\n ALTER ROLE su1 SUPERUSER;\n -- grantor is su1\n\n3. Another case where \"GRANTED BY current_user\" differs from no\n\"GRANTED BY\" at all, with slightly different consequences:\n\n a. 
GRANTED BY current_user, throws error:\n\n  CREATE USER su1 SUPERUSER;\n CREATE ROLE u1;\n CREATE ROLE u2;\n GRANT u2 TO su1 WITH ADMIN OPTION;\n \\c - su1\n GRANT u2 TO u1 GRANTED BY current_user;\n -- grantor is su1\n \\c - bootstrap_superuser\n REVOKE ADMIN OPTION FOR u2 FROM su1;\n ERROR: dependent privileges exist\n\n b. No GRANTED BY, no error:\n\n CREATE USER su1 SUPERUSER;\n CREATE ROLE u1;\n CREATE ROLE u2;\n GRANT u2 TO su1 WITH ADMIN OPTION;\n \\c - su1\n GRANT u2 TO u1;\n -- grantor is bootstrap superuser\n \\c - boostrap_superuser\n REVOKE ADMIN OPTION FOR u2 FROM su1;\n\n\n\nWe seem to be trying very hard to satisfy two things that seem\nimpossible to satisfy:\n\n i. \"ALTER ROLE ... NOSUPERUSER\" must always succeed, and probably\nexecute quickly, too.\n ii. We want to maintain catalog invariants that are based, in part,\non roles having superuser privileges or not.\n\nThe hacks we are using to try to make this work are just that: hacks.\nAnd it's all to satisfy a fairly rare case: removing superuser\nprivileges and expecting the catalogs to be consistent.\n\nI think we'd be better off without these hacks. I'm not sure exactly\nhow, but the benefit doesn't seem to be worth the cost. Some\nalternative ideas:\n\n * Have a \"safe\" version of removing superuser that can error or\ncascade, and an \"unsafe\" version that always succeeds but might leave\ninconsistent catalogs.\n * Ignore the problems with removing superuser, but issue a WARNING\n * Superusers would auto-grant themselves the privileges that a normal\nuser would need to do something before doing it. For instance, if a\nsuperuser did \"GRANT u2 TO u1\", it would first automatically issue a\n\"GRANT u2 TO current_user WITH ADMIN OPTION GRANTED BY\nbootstrap_superuser\", then do the grant normally. 
If the superuser\nprivileges are removed, then the catalogs would still be consistent.\nThis is a new idea and I didn't think it through very carefully, but\nmight be an interesting approach.\n\nAlso, it would be nice to have REASSIGN OWNED work with grants, perhaps\nby adding a \"WITH[OUT] GRANT\" or something.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n\n\n", "msg_date": "Thu, 01 Sep 2022 13:34:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Thanks for having a look.\n\nOn Thu, Sep 1, 2022 at 4:34 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> There's still some weirdness around superusers:\n>\n> 1. \"GRANTED BY current_user\" differs from not specifying \"GRANTED BY\"\n> at all.\n\nYes. I figured that, when GRANTED BY is not specified, it is OK to\ninfer a valid grantor, but if it is specified, it does not seem right\nto infer a grantor other than the one specified. Admittedly, this case\nis without precedent elsewhere in the system, because nobody has made\nGRANTED BY work for other object types, outside of trivial cases.\nStill, it seems like the right behavior to me.\n\n> 2. Grantor can depend on the path to get there:\n>\n> a. Already superuser:\n>\n> CREATE USER su1 SUPERUSER;\n> CREATE ROLE u1;\n> CREATE ROLE u2;\n> GRANT u2 TO su1 WITH ADMIN OPTION;\n> \\c - su1\n> GRANT u2 TO u1;\n> -- grantor is bootstrap superuser\n>\n> b. Becomes superuser after GRANT:\n>\n> CREATE USER su1;\n> CREATE ROLE u1;\n> CREATE ROLE u2;\n> GRANT u2 TO su1 WITH ADMIN OPTION;\n> \\c - su1\n> GRANT u2 TO u1;\n> \\c - bootstrap_superuser\n> ALTER ROLE su1 SUPERUSER;\n> -- grantor is su1\n\nThis also seems correct to me, and here I believe you could construct\nsimilar examples with other object types. We infer the grantor based\non the state of the system at the time the grant was performed. 
We\ncan't change our mind later even if things have changed that would\ncause us to make a different inference. In the case of a table, for\nexample, consider:\n\ncreate role p1;\ncreate role p2;\ncreate role a;\ncreate table t1 (a int);\ncreate role b;\ngrant select on table t1 to p1 with grant option;\ngrant select on table t1 to p2 with grant option;\ngrant p1 to a;\nset session authorization a;\ngrant select on table t1 to b;\n\nAt this point, b has SELECT permission on table t1 and the grantor of\nrecord is p1. But if you had done \"GRANT p2 TO a\" then the grantor of\nrecord would be p2 rather than p1. And you can still \"REVOKE p1 FROM\na;\" and then \"GRANT p2 to a;\". As in your example, doing so won't\nchange the grantor recorded for the grant already made.\n\n> 3. Another case where \"GRANTED BY current_user\" differs from no\n> \"GRANTED BY\" at all, with slightly different consequences:\n\nIt's extremely difficult for me to imagine what other behavior would\nbe sane here. In this example, the inferred best grantor is different\nfrom the current user, so forcing the grantor to be the current user\nchanges the behavior. There are only two ways that anything different\ncan happen: either we'd have to change the algorithm for inferring the\nbest grantor, or we'd have to be willing to disregard the user's\nexplicit specification that the grantor be the current user rather\nthan somebody else.\n\nAs to the first, the algorithm being used to select the best grantor\nhere is analogous to the one we use for privileges on other object\ntypes, such as tables, namely, we prefer to create a grant that is not\ndependent on some other grant, rather than one that is. Maybe that's\nthe best policy and maybe it isn't, but I can't see it being\nreasonable to have one policy for grants on tables, functions, etc.\nand another policy for grants on roles.\n\nAs to the second, this is somewhat similar to the case you already\nraised in your example #1. 
However, in that case, the
explicitly-specified grantor wasn't valid, so the grant failed. I
don't think it's right to allow inference in the presence of an
explicit specification, but if the consensus was that we really ought
to make that case succeed, I suppose we could. Here, however, the
explicitly-specified grantor *is a legal grantor*. I think it would be
extremely surprising if we just ignored that and selected some other
valid grantor instead.

> We seem to be trying very hard to satisfy two things that seem
> impossible to satisfy:
>
> i. "ALTER ROLE ... NOSUPERUSER" must always succeed, and probably
> execute quickly, too.
> ii. We want to maintain catalog invariants that are based, in part,
> on roles having superuser privileges or not.
>
> The hacks we are using to try to make this work are just that: hacks.
> And it's all to satisfy a fairly rare case: removing superuser
> privileges and expecting the catalogs to be consistent.

I guess I don't really agree with that view of it. The primary purpose
of the patch was to make the handling of role grants consistent with
the handling of grants on other object types. I did extend the
existing functionality, because the GRANTED BY <whoever> clause works
for role grants and does not work for other grants. However, that also
worked for role grants before these patches, whereas it's never worked
for other object types. So I chose to restrict that functionality as
little as possible, and basically make it work, rather than removing
it completely, which would have been the most consistent with what we
do elsewhere.

When you view this in the context of how other types of grants work,
ALTER ROLE ... NOSUPERUSER isn't as much of a special case. Just as we
want ALTER ROLE ... NOSUPERUSER to succeed quickly, we also insist
that REVOKE role1 FROM role2 succeed quickly. 
It isn't allowed to\nfail due to the existence of dependent privileges, because there\naren't allowed to be any dependent privileges. GRANT role1 TO role2\ndoesn't really give role2 the privileges of role1; what it does is\nallow role2 to act on behalf of role1. Similarly, ALTER ROLE ...\nSUPERUSER lets the target role act on behalf of any user at all,\nincluding the bootstrap superuser. In either case, actions are\nattributed to the user on behalf of whom they were performed, not the\nuser who actually typed the command.\n\nAs another example, consider a superuser (the bootstrap superuser or\nany other one) who executes GRANT SELECT ON some_random_table TO\nsome_random_user. Who will be recorded as the grantor? The answer is\nthat the table owner will be recorded as the grantor, because the\ntable owner is the one who actually has permission to perform the\noperation. The superuser doesn't, except by virtue of their ability to\nact on behalf of any other user in the system. In most cases, that's\njust an academic distinction, because the question is only whether or\nnot the operation can be performed, and not who has to perform it. But\ngrants are different: it matters who does it, and when someone uses\nsuperuser powers or other special privileges to perform an operation,\nwe have to ask on whose behalf they are acting.\n\n> I think we'd be better off without these hacks. I'm not sure exactly\n> how, but the benefit doesn't seem to be worth the cost. Some\n> alternative ideas:\n>\n> * Have a \"safe\" version of removing superuser that can error or\n> cascade, and an \"unsafe\" version that always succeeds but might leave\n> inconsistent catalogs.\n> * Ignore the problems with removing superuser, but issue a WARNING\n\nI don't like either of these. 
I think the fact that we have strong\nintegrity constraints around who can be recorded as the grantor of a\nprivilege is a good thing, and, again, the purpose of this patch was\nto bring role grants up to the level of other parts of the system.\n\n> * Superusers would auto-grant themselves the privileges that a normal\n> user would need to do something before doing it. For instance, if a\n> superuser did \"GRANT u2 TO u1\", it would first automatically issue a\n> \"GRANT u2 TO current_user WITH ADMIN OPTION GRANTED BY\n> bootstrap_superuser\", then do the grant normally. If the superuser\n> privileges are removed, then the catalogs would still be consistent.\n> This is a new idea and I didn't think it through very carefully, but\n> might be an interesting approach.\n\nIf we did this, we ought to do it for all object types, so that if a\nsuperuser grants privileges on a table they don't own, they implicitly\ngrant themselves those privileges with grant option and then grant\nthem to the requested recipient. I doubt that behavior change would be\npopular, and I bet somebody would complain about the SQL standard or\nsomething, but it seems more theoretically sound than the previous two\nideas, because it doesn't just throw the idea of integrity constraints\nout the window.\n\n> Also, it would be nice to have REASSIGN OWNED work with grants, perhaps\n> by adding a \"WITH[OUT] GRANT\" or something.\n\nI thought about this, too. It's a bit tricky. Right now, DROP OWNED\ndrops grants, but REASSIGN OWNED doesn't change their owner. On first\nglance, this seems inconsistent: either grants are a kind of object\nand DROP OWNED and REASSIGN OWNED ought to apply to them like anything\nelse, or they are not a type of object and neither command should\ntouch them. However, there's a pretty significant difference between\n(1) a table and (2) a grant of privileges on a table. Ownership on the\ntable itself can be freely changed to any role in the system at any\ntime. 
We rewrite the table's ACL on the fly to preserve the invariants
about who can be listed as the grantor. But for the grant of
privileges on the table, we can't freely change the grantor of record
to an arbitrary user at any time: the set of valid grantors is
constrained.

What might be useful is a command that says "OK, for every existing
grant that is attributed to user A, change the recorded grantor to
user B, if that's allowable, for the others, do nothing". Or maybe
there's some possible idea where we try to somehow make B into a valid
grantor, but it's not clear to me what the algorithm would be.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


", "msg_date": "Fri, 2 Sep 2022 09:30:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, 2022-09-02 at 09:30 -0400, Robert Haas wrote:
> Thanks for having a look.

Thanks for doing the work.

> Yes. I figured that, when GRANTED BY is not specified, it is OK to
> infer a valid grantor

The spec is clear that the grantor should be either the current user or
the current role. We also have a concept of INHERIT, which allows us to
choose a role we're a member of if the current one does not suffice.

But to choose a different role (the bootstrap superuser) even when the
current (super) user role *does* suffice seems like an outright
violation of both the spec and the principle of least surprise.
> 

> set session authorization a;
> grant select on table t1 to b;
> 
> At this point, b has SELECT permission on table t1 and the grantor of
> record is p1

That's because "a" does not have permission to grant select on t1, so
INHERIT kicks in to implicitly "SET ROLE p1". What keeps INHERIT sane
is that it only kicks in when required (i.e. 
it would otherwise result\nin failure).\n\nBut in the case I raised, the current user is an entirely valid\ngrantor, so it doesn't make sense to me to infer a different grantor.\n\n> \n\n> As to the first, the algorithm being used to select the best grantor\n> here is analogous to the one we use for privileges on other object\n> types, such as tables, namely, we prefer to create a grant that is\n> not\n> dependent on some other grant, rather than one that is.\n\nI don't quite follow. It seems like we're conflating a policy based on\nINHERIT with the policy around grants by superusers.\n\nIn the case of role membership and INHERIT, our current behavior seems\nwise (and closer to the standard): to prefer a grantor that is closer\nto the current user/role, and therefore less dependent on other grants.\n\nBut for the new policy around superusers, the current superuser is a\ncompletely valid grantor, and we instead prefer the bootstrap\nsuperuser. That doesn't seem consistent or wise to me.\n\n\n> \n\n> The primary purpose\n> of the patch was to make the handling of role grants consistent with\n> the handling of grants on other object types.\n\nI certainly don't want to pin every weird thing about our privilege\nsystem on you just because you're the last one to touch it. But your\nchanges did extend the behavior, and create some new analogous\nbehavior, so it seems like a reasonable time to discuss whether those\nextensions are in the right direction.\n\n> When you view this in the context of how other types of grants work,\n> ALTER ROLE ... NOSUPERUSER isn't as much of a special case. Just as\n> we\n> want ALTER ROLE ... NOSUPERUSER to succeed quickly, we also insist\n> that REVOKE role1 FROM role2 succeed quickly. 
It isn't allowed to\n> fail due to the existence of dependent privileges, because there\n> aren't allowed to be any dependent privileges.\n\n create user u1;\n create user u2;\n create user u3;\n grant u2 to u1 with admin option;\n \\c - u1\n grant u2 to u3;\n \\c - bootstrap_superuser\n revoke u2 from u1;\n ERROR: dependent privileges exist\n\n> But\n> grants are different: it matters who does it, and when someone uses\n> superuser powers or other special privileges to perform an operation,\n> we have to ask on whose behalf they are acting.\n\nIf superusers merely act on behalf of others, then:\n\n 1. Why can the bootstrap superuser be a grantor?\n 2. Why can non-bootstrap superusers specify themselves in GRANTED BY\nif they are not suitable grantors?\n\n> \n> I think the fact that we have strong\n> integrity constraints around who can be recorded as the grantor of a\n> privilege is a good thing, and, again, the purpose of this patch was\n> to bring role grants up to the level of other parts of the system.\n\nI like integrity constraints, too. But it feels like we're recording\nthe wrong information (losing the actual grantor) because it's easier\nto keep it \"consistent\", which doesn't necessarily seem like a win.\n\nAnd the whole reason we are jumping through all of these hoops is\nbecause we want to allow the removal of superuser privileges quickly\nwithout the possibility of failure. In other words, we don't have time\nto do the work of cascading to dependent objects, or erroring when we\nfind them. I'm not entirely sure I agree that's a hard requirement,\nbecause dropping a superuser can fail. But even if it is a requirement,\nare we even meeting it if we preserve the grants that the former\nsuperuser created? 
I'd like to know more about this requirement, and\nwhether we are still meeting it, and whether there are alternatives.\n\nIt just feels like this edge case requirement about dropping superuser\nprivileges is driving the whole design, and that feels wrong to me.\n\n> >   * Superusers would auto-grant themselves the privileges that a\n> > normal\n> > user would need to do something before doing it. For instance, if a\n> > superuser did \"GRANT u2 TO u1\", it would first automatically issue\n> > a\n> > \"GRANT u2 TO current_user WITH ADMIN OPTION GRANTED BY\n> > bootstrap_superuser\", then do the grant normally. \n\n...\n\n> it seems more theoretically sound than the previous two\n> ideas, because it doesn't just throw the idea of integrity\n> constraints\n> out the window.\n\nPerhaps it's worth considering further. Would be a separate patch, of\ncourse.\n\n> > Also, it would be nice to have REASSIGN OWNED work with grants,\n> > perhaps\n> > by adding a \"WITH[OUT] GRANT\" or something.\n\n...\n\n> What might be useful is a command that says \"OK, for every existing\n> grant that is attributed to user A, change the recorded grantor to\n> user B, if that's allowable, for the others, do nothing\". Or maybe\n> there's some possible idea where we try to somehow make B into a\n> valid\n> grantor, but it's not clear to me what the algorithm would be.\n\nI was thinking that if the new grantor is not allowable, and \"WITH\nGRANT\" (or whatever) was specified, then it would throw an error.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 02 Sep 2022 15:01:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Fri, Sep 2, 2022 at 6:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Yes. I figured that, when GRANTED BY is not specified, it is OK to\n> > infer a valid grantor\n>\n> The spec is clear that the grantor should be either the current user or\n> the current role. 
We also have a concept of INHERIT, which allows us to\n> choose a role we're a member of if the current one does not suffice.\n>\n> But to choose a different role (the bootstrap superuser) even when the\n> current (super) user role *does* suffice seems like an outright\n> violation of both the spec and the principle of least surprise.\n\nI don't think that the current superuser role suffices. For non-role\nobjects, privileges originate in the table owner and can then be\ngranted to others. Roles don't have an explicit owner, so I treated\nthe bootstrap superuser as the implicit owner of every role. Perhaps\nthere is some other way we could go here - e.g. it's been proposed by\nmultiple people that maybe roles should have owners - but I do not\nthink it is viable to regard the owner of a role as being anyone who\nhappens to be a superuser right at the moment. To some extent that's\nrelated to your concern about whether ALTER USER .. NOSUPERUSER should\nbe fast and immune to failure, but I also think that it is a good idea\nto have all of the privileges originating from a single owner. That\nensures, for example, that anyone who can act as the object owner can\nrevoke any privilege, which wouldn't necessarily be true if the object\nhad multiple owners. Now if all of the owners are themselves\nsuperusers who all have the power to become any of the other owners\nthen perhaps it wouldn't end up mattering too much, but it doesn't\nseem like a good idea to rely on that. In fact, part of my goal here\nis to get to a world where there's less need to rely on superuser\npowers to do system administration. 
I also just think it's less\nconfusing if objects have single owners rather than nebulous groups of\nowners.\n\n> > set session authorization a;\n> > grant select on table t1 to b;\n> >\n> > At this point, b has SELECT permission on table t1 and the grantor of\n> > record is p1\n>\n> That's because \"a\" does not have permission to grant select on t1, so\n> INHERIT kicks in to implicitly \"SET ROLE p1\". What keeps INHERIT sane\n> is that it only kicks in when required (i.e. it would otherwise result\n> in failure).\n>\n> But in the case I raised, the current user is an entirely valid\n> grantor, so it doesn't make sense to me to infer a different grantor.\n\nSee above, but also, see the first stanza of select_best_grantor(). If\nalice is a table owner, and grants permissions to bob WITH GRANT\nOPTION, and bob is a superuser and grants permissions on the table,\nthe grantor will be alice, not bob.\n\n> > As to the first, the algorithm being used to select the best grantor\n> > here is analogous to the one we use for privileges on other object\n> > types, such as tables, namely, we prefer to create a grant that is\n> > not\n> > dependent on some other grant, rather than one that is.\n>\n> I don't quite follow. It seems like we're conflating a policy based on\n> INHERIT with the policy around grants by superusers.\n>\n> In the case of role membership and INHERIT, our current behavior seems\n> wise (and closer to the standard): to prefer a grantor that is closer\n> to the current user/role, and therefore less dependent on other grants.\n>\n> But for the new policy around superusers, the current superuser is a\n> completely valid grantor, and we instead prefer the bootstrap\n> superuser. 
That doesn't seem consistent or wise to me.\n\nI hope that the above comments on treating the bootstrap superuser as\nthe object owner explain why it works this way.\n\n> I certainly don't want to pin every weird thing about our privilege\n> system on you just because you're the last one to touch it. But your\n> changes did extend the behavior, and create some new analogous\n> behavior, so it seems like a reasonable time to discuss whether those\n> extensions are in the right direction.\n\nSure.\n\n> > When you view this in the context of how other types of grants work,\n> > ALTER ROLE ... NOSUPERUSER isn't as much of a special case. Just as\n> > we\n> > want ALTER ROLE ... NOSUPERUSER to succeed quickly, we also insist\n> > that REVOKE role1 FROM role2 to succeed quickly. It isn't allowed to\n> > fail due to the existence of dependent privileges, because there\n> > aren't allowed to be any dependent privileges.\n>\n> create user u1;\n> create user u2;\n> create user u3;\n> grant u2 to u1 with admin option;\n> \\c - u1\n> grant u2 to u3;\n> \\c - bootstrap_superuser\n> revoke u2 from u1;\n> ERROR: dependent privileges exist\n\nHmm, I stand corrected. I was thinking of a case in which the grant\nwas used to perform an action on behalf of an inherited role. Here the\ngrant from u2 to u3 is performed as u1 and attributed to u1.\n\n> And the whole reason we are jumping through all of these hoops is\n> because we want to allow the removal of superuser privileges quickly\n> without the possibility of failure. In other words, we don't have time\n> to do the work of cascading to dependent objects, or erroring when we\n> find them. I'm not entirely sure I agree that's a hard requirement,\n> because dropping a superuser can fail. But even if it is a requirement,\n> are we even meeting it if we preserve the grants that the former\n> superuser created? 
I'd like to know more about this requirement, and\n> whether we are still meeting it, and whether there are alternatives.\n>\n> It just feels like this edge case requirement about dropping superuser\n> privileges is driving the whole design, and that feels wrong to me.\n\nI'm struggling to figure out how to reply to this exactly. I do agree\nthat the way ALTER ROLE .. [NO]SUPERUSER thing works is something of a\nwart, and if we were designing SQL from scratch all over again in\n2022, I think it's reasonably likely that a lot of things would end up\nworking quite a bit differently than they actually do. But, at the\nsame time, it also seems to me that (1) the way ALTER ROLE ..\n[NO]SUPERUSER works is pretty firmly entrenched at this point and we\ncan't easily get away with changing it; (2) I don't really see an easy\nway of changing it that wouldn't cause more problems than it solves;\nand (3) it all seems relatively unrelated to this patch.\n\nLike, the logic to infer the grantor in check_role_grantor() and\nselect_best_admin() is intended to be, and as far as I know actually\nis, an exact clone of the logic in select_best_grantor(). It is\ndifferent only in that we regard the bootstrap superuser as the object\nowner because there is no other owner stored in the catalogs; and in\nthat we check CREATEROLE permission rather than SUPERUSER permission.\nEverything else is the same. 
To be unhappy with the patch, you have to\nthink either that (1) treating the bootstrap superuser as the owner of\nevery role is the wrong idea or (2) that role grants should not choose\nan implicit grantor in the same way that other types of grants do or\n(3) that the code has a bug.\n\nIf you don't think any of those things but believe that the way we've\nmade superusers interact with the grant system is lame in general, I\nsomewhat agree, but if we came up with some new paradigm for how it\nshould work, we'd have to explain why it was sufficiently better than\nthe status quo to justify breaking backward compatibility, and I think\nthat would be a hard argument to make. The current system feels kind\nof old-fashioned and awkward, but it's self-consistent on its terms\nand I bet a lot of people are relying on it to keep working. And I\nthink if we were going to replace it with something that feels fresh\nand modern, focusing on the behavior of ALTER ROLE .. [NO]SUPERUSER\nwould be the wrong place to start. That, to me, seems like it's\n*mostly* a consequence of much broader design choices, like:\n\n- Having hard-coded superuser checks in many places instead of making\neverything a capability.\n- Having potentially any number of superusers, instead of just one root user.\n- Having granted privileges depend on the grantor continuing to hold\nthe granted privilege, instead of existing independently.\n\nIf I were designing a privilege system for a new piece of software\nthat didn't need to comply with the SQL standard, I think I'd throw at\nleast some and maybe all of those things right out the window. But if I\ndesigned a system that had to work within that set of assumptions, I\nthink I'd make it work pretty much the way it actually does.\n\n> > What might be useful is a command that says \"OK, for every existing\n> > grant that is attributed to user A, change the recorded grantor to\n> > user B, if that's allowable, for the others, do nothing\". 
Or maybe\n> > there's some possible idea where we try to somehow make B into a\n> > valid\n> > grantor, but it's not clear to me what the algorithm would be.\n>\n> I was thinking that if the new grantor is not allowable, and \"WITH\n> GRANT\" (or whatever) was specified, then it would throw an error.\n\nThat could be done too, but then every grant attributed to the target\nrole would have to be validly reattributable to the same new grantor.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Sep 2022 13:15:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Tue, 2022-09-06 at 13:15 -0400, Robert Haas wrote:\n\n\n> Like, the logic to infer the grantor in check_role_grantor() and\n> select_best_admin() is intended to be, and as far as I know actually\n> is, an exact clone of the logic in select_best_grantor(). It is\n> different only in that we regard the bootstrap superuser as the\n> object\n> owner because there is no other owner stored in the catalogs; and in\n> that we check CREATEROLE permission rather than SUPERUSER permission.\n\nThere's at least one other difference: if you specify \"GRANTED BY su1\"\nfor a table grant, it still selects the table owner as the grantor;\nwhereas if you specify \"GRANTED BY su1\" for a role grant, it selects\n\"su1\".\n\n grant all privileges on schema public to public;\n create user su1 superuser;\n create user u1;\n create user u2;\n create user aa;\n grant u2 to su1 with admin option;\n \\c - aa\n create table t_aa(i int);\n grant all privileges on t_aa to su1 with grant option;\n \\c - su1\n grant select on t_aa to u1 granted by su1;\n -- grantor aa\n select relname, relacl from pg_class where relname='t_aa';\n grant u2 to u1 granted by su1; -- grantor su1\n -- grantor su1\n select grantor::regrole from pg_auth_members\n where member='u1'::regrole;\n\n[ If you run the same example but 
where su1 is not a superuser, then\nboth select \"su1\" as the grantor because that's the only valid grantor\nthat can be inferred. ]\n\nNow that I understand the underlying philosophy better, and I've\nexperimented with more cases, I propose the following grantor inference\nbehavior which I believe is in the spirit of your changes:\n\n * Let the granting user be the one specified in the GRANTED BY clause\nif it exists; otherwise the current user. In other words, omitting\nGRANTED BY is the same as specifying \"GRANTED BY current_user\".\n * If the granting user has privileges to be the grantor (ADMIN OPTION\nfor roles, GRANT OPTION for other objects) then the granting user is\nthe grantor.\n * Else if the granting user inherits from a user with the privileges\nto be the grantor, then it selects a role with the fewest inheritance\nhops as the grantor.\n * Else if the current user is any superuser:\n - If the grant is a role grant, it selects the bootstrap superuser\nas the grantor.\n - Else the object owner is the grantor.\n * Else error (or if an error would break important backwards\ncompatibility, silently make it work like before or perhaps issue a\nWARNING).\n\nIn other words, try to issue the grant normally if at all possible, and\nplay the superuser card as a last resort. 
I believe that will lead to\nthe fewest surprising cases, and make them easiest to explain, because\nsuperuser-ness doesn't influence the outcome in as many cases.\n\nIt cements the idea that the bootstrap superuser is the \"real\"\nsuperuser, and must always remain so, and that all other superusers are\ntemporary stand-ins (kind of but not quite the same as inheritance).\nAnd it leaves the ugliness that we lose the information about the\n\"real\" grantor when we play the superuser card, but, as I say above,\nthat would be a last resort.\n\nThe proposal would be a slight behavior change from v15 in the\nfollowing case:\n\n grant all privileges on schema public to public;\n create user su1 superuser;\n create user u1;\n create user aa;\n \\c - aa\n create table t_aa(i int);\n grant all privileges on t_aa to su1 with grant option;\n \\c - su1\n grant select on t_aa to u1 granted by su1;\n -- grantor \"aa\" in v15, grantor \"su1\" after my proposal\n select relname, relacl from pg_class where relname='t_aa';\n\nAnother change in behavior would be that the bootstrap superuser could\nbe the grantor for table privileges, if the bootstrap superuser has\nWITH GRANT OPTION privileges.\n\nBut those seem minor to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 06 Sep 2022 16:26:12 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Tue, 2022-09-06 at 16:26 -0700, Jeff Davis wrote:\n> In other words, omitting\n> GRANTED BY is the same as specifying \"GRANTED BY current_user\".\n\nLet me correct this thinko to distinguish between specifying GRANTED BY\nand not:\n\n * Let the granting user be the one specified in the GRANTED BY clause\nif it exists; otherwise the current user.\n * If the granting user has privileges to be the grantor (ADMIN OPTION\nfor roles, GRANT OPTION for other objects) then the granting user is\nthe grantor.\n * Else if GRANTED BY was *not* 
specified, infer the grantor:\n - If the granting user inherits from a role with the privileges\nto be the grantor, then it selects a role with the fewest inheritance\nhops as the grantor.\n - Else if the current user is any superuser, the grantor is the top\n\"owner\" (bootstrap superuser for roles; object owner for other objects)\n * Else error (or if an error would break important backwards\ncompatibility, silently make it work like before and perhaps issue a\nWARNING).\n\nThe basic idea is to use superuser privileges as a last resort in order\nto maximize the cases that work normally (independent of superuser-\nness).\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 06 Sep 2022 18:27:32 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Tue, Sep 6, 2022 at 7:26 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> There's at least one other difference: if you specify \"GRANTED BY su1\"\n> for a table grant, it still selects the table owner as the grantor;\n> whereas if you specify \"GRANTED BY su1\" for a role grant, it selects\n> \"su1\".\n\nRight. Personally, I'm inclined to view that as a defect in the\n\"GRANTED BY whoever\" implementation for other object types, and I\nthink it should be resolved by making other object types error out if\nthe user explicitly mentioned in the \"GRANTED BY\" clause isn't a valid\ngrantor. It also seems possible to view it as a defect in the new\nimplementation, and argue that inference should always be performed\nstarting at the named user. 
I find that a POLA violation, but someone\ncould disagree.\n\nParenthetically, I think we should also fix GRANTED BY for other\nobject types so that it actually works, but that is a bit of a headache\nbecause it doesn't seem like that code is relying as heavily on common\ninfrastructure as some things, so I believe it's actually a fair\namount of work to make that happen.\n\n> In other words, try to issue the grant normally if at all possible, and\n> play the superuser card as a last resort. I believe that will lead to\n> the fewest surprising cases, and make them easiest to explain, because\n> superuser-ness doesn't influence the outcome in as many cases.\n\nIt seems to me that this policy would reverse select_best_grantor()'s\ndecision about whether we should prefer to rely on superuser\nprivileges or on privileges actually granted to the current user. I\nthink either behavior is defensible, but the existing precedent is to\nprefer relying on superuser privileges. Like you, I found that a bit\nweird when I realized that's what it was doing, but it does have some\nadvantages. In particular, it means that the privileges granted by a\nsuperuser don't depend on any other grants, which is something that a\nuser might value.\n\nNow that is not to say that we couldn't decide that\nselect_best_grantor() got it wrong and choose to break backward\ncompatibility in order to fix it ... but I'm not even convinced that\nthe alternative behavior you propose is clearly better, let alone that\nit's enough better to justify changing things. 
However, I don't\npersonally have a strong preference about it one way or the other; if\nthere's a strong consensus to change it, so be it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Sep 2022 09:39:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Wed, 2022-09-07 at 09:39 -0400, Robert Haas wrote:\n> Now that is not to say that we couldn't decide that\n> select_best_grantor() got it wrong and choose to break backward\n> compatibility in order to fix it ... but I'm not even convinced that\n> the alternative behavior you propose is clearly better, let alone\n> that\n> it's enough better to justify changing things.\n\nOK. I suppose the best path forward is to just try to improve the\nability to administer the system without relying as much on superusers,\nwhich will allow us to safely ignore some of the weirdness caused by\nsuperusers issuing grants.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 07 Sep 2022 07:56:28 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "On Wed, Sep 7, 2022 at 10:56 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> OK. I suppose the best path forward is to just try to improve the\n> ability to administer the system without relying as much on superusers,\n> which will allow us to safely ignore some of the weirdness caused by\n> superusers issuing grants.\n\nYeah, and I think we might not even be that far away from making that\nhappen. There are still a few thorny design issues to work out, I\nbelieve, and there's also some complexity that is introduced by the\nfact that different people want different things. For example, last\nrelease cycle, I believed that the NOINHERIT behavior was a weird wart\nthat probably nobody cared about. 
That turned out to be false, really\nfalse.\n\nWhat I *personally* want most as an alternative to superuser is an\naccount that inherits all the privileges of the other accounts that it\nmanages, which might not be all the accounts on the system, and which\ncan also SET ROLE to those accounts. If you're logged into such an\naccount, you can do many of the things a superuser can do and in the\nsame ways that a superuser can do them. For example, if you've got\nsome pg_dump output, you could probably restore the dump using such an\naccount and privilege restoration would work, provided that the\nrequired accounts exist and that they're among the accounts managed by\nyour account.\n\nHowever, I think that other people want different things. For example,\nI think that Joshua Brindle mentioned wanting to have a user-creation\nbot that should be able to make new accounts but not access them in\nany way, and I think Stephen Frost was interested in semantics where\nyou could make accounts and be able to SET ROLE into them but not\ninherit their privileges. Or maybe they were both proposing the same\nthing: not quite sure. Anyway, it will perhaps turn out to be\nimpossible to give everybody 100% of everything they would like, but\nI'm thinking about a few ideas that might enable us to cater to a few\ndifferent scenarios - and I'm hopeful that it will be possible to\npropose something in time for inclusion in v16, but my ideas aren't\nquite well enough formulated yet to make a concrete proposal just yet,\nand when I do make such a proposal I want to do it on a new thread for\nbetter visibility.\n\nIn the meantime, I think that what has already been committed is\nclearly a step in the right direction. The patch which is the subject\nof this thread has basically brought the role grant code up to the\nlevel of other object types. 
I don't think it's an overstatement to\nsay that the previous state of affairs was that this feature just\ndidn't work properly and no one had cared enough to bother fixing it.\nThat always makes discussions about future enhancements harder. The\npatch to add grant-level control to the INHERIT option also seems to\nme to be a step in the right direction, since, at least IMHO, it is\nreally hard to reason about behavior when the heritability of a\nparticular grant is a property of the grantee rather than something\nwhich can be controlled by the grantor, or the system. If we can reach\nagreement on some of the other things that I have proposed,\nspecifically sorting out the issues under discussion on the\n\"has_privs_of_role vs. is_member_of_role, redux\" thread and adding the\nnew capability discussed on the \"allowing for control over SET ROLE\"\nthread, I think will be a further, useful step.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Sep 2022 13:00:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Aug 18, 2022 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> CI is happier with this version, so I've committed 0001. If no major\n>> problems emerge, I'll proceed with 0002 as well.\n\n> Done.\n\nShouldn't the CF entry [1] be closed as committed?\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/39/3745/\n\n\n", "msg_date": "Wed, 21 Sep 2022 16:53:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_auth_members.grantor is bunk" } ]
[ { "msg_contents": "These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n\nThe purpose of these patches is to allow non-superuser subscription owners without risk of them overwriting tables they lack privilege to write directly. This both allows subscriptions to be managed by non-superusers, and protects servers with subscriptions from malicious activity on the publisher side.\n\n[1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Oct 2021 11:40:39 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Non-superuser subscription owners" }, { "msg_contents": "On Wednesday, October 20, 2021 at 20:40:39 CEST, Mark Dilger wrote:\n> These patches have been split off the now deprecated monolithic \"Delegating\n> superuser tasks to new security roles\" thread at [1].\n> \n> The purpose of these patches is to allow non-superuser subscription owners\n> without risk of them overwriting tables they lack privilege to write\n> directly. This both allows subscriptions to be managed by non-superusers,\n> and protects servers with subscriptions from malicious activity on the\n> publisher side.\n\nThank you Mark for splitting this.\n\nThis patch looks good to me, and provides both better security (by closing the \"dropping superuser role\" loophole) and useful features. 
\n\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 09:26:30 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 10/20/21 14:40, Mark Dilger wrote:\n> These patches have been split off the now deprecated monolithic \"Delegating superuser tasks to new security roles\" thread at [1].\n>\n> The purpose of these patches is to allow non-superuser subscription owners without risk of them overwriting tables they lack privilege to write directly. This both allows subscriptions to be managed by non-superusers, and protects servers with subscriptions from malicious activity on the publisher side.\n>\n> [1] https://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n\n\nThese patches look good on their face. The code changes are very\nstraightforward.\n\n\nw.r.t. this:\n\n+   On the subscriber, the subscription owner's privileges are\nre-checked for\n+   each change record when applied, but beware that a change of\nownership for a\n+   subscription may not be noticed immediately by the replication workers.\n+   Changes made on the publisher may be applied on the subscriber as\n+   the old owner.  In such cases, the old owner's privileges will be\nthe ones\n+   that matter.  Worse still, it may be hard to predict when replication\n+   workers will notice the new ownership.  
Subscriptions created\ndisabled and\n+   only enabled after ownership has been changed will not be subject to\nthis\n+   race condition.\n\n\nmaybe we should disable the subscription before making such a change and\nthen re-enable it?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 10:18:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "> On Nov 1, 2021, at 7:18 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> w.r.t. this:\n> \n> + On the subscriber, the subscription owner's privileges are\n> re-checked for\n> + each change record when applied, but beware that a change of\n> ownership for a\n> + subscription may not be noticed immediately by the replication workers.\n> + Changes made on the publisher may be applied on the subscriber as\n> + the old owner. In such cases, the old owner's privileges will be\n> the ones\n> + that matter. Worse still, it may be hard to predict when replication\n> + workers will notice the new ownership. Subscriptions created\n> disabled and\n> + only enabled after ownership has been changed will not be subject to\n> this\n> + race condition.\n> \n> \n> maybe we should disable the subscription before making such a change and\n> then re-enable it?\n\nRight. I commented the code that way because there is a clear concern, but I was uncertain which way around the problem was best.\n\nALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n\nThe attached patch demonstrates the race condition. 
It sets up a publisher and subscriber, and toggles the subscription on and off on the subscriber node, interleaved with inserts and deletes on the publisher node. If the ALTER SUBSCRIPTION commands were synchronous, the test results would be deterministic, with only the inserts performed while the subscription is enabled being replicated, but because the ALTER commands are asynchronous, the results are nondeterministic.\n\nIt is unclear that I can make ALTER SUBSCRIPTION..OWNER TO synchronous without redesigning the way workers respond to pg_subscription catalog updates generally. That may be a good project to eventually tackle, but I don't see that it is more important to close the race condition in an OWNER TO than for a DISABLE.\n\nThoughts?\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Nov 2021 10:58:25 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 1, 2021, at 10:58 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. 
Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n\nHaving discussed this with Andrew off-list, we've concluded that updating the documentation for logical replication to make this point more clear is probably sufficient, but I wonder if anyone thinks otherwise?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 15:44:32 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Nov 1, 2021 at 6:44 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n>\n> Having discussed this with Andrew off-list, we've concluded that updating the documentation for logical replication to make this point more clear is probably sufficient, but I wonder if anyone thinks otherwise?\n\nThe question in my mind is whether there's some reasonable amount of\ntime that a user should expect to have to wait for the changes to take\neffect. If it could easily happen that the old permissions are still\nin use a month after the change is made, I think that's probably not\ngood. 
If there's reason to think that, barring unusual circumstances,\nchanges will be noticed within a few minutes, I think that's fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 11:17:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "> On Nov 1, 2021, at 10:58 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n\nI have rethought my prior analysis. The problem in the previous patch was that the subscription apply workers did not check for a change in ownership the way they checked for other changes, instead only picking up the new ownership information when the worker restarted for some other reason. This next patch set fixes that. The application of a change record may continue under the old ownership permissions when a concurrent command changes the ownership of the subscription, but the worker will pick up the new permissions before applying the next record. I think that is consistent enough with reasonable expectations.\n\nThe first two patches are virtually unchanged. 
The third updates the behavior of the apply workers, and updates the documentation to match.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 3 Nov 2021 12:50:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2021-11-01 at 10:58 -0700, Mark Dilger wrote:\n> It is unclear that I can make ALTER SUBSCRIPTION..OWNER TO\n> synchronous without redesigning the way workers respond to\n> pg_subscription catalog updates generally. That may be a good\n> project to eventually tackle, but I don't see that it is more\n> important to close the race condition in an OWNER TO than for a\n> DISABLE.\n> \n> Thoughts?\n\nWhat if we just say that OWNER TO must be done by a superuser, changing\nfrom one superuser to another, just like today? That would preserve\nbackwards compatibility, but people with non-superuser subscriptions\nwould need to drop/recreate them.\n\nWhen we eventually do tackle the problem, we can lift the restriction.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 10:08:06 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 16, 2021, at 10:08 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Mon, 2021-11-01 at 10:58 -0700, Mark Dilger wrote:\n>> It is unclear .....\n>> \n>> Thoughts?\n> \n> What if we just say that OWNER TO must be done by a superuser, changing\n> from one superuser to another, just like today? That would preserve\n> backwards compatibility, but people with non-superuser subscriptions\n> would need to drop/recreate them.\n\nThe paragraph I wrote on 11/01 and you are responding to is no longer relevant. The patch submission on 11/03 tackled the problem. 
Have you had a chance to take a look at the new design?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 10:12:26 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 11/3/21 15:50, Mark Dilger wrote:\n>> On Nov 1, 2021, at 10:58 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>\n>> ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n> I have rethought my prior analysis. The problem in the previous patch was that the subscription apply workers did not check for a change in ownership the way they checked for other changes, instead only picking up the new ownership information when the worker restarted for some other reason. This next patch set fixes that. The application of a change record may continue under the old ownership permissions when a concurrent command changes the ownership of the subscription, but the worker will pick up the new permissions before applying the next record. I think that is consistent enough with reasonable expectations.\n>\n> The first two patches are virtually unchanged. The third updates the behavior of the apply workers, and updates the documentation to match.\n\n\nI'm generally happier about this than the previous patch set. With the\nexception of some slight documentation modifications I think it's\nbasically committable. 
There doesn't seem to be a CF item for it but I'm\ninclined to commit it in a couple of days time.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 15:06:23 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 16, 2021, at 12:06 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> There doesn't seem to be a CF item for it but I'm\n> inclined to commit it in a couple of days time.\n\nhttps://commitfest.postgresql.org/36/3414/\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 12:08:04 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 11/16/21 15:08, Mark Dilger wrote:\n>\n>> On Nov 16, 2021, at 12:06 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> There doesn't seem to be a CF item for it but I'm\n>> inclined to commit it in a couple of days time.\n> https://commitfest.postgresql.org/36/3414/\n>\n\nOK, got it, thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 15:40:54 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-03 at 12:50 -0700, Mark Dilger wrote:\n> The first two patches are virtually unchanged. The third updates the\n> behavior of the apply workers, and updates the documentation to\n> match.\n\nv2-0001 corrects some surprises, but may create others. Why is renaming\nallowed, but not changing the options? 
What if we add new options, and\nsome of them seem benign for a non-superuser to change?\n\nThe commit message part of the patch says that it's to prevent non-\nsuperusers from being able to (effectively) create subscriptions, but\ndon't we want privileged non-superusers to be able to create\nsubscriptions?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 20:11:59 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 16, 2021, at 8:11 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2021-11-03 at 12:50 -0700, Mark Dilger wrote:\n>> The first two patches are virtually unchanged. The third updates the\n>> behavior of the apply workers, and updates the documentation to\n>> match.\n> \n> v2-0001 corrects some surprises, but may create others. Why is renaming\n> allowed, but not changing the options? What if we add new options, and\n> some of them seem benign for a non-superuser to change?\n\nThe patch cannot anticipate which logical replication options may be added to the project in some later commit. We can let that commit adjust the behavior to allow the option if we agree it is sensible for non-superusers to do so.\n\n> The commit message part of the patch says that it's to prevent non-\n> superusers from being able to (effectively) create subscriptions, but\n> don't we want privileged non-superusers to be able to create\n> subscriptions?\n\nPerhaps, but I don't think merely owning a subscription should entitle a role to create new subscriptions. Administrators may quite intentionally create low-power users, ones without access to anything but a single table, or a single schema, as a means of restricting the damage that a subscription might do (or more precisely, what the publisher might do via the subscription.) 
It would be surprising if that low-power user was then able to recreate the subscription into something different.\n\nWe should probably come back to this topic in a different patch, perhaps a patch that introduces a new pg_manage_subscriptions role or such.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 07:44:27 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 07:44 -0800, Mark Dilger wrote:\n> Administrators may quite\n> intentionally create low-power users, ones without access to anything\n> but a single table, or a single schema, as a means of restricting the\n> damage that a subscription might do (or more precisely, what the\n> publisher might do via the subscription.) It would be surprising if\n> that low-power user was then able to recreate the subscription into\n> something different.\n\nI am still trying to understand this use case. It doesn't feel like\n\"ownership\" to me, it feels more like some kind of delegation.\n\nIs GRANT a better fit here? That would allow more than one user to\nREFRESH, or ENABLE/DISABLE the same subscription. It wouldn't allow\nRENAME, but I don't see why we'd separate privileges for\nCREATE/DROP/RENAME anyway.\n\nThis would not address the weirdness of the existing code where a\nsuperuser loses their superuser privileges but still owns a\nsubscription. 
But perhaps we can solve that a different way, like just\nperforming a check when someone loses their superuser privileges that\nthey don't own any subscriptions.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 09:33:24 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 17, 2021, at 9:33 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> I am still trying to understand this use case. It doesn't feel like\n> \"ownership\" to me, it feels more like some kind of delegation.\n> \n> Is GRANT a better fit here? That would allow more than one user to\n> REFRESH, or ENABLE/DISABLE the same subscription. It wouldn't allow\n> RENAME, but I don't see why we'd separate privileges for\n> CREATE/DROP/RENAME anyway.\n\nWe may eventually allow non-superusers to create subscriptions, but there are lots of details to work out. Should there be limits on how many subscriptions they can create? Should there be limits to the number of simultaneously open connections they can create out to other database servers (publishers)? Should they need to be granted USAGE on a database publisher in order to use the connection string for that publisher in a subscription they create? Should they need to be granted USAGE on a publication in order to replicate it? Yes, there may be restrictions on the publisher side, too, but the user model on subscriber and publisher might differ, and the connection string used might not match the subscription owner, so some restriction on the subscriber side may be needed.\n\nThe implementation of [CREATE | ALTER] SUBSCRIPTION was designed at a time when only superusers could execute them, and as far as I can infer from the design, no effort to constrain the effects of those commands was made. Since we're trying to make subscriptions into things that non-superusers can use, we have to deal with some things in those functions. 
For example, ALTER SUBSCRIPTION can change the database connection parameter, or the publication subscribed, or whether synchronous_commit is used. I don't see that a subscription owner should necessarily be allowed to mess with that, at least not without some other privilege checks beyond mere ownership.\n\nI think this is pretty analogous to how security definer functions work. You might call those \"delegation\" also, but the basic idea is that the function will run under the privileges of the function's owner, who might be quite privileged if you want the function to do highly secure things for you, but who could also intentionally be limited in privilege. It wouldn't make much sense to say the owner of a security definer function can arbitrarily escalate their privileges to do things like open connections to other database servers, or have the transactions in which they run have a different setting of synchronous_commit. Yet with subscriptions, if the subscription owner can run all forms of ALTER SUBSCRIPTION, that's what they can do.\n\nI took a conservative position in the design of the patch to avoid giving away too much. I suspect that we'll come back to these design decisions and relax them at some point, but the exact way in which we relax them is not obvious. We could just agree to remove them (as you seem to propose), or we might agree to create predefined roles and say that the subscription owner can change certain aspects of the subscription if and only if they are members of one or more of those roles, or we may create new grantable privileges. Each of those debates may be long and hard fought, so I don't want to invite that as part of this thread, or this patch will almost surely miss the cutoff for v15.\n\n> This would not address the weirdness of the existing code where a\n> superuser loses their superuser privileges but still owns a\n> subscription. 
But perhaps we can solve that a different way, like just\n> performing a check when someone loses their superuser privileges that\n> they don't own any subscriptions.\n\nI gave that a slight amount of thought during the design of this patch, but didn't think we could refuse to revoke superuser on such a basis, and didn't see what we should do with the subscription other than have it continue to be owned by the recently-non-superuser. If you have a better idea, we can discuss it, but to some degree I think that is also orthogonal to the purpose of this patch. The only sense in which this patch depends on that issue is that this patch proposes that non-superuser subscription owners are already an issue, and therefore that this patch isn't creating a new issue, but rather making more sane something that already can happen.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 10:25:50 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 17, 2021, at 9:33 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> Is GRANT a better fit here? That would allow more than one user to\n> REFRESH, or ENABLE/DISABLE the same subscription. It wouldn't allow\n> RENAME, but I don't see why we'd separate privileges for\n> CREATE/DROP/RENAME anyway.\n\nI don't think I answered this directly in my last reply.\n\nGRANT *might* be part of some solution, but it is unclear to me how best to do it. The various configuration parameters on subscriptions entail different security concerns. 
We might take a fine-grained approach and create a predefined role for each, or we might take a coarse-grained approach and create a single pg_manage_subscriptions role which can set any parameter on any subscription, or maybe just parameters on subscriptions that the role also owns, or we might do something else, like burn some privilege bits and define new privileges that can be granted per subscription rather than globally. (I think that last one is a non-starter, but just mention it as an example of another approach.)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 10:48:44 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 11/16/21 15:06, Andrew Dunstan wrote:\n> On 11/3/21 15:50, Mark Dilger wrote:\n>>> On Nov 1, 2021, at 10:58 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>>\n>>> ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n>> I have rethought my prior analysis. The problem in the previous patch was that the subscription apply workers did not check for a change in ownership the way they checked for other changes, instead only picking up the new ownership information when the worker restarted for some other reason. This next patch set fixes that. The application of a change record may continue under the old ownership permissions when a concurrent command changes the ownership of the subscription, but the worker will pick up the new permissions before applying the next record. I think that is consistent enough with reasonable expectations.\n>>\n>> The first two patches are virtually unchanged. 
The third updates the behavior of the apply workers, and updates the documentation to match.\n>\n> I'm generally happier about this than the previous patch set. With the\n> exception of some slight documentation modifications I think it's\n> basically committable. There doesn't seem to be a CF item for it but I'm\n> inclined to commit it in a couple of days time.\n>\n>\n\nGiven there is some debate about the patch set I will hold off any\naction for the time being.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 15:26:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 10:25 -0800, Mark Dilger wrote:\n> We may eventually allow non-superusers to create subscriptions, but\n> there are lots of details to work out.\n\nI am setting aside the idea of subscriptions created by non-superusers.\n\nMy comments were about your idea for \"low-power users\" that can still\ndo things with subscriptions. And for that, GRANT seems like a better\nfit than ownership.\n\nWith v2-0001, there are several things that seem weird to me:\n\n * Why can there be only one low-power user per subscription?\n * Why is RENAME a separate capability from CREATE/DROP?\n * What if you want to make the privileges more fine-grained, or make\nchanges in the future? Ownership is a single bit, so it requires that\neveryone agree. Maybe some people want RENAME to be a part of that, and\nothers don't.\n\nGRANT seems to provide better answers here.\n\n> Since we're trying to make subscriptions into things that non-\n> superusers can use, we have to deal with some things in those\n> functions.\n\nI understand the use case where a superuser isn't required anywhere in\nthe process, and some special users can create and own subscriptions. 
I\nalso understand that's not what these patches are trying to accomplish\n(though v2-0003 seems like a good step in that direction).\n\nI don't understand the use case as well where a non-superuser can\nmerely \"use\" a subscription. I'm sure such use cases exist and I'm fine\nto go along with that idea, but I'd like to understand why ownership\n(partial ownership?) is the right way to do this and GRANT is the wrong\nway.\n\n> For example, ALTER SUBSCRIPTION can change the database connection\n> parameter, or the publication subscribed, or whether\n> synchronous_commit is used. I don't see that a subscription owner\n> should necessarily be allowed to mess with that, at least not without\n> some other privilege checks beyond mere ownership.\n\nThat violates my expectations of what \"ownership\" means.\n\n> I think this is pretty analogous to how security definer functions\n> work.\n\nThe analogy to SECURITY DEFINER functions seems to support my\nsuggestion for GRANT at least as much as your modified definition of\nownership.\n\n> > This would not address the weirdness of the existing code where a\n> > superuser loses their superuser privileges but still owns a\n> > subscription. But perhaps we can solve that a different way, like\n> > just\n> > performing a check when someone loses their superuser privileges\n> > that\n> > they don't own any subscriptions.\n> \n> I gave that a slight amount of thought during the design of this\n> patch, but didn't think we could refuse to revoke superuser on such a\n> basis,\n\nI don't necessarily see a problem there, but I could be missing\nsomething.\n\n> and didn't see what we should do with the subscription other than\n> have it continue to be owned by the recently-non-superuser. If you\n> have a better idea, we can discuss it, but to some degree I think\n> that is also orthogonal to the purpose of this patch. 
The only sense\n> in which this patch depends on that issue is that this patch proposes\n> that non-superuser subscription owners are already an issue, and\n> therefore that this patch isn't creating a new issue, but rather\n> making more sane something that already can happen.\n\nBy introducing and documenting a way to get non-superusers to own a\nsubscription, it makes it more likely that people will do it, and\nharder for us to change. That means the standard should be \"this is\nwhat we really want\", rather than just \"more sane than before\".\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 13:06:32 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 10:48 -0800, Mark Dilger wrote:\n> GRANT *might* be part of some solution, but it is unclear to me how\n> best to do it. The various configuration parameters on subscriptions\n> entail different security concerns. 
We might take a fine-grained\n> approach and create a predefined role for each\n\nI think you misunderstood the idea: not using predefined roles, just\nplain old ordinary GRANT on a subscription object to ordinary roles.\n\n GRANT REFRESH ON SUBSCRIPTION sub1 TO nonsuper;\n\nThis should be easy enough because the subscription is a real object,\nright?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 13:10:20 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 17, 2021, at 1:10 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> I think you misunderstood the idea: not using predefined roles, just\n> plain old ordinary GRANT on a subscription object to ordinary roles.\n> \n> GRANT REFRESH ON SUBSCRIPTION sub1 TO nonsuper;\n> \n> This should be easy enough because the subscription is a real object,\n> right?\n\n/*\n * Grantable rights are encoded so that we can OR them together in a bitmask.\n * The present representation of AclItem limits us to 16 distinct rights,\n * even though AclMode is defined as uint32. 
See utils/acl.h.\n *\n * Caution: changing these codes breaks stored ACLs, hence forces initdb.\n */\ntypedef uint32 AclMode; /* a bitmask of privilege bits */\n\n#define ACL_INSERT (1<<0) /* for relations */\n#define ACL_SELECT (1<<1)\n#define ACL_UPDATE (1<<2)\n#define ACL_DELETE (1<<3)\n#define ACL_TRUNCATE (1<<4)\n#define ACL_REFERENCES (1<<5)\n#define ACL_TRIGGER (1<<6)\n#define ACL_EXECUTE (1<<7) /* for functions */\n#define ACL_USAGE (1<<8) /* for languages, namespaces, FDWs, and\n * servers */\n#define ACL_CREATE (1<<9) /* for namespaces and databases */\n#define ACL_CREATE_TEMP (1<<10) /* for databases */\n#define ACL_CONNECT (1<<11) /* for databases */\n\n\nWe only have 4 values left in the bitmask, and I doubt that burning those slots for multiple new types of rights that only have meaning for subscriptions is going to be accepted. For full disclosure, I'm proposing adding ACL_SET and ACL_ALTER_SYSTEM in another patch and my proposal there could get shot down for the same reasons, but I think your argument would be even harder to defend. Maybe others feel differently.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 15:07:10 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 15:07 -0800, Mark Dilger wrote:\n> We only have 4 values left in the bitmask, and I doubt that burning\n> those slots for multiple new types of rights that only have meaning\n> for subscriptions is going to be accepted. For full disclosure, I'm\n> proposing adding ACL_SET and ACL_ALTER_SYSTEM in another patch and my\n> proposal there could get shot down for the same reasons, but I think\n> your argument would be even harder to defend. 
Maybe others feel\n> differently.\n\nWhy not overload ACL_USAGE again, and say:\n\n GRANT USAGE ON SUBSCRIPTION sub1 TO nonsuper;\n\nwould allow ENABLE/DISABLE and REFRESH.\n\nAgain, I don't really understand the use case behind \"can use a\nsubscription but not create one\", so I'm not making a proposal. But\nassuming that the use case exists, GRANT seems like a much better\napproach.\n\n(Aside: for me to commit something like this I'd want to understand the\n\"can use a subscription but not create one\" use case better.)\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 15:46:55 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 17, 2021, at 1:06 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2021-11-17 at 10:25 -0800, Mark Dilger wrote:\n>> We may eventually allow non-superusers to create subscriptions, but\n>> there are lots of details to work out.\n> \n> I am setting aside the idea of subscriptions created by non-superusers.\n\nOk, fair enough. I think eventually we'll want that, but I'm also setting that aside for this patch.\n\n> My comments were about your idea for \"low-power users\" that can still\n> do things with subscriptions. And for that, GRANT seems like a better\n> fit than ownership.\n\nThis patch has basically no value beyond the fact that it allows the replication to be *applied* as a user other than superuser. Throw that out, and there isn't any point. Everything else is window dressing. The real security problem with subscriptions is that they act with superuser power. That naturally means that they must be owned and operated by superuser, too, otherwise they serve as a privilege escalation attack vector. It really doesn't make any sense to think of subscriptions as operating under the permissions of multiple non-superusers. You must choose a single role you want the subscription to run under. 
What purpose would be served by GRANTing privileges on a subscription to more than one non-superuser? It still operates as just the one user. I agree you *could* give multiple users privileges to mess with it, but you'd still need to assign a single role as the one whose privileges matter for the purpose of applying replication changes. I'm using \"owner\" for that purpose, and I think that is consistent with how security definer functions work. They run as the owner, too. It's perfectly well-precedented to use \"owner\" for this.\n\nI think the longer term plan is that non-superusers who have some privileged role will be allowed to create subscriptions, and naturally they will own the subscriptions that they create, at least until an ALTER SUBSCRIPTION..OWNER TO is successfully executed to transfer ownership. Once that longer term plan is complete, non-superusers will be able to create publications of their tables on one database, and subscriptions to those publications on another database, all without needing the help of a superuser. This patch doesn't get us all the way there, but it heads directly toward that goal.\n\n> With v2-0001, there are several things that seem weird to me:\n> \n> * Why can there be only one low-power user per subscription?\n\nBecause the apply workers run as only one user. Currently it is always superuser. After this patch, it is always the owner, which amounts to the same thing for legacy subscriptions created and owned by superuser prior to upgrading to v15, but not necessarily for new ones or ones that have ownership transferred after upgrade.\n\nWe could think about subscriptions that act under multiple roles, perhaps taking role information as part of the data-stream from the publisher, but that's a pretty complicated proposal, and it is far from clear that we want it anyway. 
There is a security case to be made for *not* allowing the publisher to call all the shots, so such a proposal would at best be an alternate mode of operation, not the one and only mode.\n\n> * Why is RENAME a separate capability from CREATE/DROP?\n\nI don't care enough to argue this point. If you want me to remove RENAME privilege from the owner, I can resubmit with it removed. It just doesn't seem like it's dangerous to allow a non-superuser to rename their subscriptions, so I saw no compelling reason to disallow it.\n\nCREATE clearly must be disallowed since it gives the creator the ability to form network connections, set fsync modes, etc., and there is no reason to assume arbitrary non-superusers should be able to do that.\n\nThe argument against DROP is a bit weaker. It doesn't seem like a user who cannot bring subscriptions into existence should be able to drop them either. I was expecting to visit that issue in a follow-on patch which deals with non-superuser predefined roles that have some power to create and drop subscriptions. What that patch will propose to do is not obvious, since some of what you can do with subscriptions is so powerful we may not want non-superusers doing it, even with a privileged role. If you can't picture what I mean, consider that you might use a connection parameter that connects outside and embeds data into the connection string, with a server listening on the other end, not really to publish data, but to harvest the secret data that you are embedding in the network connection attempt.\n\n> * What if you want to make the privileges more fine-grained, or make\n> changes in the future? Ownership is a single bit, so it requires that\n> everyone agree.\n\nWe can modify the patch to have the subscription owner have zero privileges on the subscription, not even the ability to see how it is defined, and just have \"owner\" mean the role under whose privileges the logical replication workers apply changes. Would that be better? 
I would expect people to find that odd.\n\nThe problem is that we want a setuid/setgid type behavior. Actual setuid/setgid programs act as the user/group of the executable. There's no reason that user/group needs to be one that any real human uses to log into the system. Likewise, we need the subscription to act under a role, and we're establishing which role that is by having that role own the subscription. That is like how setuid/setgid programs work by executing as the user/group that owns the executable, except that postgres doesn't have separate user/group concepts, just roles. Isn't this design pattern completely commonplace?\n\n> Maybe some people want RENAME to be a part of that, and\n> others don't.\n\nFair enough. Should I remove RENAME from what the patch allows the owner to do? On this particular point, I genuinely don't care. I think it can be reasonably argued either way.\n\n> GRANT seems to provide better answers here.\n\nNo, because we don't have infinite privilege bits to burn.\n\n>> Since we're trying to make subscriptions into things that non-\n>> superusers can use, we have to deal with some things in those\n>> functions.\n> \n> I understand the use case where a superuser isn't required anywhere in\n> the process, and some special users can create and own subscriptions. I\n> also understand that's not what these patches are trying to accomplish\n> (though v2-0003 seems like a good step in that direction).\n\nThere is a cart-before-the-horse problem here. If I propose a patch with a privileged role for creating and owning subscriptions *before* I tighten down how non-superuser-owned subscriptions work, that patch would surely be rejected. So I either propose this first, and only if/when it gets accepted, propose the other, or I propose them together. 
That's a damned-if-you-do--damned-if-you-don't situation, because if I propose them together, I'll get arguments that they are clearly separable and should be proposed separately, and if I do them one before the other, I'll get the argument that you are making now. I fully expect the privileged role proposal to be made (possibly by me), though it is unclear if there will be time left to do it in v15.\n\n> I don't understand the use case as well where a non-superuser can\n> merely \"use\" a subscription. I'm sure such use cases exist and I'm fine\n> to go along with that idea, but I'd like to understand why ownership\n> (partial ownership?) is the right way to do this and GRANT is the wrong\n> way.\n\nEven if we had the privilege bits to burn, no spelling of that GRANT idea sounds all that great:\n\n\tGRANT RUN AS ON subscription TO role;\n\tGRANT RUN AS ON role TO subscription;\n\tGRANT SUDO ON subscription TO role;\n\tGRANT SETUID ON role TO subscription;\n\t...\n\nI just don't see how that really works. I'm not inclined to spend time being more clever, since I already know that privilege bits are in short supply, but if you want to propose something, go ahead. Elsewhere you proposed GRANT REFRESH or something, not looking at that email just now, but that's not the same thing as GRANT RUN AS, and burns another privilege bit, and still doesn't get us all the way there, because you presumably also want GRANT RENAME, GRANT ALTER CONNECTION SETTING, GRANT ALTER FSYNC SETTING, ..., and we're out of privilege bits before we're done.\n\n>> For example, ALTER SUBSCRIPTION can change the database connection\n>> parameter, or the publication subscribed, or whether\n>> synchronous_commit is used. 
I don't see that a subscription owner\n>> should necessarily be allowed to mess with that, at least not without\n>> some other privilege checks beyond mere ownership.\n> \n> That violates my expectations of what \"ownership\" means.\n\nI think that's because you're thinking of these settings as properties of the subscription. You may *own* the subscription, but the subscription doesn't *own* the right to make connections to arbitrary databases, nor *own* the right to change buffer cache settings, nor *own* the right to bring data from a publication on some other server which, if it existed on the local server, would violate site policy and possibly constitute a civil or criminal violation of data privacy laws. I may own my house, and the land it sits on, and my driveway, but that doesn't mean I own the ability to make my driveway go across my neighbor's field, down through town, and to the waterfront. But that's the kind of ownership definition you seem to be defending.\n\nSome of what I perceive as the screwiness of your argument I must admit is not your fault. The properties of subscriptions are defined in ways that don't make sense to me. It would be far more sensible if connection strings were objects in their own right, and you could grant USAGE on a connection string to a role, and USAGE on a subscription to a role, and only if the subscription worker's role had privileges on the connection string could they use it as part of fulfilling their task of replicating the data, and otherwise they'd error out in the attempt. Likewise, fsync modes could be proper objects, and only if the subscription's role had privileges on the fsync mode they wanted to use would they be able to use it. But we don't have these things as proper objects, with ACL lists on them, so we're stuck trying to design around that. To my mind, that means subscription owners *do not own* properties associated with the subscription. To your mind, that's not what \"ownership\" means. 
What to do?\n\n>> I think this is pretty analogous to how security definer functions\n>> work.\n> \n> The analogy to SECURITY DEFINER functions seems to support my\n> suggestion for GRANT at least as much as your modified definition of\n> ownership.\n\nI don't see how. Can you please explain?\n\n>>> This would not address the weirdness of the existing code where a\n>>> superuser loses their superuser privileges but still owns a\n>>> subscription. But perhaps we can solve that a different way, like\n>>> just\n>>> performing a check when someone loses their superuser privileges\n>>> that\n>>> they don't own any subscriptions.\n>> \n>> I gave that a slight amount of thought during the design of this\n>> patch, but didn't think we could refuse to revoke superuser on such a\n>> basis,\n> \n> I don't necessarily see a problem there, but I could be missing\n> something.\n\nClose your eyes and imagine that I have superuser on your database... really picture it in your mind. Now, do you want the revoke command you are issuing to work?\n\n>> and didn't see what we should do with the subscription other than\n>> have it continue to be owned by the recently-non-superuser. If you\n>> have a better idea, we can discuss it, but to some degree I think\n>> that is also orthogonal to the purpose of this patch. The only sense\n>> in which this patch depends on that issue is that this patch proposes\n>> that non-superuser subscription owners are already an issue, and\n>> therefore that this patch isn't creating a new issue, but rather\n>> making more sane something that already can happen.\n> \n> By introducing and documenting a way to get non-superusers to own a\n> subscription, it makes it more likely that people will do it, and\n> harder for us to change. 
That means the standard should be \"this is\n> what we really want\", rather than just \"more sane than before\".\n\nOk, I'll wait to hear back from you on the points above.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 16:17:42 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Nov 17, 2021 at 11:56 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Nov 17, 2021, at 9:33 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n>\n> > This would not address the weirdness of the existing code where a\n> > superuser loses their superuser privileges but still owns a\n> > subscription. But perhaps we can solve that a different way, like just\n> > performing a check when someone loses their superuser privileges that\n> > they don't own any subscriptions.\n>\n> I gave that a slight amount of thought during the design of this patch, but didn't think we could refuse to revoke superuser on such a basis, and didn't see what we should do with the subscription other than have it continue to be owned by the recently-non-superuser. If you have a better idea, we can discuss it, but to some degree I think that is also orthogonal to the purpose of this patch. The only sense in which this patch depends on that issue is that this patch proposes that non-superuser subscription owners are already an issue, and therefore that this patch isn't creating a new issue, but rather making more sane something that already can happen.\n>\n\nDon't we want to close this gap irrespective of the other part of the\nfeature? 
I mean if we take out the part of your 0003 patch that checks\nwhether the current user has permission to perform a particular\noperation on the target table then the gap related to the owner losing\nsuperuser privileges should be addressed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:20:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Nov 4, 2021 at 1:20 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Nov 1, 2021, at 10:58 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > ALTER SUBSCRIPTION..[ENABLE | DISABLE] do not synchronously start or stop subscription workers. The ALTER command updates the catalog's subenabled field, but workers only lazily respond to that. Disabling and enabling the subscription as part of the OWNER TO would not reliably accomplish anything.\n>\n> I have rethought my prior analysis. The problem in the previous patch was that the subscription apply workers did not check for a change in ownership the way they checked for other changes, instead only picking up the new ownership information when the worker restarted for some other reason. This next patch set fixes that. 
The application of a change record may continue under the old ownership permissions when a concurrent command changes the ownership of the subscription, but the worker will pick up the new permissions before applying the next record.\n>\n\nAre you talking about the below change in the above paragraph?\n\n@@ -2912,6 +2941,7 @@ maybe_reread_subscription(void)\n strcmp(newsub->slotname, MySubscription->slotname) != 0 ||\n newsub->binary != MySubscription->binary ||\n newsub->stream != MySubscription->stream ||\n+ newsub->owner != MySubscription->owner ||\n !equal(newsub->publications, MySubscription->publications))\n {\n\nIf so, I am not sure how it will ensure that we check the ownership\nchange before applying each change? I think this will be invoked at\neach transaction boundary, so, if there is a transaction with a large\nnumber of changes, all the changes will be processed under the\nprevious owner.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:07:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 18, 2021, at 2:50 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n>> I gave that a slight amount of thought during the design of this patch, but didn't think we could refuse to revoke superuser on such a basis, and didn't see what we should do with the subscription other than have it continue to be owned by the recently-non-superuser. If you have a better idea, we can discuss it, but to some degree I think that is also orthogonal to the purpose of this patch. 
The only sense in which this patch depends on that issue is that this patch proposes that non-superuser subscription owners are already an issue, and therefore that this patch isn't creating a new issue, but rather making more sane something that already can happen.\n>> \n> \n> Don't we want to close this gap irrespective of the other part of the\n> feature? I mean if we take out the part of your 0003 patch that checks\n> whether the current user has permission to perform a particular\n> operation on the target table then the gap related to the owner losing\n> superuser privileges should be addressed.\n\nI don't think there is a gap. The patch does the right thing, causing the subscription whose owner has had superuser revoked to itself no longer function with superuser privileges. Whether that causes the subscription to fail depends on whether the previously-superuser now non-superuser owner now lacks sufficient privileges on the target relation(s). I think removing that part of the patch would be a regression.\n\nLet's compare two scenarios. In the first, we have a regular user \"alice\" who owns a subscription which replicates into table \"accounting.receipts\" for which she has been granted privileges by the table's owner. What would you expect to happen after the table's owner revokes privileges from alice? I would expect that the subscription can no longer function, and periodic attempts to replicate into that table result in permission denied errors in the logs.\n\nIn the second, we have a superuser \"alice\" who owns a subscription that replicates into table \"accounting.receipts\", and she only has sufficient privileges to modify \"accounting.receipts\" by virtue of being superuser. I would expect that when she has superuser revoked, the subscription can likewise no longer function. \n\nNow, maybe I'm wrong in both cases, and both should continue to function. 
But I would find it really strange if the first situation behaved differently from the second.\n\nI think intuitions about how subscriptions behave differ depending on the reason you expect the subscription to be owned by a particular user. If the reason the user owns the subscription is that the user just happens to be the user who created it, but isn't in your mind associated with the subscription, then having the subscription continue to function regardless of what happens to the user, even the user being dropped, is probably consistent with your expectations. In a sense, you think of the user who creates the subscription as having gifted it to the universe rather than continuing to own it. Or perhaps you think of the creator of the subscription as a solicitor/lawyer/agent working on behalf of client, and once that legal transaction is completed, you don't expect the lawyer being disbarred should impact the subscription which exists for the benefit of the client.\n\nIf instead you think about the subscription owner as continuing to be closely associated with the subscription (as I do), then you expect changes in the owner's permissions to impact the subscription.\n\nI think the \"gifted to the universe\"/\"lawyer\" mental model is not consistent with how the system is already designed to work. 
You can't drop the subscription's owner without first running REASSIGN OWNED, or ALTER SUBSCRIPTION..OWNER TO, or simply dropping the subscription:\n\n DROP ROLE regress_subscription_user;\n ERROR: role \"regress_subscription_user\" cannot be dropped because some objects depend on it\n DETAIL: owner of subscription regress_testsub\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 07:33:53 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 18, 2021, at 3:37 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n>> I have rethought my prior analysis. The problem in the previous patch was that the subscription apply workers did not check for a change in ownership the way they checked for other changes, instead only picking up the new ownership information when the worker restarted for some other reason. This next patch set fixes that. The application of a change record may continue under the old ownership permissions when a concurrent command changes the ownership of the subscription, but the worker will pick up the new permissions before applying the next record.\n>> \n> \n> Are you talking about the below change in the above paragraph?\n> \n> @@ -2912,6 +2941,7 @@ maybe_reread_subscription(void)\n> strcmp(newsub->slotname, MySubscription->slotname) != 0 ||\n> newsub->binary != MySubscription->binary ||\n> newsub->stream != MySubscription->stream ||\n> + newsub->owner != MySubscription->owner ||\n> !equal(newsub->publications, MySubscription->publications))\n> {\n> \n> If so, I am not sure how it will ensure that we check the ownership\n> change before applying each change? 
I think this will be invoked at\n> each transaction boundary, so, if there is a transaction with a large\n> number of changes, all the changes will be processed under the\n> previous owner.\n\nYes, your analysis appears correct. I was sloppy to say \"before applying the next record\". It will pick up the change before applying the next transaction.\n\nThe prior version of the patch only picked up the change if it happened to start a new worker, but could process multiple transactions without noticing the change. Now, it is limited to finishing the current transaction. Would you prefer that the worker noticed the change in ownership and aborted the transaction on the subscriber side? Or should the ALTER SUBSCRIPTION..OWNER TO block? I don't see much advantage to either of those options, but I also don't think I have any knock-down argument for my approach either. What do you think?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 07:45:13 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 16:17 -0800, Mark Dilger wrote:\n> You must choose a single role you want the subscription to run\n> under.\n\nI think that was the source of a lot of my confusion: \n\nYour patches are creating the notion of a \"run as\" user by assigning\nownership-that-isn't-really-ownership. I got distracted wondering why\nyou would really care if some user could enable/disable/refresh/rename\na subscription, but the main point was to change who the subscription\nruns as.\n\nThat's a more general idea: I could see how \"run as\" could apply to\nsubscriptions as well as functions (right now it can only run as the\nowner or the invoker, not an arbitrary role). 
And I better understand\nyour analogy to security definer now.\n\nBut it's also not exactly a simple idea, and I think the current\npatches oversimplify it and conflate it with ownership. \n\n> I think the longer term plan is that non-superusers who have some\n> privileged role will be allowed to create subscriptions,\n\nYou earlier listed some challenges with that:\n\n\nhttps://postgr.es/m/CF56AC0D-7495-4E8D-A48F-FF38BD8074EB@enterprisedb.com\n\nBut it seems like it's really the right direction to go. Probably the\nbiggest concern is connection strings that read server files, but\ndblink solved that by requiring password auth.\n\nWhat are the reasonable steps to get there? Do you think anything is\ndoable for v15?\n\n> There is a cart-before-the-horse problem here.\n\nI don't think we need to hold up v2-0003. It seems like a step in the\nright direction, though I haven't looked closely yet.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 10:29:26 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-11-17 at 16:17 -0800, Mark Dilger wrote:\n> Some of what I perceive as the screwiness of your argument I must\n> admin is not your fault. The properties of subscriptions are defined\n> in ways that don't make sense to me. It would be far more sensible\n> if connection strings were objects in their own right, and you could\n> grant USAGE on a connection string to a role,\n\nWe sort of have that with CREATE SERVER, in fact dblink can use a\nserver instead of a string. 
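To spell that out (a hypothetical sketch; the server name, role, and connection options below are invented, and this uses dblink rather than anything subscriptions support today):

```sql
-- dblink ships a dummy foreign data wrapper, dblink_fdw, so the
-- connection details can live in a grantable server object rather
-- than being passed around as a raw libpq connection string.
CREATE EXTENSION dblink;

CREATE SERVER pub_srv FOREIGN DATA WRAPPER dblink_fdw
    OPTIONS (host 'pubhost', dbname 'pubdb');

-- A non-superuser needs a user mapping that supplies the password;
-- this is how dblink enforces password auth for non-superusers.
CREATE USER MAPPING FOR alice SERVER pub_srv
    OPTIONS (user 'alice', password 'secret');

GRANT USAGE ON FOREIGN SERVER pub_srv TO alice;

-- alice may now connect by server name, not by connection string:
SELECT dblink_connect('pub_conn', 'pub_srv');
```

One could imagine giving subscriptions a grantable connection object along the same lines.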
\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 10:50:29 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Nov 18, 2021 at 9:03 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Nov 18, 2021, at 2:50 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >> I gave that a slight amount of thought during the design of this patch, but didn't think we could refuse to revoke superuser on such a basis, and didn't see what we should do with the subscription other than have it continue to be owned by the recently-non-superuser. If you have a better idea, we can discuss it, but to some degree I think that is also orthogonal to the purpose of this patch. The only sense in which this patch depends on that issue is that this patch proposes that non-superuser subscription owners are already an issue, and therefore that this patch isn't creating a new issue, but rather making more sane something that already can happen.\n> >>\n> >\n> > Don't we want to close this gap irrespective of the other part of the\n> > feature? I mean if we take out the part of your 0003 patch that checks\n> > whether the current user has permission to perform a particular\n> > operation on the target table then the gap related to the owner losing\n> > superuser privileges should be addressed.\n>\n> I don't think there is a gap. The patch does the right thing, causing the subscription whose owner has had superuser revoked to itself no longer function with superuser privileges. Whether that causes the subscription to fail depends on whether the previously-superuser now non-superuser owner now lacks sufficient privileges on the target relation(s). I think removing that part of the patch would be a regression.\n>\n\nI think we are saying the same thing. 
I intend to say that your 0003*\npatch closes the current gap in the code and we should consider\napplying it irrespective of what we do with respect to changing the\n... OWNER TO .. behavior. Is there a reason why 0003* patch (or\nsomething on those lines) shouldn't be considered to be applied?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:14:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Nov 18, 2021 at 9:15 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> The prior version of the patch only picked up the change if it happened to start a new worker, but could process multiple transactions without noticing the change. Now, it is limited to finishing the current transaction. Would you prefer that the worker noticed the change in ownership and aborted the transaction on the subscriber side? Or should the ALTER SUBSCRIPTION..OWNER TO block? I don't see much advantage to either of those options, but I also don't think I have any knock-down argument for my approach either. What do you think?\n>\n\nHow about allowing to change ownership only for disabled\nsubscriptions? Basically, users need to first disable the subscription\nand then change its ownership. Now, disabling is an asynchronous\noperation but we won't allow the ownership change command to proceed\nunless the subscription is marked disabled and all the apply/sync\nworkers are not running. After the ownership is changed, users can\nenable it. 
We already have 'slot_name' parameter's dependency on\nwhether the subscription is marked enabled or not.\n\nThis will add some steps in changing the ownership of a subscription\nbut I think it will be predictable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:26:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Nov 19, 2021 at 12:00 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2021-11-17 at 16:17 -0800, Mark Dilger wrote:\n> > You must choose a single role you want the subscription to run\n> > under.\n>\n> I think that was the source of a lot of my confusion:\n>\n> Your patches are creating the notion of a \"run as\" user by assigning\n> ownership-that-isn't-really-ownership. I got distracted wondering why\n> you would really care if some user could enable/disable/refresh/rename\n> a subscription, but the main point was to change who the subscription\n> runs as.\n>\n> That's a more general idea: I could see how \"run as\" could apply to\n> subscriptions as well as functions (right now it can only run as the\n> owner or the invoker, not an arbitrary role). And I better understand\n> your analogy to security definer now.\n>\n\nI was thinking why not separate the ownership and \"run as\" privileges\nfor the subscriptions? We can introduce a new syntax in addition to\nthe current syntax for \"Owner\" for this as:\n\nCreate Subscription sub RUNAS <role_name> ...;\nAlter Subscription sub RUNAS <role_name>\n\nNow, RUNAS role will be used to apply changes and perform the initial\ntable sync. The owner will be tied to changing some of the other\nproperties (enabling, disabling, etc.) of the subscription. 
Now, we\nstill need a superuser to create subscription and change properties\nlike CONNECTION for a subscription but we can solve that by having\nroles specific to it as being indicated by Mark in some of his\nprevious emails.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:53:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 19, 2021, at 1:44 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> I think we are saying the same thing. I intend to say that your 0003*\n> patch closes the current gap in the code and we should consider\n> applying it irrespective of what we do with respect to changing the\n> ... OWNER TO .. behavior. Is there a reason why 0003* patch (or\n> something on those lines) shouldn't be considered to be applied?\n\nJeff Davis and I had a long conversation off-list yesterday and reached the same conclusion. I will be submitting a version of 0003 which does not depend on the prior two patches.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 19 Nov 2021 07:25:49 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 19, 2021, at 1:56 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> How about allowing to change ownership only for disabled\n> subscriptions? Basically, users need to first disable the subscription\n> and then change its ownership.\n\nThere are some open issues about non-superuser owners that Jeff would like to address before allowing transfers of ownership to non-superusers. Your proposal about requiring the subscription to be disabled seems reasonable to me, but I'd like to see how it would interact with whatever Jeff proposes. 
So I think I will change the patch as you suggest, but consider it a WIP patch until then.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 19 Nov 2021 07:47:06 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 19, 2021, at 2:23 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> I was thinking why not separate the ownership and \"run as\" privileges\n> for the subscriptions? We can introduce a new syntax in addition to\n> the current syntax for \"Owner\" for this as:\n> \n> Create Subscription sub RUNAS <role_name> ...;\n> Alter Subscription sub RUNAS <role_name>\n> \n> Now, RUNAS role will be used to apply changes and perform the initial\n> table sync. The owner will be tied to changing some of the other\n> properties (enabling, disabling, etc.) of the subscription. Now, we\n> still need a superuser to create subscription and change properties\n> like CONNECTION for a subscription but we can solve that by having\n> roles specific to it as being indicated by Mark in some of his\n> previous emails.\n\nI feel disquieted about the \"runas\" idea. I can't really say why yet. Maybe it is ok, but it feels like a larger design decision than just an implementation detail about how subscriptions work. We should consider if we won't soon be doing the same thing for other parts of the system. If so, we should choose a solution that makes sense when applied more broadly.\n\nSecurity definer functions could benefit from splitting the owner from the runas role.\n\nEvent triggers might benefit from having a runas role. Currently, event triggers are always owned by superusers, but we've discussed allowing non-superuser owners. 
That proposal still has outstanding issues to be resolved, so I can't be sure if runas would be helpful, but it might.\n\nTable triggers might benefit from having a runas role. I don't have a proposal here, just an intuition that we should think about this before designing how \"runas\" works.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 19 Nov 2021 08:12:27 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "> On Nov 19, 2021, at 7:25 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Jeff Davis and I had a long conversation off-list yesterday and reached the same conclusion. I will be submitting a version of 0003 which does not depend on the prior two patches.\n\nRenamed as 0001 in version 3, as it is the only remaining patch. For anyone who reviewed the older patch set, please note that I made some changes to the src/test/subscription/t/026_nosuperuser.pl test case relative to the prior version.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 19 Nov 2021 16:45:50 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, 2021-11-19 at 16:45 -0800, Mark Dilger wrote:\n> Renamed as 0001 in version 3, as it is the only remaining patch. For\n> anyone who reviewed the older patch set, please note that I made some\n> changes to the src/test/subscription/t/026_nosuperuser.pl test case\n> relative to the prior version.\n\nWe need to do permission checking for WITH CHECK OPTION and RLS. 
The\npatch right now allows the subscription to write data that an RLS\npolicy forbids.\n\nA couple other points:\n\n * We shouldn't refer to the behavior of previous versions in the docs\nunless there's a compelling reason\n * Do we need to be smarter about partitioned tables, where an insert\ncan turn into an update?\n * Should we refactor to borrow logic from ExecInsert so that it's less\nlikely that we miss something in the future?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 16:30:06 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Nov 25, 2021 at 6:00 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2021-11-19 at 16:45 -0800, Mark Dilger wrote:\n> > Renamed as 0001 in version 3, as it is the only remaining patch. For\n> > anyone who reviewed the older patch set, please note that I made some\n> > changes to the src/test/subscription/t/026_nosuperuser.pl test case\n> > relative to the prior version.\n>\n> We need to do permission checking for WITH CHECK OPTION and RLS. The\n> patch right now allows the subscription to write data that an RLS\n> policy forbids.\n>\n\nWon't it be better to just check if the current user is superuser\nbefore applying each change as a matter of this first patch? Sorry, I\nwas under impression that first, we want to close the current gap\nwhere we allow to proceed with replication if the user's superuser\nprivileges were revoked during replication. 
To allow non-superusers\nowners, I thought it might be better to first try to detect the change\nof ownership as soon as possible instead of at the transaction\nboundary.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Nov 2021 09:51:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, 2021-11-25 at 09:51 +0530, Amit Kapila wrote:\n> Won't it be better to just check if the current user is superuser\n> before applying each change as a matter of this first patch? Sorry, I\n> was under impression that first, we want to close the current gap\n> where we allow to proceed with replication if the user's superuser\n> privileges were revoked during replication.\n\nThat could be a first step, and I don't oppose it. But it seems like a\nvery small first step that would be made obsolete when v3-0001 is\nready, which I think will be very soon.\n\n> To allow non-superusers\n> owners, I thought it might be better to first try to detect the\n> change\n> of ownership\n\nIn the case of revoked superuser privileges, there's no change in\nownership, just a change of privileges (SUPERUSER -> NOSUPERUSER). And\nif we're detecting a change of privileges, why not just do it in\nsomething closer to the right way, which is what v3-0001 is attempting\nto do.\n\n> as soon as possible instead of at the transaction\n> boundary.\n\nI don't understand why it's important to detect a loss of privileges\nfaster than a transaction boundary. Can you elaborate?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Nov 2021 12:06:36 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 24, 2021, at 4:30 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> We need to do permission checking for WITH CHECK OPTION and RLS. 
The\n> patch right now allows the subscription to write data that an RLS\n> policy forbids.\n\nThanks for reviewing and for this observation! I can verify that RLS is not being honored on the subscriber side. I agree this is a problem for subscriptions owned by non-superusers.\n\nThe implementation of the table sync worker uses COPY FROM, which makes this problem hard to fix, because COPY FROM does not support row level security. We could do some work to honor the RLS policies during the apply workers' INSERT statements, but then some data would circumvent RLS during table sync and other data would honor RLS during worker apply, which would make the implementation not only wrong but inconsistently so.\n\nI think a more sensible approach for v15 is to raise an ERROR if a non-superuser owned subscription is trying to replicate into a table which has RLS enabled. We might try to be more clever and check whether the RLS policies could possibly reject the operation (by comparing the TO and FOR clauses of the policies against the role and operation type) but that seems like a partial re-implementation of RLS. It would be simpler and more likely correct if we just unconditionally reject replicating into tables which have RLS enabled.\n\nWhat do you think?\n\n> A couple other points:\n> \n> * We shouldn't refer to the behavior of previous versions in the docs\n> unless there's a compelling reason\n\nFair enough.\n\n> * Do we need to be smarter about partitioned tables, where an insert\n> can turn into an update?\n\nDo you mean an INSERT statement with an ON CONFLICT DO UPDATE clause that is running against a partitioned table? If so, I don't think we need to handle that on the subscriber side under the current logical replication design. I would expect the plain INSERT or UPDATE that ultimately executes on the publisher to be what gets replicated to the subscriber, and not the original INSERT .. 
ON CONFLICT DO UPDATE statement.\n\n> * Should we refactor to borrow logic from ExecInsert so that it's less\n> likely that we miss something in the future?\n\nHooking into the executor at a higher level, possibly ExecInsert or ExecModifyTable would do a lot more than what logical replication currently does. If we also always used INSERT/UPDATE/DELETE statements and never COPY FROM statements, we might solve several problems at once, including honoring RLS policies and honoring rules defined for the target table on the subscriber side.\n\nDoing this would clearly be a major design change, and possibly one we do not want. Can we consider this out of scope?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 27 Nov 2021 10:05:16 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Nov 26, 2021 at 1:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > as soon as possible instead of at the transaction\n> > boundary.\n>\n> I don't understand why it's important to detect a loss of privileges\n> faster than a transaction boundary. Can you elaborate?\n>\n\nThe first reason is that way it would be consistent with what we can\nsee while doing the operations from the backend. For example, if we\nrevoke privileges from the user during the transaction, the results\nwill be reflected.\npostgres=> Begin;\nBEGIN\npostgres=*> insert into t1 values(1);\nINSERT 0 1\npostgres=*> insert into t1 values(2);\nERROR: permission denied for table t1\n\nIn this case, after the first insert, I have revoked the privileges of\nthe user from table t1 and the same is reflected in the very next\noperation. 
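[Editorial illustration — not part of the original mail.] The cadence question raised here can be modeled outside the server. The following is a toy sketch in ordinary Python, not the actual apply-worker code; `get_privileges()` is a hypothetical stand-in for the catalog lookup that something like maybe_reread_subscription() would perform:

```python
# Toy model of an apply loop that re-reads the owner's privileges every
# `recheck_every` changes. recheck_every=1 mimics the per-change behavior
# of a regular backend, as in the psql session above; a large value
# mimics checking only once per transaction.

class PermissionDenied(Exception):
    pass

def apply_changes(changes, get_privileges, recheck_every=1):
    """changes: list of (table, row) pairs. get_privileges() stands in
    for a catalog lookup and may return a smaller set over time (REVOKE)."""
    privs = get_privileges()
    applied = []
    for i, (table, row) in enumerate(changes):
        if i > 0 and i % recheck_every == 0:
            privs = get_privileges()  # periodic re-check
        if table not in privs:
            raise PermissionDenied("permission denied for table %s" % table)
        applied.append(row)
    return applied

def make_privs(revoke_after):
    """Privilege source that revokes access to t1 after `revoke_after` reads."""
    state = {"reads": 0}
    def get():
        state["reads"] += 1
        return {"t1"} if state["reads"] <= revoke_after else set()
    return get
```

With `recheck_every=1` the change following a revoke already fails, matching the interactive session; with a large `recheck_every` the whole stream is applied against the stale privilege set.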
Another reason is to make behavior predictable as users can\nalways expect when exactly the privilege change will be reflected and\nit won't depend on the number of changes in the transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:43:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Nov 27, 2021 at 11:37 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Nov 24, 2021, at 4:30 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > We need to do permission checking for WITH CHECK OPTION and RLS. The\n> > patch right now allows the subscription to write data that an RLS\n> > policy forbids.\n>\n> Thanks for reviewing and for this observation! I can verify that RLS is not being honored on the subscriber side. I agree this is a problem for subscriptions owned by non-superusers.\n>\n...\n>\n> > A couple other points:\n> >\n>\n> > * Do we need to be smarter about partitioned tables, where an insert\n> > can turn into an update?\n>\n> Do you mean an INSERT statement with an ON CONFLICT DO UPDATE clause that is running against a partitioned table? If so, I don't think we need to handle that on the subscriber side under the current logical replication design. I would expect the plain INSERT or UPDATE that ultimately executes on the publisher to be what gets replicated to the subscriber, and not the original INSERT .. ON CONFLICT DO UPDATE statement.\n>\n\nYeah, that is correct but I think the update case is more relevant\nhere. In ExecUpdate(), we convert Update to DELETE+INSERT when the\npartition constraint is failed whereas, on the subscriber-side, it\nwill simply fail in this case. 
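[Editorial illustration — not part of the original mail.] The asymmetry between publisher and subscriber here can be shown with a toy Python model — not the actual ExecUpdate() code; partition routing is reduced to a simple key predicate:

```python
# Toy model of a cross-partition update: when the new key violates the
# row's current partition constraint, the update is executed as a DELETE
# from the old partition plus an INSERT into the new one, instead of
# failing outright.

partitions = {
    "p_low":  {"rows": [], "fits": lambda key: key < 100},
    "p_high": {"rows": [], "fits": lambda key: key >= 100},
}

def route(key):
    for name, part in partitions.items():
        if part["fits"](key):
            return name
    raise ValueError("no partition accepts key %r" % key)

def insert_row(key, payload):
    partitions[route(key)]["rows"].append((key, payload))

def update_row(old_key, new_key):
    part = partitions[route(old_key)]
    idx = next(i for i, (k, _) in enumerate(part["rows"]) if k == old_key)
    _, payload = part["rows"][idx]
    if part["fits"](new_key):
        part["rows"][idx] = (new_key, payload)   # ordinary in-place update
        return "update"
    del part["rows"][idx]                        # constraint would fail:
    insert_row(new_key, payload)                 # convert to DELETE + INSERT
    return "delete+insert"
```

A subscriber that only knows how to apply an in-place update would error out exactly where this model switches to the delete+insert path.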
It is not clear to me how that is\ndirectly related to this patch but surely it will be a good\nimprovement on its own and might help if that requires us to change\nsome infrastructure here like hooking into executor at a higher level.\n\n> > * Should we refactor to borrow logic from ExecInsert so that it's less\n> > likely that we miss something in the future?\n>\n> Hooking into the executor at a higher level, possibly ExecInsert or ExecModifyTable would do a lot more than what logical replication currently does. If we also always used INSERT/UPDATE/DELETE statements and never COPY FROM statements, we might solve several problems at once, including honoring RLS policies and honoring rules defined for the target table on the subscriber side.\n>\n> Doing this would clearly be a major design change, and possibly one we do not want. Can we consider this out of scope?\n>\n\nI agree that if we want to do all of this then that would require a\nlot of changes. However, giving an error for RLS-enabled tables might\nalso be too restrictive. The few alternatives could be that (a) we\nallow subscription owners to be either have \"bypassrls\" attribute or\nthey could be superusers. (b) don't allow initial table_sync for rls\nenabled tables. (c) evaluate/analyze what is required to allow Copy\n From to start respecting RLS policies. 
(d) reject replicating any\nchanges to tables that have RLS enabled.\n\nI see that you are favoring (d) which clearly has merits like lesser\ncode/design change but not sure if that is the best way forward or we\ncan do something better than that either by following one of (a), (b),\n(c), or something less restrictive than (d).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:26:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Nov 28, 2021, at 9:56 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> In ExecUpdate(), we convert Update to DELETE+INSERT when the\n> partition constraint is failed whereas, on the subscriber-side, it\n> will simply fail in this case. It is not clear to me how that is\n> directly related to this patch but surely it will be a good\n> improvement on its own and might help if that requires us to change\n> some infrastructure here like hooking into executor at a higher level.\n\nI would rather get a fix for non-superuser subscription owners committed than expand the scope of work and have this patch linger until the v16 development cycle. This particular DELETE+INSERT problem sounds important but unrelated and out of scope.\n\n> I agree that if we want to do all of this then that would require a\n> lot of changes. However, giving an error for RLS-enabled tables might\n> also be too restrictive. The few alternatives could be that (a) we\n> allow subscription owners to be either have \"bypassrls\" attribute or\n> they could be superusers. (b) don't allow initial table_sync for rls\n> enabled tables. (c) evaluate/analyze what is required to allow Copy\n> From to start respecting RLS policies. 
(d) reject replicating any\n> changes to tables that have RLS enabled.\n> \n> I see that you are favoring (d) which clearly has merits like lesser\n> code/design change but not sure if that is the best way forward or we\n> can do something better than that either by following one of (a), (b),\n> (c), or something less restrictive than (d).\n\nI was favoring option (d) only when RLS policies exist for one or more of the target relations.\n\nSkipping the table_sync step while replicating tables that have RLS policies for subscriptions that are owned by users who lack bypassrls is interesting. If we make that work, it will be a more complete solution than option (d).\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 08:26:38 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2021-11-29 at 08:26 -0800, Mark Dilger wrote:\n> > On Nov 28, 2021, at 9:56 PM, Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > \n> > In ExecUpdate(), we convert Update to DELETE+INSERT when the\n> > partition constraint is failed whereas, on the subscriber-side, it\n> > will simply fail in this case.\n\nThank you, yes, that's the more important case.\n\n> This particular DELETE+INSERT problem sounds important but unrelated\n> and out of scope.\n\n+1\n\n> > I agree that if we want to do all of this then that would require a\n> > lot of changes. However, giving an error for RLS-enabled tables\n> > might\n> > also be too restrictive. The few alternatives could be that (a) we\n> > allow subscription owners to be either have \"bypassrls\" attribute\n> > or\n> > they could be superusers. (b) don't allow initial table_sync for\n> > rls\n> > enabled tables. (c) evaluate/analyze what is required to allow Copy\n> > From to start respecting RLS policies. 
(d) reject replicating any\n> > changes to tables that have RLS enabled.\n\nMaybe a combination?\n\nAllow subscriptions with copy_data=true iff the subscription owner is\nbypassrls or superuser. And then enforce RLS+WCO during\ninsert/update/delete.\n\nI don't think it's a big change (correct me if I'm wrong), and it\nallows good functionality now, and room to improve in the future if we\nwant to bring in more of ExecInsert into logical replication.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 10:22:47 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2021-11-29 at 09:43 +0530, Amit Kapila wrote:\n> The first reason is that way it would be consistent with what we can\n> see while doing the operations from the backend.\n\nLogical replication is not interactive, so it doesn't seem quite the\nsame.\n\nIf you have a long running INSERT INTO SELECT or COPY FROM, the\npermission checks just happen at the beginning. As a user, it wouldn't\nsurprise me if logical replication was similar.\n\n> operation. Another reason is to make behavior predictable as users\n> can\n> always expect when exactly the privilege change will be reflected and\n> it won't depend on the number of changes in the transaction.\n\nThis patch does detect ownership changes more quickly (at the\ntransaction boundary) than the current code (only when it reloads for\nsome other reason). Transaction boundary seems like a reasonable time\nto detect the change to me.\n\nDetecting faster might be nice, but I don't have a strong opinion about\nit and I don't see why it necessarily needs to happen before this patch\ngoes in.\n\nAlso, do you think the cost of doing maybe_reread_subscription() per-\ntuple instead of per-transaction would be detectable? 
If we lock\nourselves into semantics that detect changes quickly, it will be harder\nto optimize the per-tuple path later.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:26:01 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Nov 29, 2021 at 11:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2021-11-29 at 08:26 -0800, Mark Dilger wrote:\n>\n> > > I agree that if we want to do all of this then that would require a\n> > > lot of changes. However, giving an error for RLS-enabled tables\n> > > might\n> > > also be too restrictive. The few alternatives could be that (a) we\n> > > allow subscription owners to be either have \"bypassrls\" attribute\n> > > or\n> > > they could be superusers. (b) don't allow initial table_sync for\n> > > rls\n> > > enabled tables. (c) evaluate/analyze what is required to allow Copy\n> > > From to start respecting RLS policies. (d) reject replicating any\n> > > changes to tables that have RLS enabled.\n>\n> Maybe a combination?\n>\n> Allow subscriptions with copy_data=true iff the subscription owner is\n> bypassrls or superuser. 
And then enforce RLS+WCO during\n> insert/update/delete.\n>\n\nYeah, that sounds reasonable to me.\n\n> I don't think it's a big change (correct me if I'm wrong),\n>\n\nYeah, I also don't think it should be a big change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Nov 2021 17:19:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Nov 30, 2021 at 12:56 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2021-11-29 at 09:43 +0530, Amit Kapila wrote:\n> > The first reason is that way it would be consistent with what we can\n> > see while doing the operations from the backend.\n>\n> Logical replication is not interactive, so it doesn't seem quite the\n> same.\n>\n> If you have a long running INSERT INTO SELECT or COPY FROM, the\n> permission checks just happen at the beginning. As a user, it wouldn't\n> surprise me if logical replication was similar.\n>\n> > operation. Another reason is to make behavior predictable as users\n> > can\n> > always expect when exactly the privilege change will be reflected and\n> > it won't depend on the number of changes in the transaction.\n>\n> This patch does detect ownership changes more quickly (at the\n> transaction boundary) than the current code (only when it reloads for\n> some other reason). 
Transaction boundary seems like a reasonable time\n> to detect the change to me.\n>\n> Detecting faster might be nice, but I don't have a strong opinion about\n> it and I don't see why it necessarily needs to happen before this patch\n> goes in.\n>\n\nI think it would be better to do it before we allow subscription\nowners to be non-superusers.\n\n> Also, do you think the cost of doing maybe_reread_subscription() per-\n> tuple instead of per-transaction would be detectable?\n>\n\nYeah, it is possible that is why I suggested in one of the emails\nabove to allow changing the owners only for disabled subscriptions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Nov 2021 17:25:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, 2021-11-30 at 17:25 +0530, Amit Kapila wrote:\n> I think it would be better to do it before we allow subscription\n> owners to be non-superusers.\n\nThere are a couple other things to consider before allowing non-\nsuperusers to create subscriptions anyway. For instance, a non-\nsuperuser shouldn't be able to use a connection string that reads the\ncertificate file from the server unless they also have\npg_read_server_files privs.\n\n> Yeah, it is possible that is why I suggested in one of the emails\n> above to allow changing the owners only for disabled subscriptions.\n\nThe current patch detects the following cases at the transaction\nboundary:\n\n * ALTER SUBSCRIPTION ... OWNER TO ...\n * ALTER ROLE ... NOSUPERUSER\n * privileges revoked one way or another (aside from the RLS/WCO\nproblems, which will be fixed)\n\nIf we want to detect at row boundaries we need to capture all of those\ncases too, or else we're being inconsistent. 
The latter two cannot be\ntied to whether the subscription is disabled or not, so I don't think\nthat's a complete solution.\n\nHow about (as a separate patch) we just do maybe_reread_subscription()\nevery K operations within a transaction? That would speed up\npermissions errors if a revoke happens.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 12:42:13 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Dec 1, 2021 at 2:12 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, 2021-11-30 at 17:25 +0530, Amit Kapila wrote:\n> > I think it would be better to do it before we allow subscription\n> > owners to be non-superusers.\n>\n> There are a couple other things to consider before allowing non-\n> superusers to create subscriptions anyway. For instance, a non-\n> superuser shouldn't be able to use a connection string that reads the\n> certificate file from the server unless they also have\n> pg_read_server_files privs.\n>\n\nIsn't allowing to create subscriptions via non-superusers and allowing\nto change the owner two different things? I am under the impression\nthat the latter one is more towards allowing the workers to apply\nchanges with a non-superuser role.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Dec 2021 19:06:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 1, 2021, at 5:36 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Wed, Dec 1, 2021 at 2:12 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>> \n>> On Tue, 2021-11-30 at 17:25 +0530, Amit Kapila wrote:\n>>> I think it would be better to do it before we allow subscription\n>>> owners to be non-superusers.\n>> \n>> There are a couple other things to consider before allowing non-\n>> superusers to create subscriptions anyway. 
For instance, a non-\n>> superuser shouldn't be able to use a connection string that reads the\n>> certificate file from the server unless they also have\n>> pg_read_server_files privs.\n>> \n> \n> Isn't allowing to create subscriptions via non-superusers and allowing\n> to change the owner two different things? I am under the impression\n> that the latter one is more towards allowing the workers to apply\n> changes with a non-superuser role.\n\nThe short-term goal is to have logical replication workers respect the privileges of the role which owns the subscription.\n\nThe long-term work probably includes creating a predefined role with permission to create subscriptions, and the ability to transfer those subscriptions to roles who might be neither superuser nor members of any particular predefined role; the idea being that logical replication subscriptions can be established without any superuser involvement, and may thereafter run without any special privilege.\n\nThe more recent patches on this thread are not as ambitious as the earlier patch-sets. We are no longer trying to support transferring subscriptions to non-superusers.\n\nRight now, on HEAD, if a subscription owner has superuser revoked, the subscription can continue to operate as superuser in so far as its replication actions are concerned. That seems like a pretty big security hole.\n\nThis patch mostly plugs that hole by adding permissions checks, so that a subscription owned by a role who has privileges revoked cannot (for the most part) continue to act under the old privileges.\n\nThere are two problematic edge cases that can occur after transfer of ownership. 
Remember, the new owner is required to be superuser for the transfer of ownership to occur.\n\n1) A subscription is transferred to a new owner, and the new owner then has privilege revoked.\n\n2) A subscription is transferred to a new owner, and then the old owner has privileges increased.\n\nIn both cases, a currently running logical replication worker may finish a transaction in progress acting with the current privileges of the old owner. The clearest solution is, as you suggest, to refuse transfer of ownership of subscriptions that are enabled.\n\nDoing so will create a failure case for REASSIGN OWNED BY. Will that be ok?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:21:35 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Dec 2, 2021 at 12:51 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n> > On Dec 1, 2021, at 5:36 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 1, 2021 at 2:12 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> >>\n> >> On Tue, 2021-11-30 at 17:25 +0530, Amit Kapila wrote:\n> >>> I think it would be better to do it before we allow subscription\n> >>> owners to be non-superusers.\n> >>\n> >> There are a couple other things to consider before allowing non-\n> >> superusers to create subscriptions anyway. For instance, a non-\n> >> superuser shouldn't be able to use a connection string that reads the\n> >> certificate file from the server unless they also have\n> >> pg_read_server_files privs.\n> >>\n> >\n> > Isn't allowing to create subscriptions via non-superusers and allowing\n> > to change the owner two different things? 
I am under the impression\n> > that the latter one is more towards allowing the workers to apply\n> > changes with a non-superuser role.\n>\n> The short-term goal is to have logical replication workers respect the privileges of the role which owns the subscription.\n>\n> The long-term work probably includes creating a predefined role with permission to create subscriptions, and the ability to transfer those subscriptions to roles who might be neither superuser nor members of any particular predefined role; the idea being that logical replication subscriptions can be established without any superuser involvement, and may thereafter run without any special privilege.\n>\n> The more recent patches on this thread are not as ambitious as the earlier patch-sets. We are no longer trying to support transferring subscriptions to non-superusers.\n>\n> Right now, on HEAD, if a subscription owner has superuser revoked, the subscription can continue to operate as superuser in so far as its replication actions are concerned. That seems like a pretty big security hole.\n>\n> This patch mostly plugs that hole by adding permissions checks, so that a subscription owned by a role who has privileges revoked cannot (for the most part) continue to act under the old privileges.\n>\n\nIf we want to maintain the property that subscriptions can only be\nowned by superuser for your first version then isn't a simple check\nlike ((!superuser()) for each of the operations is sufficient?\n\n> There are two problematic edge cases that can occur after transfer of ownership. Remember, the new owner is required to be superuser for the transfer of ownership to occur.\n>\n> 1) A subscription is transferred to a new owner, and the new owner then has privilege revoked.\n>\n> 2) A subscription is transferred to a new owner, and then the old owner has privileges increased.\n>\n\nIn (2), I am not clear what do you mean by \"the old owner has\nprivileges increased\"? 
If the owners can only be superusers then what\ndoes it mean to increase the privileges.\n\n> In both cases, a currently running logical replication worker may finish a transaction in progress acting with the current privileges of the old owner. The clearest solution is, as you suggest, to refuse transfer of ownership of subscriptions that are enabled.\n>\n> Doing so will create a failure case for REASSIGN OWNED BY. Will that be ok?\n>\n\nI think so. Do we see any problem with that? I think we have some\nfailure cases currently as well like \"All Tables Publication\" can only\nbe owned by superusers whereas ownership for others can be to\nnon-superusers and similarly we can't change ownership for pinned\nobjects. I think the case being discussed is not exactly the same but\nI am not able to see a problem with it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 2 Dec 2021 14:59:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 2, 2021, at 1:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> If we want to maintain the property that subscriptions can only be\n> owned by superuser for your first version then isn't a simple check\n> like ((!superuser()) for each of the operations is sufficient?\n\nAs things stand today, nothing prevents a superuser subscription owner from having superuser revoked. The patch does nothing to change this.\n\n> In (2), I am not clear what do you mean by \"the old owner has\n> privileges increased\"? If the owners can only be superusers then what\n> does it mean to increase the privileges.\n\nThe old owner may have had privileges reduced (no superuser, only permission to write into a specific schema, etc.) and the subscription enabled only after those privilege reductions were put in place. 
This is a usage pattern this patch is intended to support, by honoring those privilege restrictions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 05:31:46 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Dec 3, 2021 at 10:37 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Dec 2, 2021, at 1:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > If we want to maintain the property that subscriptions can only be\n> > owned by superuser for your first version then isn't a simple check\n> > like ((!superuser()) for each of the operations is sufficient?\n>\n> As things stand today, nothing prevents a superuser subscription owner from having superuser revoked. The patch does nothing to change this.\n>\n\nI understand that but won't that get verified when we look up the\ninformation in pg_authid as part of superuser() check?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:49:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 6, 2021, at 2:19 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n>>> If we want to maintain the property that subscriptions can only be\n>>> owned by superuser\n\nWe don't want to maintain such a property, or at least, that's not what I want. I don't think that's what Jeff wants, either.\n\nTo clarify, I'm not entirely sure how to interpret the verb \"maintain\" in your question, since before the patch the property does not exist, and after the patch, it continues to not exist. 
We could *add* such a property, of course, though this patch does not attempt any such thing.\n\n> I understand that but won't that get verified when we look up the\n> information in pg_authid as part of superuser() check?\n\nIf we added a superuser() check, then yes, but that would take things in a direction I do not want to go.\n\nAs I perceive the roadmap:\n\n1) Fix the current bug wherein subscription changes are applied with superuser force after the subscription owner has superuser privileges revoked.\n2) Allow the transfer of subscriptions to non-superuser owners.\n3) Allow the creation of subscriptions by non-superusers who are members of some as yet to be created predefined role, say \"pg_create_subscriptions\"\n\nI may be wrong, but it sounds like you interpret the intent of this patch as enforcing superuserness. That's not so. This patch intends to correctly handle the situation where a subscription is owned by a non-superuser (task 1, above) without going so far as creating new paths by which that situation could arise (tasks 2 and 3, above).\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 07:56:56 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Le lundi 6 décembre 2021, 16:56:56 CET Mark Dilger a écrit :\n> > On Dec 6, 2021, at 2:19 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>> If we want to maintain the property that subscriptions can only be\n> >>> owned by superuser\n> \n> We don't want to maintain such a property, or at least, that's not what I\n> want. I don't think that's what Jeff wants, either.\n\nThat's not what I want either: the ability to run and refresh subscriptions as \na non superuser is a desirable feature. 
\n\nThe REFRESH part was possible before PG 14, when it was allowed to run REFRESH \nin a function, which could be made to run as security definer. \n\n\n> As I perceive the roadmap:\n> \n> 1) Fix the current bug wherein subscription changes are applied with\n> superuser force after the subscription owner has superuser privileges\n> revoked. 2) Allow the transfer of subscriptions to non-superuser owners.\n> 3) Allow the creation of subscriptions by non-superusers who are members of\n> some as yet to be created predefined role, say \"pg_create_subscriptions\"\n\nThis roadmap seems sensible.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Tue, 07 Dec 2021 10:39:43 +0100", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:26 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Dec 6, 2021, at 2:19 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >>> If we want to maintain the property that subscriptions can only be\n> >>> owned by superuser\n>\n> We don't want to maintain such a property, or at least, that's not what I want. I don't think that's what Jeff wants, either.\n>\n> To clarify, I'm not entirely sure how to interpret the verb \"maintain\" in your question, since before the patch the property does not exist, and after the patch, it continues to not exist. We could *add* such a property, of course, though this patch does not attempt any such thing.\n>\n\nOkay, let me try to explain again. Following is the text from docs\n[1]: \" (a) To create a subscription, the user must be a superuser. (b)\nThe subscription apply process will run in the local database with the\nprivileges of a superuser. (c) Privileges are only checked once at the\nstart of a replication connection. 
They are not re-checked as each\nchange record is read from the publisher, nor are they re-checked for\neach change when applied.\n\nMy understanding is that we want to improve what is written as (c)\nwhich I think is the same as what you mentioned later as \"Fix the\ncurrent bug wherein subscription changes are applied with superuser\nforce after the subscription owner has superuser privileges revoked.\".\nAm I correct till here? If so, I think what I am suggesting should fix\nthis with the assumption that we still want to follow (b) at least for\nthe first patch. One possibility is that our understanding of the\nfirst problem is the same but you want to allow apply worker running\neven when superuser privileges are revoked provided the user with\nwhich it is running has appropriate privileges on the objects being\naccessed by apply worker.\n\nWe will talk about other points of the roadmap you mentioned once our\nunderstanding for the first one matches.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-security.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:59:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 7, 2021, at 2:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> Okay, let me try to explain again. Following is the text from docs\n> [1]: \" (a) To create a subscription, the user must be a superuser. (b)\n> The subscription apply process will run in the local database with the\n> privileges of a superuser. (c) Privileges are only checked once at the\n> start of a replication connection. 
They are not re-checked as each\n> change record is read from the publisher, nor are they re-checked for\n> each change when applied.\n> \n> My understanding is that we want to improve what is written as (c)\n> which I think is the same as what you mentioned later as \"Fix the\n> current bug wherein subscription changes are applied with superuser\n> force after the subscription owner has superuser privileges revoked.\".\n> Am I correct till here? If so, I think what I am suggesting should fix\n> this with the assumption that we still want to follow (b) at least for\n> the first patch.\n\nOk, that's a point of disagreement. I was trying to fix both (b) and (c) in the first patch.\n\n> One possibility is that our understanding of the\n> first problem is the same but you want to allow apply worker running\n> even when superuser privileges are revoked provided the user with\n> which it is running has appropriate privileges on the objects being\n> accessed by apply worker.\n\nCorrect, that's what I'm trying to make safe.\n\n> We will talk about other points of the roadmap you mentioned once our\n> understanding for the first one matches.\n\nI am happy to have an off-list phone call with you, if you like.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 06:55:51 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Dec 7, 2021 at 8:25 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Dec 7, 2021, at 2:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Okay, let me try to explain again. Following is the text from docs\n> > [1]: \" (a) To create a subscription, the user must be a superuser. (b)\n> > The subscription apply process will run in the local database with the\n> > privileges of a superuser. 
(c) Privileges are only checked once at the\n> > start of a replication connection. They are not re-checked as each\n> > change record is read from the publisher, nor are they re-checked for\n> > each change when applied.\n> >\n> > My understanding is that we want to improve what is written as (c)\n> > which I think is the same as what you mentioned later as \"Fix the\n> > current bug wherein subscription changes are applied with superuser\n> > force after the subscription owner has superuser privileges revoked.\".\n> > Am I correct till here? If so, I think what I am suggesting should fix\n> > this with the assumption that we still want to follow (b) at least for\n> > the first patch.\n>\n> Ok, that's a point of disagreement. I was trying to fix both (b) and (c) in the first patch.\n>\n\nBut, I think as soon as we are trying to fix (b), we seem to be\nallowing non-superusers to apply changes. If we want to do that then\nwe should be even allowed to change the owners to non-superusers. I\nwas thinking of the below order:\n1. First fix (c) from the above description \"Privileges are only\nchecked once at the start of a replication connection.\"\n2A. Allow the transfer of subscriptions to non-superuser owners. This\nwill be allowed only on disabled subscriptions to make this action\npredictable.\n2B. The apply worker should be able to apply the changes provided the\nuser has appropriate privileges on the objects being accessed by apply\nworker.\n3) Allow the creation of subscriptions by non-superusers who are\nmembers of some as yet to be created predefined role, say\n\"pg_create_subscriptions\"\n\nWe all seem to agree that (3) can be done later as an independent\nproject. 2A, 2B can be developed as separate patches but they need to\nbe considered together for commit. 
After 2A, 2B, the first one (1)\nwon't be required so, in fact, we can just ignore (1) but the only\nbenefit I see is that if we stuck with some design problem during the\ndevelopment of 2A, 2B, we would have at least something better than\nwhat we have now.\n\nYou seem to be indicating let's do 2B first as that will anyway be\nused later after 2A and 1 won't be required if we do that. I see that\nbut I personally feel either we should follow 1, 2(A, B) or just do\n2(A, B).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Dec 2021 10:28:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Nov 30, 2021 at 6:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > This patch does detect ownership changes more quickly (at the\n> > transaction boundary) than the current code (only when it reloads for\n> > some other reason). Transaction boundary seems like a reasonable time\n> > to detect the change to me.\n> >\n> > Detecting faster might be nice, but I don't have a strong opinion about\n> > it and I don't see why it necessarily needs to happen before this patch\n> > goes in.\n>\n> I think it would be better to do it before we allow subscription\n> owners to be non-superusers.\n\nI think it would be better not to ever do it at any time.\n\nIt seems like a really bad idea to me to change the run-as user in the\nmiddle of a transaction. That seems prone to producing all sorts of\nconfusing behavior that's hard to understand, and hard to test. So\nwhat are we to do if a change occurs mid-transaction? I think we can\neither finish replicating the current transaction and then switch to\nthe new owner for the next transaction, or we could abort the current\nattempt to replicate the transaction and retry the whole transaction\nwith the new run-as user. 
My guess is that most users would prefer the\nformer behavior to the latter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 10:41:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Dec 8, 2021 at 11:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> But, I think as soon as we are trying to fix (b), we seem to be\n> allowing non-superusers to apply changes. If we want to do that then\n> we should be even allowed to change the owners to non-superusers. I\n> was thinking of the below order:\n> 1. First fix (c) from the above description \"Privileges are only\n> checked once at the start of a replication connection.\"\n> 2A. Allow the transfer of subscriptions to non-superuser owners. This\n> will be allowed only on disabled subscriptions to make this action\n> predictable.\n> 2B. The apply worker should be able to apply the changes provided the\n> user has appropriate privileges on the objects being accessed by apply\n> worker.\n> 3) Allow the creation of subscriptions by non-superusers who are\n> members of some as yet to be created predefined role, say\n> \"pg_create_subscriptions\"\n>\n> We all seem to agree that (3) can be done later as an independent\n> project. 2A, 2B can be developed as separate patches but they need to\n> be considered together for commit. After 2A, 2B, the first one (1)\n> won't be required so, in fact, we can just ignore (1) but the only\n> benefit I see is that if we stuck with some design problem during the\n> development of 2A, 2B, we would have at least something better than\n> what we have now.\n>\n> You seem to be indicating let's do 2B first as that will anyway be\n> used later after 2A and 1 won't be required if we do that. 
I see that\n> but I personally feel either we should follow 1, 2(A, B) or just do\n> 2(A, B).\n\n1 and 2B seem to require changing the same code, or related code. 2A\nseems to require a completely different set of changes. If I'm right\nabout that, it seems like a good reason for doing 1+2B first and\nleaving 2A for a separate patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 10:47:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 9, 2021, at 7:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Nov 30, 2021 at 6:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> This patch does detect ownership changes more quickly (at the\n>>> transaction boundary) than the current code (only when it reloads for\n>>> some other reason). Transaction boundary seems like a reasonable time\n>>> to detect the change to me.\n>>> \n>>> Detecting faster might be nice, but I don't have a strong opinion about\n>>> it and I don't see why it necessarily needs to happen before this patch\n>>> goes in.\n>> \n>> I think it would be better to do it before we allow subscription\n>> owners to be non-superusers.\n> \n> I think it would be better not to ever do it at any time.\n> \n> It seems like a really bad idea to me to change the run-as user in the\n> middle of a transaction.\n\nI agree. We allow SET ROLE inside transactions, but faking one on the subscriber seems odd. No such role change was performed on the publisher side, nor is there a principled reason for assuming the old run-as role has membership in the new run-as role, so we'd be pretending to do something that might otherwise be impossible.\n\nThere was some discussion off-list about having the apply worker take out a lock on its subscription, thereby blocking ownership changes mid-transaction. 
I coded that and it seems to work fine, but I have a hard time seeing how the lock traffic would be worth expending. Between (a) changing roles mid-transaction, and (b) locking the subscription for each transaction, I'd prefer to do neither, but (b) seems far better than (a). Thoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 09:22:07 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Dec 9, 2021, at 7:47 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> 1 and 2B seem to require changing the same code, or related code. 1A\n> seems to require a completely different set of changes. If I'm right\n> about that, it seems like a good reason for doing 1+2B first and\n> leaving 2A for a separate patch.\n\nThere are unresolved problems with 2A and 3 which were discussed upthread. I don't want to include fixes for them in this patch, as it greatly expands the scope of this patch, and is a logically separate effort. We can come back to those problems after this first patch is committed.\n\n\nSpecifically, a non-superuser owner can perform ALTER SUBSCRIPTION and do things that are morally equivalent to creating a new subscription. This is problematic where things like the connection string are concerned, because it means the non-superuser owner can connect out to entirely different servers, without any access control checks to make sure the owner should be able to connect to these servers.\n\nThis problem already exists, right now. I'm not fixing it in this first patch, but I'm also not making it any worse.\n\nThe solution Jeff Davis proposed seems right to me. We change subscriptions to use a foreign server rather than a freeform connection string. 
When creating or altering a subscription, the role performing the action must have privileges on any foreign server they use.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 09:48:59 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Dec 9, 2021 at 10:52 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Dec 9, 2021, at 7:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Nov 30, 2021 at 6:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>> This patch does detect ownership changes more quickly (at the\n> >>> transaction boundary) than the current code (only when it reloads for\n> >>> some other reason). Transaction boundary seems like a reasonable time\n> >>> to detect the change to me.\n> >>>\n> >>> Detecting faster might be nice, but I don't have a strong opinion about\n> >>> it and I don't see why it necessarily needs to happen before this patch\n> >>> goes in.\n> >>\n> >> I think it would be better to do it before we allow subscription\n> >> owners to be non-superusers.\n> >\n> > I think it would be better not to ever do it at any time.\n> >\n> > It seems like a really bad idea to me to change the run-as user in the\n> > middle of a transaction.\n>\n> I agree. We allow SET ROLE inside transactions, but faking one on the subscriber seems odd. No such role change was performed on the publisher side, nor is there a principled reason for assuming the old run-as role has membership in the new run-as role, so we'd be pretending to do something that might otherwise be impossible.\n>\n> There was some discussion off-list about having the apply worker take out a lock on its subscription, thereby blocking ownership changes mid-transaction. 
I coded that and it seems to work fine, but I have a hard time seeing how the lock traffic would be worth expending. Between (a) changing roles mid-transaction, and (b) locking the subscription for each transaction, I'd prefer to do neither, but (b) seems far better than (a). Thoughts?\n>\n\nYeah, to me also (b) sounds better than (a). However, a few points\nthat we might want to consider in that regard are as follows: 1.\nlocking the subscription for each transaction will add new blocking\nareas considering we acquire AccessExclusiveLock to change any\nproperty of subscription. But as Alter Subscription won't be that\nfrequent operation it might be acceptable. 2. It might lead to adding\nsome cost to small transactions but not sure if that will be\nnoticeable. 3. Tomorrow, if we want to make the apply-process parallel\n(IIRC, we do have the patch for that somewhere in archives) especially\nfor large in-progress transactions then this locking will have\nadditional blocking w.r.t Altering Subscription. But again, this also\nmight be acceptable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 10 Dec 2021 09:45:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Yeah, to me also (b) sounds better than (a). However, a few points\n> that we might want to consider in that regard are as follows: 1.\n> locking the subscription for each transaction will add new blocking\n> areas considering we acquire AccessExclusiveLock to change any\n> property of subscription. But as Alter Subscription won't be that\n> frequent operation it might be acceptable.\n\nThe problem isn't the cost of the locks taken by ALTER SUBSCRIPTION.\nIt's the cost of locking and unlocking the relation for every\ntransaction we apply. 
Suppose it's a pgbench-type workload with a\nsingle UPDATE per transaction. You've just limited the maximum\npossible apply speed to about, I think, 30,000 transactions per second\nno matter how many parallel workers you use, because that's how fast\nthe lock manager is (or was, unless newer hardware or newer PG\nversions have changed things in a way I don't know about). That seems\nlike a poor idea. There's nothing wrong with noticing changes at the\nnext transaction boundary, as long as we document it. So why would we\nincur a possibly-significant performance cost to provide a stricter\nguarantee?\n\nI bet users wouldn't even like this behavior. It would mean that if\nyou are replicating a long-running transaction, an ALTER SUBSCRIPTION\ncommand might block for a long time until replication of that\ntransaction completes. I have a hard time understanding why anyone\nwould consider that an improvement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Dec 2021 09:09:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 12/10/21 09:09, Robert Haas wrote:\n> On Thu, Dec 9, 2021 at 11:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> Yeah, to me also (b) sounds better than (a). However, a few points\n>> that we might want to consider in that regard are as follows: 1.\n>> locking the subscription for each transaction will add new blocking\n>> areas considering we acquire AccessExclusiveLock to change any\n>> property of subscription. But as Alter Subscription won't be that\n>> frequent operation it might be acceptable.\n> The problem isn't the cost of the locks taken by ALTER SUBSCRIPTION.\n> It's the cost of locking and unlocking the relation for every\n> transaction we apply. Suppose it's a pgbench-type workload with a\n> single UPDATE per transaction. 
You've just limited the maximum\n> possible apply speed to about, I think, 30,000 transactions per second\n> no matter how many parallel workers you use, because that's how fast\n> the lock manager is (or was, unless newer hardware or newer PG\n> versions have changed things in a way I don't know about). That seems\n> like a poor idea. There's nothing wrong with noticing changes at the\n> next transaction boundary, as long as we document it. So why would we\n> incur a possibly-significant performance cost to provide a stricter\n> guarantee?\n>\n> I bet users wouldn't even like this behavior. It would mean that if\n> you are replicating a long-running transaction, an ALTER SUBSCRIPTION\n> command might block for a long time until replication of that\n> transaction completes. I have a hard time understanding why anyone\n> would consider that an improvement.\n>\n\n\n+1\n\n\nI think noticing changes at the transaction boundary is perfectly\nacceptable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 10 Dec 2021 10:20:02 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Dec 10, 2021 at 7:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 11:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah, to me also (b) sounds better than (a). However, a few points\n> > that we might want to consider in that regard are as follows: 1.\n> > locking the subscription for each transaction will add new blocking\n> > areas considering we acquire AccessExclusiveLock to change any\n> > property of subscription. 
But as Alter Subscription won't be that\n> > frequent operation it might be acceptable.\n>\n> The problem isn't the cost of the locks taken by ALTER SUBSCRIPTION.\n> It's the cost of locking and unlocking the relation for every\n> transaction we apply.\n>\n\nThis point is not clear to me as we are already locking the relation\nwhile applying changes. I think here additional cost is to lock a\nparticular subscription as well in addition to the relation on which\nwe are going to perform apply. I agree that has a cost and that is why\nI mentioned it as one of the points above and then also the\nconcurrency effect as you also noted could make this idea moot.\n\n> Suppose it's a pgbench-type workload with a\n> single UPDATE per transaction. You've just limited the maximum\n> possible apply speed to about, I think, 30,000 transactions per second\n> no matter how many parallel workers you use, because that's how fast\n> the lock manager is (or was, unless newer hardware or newer PG\n> versions have changed things in a way I don't know about). That seems\n> like a poor idea. There's nothing wrong with noticing changes at the\n> next transaction boundary, as long as we document it.\n>\n\nIf we want to just document this then I think we should also keep in\nmind that these could be N transactions as well if say tomorrow we\nhave N parallel apply workers applying the N transactions in parallel.\nI think it might also be possible that RLS policies won't be applied\nfor initial table sync whereas those will be applied for later changes\neven though the ownership has changed before both operations and one\nof those happens to miss it. If that is possible, then it might be\nbetter to avoid the same as it could appear inconsistent as mentioned\nby Mark [1] as well. Now, it might be possible to avoid this by\nimplementation or we can say that we don't care about this or just\ndocument it. 
But it seems to me that if we have some way to detect the\nchange of ownership at each operation level then no such possibilities\nwould arise.\n\nThe other alternative we discussed was to allow a change of ownership\non disabled subscriptions; that way the apply behavior will always be\npredictable.\n\nThere is clearly a merit in noticing the change of ownership at\ntransaction boundary but just wanted to consider other possibilities.\nIt could be that detecting at transaction-boundary is the best we can\ndo but I think there is no harm in considering other possibilities.\n\n> So why would we\n> incur a possibly-significant performance cost to provide a stricter\n> guarantee?\n>\n> I bet users wouldn't even like this behavior. It would mean that if\n> you are replicating a long-running transaction, an ALTER SUBSCRIPTION\n> command might block for a long time until replication of that\n> transaction completes.\n>\n\nAgreed and if we decide to lock the subscription during the initial\ntable sync phase then that could also take a long time for which again\nusers might not be happy.\n\n[1] - https://www.postgresql.org/message-id/FE7D7024-6723-4ACB-82AB-94F6A194BE0D%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 11 Dec 2021 10:25:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "> On Nov 24, 2021, at 4:30 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> We need to do permission checking for WITH CHECK OPTION and RLS. The\n> patch right now allows the subscription to write data that an RLS\n> policy forbids.\n\nVersion 4 of the patch, attached, no longer allows RLS to be circumvented, but does so in a coarse-grained fashion. If the target table has row-level security policies which are enforced against the subscription owner, the replication draws an error, much as with a permissions failure. 
This seems sufficient for now, as superusers, roles with bypassrls, and target table owners should be able to replicate as before. We may want to revisit this later, perhaps if/when we address your ExecInsert question, below.\n\n> \n> A couple other points:\n> \n> * We shouldn't refer to the behavior of previous versions in the docs\n> unless there's a compelling reason\n\nFixed.\n\n> * Do we need to be smarter about partitioned tables, where an insert\n> can turn into an update?\n\nIndeed, the logic of apply_handle_tuple_routing() required a bit of refactoring. Fixed in v4.\n\n> * Should we refactor to borrow logic from ExecInsert so that it's less\n> likely that we miss something in the future?\n\nLet's just punt on this for now.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 15 Dec 2021 12:23:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Dec 16, 2021 at 1:53 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Nov 24, 2021, at 4:30 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > We need to do permission checking for WITH CHECK OPTION and RLS. 
The\n> > patch right now allows the subscription to write data that an RLS\n> > policy forbids.\n>\n> Version 4 of the patch, attached.\n>\n\nFor Update/Delete, we do read the table first via\nFindReplTupleInLocalRel(), so is there a need to check ACL_SELECT\nbefore that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Jan 2022 12:27:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, 2022-01-08 at 12:27 +0530, Amit Kapila wrote:\n> For Update/Delete, we do read the table first via\n> FindReplTupleInLocalRel(), so is there a need to check ACL_SELECT\n> before that?\n\nIf it's logically an update/delete, then I think ACL_UPDATE/DELETE is\nthe right one to check. Do you have a different opinion?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 07 Jan 2022 23:31:06 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2021-12-15 at 12:23 -0800, Mark Dilger wrote:\n> > On Nov 24, 2021, at 4:30 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> > \n> > We need to do permission checking for WITH CHECK OPTION and RLS.\n> > The\n> > patch right now allows the subscription to write data that an RLS\n> > policy forbids.\n> \n> Version 4 of the patch, attached, no longer allows RLS to be\n> circumvented, but does so in a course-grained fashion.\n\nCommitted.\n\nI tried to do some performance testing to see if there was any impact\nof the extra catalog + ACL checks. Logical replication seems slow\nenough -- something like 3X slower than local inserts -- that it didn't\nseem to make a difference.\n\nTo test it, I did the following:\n 1. sent a SIGSTOP to the logical apply worker\n 2. loaded more data in publisher\n 3. made the subscriber a sync replica\n 4. timed the following:\n a. sent a SIGCONT to the logical apply worker\n b. 
insert a single tuple on the publisher side\n c. wait for the insert to return, indicating that logical\n replication is done up to that point\n\nDoes anyone have a better way to measure logical replication\nperformance?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 07 Jan 2022 23:38:31 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Jan 8, 2022 at 1:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2022-01-08 at 12:27 +0530, Amit Kapila wrote:\n> > For Update/Delete, we do read the table first via\n> > FindReplTupleInLocalRel(), so is there a need to check ACL_SELECT\n> > before that?\n>\n> If it's logically an update/delete, then I think ACL_UPDATE/DELETE is\n> the right one to check. Do you have a different opinion?\n>\n\nBut shouldn't we do it the first time before accessing the table?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Jan 2022 15:35:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, 2022-01-08 at 15:35 +0530, Amit Kapila wrote:\n> On Sat, Jan 8, 2022 at 1:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > \n> > On Sat, 2022-01-08 at 12:27 +0530, Amit Kapila wrote:\n> > > For Update/Delete, we do read the table first via\n> > > FindReplTupleInLocalRel(), so is there a need to check ACL_SELECT\n> > > before that?\n> > \n> > If it's logically an update/delete, then I think ACL_UPDATE/DELETE\n> > is\n> > the right one to check. Do you have a different opinion?\n> > \n> \n> But shouldn't we do it the first time before accessing the table?\n\nI'm not sure I follow the reasoning. 
Are you saying that, to logically\nreplay a simple DELETE, the subscription owner should have SELECT\nprivileges on the destination table?\n\nIs there a way that a subscription owner could somehow exploit a DELETE\nprivilege to see the contents of a table on which they have no SELECT\nprivileges? Or is it purely an internal read, which is necessary for\nany ordinary local DELETE/UPDATE anyway?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 08 Jan 2022 09:14:59 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> I'm not sure I follow the reasoning. Are you saying that, to logically\n> replay a simple DELETE, the subscription owner should have SELECT\n> privileges on the destination table?\n\nWe consider that DELETE WHERE <condition> requires SELECT privilege\non the column(s) read by the <condition>. I suppose that the point\nhere is to enforce the same privilege checks that occur in normal\nSQL operation, so yes.\n\n> Is there a way that a subscription owner could somehow exploit a DELETE\n> privilege to see the contents of a table on which they have no SELECT\n> privileges?\n\nBEGIN;\nDELETE FROM tab WHERE col = 'foo';\n-- note deletion count\nROLLBACK;\n\nNow you have some information about whether \"col\" contains 'foo'.\nAdmittedly, it might be a pretty low-bandwidth way to extract data,\nbut we still regard it as a privilege issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jan 2022 12:37:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "... btw, I'd like to complain that this new test script consumes\na completely excessive amount of time. On my fairly-new primary\nworkstation:\n\n[12:48:00] t/027_nosuperuser.pl ............... 
ok 22146 ms ( 0.02 usr 0.00 sys + 1.12 cusr 0.95 csys = 2.09 CPU)\n\nThe previously-slowest script in the subscription suite is\n\n[12:48:23] t/100_bugs.pl ...................... ok 7048 ms ( 0.00 usr 0.00 sys + 2.85 cusr 0.99 csys = 3.84 CPU)\n\nand the majority of the scripts clock in at more like 2 to 4 seconds.\nSo I don't think I'm out of line in saying that this test is consuming\nan order of magnitude more time than is justified. I do not wish to\nsee this much time added to every check-world run till kingdom come\nfor this one feature/issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jan 2022 12:57:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, 2022-01-08 at 12:57 -0500, Tom Lane wrote:\n> ... btw, I'd like to complain that this new test script consumes\n> a completely excessive amount of time.\n\nShould be fixed now; I brought the number of tests down from 100 to 14.\nIt's not running in 2 seconds on my machine, but it's in line with the\nother tests.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 09 Jan 2022 10:09:03 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2022-01-08 at 12:57 -0500, Tom Lane wrote:\n>> ... 
btw, I'd like to complain that this new test script consumes\n>> a completely excessive amount of time.\n\n> Should be fixed now; I brought the number of tests down from 100 to 14.\n> It's not running in 2 seconds on my machine, but it's in line with the\n> other tests.\n\nThanks, I appreciate that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jan 2022 14:10:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Jan 8, 2022 at 2:38 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Committed.\n\nI was just noticing that what was committed here didn't actually fix\nthe problem implied by the subject line. That is, non-superuser still\ncan't own subscriptions. To put that another way, there's no way for\nthe superuser to delegate the setup and administration of logical\nreplication to a non-superuser. That's a bummer.\n\nReading the thread, I'm not quite sure why we seemingly did all the\npreparatory work and then didn't actually fix the problem. It was\npreviously proposed that we introduce a new predefined role\npg_create_subscriptions and allow users who have the privileges of\nthat predefined role to create and alter subscriptions. There are a\nfew issues with that which, however, seem fairly solvable to me:\n\n1. Jeff pointed out that if you supply a connection string that is\ngoing to try to access local files, you'd better have\npg_read_server_files, or else we should not let you use that\nconnection string. I guess that's mostly a function of which\nparameters you specify, e.g. passfile, sslcert, sslkey, though maybe\nfor host it depends on whether the value starts with a slash. We might\nneed to think a bit here to make sure we get the rules right but it\nseems like a pretty solvable problem.\n\n2. 
There was also quite a bit of discussion of what to do if a user\nwho was previously eligible to own a subscription ceases to be\neligible, in particular around a superuser who is made into a\nnon-superuser, but the same problem would apply if you had\npg_create_subscriptions or pg_read_server_files and then lost it. My\nvote is to not worry about it too much. Specifically, I think we\nshould certainly check whether the user has permission to create a\nsubscription before letting them do so, but we could handle the case\nwhere the user already owns a subscription and tries to modify it by\neither allowing or denying the operation and I think either of those\nwould be fine. I even think we could do one of those in some cases and\nthe other in other cases and as long as there is some principle to the\nthing, it's fine. I argue that it's not a normal configuration and\ntherefore it doesn't have to work in a particularly useful way. It\nshouldn't break the world in some horrendous way, but that's about as\ngood as it needs to be. I'd argue for example that DROP SUBSCRIPTION\ncould just check whether you own the object, and that ALTER\nSUBSCRIPTION could check whether you own the object and, if you're\nchanging the connection string, also whether you would have privileges\nto set that new connection string on a new subscription.\n\n3. There was a bit of discussion of maybe wanting to allow users to\ncreate subscriptions with some connection strings but not others,\nperhaps by having some kind of intermediate object that owns the\nconnection string and is owned by a superuser or someone with lots of\nprivileges, and then letting a less-privileged user point a\nsubscription at that object. I agree that might be useful to somebody,\nbut I don't see why it's a hard requirement to get anything at all\ndone here. Right now, a subscription contains a connection string\ndirectly. 
If in the future someone wants to introduce a CREATE\nREPLICATION DESTINATION command (or whatever) and have a way to point\na subscription at a replication destination rather than a connection\nstring directly, cool. Or if someone wants to wire this into CREATE\nSERVER somehow, also cool. But if you don't care about restricting\nwhich IPs somebody can try to access by providing a connection string\nof their choice, then you would be happy if we just did something\nsimple here and left this problem for another day.\n\nI am very curious to know (a) why work on this was abandoned (perhaps\nthe answer is just lack of round tuits, in which case there is no more\nto be said), and (b) what people think of (1)-(3) above, and (c)\nwhether anyone knows of further problems that need to be considered\nhere.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 14:38:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 18, 2023, at 11:38 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I was just noticing that what was committed here didn't actually fix\n> the problem implied by the subject line. That is, non-superuser still\n> can't own subscriptions.\n\nNot so. They can. See src/test/subscription/027_nosuperuser.pl\n\n> To put that another way, there's no way for\n> the superuser to delegate the setup and administration of logical\n> replication to a non-superuser.\n\nTrue.\n\n> That's a bummer.\n\nAlso true.\n\n> Reading the thread, I'm not quite sure why we seemingly did all the\n> preparatory work and then didn't actually fix the problem.\n\nPrior to the patch, if a superuser created a subscription, then later was demoted to non-superuser, the subscription apply workers still applied the changes with superuser force. 
So creating a superuser Alice, letting Alice create a subscription, then revoking superuser from Alice didn't accomplish anything interesting. But after the patch, it does. The superuser can now create non-superuser subscriptions. (I'm not sure this ability is well advertised.) But the problem of non-superuser roles creating non-superuser subscriptions is not solved.\n\nFrom a security perspective, the bit that was solved may be the more important part; from a usability perspective, perhaps not.\n\n> It was\n> previously proposed that we introduce a new predefined role\n> pg_create_subscriptions and allow users who have the privileges of\n> that predefined role to create and alter subscriptions. There are a\n> few issues with that which, however, seem fairly solvable to me:\n> \n> 1. Jeff pointed out that if you supply a connection string that is\n> going to try to access local files, you'd better have\n> pg_read_server_files, or else we should not let you use that\n> connection string. I guess that's mostly a function of which\n> parameters you specify, e.g. passfile, sslcert, sslkey, though maybe\n> for host it depends on whether the value starts with a slash. We might\n> need to think a bit here to make sure we get the rules right but it\n> seems like a pretty solvable problem.\n> \n> 2. There was also quite a bit of discussion of what to do if a user\n> who was previously eligible to own a subscription ceases to be\n> eligible, in particular around a superuser who is made into a\n> non-superuser, but the same problem would apply if you had\n> pg_create_subscriptions or pg_read_server_files and then lost it. My\n> vote is to not worry about it too much. 
Specifically, I think we\n> should certainly check whether the user has permission to create a\n> subscription before letting them do so, but we could handle the case\n> where the user already owns a subscription and tries to modify it by\n> either allowing or denying the operation and I think either of those\n> would be fine. I even think we could do one of those in some cases and\n> the other in other cases and as long as there is some principle to the\n> thing, it's fine. I argue that it's not a normal configuration and\n> therefore it doesn't have to work in a particularly useful way. It\n> shouldn't break the world in some horrendous way, but that's about as\n> good as it needs to be. I'd argue for example that DROP SUBSCRIPTION\n> could just check whether you own the object, and that ALTER\n> SUBSCRIPTION could check whether you own the object and, if you're\n> changing the connection string, also whether you would have privileges\n> to set that new connection string on a new subscription.\n> \n> 3. There was a bit of discussion of maybe wanting to allow users to\n> create subscriptions with some connection strings but not others,\n> perhaps by having some kind of intermediate object that owns the\n> connection string and is owned by a superuser or someone with lots of\n> privileges, and then letting a less-privileged user point a\n> subscription at that object. I agree that might be useful to somebody,\n> but I don't see why it's a hard requirement to get anything at all\n> done here. Right now, a subscription contains a connection string\n> directly. If in the future someone wants to introduce a CREATE\n> REPLICATION DESTINATION command (or whatever) and have a way to point\n> a subscription at a replication destination rather than a connection\n> string directly, cool. Or if someone wants to wire this into CREATE\n> SERVER somehow, also cool. 
But if you don't care about restricting\n> which IPs somebody can try to access by providing a connection string\n> of their choice, then you would be happy if we just did something\n> simple here and left this problem for another day.\n> \n> I am very curious to know (a) why work on this was abandoned (perhaps\n> the answer is just lack of round tuits, in which case there is no more\n> to be said)\n\nMostly, it was a lack of round-tuits. After the patch was committed, I quickly switched my focus elsewhere.\n\n> , and (b) what people think of (1)-(3) above\n\nThere are different ways of solving (1), and Jeff and I discussed them in Dec 2021. My recollection was that idea (3) was the cleanest. Other ideas might be simpler than (3), or they may just appear simpler but in truth turn into a can of worms. I don't know, since I never went as far as trying to implement either approach.\n\nIdea (2) seems to contemplate non-superuser subscription owners as a theoretical thing, but it's quite real already. Again, see 027_nosuperuser.pl.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 12:26:43 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Jan 18, 2023 at 3:26 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Prior to the patch, if a superuser created a subscription, then later was demoted to non-superuser, the subscription apply workers still applied the changes with superuser force. So creating a superuser Alice, letting Alice create a subscription, then revoking superuser from Alice didn't accomplish anything interesting. But after the patch, it does. The superuser can now create non-superuser subscriptions. (I'm not sure this ability is well advertised.) 
But the problem of non-superuser roles creating non-superuser subscriptions is not solved.\n\nAh, OK, thanks for the clarification!\n\n> There are different ways of solving (1), and Jeff and I discussed them in Dec 2021. My recollection was that idea (3) was the cleanest. Other ideas might be simpler than (3), or they may just appear simpler but in truth turn into a can of worms. I don't know, since I never went as far as trying to implement either approach.\n>\n> Idea (2) seems to contemplate non-superuser subscription owners as a theoretical thing, but it's quite real already. Again, see 027_nosuperuser.pl.\n\nI think the solution to the problem of a connection string trying to\naccess local files is to just look at the connection string, decide\nwhether it does that, and if yes, require the owner to have\npg_read_server_files as well as pg_create_subscription. (3) is about\ncreating some more sophisticated and powerful solution to that\nproblem, but that seems like a nice-to-have, not something essential,\nand a lot more complicated to implement.\n\nI guess what I listed as (2) is not relevant since I didn't understand\ncorrectly what the current state of things is.\n\nUnless I'm missing something, it seems like this could be a quite small patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 15:51:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 18, 2023, at 12:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Unless I'm missing something, it seems like this could be a quite small patch.\n\nI didn't like the idea of the create/alter subscription commands needing to parse the connection string and think about what it might do, because at some point in the future we might extend what things are allowed in that string, and we have to keep everything that contemplates that string in sync. 
I may have been overly hesitant to tackle that problem. Or maybe I just ran short of round tuits.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 12:58:26 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Jan 18, 2023 at 3:58 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 18, 2023, at 12:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Unless I'm missing something, it seems like this could be a quite small patch.\n>\n> I didn't like the idea of the create/alter subscription commands needing to parse the connection string and think about what it might do, because at some point in the future we might extend what things are allowed in that string, and we have to keep everything that contemplates that string in sync. I may have been overly hesitant to tackle that problem. Or maybe I just ran short of round tuits.\n\nI wouldn't be OK with writing our own connection string parser for\nthis purpose, but using PQconninfoParse seems OK. We still have to\nembed knowledge of which connection string parameters can trigger\nlocal file access, but that doesn't seem like a massive problem to me.\nIf we already had (or have) that logic someplace else, it would\nprobably make sense to reuse it, but if we don't, writing new logic\ndoesn't seem prohibitively scary. I'm not 100% confident of my ability\nto get those rules right on the first try, but I feel like whatever\nproblems are there are just bugs that can be fixed with a few lines of\ncode changes. 
The basic idea that by looking at which connection\nstring properties are set we can tell what kinds of things the\nconnection string is going to do seems sound to me.\n\nIf there's some reason that it isn't, that would be good to discover\nnow rather than later.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:45:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-01-18 at 14:38 -0500, Robert Haas wrote:\n> I was just noticing that what was committed here didn't actually fix\n> the problem implied by the subject line. That is, non-superuser still\n> can't own subscriptions. To put that another way, there's no way for\n> the superuser to delegate the setup and administration of logical\n> replication to a non-superuser. That's a bummer.\n\nRight, though as Mark pointed out, it does accomplish something even if\nit's a bit unsatisfying. We could certainly do better here.\n\n> 2. There was also quite a bit of discussion of what to do if a user\n> who was previously eligible to own a subscription ceases to be\n> eligible, in particular around a superuser who is made into a\n> non-superuser, but the same problem would apply\n\nCorrect, that's not a new problem, but exists in only a few places now.\nOur privilege system is focused on \"what action can the user take right\nnow?\", and gets weirder when it comes to object ownership, which is a\nmore permanent thing.\n\nExtending that system to a subscription object, which has its own\ncapabilities including a long-lived process, is cause for some\nhesitation. I agree it's not necessarily a blocker.\n\n> 3. 
There was a bit of discussion of maybe wanting to allow users to\n> create subscriptions with some connection strings but not others,\n\nThis was an alternative to trying to sanitize connection strings,\nbecause it's a bit difficult to reason about what might be \"safe\"\nconnection strings for a non-superuser, because it's environment-\ndependent. But if we do identify a reasonable set of sanitization\nrules, we can proceed without 3.\n\n> I am very curious to know (a) why work on this was abandoned (perhaps\n> the answer is just lack of round tuits, in which case there is no\n> more\n> to be said), and (b) what people think of (1)-(3) above, and (c)\n> whether anyone knows of further problems that need to be considered\n> here.\n\n(a) Mostly round-tuits. There are problems and questions; but there are\nwith any work, and they could be solved. Or, if they don't turn out to\nbe terribly serious, we could ignore them.\n\n(b) When I pick this up again I would be inclined towards the\nfollowing: try to solve 4-5 (listed below) first, which are\nindependently useful; then look at both 1 and 3 to see which one\npresents an agreeable solution faster. I'll probably ignore 2 because I\ncouldn't get agreement the last time around (I think Mark objected to\nthe idea of denying a drop in privileges).\n\n(c) Let me add:\n\n4. There are still differences between the subscription worker applying\na change and going through the ordinary INSERT paths, for instance with\nRLS. Also solvable.\n\n5. 
Andres raised in another thread the idea of switching to the table\nowner when applying changes (perhaps in a\nSECURITY_RESTRICTED_OPERATION?): \n\nhttps://www.postgresql.org/message-id/20230112033355.u5tiyr2bmuoc4jf4@awork3.anarazel.de\n\nThat seems related, and I like the idea.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:32:45 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, 2023-01-19 at 10:45 -0500, Robert Haas wrote:\n> I wouldn't be OK with writing our own connection string parser for\n> this purpose, but using PQconninfoParse seems OK. We still have to\n> embed knowledge of which connection string parameters can trigger\n> local file access, but that doesn't seem like a massive problem to\n> me.\n\nAnother idea (I discussed with Andres some time ago) was to have an\noption to libpq to turn off file access entirely. That could be a new\nAPI function or a new connection option.\n\nThat would be pretty valuable by itself. Though we might want to\nsupport a way to pass SSL keys as values rather than file paths, so\nthat we can still do SSL.\n\nSo perhaps the answer is that it will be a small patch to get non-\nsuperuser subscription owners, but we need three or four preliminary\npatches first.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:40:12 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Jan 19, 2023 at 1:40 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2023-01-19 at 10:45 -0500, Robert Haas wrote:\n> > I wouldn't be OK with writing our own connection string parser for\n> > this purpose, but using PQconninfoParse seems OK. 
We still have to\n> > embed knowledge of which connection string parameters can trigger\n> > local file access, but that doesn't seem like a massive problem to\n> > me.\n>\n> Another idea (I discussed with Andres some time ago) was to have an\n> option to libpq to turn off file access entirely. That could be a new\n> API function or a new connection option.\n>\n> That would be pretty valuable by itself. Though we might want to\n> support a way to pass SSL keys as values rather than file paths, so\n> that we can still do SSL.\n\nMaybe all of that would be useful, but it doesn't seem that mandatory.\n\n> So perhaps the answer is that it will be a small patch to get non-\n> superuser subscription owners, but we need three or four preliminary\n> patches first.\n\nI guess I'm not quite seeing it. Why can't we write a small patch to\nget this working right now, probably in a few hours, and deal with any\nimprovements that people want at a later time?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:11:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, 2023-01-19 at 14:11 -0500, Robert Haas wrote:\n> I guess I'm not quite seeing it. Why can't we write a small patch to\n> get this working right now, probably in a few hours, and deal with\n> any\n> improvements that people want at a later time?\n\nTo me, it's worrisome when there are more than a few loose ends, and\nhere it seems like there are more like five. No single issue is a\nblocker, but I believe we'd end up with a better user-facing solution\nif we solved a couple of these lower-level issues (and think a little\nmore about the other ones) before we expose new functionality to the\nuser.\n\nThe predefined role is probably the biggest user-facing part of the\nchange. Does it mean that members can create any number of any kind of\nsubscription? 
If so it may be hard to tighten down later, because we\ndon't know what existing setups might break.\n\nPerhaps we can just permit a superuser to \"ALTER SUBSCRIPTION ... OWNER\nTO <non-super>\", which makes it simpler to use while still leaving the\nresponsibility with the superuser to get it right. Maybe we even block\nthe user from altering their own subscription (would be weird but not\nmuch weirder than what we have now)? I don't know if that solves the\nproblem you're trying to solve, but it seems lower-risk.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:16:20 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 10:45:35 -0500, Robert Haas wrote:\n> On Wed, Jan 18, 2023 at 3:58 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > > On Jan 18, 2023, at 12:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > Unless I'm missing something, it seems like this could be a quite small patch.\n> >\n> > I didn't like the idea of the create/alter subscription commands needing to parse the connection string and think about what it might do, because at some point in the future we might extend what things are allowed in that string, and we have to keep everything that contemplates that string in sync. I may have been overly hesitant to tackle that problem. Or maybe I just ran short of round tuits.\n> \n> I wouldn't be OK with writing our own connection string parser for\n> this purpose, but using PQconninfoParse seems OK. We still have to\n> embed knowledge of which connection string parameters can trigger\n> local file access, but that doesn't seem like a massive problem to me.\n\n> If we already had (or have) that logic someplace else, it would\n> probably make sense to reuse it\n\nWe have. 
See at least postgres_fdw's check_conn_params(), dblink's\ndblink_connstr_check() and dblink_security_check().\n\nAs part of the fix for https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de\nI am planning to introduce a bunch of server side helpers for dealing with\nlibpq (for establishing a connection while accepting interrupts). We could try\nto centralize knowledge for those checks there.\n\nThe approach of checking, after connection establishment (see\ndblink_security_check()), that we did in fact use the specified password,\nscares me somewhat. See also below.\n\n\n> The basic idea that by looking at which connection string properties are set\n> we can tell what kinds of things the connection string is going to do seems\n> sound to me.\n\nI don't think you *can* check it purely based on existing connection string\nproperties, unfortunately. Think of e.g. a pg_hba.conf line of \"local all user\npeer\" (quite reasonable config) or \"host all all 127.0.0.1/32 trust\" (less so).\n\nHence the hack with dblink_security_check().\n\n\nI think there might be a discussion somewhere about adding an option to force\nlibpq to not use certain auth methods, e.g. plaintext password/md5. It's\npossible this could be integrated.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:46:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 17:16:20 -0800, Jeff Davis wrote:\n> The predefined role is probably the biggest user-facing part of the\n> change. Does it mean that members can create any number of any kind of\n> subscription?\n\nI don't think we need to support complicated restriction schemes around this\nnow. I'm sure such needs exist, but I think there's more places where a simple\n\"allowed/not allowed\" suffices.\n\nYou'd presumably just grant such a permission to \"pseudo superuser\"\nusers. 
They can typically do a lot of bad things already, so I don't really\nsee the common need to prevent them from creating many subscriptions.\n\n\n> If so it may be hard to tighten down later, because we don't know what\n> existing setups might break.\n\nPresumably the unlimited number of subs case would still exist as an option\nlater - so I don't see the problem?\n\n\n> Perhaps we can just permit a superuser to \"ALTER SUBSCRIPTION ... OWNER\n> TO <non-super>\", which makes it simpler to use while still leaving the\n> responisbility with the superuser to get it right. Maybe we even block\n> the user from altering their own subscription (would be weird but not\n> much weirder than what we have now)? I don't know if that solves the\n> problem you're trying to solve, but it seems lower-risk.\n\nThat seems to not really get us very far. It's hard to use for users, and hard\nto make secure for the hosted PG providers.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:51:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, 2023-01-19 at 17:51 -0800, Andres Freund wrote:\n> I don't think we need to support complicated restriction schemes\n> around this\n> now. I'm sure such needs exist, but I think there's more places where\n> a simple\n> \"allowed/not allowed\" suffices.\n\nIf we did follow a path like 3 (having some kind of other object\nrepresent the connection string), then it would create two different\nkinds of subscriptions that might be controlled different ways, and\nthere might be some rough edges. Might also be fine, or we might never\npursue 3.\n\nI feel like my words are being interpreted as though I don't want this\nfeature. I do, and I'm happy Robert re-raised it. 
I'm just trying to\nanswer his questions about why I set the work down, which is that I\nfelt some groundwork should be done before proceeding to a documented\nfeature, and I still feel that's the right thing.\n\nBut (a) that's not a very strong objection; and (b) my efforts are\nbetter spent doing some of that groundwork than arguing about the order\nin which the work should be done. So, time permitting, I may be able to\nput out a patch or two for the next 'fest.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 20 Jan 2023 00:04:12 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > I wouldn't be OK with writing our own connection string parser for\n> > this purpose, but using PQconninfoParse seems OK. We still have to\n> > embed knowledge of which connection string parameters can trigger\n> > local file access, but that doesn't seem like a massive problem to me.\n>\n> > If we already had (or have) that logic someplace else, it would\n> > probably make sense to reuse it\n>\n> We hve. See at least postgres_fdw's check_conn_params(), dblink's\n> dblink_connstr_check() and dblink_security_check().\n\nThat's not the same thing. It doesn't know anything about other\nparameters that might try to consult a local file, like sslcert,\nsslkey, sslrootcert, sslca, sslcrl, sslcrldir, and maybe service.\nMaybe you want to argue we don't need that, but that's what the\nearlier discussion was about.\n\n> As part of the fix for https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de\n> I am planning to introduce a bunch of server side helpers for dealing with\n> libpq (for establishing a connection while accepting interrupts). We could try\n> to centralize knowledge for those checks there.\n\nMaybe. 
We could also add something into libpq, as Jeff proposed, e.g.\na new connection parameter\nthe_other_connection_parameters_might_try_to_trojan_the_local_host=1\nblocks all that stuff from doing anything.\n\n> The approach of checking, after connection establishment (see\n> dblink_security_check()), that we did in fact use the specified password,\n> scares me somewhat. See also below.\n\nYes, I find that extremely dubious. It blocks things that you might\nwant to do for legitimate reasons, including things that might be more\nsecure than using a password. And there's no guarantee that it\naccomplishes the intended objective either. The stated motivation for\nthat restriction was, I believe, that we don't want the outbound\nconnection to rely on the privileges available from the context in\nwhich PostgreSQL itself is running -- but for all we know the remote\nside has an IP filter that only allows the PostgreSQL host and no\nothers. Moreover, it relies on us knowing what the behavior of the\nremote server is, even though we have no way of knowing that that\nserver shares our security interests.\n\nWorse still, I have always felt that the security vulnerability that\nled to these controls being installed is pretty much fabricated: it's\nan imaginary problem. Today I went back and found the original CVE at\nhttps://nvd.nist.gov/vuln/detail/CVE-2007-3278 and it seems that at\nleast one other person agrees. The Red Hat vendor statement on that\npage says: \"Red Hat does not consider this do be a security issue.\ndblink is disabled in default configuration of PostgreSQL packages as\nshipped with Red Hat Enterprise Linux versions 2.1, 3, 4 and 5, and it\nis a configuration decision whether to grant local users arbitrary\naccess.\" I think whoever wrote that has an excellent point. I'm unable\nto discern any legitimate security purpose for this restriction. 
What\nI think it mostly does is (a) inconvenience users or (b) force them to\nrely on a less-secure authentication method than they would otherwise\nhave chosen.\n\n> > The basic idea that by looking at which connection string properties are set\n> > we can tell what kinds of things the connection string is going to do seems\n> > sound to me.\n>\n> I don't think you *can* check it purely based on existing connection string\n> properties, unfortunately. Think of e.g. a pg_hba.conf line of \"local all user\n> peer\" (quite reasonable config) or \"host all all 127.0.0.1/32 trust\" (less so).\n>\n> Hence the hack with dblink_security_check().\n>\n> I think there might be a discussion somewhere about adding an option to force\n> libpq to not use certain auth methods, e.g. plaintext password/md5. It's\n> possible this could be integrated.\n\nI still think you're talking about a different problem here. I'm\ntalking about the problem of knowing whether local files are going to\nbe accessed by the connection string.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 08:25:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Jan 20, 2023 at 8:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I still think you're talking about a different problem here. I'm\n> talking about the problem of knowing whether local files are going to\n> be accessed by the connection string.\n\nSo here's a dumb patch for this. At least in my mind, the connection\nstring sanitization/validation is the major design problem here, and\nI'm not at all sure that what I did in the attached patch is right.\nBut let's talk about that. This approach is inspired by Jeff's\ncomments about local file access upthread, but as Andres pointed out,\nthat's a completely different set of things than we worry about in\nother places. 
I'm not quite sure what the right model is here.\n\nThis patch incidentally allows ALTER SUBSCRIPTION .. SKIP for any\nsubscription owner, removing the existing check that limits that\noperation to superusers and replacing it with nothing. I can't really\nsee why this needs to be any more restricted than that, and\nregrettably neither the check in the existing code nor the commit that\nadded it have any comments explaining the logic behind that check. If,\nfor example, skipping a subscription could lead to a server crash,\nthat would be a reason to restrict the feature to superusers (or\nrevert it). If it's just a case of the operation being maybe not the\nright thing to do, that's not a sufficient reason to restrict it to\nsuperusers. This change is really independent of the rest of the patch\nand, if we want to do this, I will separate it into its own patch. But\nsince this is just for discussion, I didn't worry about that right\nnow.\n\nAside from the above, I don't yet see a problem here that I would\nconsider to be serious enough that we couldn't proceed. I'll try to\navoid too much repetition of what's already been said on this topic,\nbut I do want to add that I think that creating subscriptions is\nproperly viewed as a *slightly* scary operation, not a *very* scary\noperation. It lets you do two things that you couldn't otherwise. One\nis get background processes running that take up process slots and\nconsume resources -- but note that your ability to consume resources\nwith however many normal database connections you can make is\nvirtually unlimited. The other thing it lets you do is poke around the\nnetwork, maybe figure out whether some ports are open or closed, and\ntry to replicate data from any accessible servers you can find, which\ncould include ports or servers that you can't access directly. 
I think\nthat the superuser will be in a good position to evaluate whether that\nis a risk in a certain environment or not, and I think many superusers\nwill conclude that it isn't a big risk. I think that the main\nmotivation for NOT handing out pg_create_subscription will turn out to\nbe administrative rather than security-related i.e. they'll want it to\nbe something that falls under their authority rather than someone else's.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 20 Jan 2023 11:08:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-20 08:25:46 -0500, Robert Haas wrote:\n> Worse still, I have always felt that the security vulnerability that\n> led to these controls being installed is pretty much fabricated: it's\n> an imaginary problem. Today I went back and found the original CVE at\n> https://nvd.nist.gov/vuln/detail/CVE-2007-3278 and it seems that at\n> least one other person agrees. The Red Hat vendor statement on that\n> page says: \"Red Hat does not consider this to be a security issue.\n> dblink is disabled in default configuration of PostgreSQL packages as\n> shipped with Red Hat Enterprise Linux versions 2.1, 3, 4 and 5, and it\n> is a configuration decision whether to grant local users arbitrary\n> access.\" I think whoever wrote that has an excellent point. I'm unable\n> to discern any legitimate security purpose for this restriction. What\n> I think it mostly does is (a) inconvenience users or (b) force them to\n> rely on a less-secure authentication method than they would otherwise\n> have chosen.\n\nFWIW, I've certainly seen situations where having the checks prevented easy\npaths to privilege escalations.
That's not to say that I like the checks, but\nI also don't think we can get away without them (or a better replacement, of\ncourse).\n\nThere are good reasons to have 'peer' authentication set up for the user\nrunning postgres, so admin scripts can connect without issues. Which\nunfortunately then also means that postgres_fdw etc can connect to the current\ndatabase as superuser, without that check. Which imo clearly is an issue.\n\nWhy do you think this is a fabricated issue?\n\n\nThe solution we have is quite bad, of course. Just because the user isn't a\nsuperuser \"immediately\" doesn't mean it doesn't have the rights to become\none somehow.\n\n\n> > > The basic idea that by looking at which connection string properties are set\n> > > we can tell what kinds of things the connection string is going to do seems\n> > > sound to me.\n> >\n> > I don't think you *can* check it purely based on existing connection string\n> > properties, unfortunately. Think of e.g. a pg_hba.conf line of \"local all user\n> > peer\" (quite reasonable config) or \"host all all 127.0.0.1/32 trust\" (less so).\n> >\n> > Hence the hack with dblink_security_check().\n> >\n> > I think there might be a discussion somewhere about adding an option to force\n> > libpq to not use certain auth methods, e.g. plaintext password/md5. It's\n> > possible this could be integrated.\n> \n> I still think you're talking about a different problem here. I'm\n> talking about the problem of knowing whether local files are going to\n> be accessed by the connection string.\n\nWhy is this only about local files, rather than e.g. 
also using the local\nuser?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Jan 2023 14:01:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-20 11:08:54 -0500, Robert Haas wrote:\n> /*\n> - * Validate connection info string (just try to parse it)\n> + * Validate connection info string, and determine whether it might cause\n> + * local filesystem access to be attempted.\n> + *\n> + * If the connection string can't be parsed, this function will raise\n> + * an error and will not return. If it can, it will return true if local\n> + * filesystem access may be attempted and false otherwise.\n> */\n> -static void\n> +static bool\n> libpqrcv_check_conninfo(const char *conninfo)\n> {\n> \tPQconninfoOption *opts = NULL;\n> +\tPQconninfoOption *opt;\n> \tchar\t *err = NULL;\n> +\tbool\t\tresult = false;\n> \n> \topts = PQconninfoParse(conninfo, &err);\n> \tif (opts == NULL)\n> @@ -267,7 +274,40 @@ libpqrcv_check_conninfo(const char *conninfo)\n> \t\t\t\t errmsg(\"invalid connection string syntax: %s\", errcopy)));\n> \t}\n> \n> +\tfor (opt = opts; opt->keyword != NULL; ++opt)\n> +\t{\n> +\t\t/* Ignore connection options that are not present. */\n> +\t\tif (opt->val == NULL)\n> +\t\t\tcontinue;\n> +\n> +\t\t/* For all these parameters, the value is a local filename. */\n> +\t\tif (strcmp(opt->keyword, \"passfile\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"sslcert\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"sslkey\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"sslrootcert\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"sslcrl\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"sslcrldir\") == 0 ||\n> +\t\t\tstrcmp(opt->keyword, \"service\") == 0)\n> +\t\t{\n> +\t\t\tresult = true;\n> +\t\t\tbreak;\n> +\t\t}\n\nDo we need to think about 'options' allowing anything bad? 
I don't\nimmediately see a problem, but ...\n\n\n> +\n> +\t\t/*\n> +\t\t * For the host parameter, the value might be a local filename.\n> +\t\t * It might also be a reference to the local host's abstract UNIX\n> +\t\t * socket namespace, which we consider equivalent to a local pathname\n> +\t\t * for security purposes.\n> +\t\t */\n> +\t\tif (strcmp(opt->keyword, \"host\") == 0 && is_unixsock_path(opt->val))\n> +\t\t{\n> +\t\t\tresult = true;\n> +\t\t\tbreak;\n> +\t\t}\n> +\t}\n\nHm, what about kerberos / gss / SSPI? Aren't those essentially also tied to\nthe local filesystem / user?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Jan 2023 14:10:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, 2023-01-21 at 14:01 -0800, Andres Freund wrote:\n> There are good reasons to have 'peer' authentication set up for the\n> user\n> running postgres, so admin scripts can connect without issues. Which\n> unfortunately then also means that postgres_fdw etc can connect to\n> the current\n> database as superuser, without that check. Which imo clearly is an\n> issue.\n\nPerhaps we should have a way to directly turn on/off authentication\nmethods in libpq through API functions and/or options?\n\nThis reminds me of the \"channel_binding=required\" option. We considered\nsome similar alternatives for that feature.\n\n> Why is this only about local files, rather than e.g. also using the\n> local\n> user?\n\nIt's not, but we happen to already have pg_read_server_files, and it\nmakes sense to use that at least for files referenced directly in the\nconnection string.
You're right that it's incomplete, and also that it\ndoesn't make a lot of sense for files accessed indirectly.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Sun, 22 Jan 2023 09:05:27 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-22 09:05:27 -0800, Jeff Davis wrote:\n> On Sat, 2023-01-21 at 14:01 -0800, Andres Freund wrote:\n> > There are good reasons to have 'peer' authentication set up for the\n> > user\n> > running postgres, so admin scripts can connect without issues. Which\n> > unfortunately then also means that postgres_fdw etc can connect to\n> > the current\n> > database as superuser, without that check. Which imo clearly is an\n> > issue.\n> \n> Perhaps we should have a way to directly turn on/off authentication\n> methods in libpq through API functions and/or options?\n\nYes. There's an in-progress patch adding, I think, pretty much what is\nrequired here:\nhttps://www.postgresql.org/message-id/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com\n\nrequire_auth=a,b,c\n\nI think an allowlist approach is the right thing for the subscription (and\npostgres_fdw/dblink) use case, otherwise we'll add some auth method down the\nline without updating what's disallowed in the subscription code.\n\n\n> > Why is this only about local files, rather than e.g. also using the local\n> > user?\n> \n> It's not, but we happen to already have pg_read_server_files, and it\n> makes sense to use that at least for files referenced directly in the\n> connection string. 
You're right that it's incomplete, and also that it\n> doesn't make a lot of sense for files accessed indirectly.\n\nI just meant that we need to pay attention to user-based permissions as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 22 Jan 2023 17:52:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Jan 21, 2023 at 5:01 PM Andres Freund <andres@anarazel.de> wrote:\n> There are good reasons to have 'peer' authentication set up for the user\n> running postgres, so admin scripts can connect without issues. Which\n> unfortunately then also means that postgres_fdw etc can connect to the current\n> database as superuser, without that check. Which imo clearly is an issue.\n>\n> Why do you think this is a fabricated issue?\n\nWell, if I have a bunch of PostgreSQL machines on the network that all\nallow each other to log in without requiring anything much in the way\nof passwords or closely-guarded SSL certificates or anything, and then\nI grant to the users on those machines the right to make connections\nto the other machines using arbitrary connection strings, whose fault\nis it when security is compromised? We seem to be taking the policy\nthat it's PostgreSQL's fault if it doesn't block something bad from\nhappening there, but it seems to me that if you gate incoming\nPostgreSQL connections only by source IP address and then also give\nunprivileged users the ability to choose their source IP address, you\nshould expect to have a problem.\n\nI will admit that this is not an open-and-shut case, because a\npasswordless login back to the bootstrap superuser account from the\nlocal machine is a pretty common scenario and doesn't feel\nintrinsically unreasonable to me, and I hadn't thought about that as a\npotential attack vector.\n\nHowever, I still think there's a problem with putting all the\nresponsibility on PostgreSQL. 
The problem, specifically, is that we're\nspeculating wildly as to the user's intent. If we say, as we currently\ndo, that we're only going to allow connections if they require a\npassword, then we're making a judgement that the superuser couldn't\nhave intended to allow the postgres_fdw to make a passwordless\nconnection. On the other hand, if we say, as we also currently do,\nthat the postgres_fdw user is free to set the sslcert parameter to\nanything they like, then we're making a judgement that the superuser\nis totally OK with that being set to any file on the local filesystem.\nNeither of those conclusions seems sound to me. The superuser may, or\nmay not, have intended to allow passwordless logins, and they may, or\nmay not, have intended for any SSL certificates stored locally to be\nusable by outbound connection attempts.\n\nAnd that's what I really dislike about the you-must-use-a-password\nrule that we have right now. It embeds a policy decision about what\nusers do or do not want to allow. We've uncritically copied that\npolicy decision around to more and more places, and we've added\nworkarounds in some places for the fact that, well, you know, it might\nnot actually be what everybody wants (6136e94d), but it doesn't seem\nlike we've ever really acknowledged that we *made* a policy decision.\nAnd that means we haven't really had a debate about the merits of this\n*particular* rule, which seems to me to be highly debatable. It looks\nto me like there's both stuff you might not want to allow that this\nrule does not block, and also stuff you might want to allow that this\nrule does block, and also that different people can want different\nthings yet this rule applies uniformly to everyone.\n\n> > I still think you're talking about a different problem here. I'm\n> > talking about the problem of knowing whether local files are going to\n> > be accessed by the connection string.\n>\n> Why is this only about local files, rather than e.g. 
also using the local\n> user?\n\nBecause there's nothing you can do about the local-user case.\n\nIf I'm asked to attempt to connect to a PostgreSQL server, and I\nchoose to do that, and the connection succeeds, all I know is that the\nconnection actually succeeded. I do not know why the remote machine\nchose to accept the connection. If I supplied a password or an SSL\ncertificate or some such thing, then it seems likely that the remote\nmachine accepted that connection because I supplied that particular\npassword or SSL certificate, but it could also be because the remote\nmachine accepts all connections from Robert, or all connections\nwhatsoever, or all connections on Mondays. I just don't know. If I'm\nworried that the person is asking me to make the connection is trying\nto trick me into doing something that they can't do themselves, I\ncould refuse to read a password from a local password store or an SSL\ncertificate from a local certificate file or otherwise refuse to do\nanything special to try to get their connection request accepted, and\nthen if it does get accepted anyway, I know that they weren't relying\non any of those resources that I refused to use. But, if I attempt a\nplain vanilla, totally password-less connection and it works, I have\nno way of knowing whether that happened because I'm Robert or for some\nother reason.\n\nTo put that another way, if I'm making a connection on behalf of an\nuntrusted party, I can choose not to supply an SSL certificate, or not\nto supply a password. 
But I cannot choose to not be myself.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:34:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sun, Jan 22, 2023 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps we should have a way to directly turn on/off authentication\n> > methods in libpq through API functions and/or options?\n>\n> Yes. There's an in-progress patch adding, I think, pretty much what is\n> required here:\n> https://www.postgresql.org/message-id/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com\n>\n> require_auth=a,b,c\n>\n> I think an allowlist approach is the right thing for the subscription (and\n> postgres_fdw/dblink) use case, otherwise we'll add some auth method down the\n> line without updating what's disallowed in the subscription code.\n\nSo what would we do here, exactly? We could force a require_auth\nparameter into the provided connection string, although I'm not quite\nsure of the details there, but what value should we force? Is that\ngoing to be something hard-coded, or something configurable? If\nconfigurable, where does that configuration get stored?\n\nRegardless, this only allows connection strings to be restricted along\none axis: authentication type. If you want to let people connect only\nto a certain subnet or whatever, you're still out of luck.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 12:39:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Jan 21, 2023 at 5:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > + /* For all these parameters, the value is a local filename. 
*/\n> > + if (strcmp(opt->keyword, \"passfile\") == 0 ||\n> > + strcmp(opt->keyword, \"sslcert\") == 0 ||\n> > + strcmp(opt->keyword, \"sslkey\") == 0 ||\n> > + strcmp(opt->keyword, \"sslrootcert\") == 0 ||\n> > + strcmp(opt->keyword, \"sslcrl\") == 0 ||\n> > + strcmp(opt->keyword, \"sslcrldir\") == 0 ||\n> > + strcmp(opt->keyword, \"service\") == 0)\n> > + {\n> > + result = true;\n> > + break;\n> > + }\n>\n> Do we need to think about 'options' allowing anything bad? I don't\n> immediately see a problem, but ...\n\nIf it is, it'd be a different kind of bad. What these parameters all\nhave in common is that they allow you to read some local file and\nmaybe benefit from that during the authentication process. options\ndoesn't let you do anything like that, and by definition kind of\ncan't, because it's just a string to be sent to the remote server. As\nI noted in my other responses, the local superuser could want to\nimpose any arbitrary restriction on the connection strings users can\nchoose, and so it's just as plausible that they want to restrict\noptions as anything else; but this test is about something more\nspecific.\n\n> > + /*\n> > + * For the host parameter, the value might be a local filename.\n> > + * It might also be a reference to the local host's abstract UNIX\n> > + * socket namespace, which we consider equivalent to a local pathname\n> > + * for security purposes.\n> > + */\n> > + if (strcmp(opt->keyword, \"host\") == 0 && is_unixsock_path(opt->val))\n> > + {\n> > + result = true;\n> > + break;\n> > + }\n> > + }\n>\n> Hm, what about kerberos / gss / SSPI? Aren't those essentially also tied to\n> the local filesystem / user?\n\nUh, I don't know.
It doesn't seem so directly true as in these cases,\nbut what's your thought?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 23 Jan 2023 13:21:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-23 11:34:32 -0500, Robert Haas wrote:\n> I will admit that this is not an open-and-shut case, because a\n> passwordless login back to the bootstrap superuser account from the\n> local machine is a pretty common scenario and doesn't feel\n> intrinsically unreasonable to me, and I hadn't thought about that as a\n> potential attack vector.\n\nI think it's 90% of the problem... There's IMO no particularly good\nalternative to a passwordless login for the bootstrap superuser, and it's the\naccount you least want to expose...\n\n\n> > > I still think you're talking about a different problem here. I'm\n> > > talking about the problem of knowing whether local files are going to\n> > > be accessed by the connection string.\n> >\n> > Why is this only about local files, rather than e.g. also using the local\n> > user?\n> \n> Because there's nothing you can do about the local-user case.\n> \n> If I'm asked to attempt to connect to a PostgreSQL server, and I\n> choose to do that, and the connection succeeds, all I know is that the\n> connection actually succeeded.
Not that it's a great answer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 10:26:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 23, 2023 at 8:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I will admit that this is not an open-and-shut case, because a\n> passwordless login back to the bootstrap superuser account from the\n> local machine is a pretty common scenario and doesn't feel\n> intrinsically unreasonable to me, and I hadn't thought about that as a\n> potential attack vector.\n\nIt seems to me like that's _the_ primary attack vector. I think I\nagree with you that the password requirement is an overly large\nhammer, but I don't think it's right (or safe/helpful to DBAs reading\nalong) to describe it as a manufactured concern.\n\n> If I'm asked to attempt to connect to a PostgreSQL server, and I\n> choose to do that, and the connection succeeds, all I know is that the\n> connection actually succeeded. I do not know why the remote machine\n> chose to accept the connection. If I supplied a password or an SSL\n> certificate or some such thing, then it seems likely that the remote\n> machine accepted that connection because I supplied that particular\n> password or SSL certificate, but it could also be because the remote\n> machine accepts all connections from Robert, or all connections\n> whatsoever, or all connections on Mondays. I just don't know.\n\nAs of SYSTEM_USER, I think this is no longer the case -- after\nconnection establishment, you can ask the server who was authenticated\nand why. 
(It doesn't explain why you were authorized to be that\nparticular user, but that seems maybe less important when you're trying\nto disallow ambient authentication.)\n\nIf my require_auth patchset gets in, you'd be able to improve on this\nby rejecting all ambient forms of authentication at the protocol level\n(require_auth=password,md5,scram-sha-256). You could even go a step\nfurther and disable ambient transport authentication\n(sslcertmode=disable gssencmode=disable), which keeps a proxied\nconnection from making use of a client cert or a Kerberos cache. But\nfor postgres_fdw, at least, that carries a risk of disabling current\nuse cases. Stephen and I had a discussion about one such case in the\nKerberos delegation thread [1].\n\nIt doesn't help you if you want to differentiate one form of ambient\nauth (trust/peer/etc.) from another, since they look the same to the\nprotocol. But for e.g. postgres_fdw I'm not sure why you would want to\ndifferentiate between those cases, because they all seem bad.\n\n> To put that another way, if I'm making a connection on behalf of an\n> untrusted party, I can choose not to supply an SSL certificate, or not\n> to supply a password. But I cannot choose to not be myself.\n\n(IMO, you're driving towards a separation of the proxy identity from\nthe user identity. Other protocols do that too.)\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/23337c51-7a48-d5a8-569d-ef3ce6fe235f%40timescale.com#38b4033256d9d95773963ce938cbe3ea\n\n\n", "msg_date": "Mon, 23 Jan 2023 10:27:27 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-23 12:39:50 -0500, Robert Haas wrote:\n> On Sun, Jan 22, 2023 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Perhaps we should have a way to directly turn on/off authentication\n> > > methods in libpq through API functions and/or options?\n> >\n> > Yes.
There's an in-progress patch adding, I think, pretty much what is\n> > required here:\n> > https://www.postgresql.org/message-id/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com\n> >\n> > require_auth=a,b,c\n> >\n> > I think an allowlist approach is the right thing for the subscription (and\n> > postgres_fdw/dblink) use case, otherwise we'll add some auth method down the\n> > line without updating what's disallowed in the subscription code.\n> \n> So what would we do here, exactly? We could force a require_auth\n> parameter into the provided connection string, although I'm not quite\n> sure of the details there\n\nIf we parse the connection string first, we can ensure that our values take\nprecedence, that shouldn't be an issue, I think.\n\n\n> , but what value should we force? Is that going to be something hard-coded,\n> or something configurable? If configurable, where does that configuration\n> get stored?\n\nI would probably start with something hardcoded, perhaps with an adjusted\nvalue depending on things like pg_read_server_files.\n\nI'd say just allowing password (whichever submethod), ssl is a good start,\nwith something like your existing code to prevent file access for ssl unless\npg_read_server_files is granted.\n\n\nI don't think kerberos, gss, peer, sspi would be safe.\n\n\n> Regardless, this only allows connection strings to be restricted along\n> one axis: authentication type. If you want to let people connect only\n> to a certain subnet or whatever, you're still out of luck.\n\nTrue. 
But I think it'd get us a large percentage of the use cases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 10:35:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 23, 2023 at 1:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > If I'm asked to attempt to connect to a PostgreSQL server, and I\n> > choose to do that, and the connection succeeds, all I know is that the\n> > connection actually succeeded.\n>\n> Well, there is PQconnectionUsedPassword()... Not that it's a great answer.\n\nSure, but that's making an inference about why the remote side did\nwhat it did. It's not fantastic to have a security model that relies\non connecting to a server chosen by the user and having it tell us\ntruthfully whether or not it relied on the password. Granted, it won't\nlie unless it's been hacked, and we're trying to protect it, not\nourselves, so the only thing that happens if it does lie is that it\ngets hacked a second time, so I guess there's no real vulnerability?\nBut I feel like we'd be on far sounder footing if our\nsecurity policy were based on deciding what we are willing to do (are we\nwilling to read that file? are we willing to attempt that\nauthentication method?)
before we actually do it, rather than on\ntrying to decide after-the-fact whether what we did is OK based on\nwhat the remote side tells us about how things turned out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 13:39:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-23 10:27:27 -0800, Jacob Champion wrote:\n> On Mon, Jan 23, 2023 at 8:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I will admit that this is not an open-and-shut case, because a\n> > passwordless login back to the bootstrap superuser account from the\n> > local machine is a pretty common scenario and doesn't feel\n> > intrinsically unreasonable to me, and I hadn't thought about that as a\n> > potential attack vector.\n> \n> It seems to me like that's _the_ primary attack vector. I think I\n> agree with you that the password requirement is an overly large\n> hammer, but I don't think it's right (or safe/helpful to DBAs reading\n> along) to describe it as a manufactured concern.\n\n+1\n\n\n> > If I'm asked to attempt to connect to a PostgreSQL server, and I\n> > choose to do that, and the connection succeeds, all I know is that the\n> > connection actually succeeded. I do not know why the remote machine\n> > chose to accept the connection. If I supplied a password or an SSL\n> > certificate or some such thing, then it seems likely that the remote\n> > machine accepted that connection because I supplied that particular\n> > password or SSL certificate, but it could also be because the remote\n> > machine accepts all connections from Robert, or all connections\n> > whatsoever, or all connections on Mondays. I just don't know.\n> \n> As of SYSTEM_USER, I think this is no longer the case -- after\n> connection establishment, you can ask the server who was authenticated\n> and why.
(It doesn't explain why you were authorized to be that\n> particular user, but that seems maybe less important when you're trying\n> to disallow ambient authentication.)\n\nThere's not enough documentation for SYSTEM_USER imo.\n\n\n\n> You could even go a step further and disable ambient transport\n> authentication (sslcertmode=disable gssencmode=disable), which keeps a\n> proxied connection from making use of a client cert or a Kerberos cache. But\n> for postgres_fdw, at least, that carries a risk of disabling current use\n> cases. Stephen and I had a discussion about one such case in the Kerberos\n> delegation thread [1].\n\nI did not find that very convincing for today's code. The likelihood of\nsomething useful being prevented seems far far lower than preventing privilege\nleakage...\n\n\n> It doesn't help you if you want to differentiate one form of ambient\n> auth (trust/peer/etc.) from another, since they look the same to the\n> protocol. But for e.g. postgres_fdw I'm not sure why you would want to\n> differentiate between those cases, because they all seem bad.\n\nIt might be possible to teach libpq to differentiate peer from trust (by\ndisabling passing the current user), or we could tell the server via an option\nto disable peer.
But as you say, I don't think it'd buy us much.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:05:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 23, 2023 at 1:27 PM Jacob Champion <jchampion@timescale.com> wrote:\n> On Mon, Jan 23, 2023 at 8:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I will admit that this is not an open-and-shut case, because a\n> > passwordless login back to the bootstrap superuser account from the\n> > local machine is a pretty common scenario and doesn't feel\n> > intrinsically unreasonable to me, and I hadn't thought about that as a\n> > potential attack vector.\n>\n> It seems to me like that's _the_ primary attack vector. I think I\n> agree with you that the password requirement is an overly large\n> hammer, but I don't think it's right (or safe/helpful to DBAs reading\n> along) to describe it as a manufactured concern.\n\nFirst, sorry about the wording. I try to get it right, but sometimes I don't.\n\nSecond, the reason why I described it as a manufactured issue is\nbecause it's a bit like asking someone to stand under a ladder and\nthen complaining when they get hit in the head by a falling object.\nIt's not that I think it's good for people to get a free exploit to\nsuperuser, or to get hit in the head by falling objects. It's just\nthat you can't have the things that together lead to some outcome\nwithout also getting the outcome. It seems to me that we basically let\nthe malicious connection to the target host succeed, and then say ...\noh, never mind, we may have made this connection under false\npretenses, so we shan't use it after all. What I was attempting to\nargue is that we shouldn't let things get that far. 
Either the victim\nshould be able to protect itself from the malicious connection, or the\nconnection attempt shouldn't be allowed in the first place, or both.\nBlocking the connection attempt after the fact feels like too little,\ntoo late.\n\nFor instance, what if the connection string itself caused SQL to be\nexecuted on the remote side, as in the case of target_session_attrs?\nOr what if we got those logon triggers that people have been wanting\nfor years? Or what if the remote server speaks the PostgreSQL protocol\nbut isn't really PostgreSQL and does ... whatever ... when you just\nconnect to it?\n\n> As of SYSTEM_USER, I think this is no longer the case -- after\n> connection establishment, you can ask the server who was authenticated\n> and why. (It doesn't explain why you were authorized to be that\n> particular user, but that seems maybe less important when you're trying\n> to disallow ambient authentication.)\n\nI think this is too after-the-fact, as discussed above.\n\n> If my require_auth patchset gets in, you'd be able to improve on this\n> by rejecting all ambient forms of authentication at the protocol level\n> (require_auth=password,md5,scram-sha-256). You could even go a step\n> further and disable ambient transport authentication\n> (sslcertmode=disable gssencmode=disable), which keeps a\n> proxied connection from making use of a client cert or a Kerberos cache. But\n> for postgres_fdw, at least, that carries a risk of disabling current\n> use cases. Stephen and I had a discussion about one such case in the\n> Kerberos delegation thread [1].\n\nYes, this is why I think that the system administrator needs to have\nsome control over policy, instead of just having a hard-coded rule\nthat applies to everyone.\n\nI'm not completely sure that this is good enough in terms of blocking\nthe attack as early as I think we should. This is all happening in the\nmidst of a connection attempt. 
If the remote server says, \"hey, what's\nyour password?\" and we refuse to answer that question, well that seems\nsomewhat OK. But what if we're hoping to be asked for a password and\nthe remote server doesn't ask? Then we don't find out that things\naren't right until after we've already logged in, and that gets back\nto what I talk about above.\n\n> > To put that another way, if I'm making a connection on behalf of an\n> > untrusted party, I can choose not to supply an SSL certificate, or not\n> > to supply a password. But I cannot choose to not be myself.\n>\n> (IMO, you're driving towards a separation of the proxy identity from\n> the user identity. Other protocols do that too.)\n\nHmm, interesting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 14:47:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 23, 2023 at 2:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Second, the reason why I described it as a manufactured issue is\n> because it's a bit like asking someone to stand under a ladder and\n> then complaining when they get hit in the head by a falling object.\n> It's not that I think it's good for people to get a free exploit to\n> superuser, or to get hit in the head by falling objects. It's just\n> that you can't have the things that together lead to some outcome\n> without also getting the outcome.\n\nI left out a sentence here. What I meant to say was we can't both\nallow passwordless loopback connections to the bootstrap superuser and\nalso allow postgres_fdw to connect to anything that the user requests\nand then be surprised when that user can get into the superuser\naccount. 
The natural outcome of combining those two things is that\nsuperuser gets hacked.\n\nThe password requirement just *barely* prevents that attack from\nworking, almost, maybe, while at the same time managing to block\nthings that people want to do for totally legitimate reasons. But\nIMHO, the real problem is that combining those two things is extremely\ndangerous.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 14:52:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, 2023-01-20 at 11:08 -0500, Robert Haas wrote:\n> On Fri, Jan 20, 2023 at 8:25 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > I still think you're talking about a different problem here. I'm\n> > talking about the problem of knowing whether local files are going\n> > to\n> > be accessed by the connection string.\n> \n> So here's a dumb patch for this. At least in my mind, the connection\n> string sanitization/validation is the major design problem here\n\nI believe your patch conflates two use cases:\n\n(A) Tightly-coupled servers that are managed by the administrator. In\nthis case, there are a finite number of connection strings to make, and\nthe admin knows about all of them. Validation is a poor solution for\nthis use case, because we get into the weeds trying to figure out\nwhat's safe or not, overriding the admin's better judgement in some\ncases and letting through connection strings that might be unsafe. A\nmuch better solution is to simply declare the connection strings as\nsome kind of object (perhaps a SERVER object), and hand out privileges\nor inherit them from a predefined role. 
Having connection string\nobjects is also just a better UI: it allows changes to connection\nstrings over time to adapt to changing security needs, and allows a\nsimple name that is much easier to type and read.\n\n(B) Loosely-coupled servers that the admin doesn't know about, but\nwhich might be perfectly safe to access. Validation is useful here, but\nit's a long road of fine-grained privileges around acceptable hosts,\nIPs, authentication types, file access, password sources, password\nprotocols, connection options, etc. The right solution here is to\nidentify the sub-usecases of loosely-coupled servers, and enable them\n(with the appropriate controls) one at a time. Arguably, that's already\nwhat's happened by demanding a password (even if we don't like the\nmechanism, it does seem to work for some important cases).\n\nIs your patch targeted at use case (A), (B), or both?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Mon, 23 Jan 2023 12:50:04 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 1/23/23 11:05, Andres Freund wrote:\n> There's not enough documentation for SYSTEM_USER imo.\n\nIf we were to make use of SYSTEM_USER programmatically (and based on\nwhat Robert wrote downthread, that's probably not what's desired), I\nthink we'd have to make more guarantees about how it can be parsed and\nthe values that you can expect. Right now it's meant mostly for human\nconsumption.\n\n>> You could even go a step further and disable ambient transport\n>> authentication (sslcertmode=disable gssencmode=disable), which keeps a\n>> proxied connection from making use of a client cert or a Kerberos cache. But\n>> for postgres_fdw, at least, that carries a risk of disabling current use\n>> cases. 
Stephen and I had a discussion about one such case in the Kerberos\n>> delegation thread [1].\n> \n> I did not find that very convincing for today's code. The likelihood of\n> something useful being prevented seems far far lower than preventing privilege\n> leakage...\n\nFair enough. Preventing those credentials from being pulled in by\ndefault would effectively neutralize my concern for the delegation\npatchset, too.\n\n--Jacob\n\n\n\n", "msg_date": "Mon, 23 Jan 2023 16:23:52 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 1/23/23 11:52, Robert Haas wrote:\n> On Mon, Jan 23, 2023 at 2:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Second, the reason why I described it as a manufactured issue is\n>> because it's a bit like asking someone to stand under a ladder and\n>> then complaining when they get hit in the head by a falling object.\n>> It's not that I think it's good for people to get a free exploit to\n>> superuser, or to get hit in the head by falling objects. It's just\n>> that you can't have the things that together lead to some outcome\n>> without also getting the outcome.\n>\n> I left out a sentence here. What I meant to say was we can't both\n> allow passwordless loopback connections to the bootstrap superuser and\n> also allow postgres_fdw to connect to anything that the user requests\n> and then be surprised when that user can get into the superuser\n> account. The natural outcome of combining those two things is that\n> superuser gets hacked.\n>\n> The password requirement just *barely* prevents that attack from\n> working, almost, maybe, while at the same time managing to block\n> things that people want to do for totally legitimate reasons. But\n> IMHO, the real problem is that combining those two things is extremely\n> dangerous.\n\nI don't disagree. 
I'm worried that the unspoken conclusion being\npresented is \"it's such an obvious problem that we should just leave it\nto the DBAs,\" which I very much disagree with, but I may be reading too\nmuch into it.\n\n> It seems to me that we basically let\n> the malicious connection to the target host succeed, and then say ...\n> oh, never mind, we may have made this connection under false\n> pretenses, so we shan't use it after all. What I was attempting to\n> argue is that we shouldn't let things get that far. Either the victim\n> should be able to protect itself from the malicious connection, or the\n> connection attempt shouldn't be allowed in the first place, or both.\n> Blocking the connection attempt after the fact feels like too little,\n> too late.\n\nExpanding on my previous comment, you could give the client a way to say\n\"I am a proxy, and I'm connecting on behalf of this user, and here are\nboth my credentials and their credentials. So if you were planning to,\nsay, authorize me as superuser based on my IP address... maybe don't do\nthat?\"\n\n(You can sort of implement this today, by giving the proxy a client\ncertificate for transport authn, having it provide the in-band authn for\nthe user, and requiring both at the server. It's not very flexible.)\n\nI think this has potential overlap with Magnus' PROXY proposal [1], and\nalso the case where we want pgbouncer to authenticate itself and then\nperform actions on behalf of someone else [2], and maybe SASL's authzid\nconcept. I don't think one solution will hit all of the desired use\ncases, but there are directions that can be investigated.\n\n> I'm not completely sure that this is good enough in terms of blocking\n> the attack as early as I think we should. This is all happening in the\n> midst of a connection attempt. If the remote server says, \"hey, what's\n> your password?\" and we refuse to answer that question, well that seems\n> somewhat OK. 
But what if we're hoping to be asked for a password and\n> the remote server doesn't ask?\n\nrequire_auth should still successfully mitigate the target_session_attrs\ncase (going back to the examples you provided). It looks like the SQL is\ninitiated from the client side, so require_auth will notice that there\nwas no authentication performed and bail out before we get there.\n\nFor the hypothetical logon trigger, or any case where the server does\nsomething on behalf of a user upon connection, I agree it doesn't help you.\n\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CABUevExJ0ifpUEiX4uOREy0s2kHBrBrb=pXLEHhpMTR1vVR1XA@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAMT0RQR2fxeaPLHXappBCGEjHJiPCBJMPOHoDWiaYLjuieR0sg%40mail.gmail.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 16:24:34 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 23, 2023 at 7:24 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > The password requirement just *barely* prevents that attack from\n> > working, almost, maybe, while at the same time managing to block\n> > things that people want to do for totally legitimate reasons. But\n> > IMHO, the real problem is that combining those two things is extremely\n> > dangerous.\n>\n> I don't disagree. I'm worried that the unspoken conclusion being\n> presented is \"it's such an obvious problem that we should just leave it\n> to the DBAs,\" which I very much disagree with, but I may be reading too\n> much into it.\n\nTo be honest, that was my first instinct here, but I see the problems\nbetter now than I did at the beginning of this discussion.\n\n> Expanding on my previous comment, you could give the client a way to say\n> \"I am a proxy, and I'm connecting on behalf of this user, and here are\n> both my credentials and their credentials. 
So if you were planning to,\n> say, authorize me as superuser based on my IP address... maybe don't do\n> that?\"\n>\n> (You can sort of implement this today, by giving the proxy a client\n> certificate for transport authn, having it provide the in-band authn for\n> the user, and requiring both at the server. It's not very flexible.)\n>\n> I think this has potential overlap with Magnus' PROXY proposal [1], and\n> also the case where we want pgbouncer to authenticate itself and then\n> perform actions on behalf of someone else [2], and maybe SASL's authzid\n> concept. I don't think one solution will hit all of the desired use\n> cases, but there are directions that can be investigated.\n\nI think this has some potential, but it's pretty complex, seeming to\nrequire protocol extensions and having backward-compatibility problems\nand so on. What do you think about something in the spirit of a\nreverse-pg_hba.conf? The idea being that PostgreSQL facilities that\nmake outbound connections are supposed to ask it whether those\nconnections are OK to initiate. Then you could have a default\nconfiguration that basically says \"don't allow loopback connections\"\nor \"require passwords all the time\" or whatever we like, and the DBA\ncan change that as desired. We could teach dblink, postgres_fdw, and\nCREATE SUBSCRIPTION to use this new thing, and third-party code could\nadopt it if it likes.\n\nEven if we do that, some kind of proxy protocol support might be very\ndesirable. I'm not against that. But I think that DBAs need better\ncontrol over what kind of outbound connections they want to permit,\ntoo.\n\n> For the hypothetical logon trigger, or any case where the server does\n> something on behalf of a user upon connection, I agree it doesn't help you.\n\nI don't think the logon trigger thing is all *that* hypothetical. 
We\ndon't have it yet, but there have been patches proposed repeatedly for\nmany years.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 08:50:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\nOn 2023-01-24 Tu 08:50, Robert Haas wrote:\n>\n> What do you think about something in the spirit of a\n> reverse-pg_hba.conf? The idea being that PostgreSQL facilities that\n> make outbound connections are supposed to ask it whether those\n> connections are OK to initiate. Then you could have a default\n> configuration that basically says \"don't allow loopback connections\"\n> or \"require passwords all the time\" or whatever we like, and the DBA\n> can change that as desired. We could teach dblink, postgres_fdw, and\n> CREATE SUBSCRIPTION to use this new thing, and third-party code could\n> adopt it if it likes.\n>\n\nI kinda like this idea, especially if we could specify the context that\nrules are to apply in. e.g. postgres_fdw, mysql_fdw etc. I'd certainly\ngive it an outing in the redis_fdw if appropriate.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 24 Jan 2023 12:24:05 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Jan 24, 2023 at 5:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think this has some potential, but it's pretty complex, seeming to\n> require protocol extensions and having backward-compatibility problems\n> and so on.\n\nYeah.\n\n> What do you think about something in the spirit of a\n> reverse-pg_hba.conf? The idea being that PostgreSQL facilities that\n> make outbound connections are supposed to ask it whether those\n> connections are OK to initiate. 
Then you could have a default\n> configuration that basically says \"don't allow loopback connections\"\n> or \"require passwords all the time\" or whatever we like, and the DBA\n> can change that as desired.\n\nWell, I'll have to kick the idea around a little bit. Kneejerk reactions:\n\n- It's completely reasonable to let a proxy operator restrict how that\nproxy is used. I doubt very much that a typical DBA wants to be\noperating an open proxy.\n\n- I think the devil will be in the details of the configuration\ndesign. Lists of allowed destination authorities (in the URI sense),\noptions that must be present/absent/overridden, those sound great. But\nyour initial examples of allow-loopback and require-passwords options\nare in the \"make the DBA deal with it\" line of thinking, IMO. I think\nit's difficult for someone to reason through those correctly the first\ntime, even for experts. I'd like to instead see the core problem --\nthat *any* ambient authentication used by a proxy is inherently risky\n-- exposed as a highly visible concept in the config, so that it's\nhard to make mistakes.\n\n- I'm inherently skeptical of solutions that require all clients --\nproxies, in this case -- to be configured correctly in order for a\nserver to be able to protect itself. (But I also have a larger\nappetite for security options that break compatibility when turned on.\n:D)\n\n> > For the hypothetical logon trigger, or any case where the server does\n> > something on behalf of a user upon connection, I agree it doesn't help you.\n>\n> I don't think the logon trigger thing is all *that* hypothetical. We\n> don't have it yet, but there have been patches proposed repeatedly for\n> many years.\n\nOkay. I think this thread has applicable lessons -- if connection\nestablishment itself leads to side effects, all actors in the\necosystem (bouncers, proxies) have to be hardened against making those\nconnections passively. 
I know we're very different from HTTP, but it\nfeels similar to their concept of method safety and the consequences\nof violating it.\n\n--Jacob\n\n\n", "msg_date": "Tue, 24 Jan 2023 11:18:44 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "[ Changing subject line to something more appropriate: This is\nbranched from the \"Non-superuser subscription owners\" thread, but the\ntopic has become connection security more generally for outbound\nconnections from a PostgreSQL instance, the inadequacies of just\ntrying to require that such connections always use a password, and\nrelated problems. I proposed some kind of \"reverse pg_hba.conf file\"\nas a way of allowing configurable limits on such outbound connections.\n]\n\nOn Tue, Jan 24, 2023 at 2:18 PM Jacob Champion <jchampion@timescale.com> wrote:\n> - It's completely reasonable to let a proxy operator restrict how that\n> proxy is used. I doubt very much that a typical DBA wants to be\n> operating an open proxy.\n\nThat's very well put. It's precisely what I was thinking, but\nexpressed much more clearly.\n\n> - I think the devil will be in the details of the configuration\n> design. Lists of allowed destination authorities (in the URI sense),\n> options that must be present/absent/overridden, those sound great. But\n> your initial examples of allow-loopback and require-passwords options\n> are in the \"make the DBA deal with it\" line of thinking, IMO. I think\n> it's difficult for someone to reason through those correctly the first\n> time, even for experts. I'd like to instead see the core problem --\n> that *any* ambient authentication used by a proxy is inherently risky\n> -- exposed as a highly visible concept in the config, so that it's\n> hard to make mistakes.\n\nI find the concept of \"ambient authentication\" problematic. I don't\nknow exactly what you mean by it. 
I hope you'll tell me, but I think\nthat I won't like it even after I know, because as I said before, it's\ndifficult to know why anyone else makes a decision, and asking an\nuntrusted third-party why they're deciding something is sketchy at\nbest. I think that the problems we have in this area can be solved by\neither (a) restricting the open proxy to be less open or (b)\nencouraging people to authenticate users in some way that won't admit\nconnections from an open proxy. The former needs to be configurable by\nthe DBA, and the latter is also a configuration choice by the DBA. We\ncan provide tools here that make it less likely that people will shoot\nthemselves in the foot, and we can ship default configurations that\nreduce the chance of inadvertent foot-shooting, and we can write\ndocumentation that says \"don't shoot yourself in the foot,\" but we\ncannot actually prevent people from shooting themselves in the foot\nexcept, perhaps, by massively nerfing the capabilities of the system.\n\nWhat I was thinking about in terms of a \"reverse pg_hba.conf\" was\nsomething in the vein of, e.g.:\n\nSOURCE_COMPONENT SOURCE_DATABASE SOURCE_USER DESTINATION_SUBNET\nDESTINATION_DATABASE DESTINATION_USER OPTIONS ACTION\n\ne.g.\n\nall all all local all all - deny # block access through UNIX sockets\nall all all 127.0.0.0/8 all all - deny # block loopback interface via IPv4\n\nOr:\n\npostgres_fdw all all all all all authentication=cleartext,md5,sasl\nallow # allow postgres_fdw with password-ish authentication\n\nDisallowing loopback connections feels quite tricky. You could use\n127.anything.anything.anything, but you could also loop back via IPv6,\nor you could loop back via any interface. But you can't use\nsubnet-based ACLs to rule out loop backs through IP/IPv6 interfaces\nunless you know what all your system's own IPs are. Maybe that's an\nargument in favor of having a dedicated deny-loopback facility built\ninto the system instead of relying on IP ACLs. 
But I am not sure that\nreally works either: how sure are we that we can discover all of the\nlocal IP addresses? Maybe it doesn't matter anyway, since the point is\njust to disallow anything that would be likely to use \"trust\" or\n\"ident\" authentication, and that's probably not going to include any\nnon-loopback network interfaces. But ... is that true in general? What\nabout on Windows?\n\n> - I'm inherently skeptical of solutions that require all clients --\n> proxies, in this case -- to be configured correctly in order for a\n> server to be able to protect itself. (But I also have a larger\n> appetite for security options that break compatibility when turned on.\n> :D)\n\nI (still) don't think that restricting the proxy is required, but you\ncan't both not restrict the proxy and also allow passwordless loopback\nsuperuser connections. You have to pick one or the other. The reason I\nkeep harping on the role of the DBA is that I don't think we can make\nthat choice unilaterally on behalf of everyone. We've tried doing that\nwith the current rules and we've discussed the weaknesses of that\napproach already.\n\n> > I don't think the logon trigger thing is all *that* hypothetical. We\n> > don't have it yet, but there have been patches proposed repeatedly for\n> > many years.\n>\n> Okay. I think this thread has applicable lessons -- if connection\n> establishment itself leads to side effects, all actors in the\n> ecosystem (bouncers, proxies) have to be hardened against making those\n> connections passively. 
I know we're very different from HTTP, but it\n> feels similar to their concept of method safety and the consequences\n> of violating it.\n\nI am not familiar with that concept in detail but that sounds right to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 15:04:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Mon, Jan 23, 2023 at 3:50 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I believe your patch conflates two use cases:\n>\n> (A) Tightly-coupled servers that are managed by the administrator. In\n> this case, there are a finite number of connection strings to make, and\n> the admin knows about all of them. Validation is a poor solution for\n> this use case, because we get into the weeds trying to figure out\n> what's safe or not, overriding the admin's better judgement in some\n> cases and letting through connection strings that might be unsafe. A\n> much better solution is to simply declare the connection strings as\n> some kind of object (perhaps a SERVER object), and hand out privileges\n> or inherit them from a predefined role. Having connection string\n> objects is also just a better UI: it allows changes to connection\n> strings over time to adapt to changing security needs, and allows a\n> simple name that is much easier to type and read.\n>\n> (B) Loosely-coupled servers that the admin doesn't know about, but\n> which might be perfectly safe to access. Validation is useful here, but\n> it's a long road of fine-grained privileges around acceptable hosts,\n> IPs, authentication types, file access, password sources, password\n> protocols, connection options, etc. The right solution here is to\n> identify the sub-usecases of loosely-coupled servers, and enable them\n> (with the appropriate controls) one at a time. 
Arguably, that's already\n> what's happened by demanding a password (even if we don't like the\n> mechanism, it does seem to work for some important cases).\n>\n> Is your patch targeted at use case (A), (B), or both?\n\nI suppose that I would say that the patch is a better fit for (B),\nbecause I'm not proposing to add any kind of intermediate object of\nthe type you postulate in (A). However, I don't really agree with the\nway you've split this up, either. It seems to me that the relevant\nquestion isn't \"are the servers tightly coupled?\" but rather \"could\nsome user make a mess if we let them use any arbitrary connection\nstring?\".\n\nIf you're running all of the machines involved on a private network\nthat is well-isolated from the Internet and in which only trusted\nactors operate, you could use what I'm proposing here for either (A)\nor (B) and it would be totally fine. If your server is sitting out on\nthe public Internet and is adequately secured against malicious\nloopback connections, you could also probably use it for either (A) or\n(B), unless you've got users who are really shady and you're worried\nthat the outbound connections that they make from your machine might\nget you into trouble, in which case you probably can't use it for\neither (A) or (B). Basically, the patch is suitable for cases where\nyou don't really need to restrict what connection strings people can\nuse, and unsuitable for cases where you do, but that doesn't have much\nto do with whether the servers involved are loosely or tightly\ncoupled.\n\nI think that you're basically trying to make an argument that some\nsort of complex outbound connection filtering is mandatory, and I\nstill don't really agree with that. We ship postgres_fdw with\nsomething extremely minimal - just a requirement that the password get\nused - and the same for dblink. 
I think those rules suck and are\nprobably bad and insecure in quite a number of cases, and overly\nstrict in others, but I can think of no reason why CREATE SUBSCRIPTION\nshould be held to a higher standard than anything else. The\nconnections that you can make using CREATE SUBSCRIPTION are strictly\nweaker than the ones you can make with dblink, which permits arbitrary\nSQL execution. It cannot be right to suppose that a less-exploitable\nsystem needs to be held to a higher security standard than a similar\nbut more-exploitable system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 17:00:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 1/24/23 12:04, Robert Haas wrote:\n> I find the concept of \"ambient authentication\" problematic. I don't\n> know exactly what you mean by it. I hope you'll tell me,\n\nSure: Ambient authority [1] means that something is granted access based\non some aspect of its existence that it can't remove (or even\nnecessarily enumerate). Up above, when you said \"I cannot choose not to\nbe myself,\" that's a clear marker that ambient authority is involved.\nExamples of ambient authn/z factors might include an originating IP\naddress, the user ID of a connected peer process, the use of a loopback\ninterface, a GPS location, and so on. So 'peer' and 'ident' are ambient\nauthentication methods.\n\nAnd, because I think it's useful, I'll extend the definition to include\nprivileges that _could_ be dropped by a proxy, but in practice are\nincluded because there's no easy way not to. Examples for libpq include\nthe automatic use of the client certificate in ~/.postgresql, or any\nKerberos credentials available in the local user cache. 
(Or even a\nPGPASSWORD set up and forgotten by a DBA.)\n\nAmbient authority is closely related to the confused deputy problem [2],\nand the proxy discussed here is a classic confused deputy. The proxy\ndoesn't know that a core piece of its identity has been used to\nauthenticate the request it's forwarding. It can't choose its IP\naddress, or its user ID.\n\nI'm most familiar with this in the context of HTTP, cookie-/IP-based\nauthn, and cross-site request forgeries. Whenever someone runs a local\nweb server with no authentication and says \"it's okay! we only respond\nto requests from the local host!\" they're probably about to be broken\nopen by the first person to successfully reflect a request through the\nvictim's (very local) web browser.\n\nWays to mitigate or solve this problem (that I know of) include\n\n1) Forwarding the original ambient context along with the request, so\nthe server can check it too. HTTP has the Origin header, so a browser\ncan say, \"This request is not coming from my end user; it's coming from\na page controlled by example.org. You can't necessarily treat attached\ncookies like they're authoritative.\" The PROXY protocol lets a proxy\nforward several ambient factors, including the originating IP address\n(or even the use of a UNIX socket) and information about the original\nTLS context.\n\n2) Explicitly combining the request with the proof of authority needed\nto make it, as in capability-based security [3]. Some web frameworks\npush secret \"CSRF tokens\" into URLs for this purpose, to tangle the\nauthorization into the request itself [4]. I'd argue that the \"password\nrequirement\" implemented by postgres_fdw and discussed upthread was an\nattempt at doing this, to try to ensure that the authentication comes\nfrom the user explicitly and not from the proxy. 
It's just not very strong.\n\n(require_auth would strengthen it quite a bit; a major feature of that\npatchset is to explicitly name the in-band authentication factors that a\nserver is allowed to pull out of a client. It's still not strong enough\nto make a true capability, for one because it's client-side only. But as\nlong as servers don't perform actions on behalf of users upon\nconnection, that's pretty good in practice.)\n\n3) Dropping as many implicitly-held privileges as possible before making\na request. This doesn't solve the problem but may considerably reduce\nthe practical attack surface. For example, if browsers didn't attach\ntheir user's cookies to cross-origin requests, cross-site request\nforgeries would probably be considerably less dangerous (and, in the\nyears since I left the space, it looks like browsers have finally\nstopped doing this by default). Upthread, Andres suggested disabling the\ndefault inclusion of client certs and GSS creds, and I would extend that\nto include really *anything* pulled in from the environment. Make the\nDBA explicitly allow those things.\n\n> but I think\n> that I won't like it even after I know, because as I said before, it's\n> difficult to know why anyone else makes a decision, and asking an\n> untrusted third-party why they're deciding something is sketchy at\n> best.\n\nI think that's a red herring. Setting aside that you can, in fact, prove\nthat the server has authenticated you (e.g. require_auth=scram-sha-256\nin my proposed patchset), I don't think \"untrusted servers, that we\ndon't control, doing something stupid\" is a very useful thing to focus\non. 
We're trying to secure the case where a server *is* authenticating\nus, using known useful factors, but those factors have been co-opted by\nan attacker via a proxy.\n\n> I think that the problems we have in this area can be solved by\n> either (a) restricting the open proxy to be less open or (b)\n> encouraging people to authenticate users in some way that won't admit\n> connections from an open proxy.\n\n(a) is an excellent mitigation, and we should do it. (b) starts getting\nshaky because I think peer auth is actually a very reasonable choice for\nmany people. So I hope we can also start solving the underlying problem\nwhile we implement (a).\n\n> we\n> cannot actually prevent people from shooting themselves in the foot\n> except, perhaps, by massively nerfing the capabilities of the system.\n\nBut I thought we already agreed that most DBAs do not want a massively\ncapable proxy? I don't think we have to massively nerf the system, but\nlet's say we did. Would that really be unacceptable for this use case?\n\n(You're still driving hard down the \"it's impossible for us to securely\nhandle both cases at the same time\" path. I don't think that's true from\na technical standpoint, because we hold nearly total control of the\nprotocol. I think we're in a much easier situation than HTTP was.)\n\n> What I was thinking about in terms of a \"reverse pg_hba.conf\" was\n> something in the vein of, e.g.:\n> \n> SOURCE_COMPONENT SOURCE_DATABASE SOURCE_USER DESTINATION_SUBNET\n> DESTINATION_DATABASE DESTINATION_USER OPTIONS ACTION\n> \n> e.g.\n> \n> all all all local all all - deny # block access through UNIX sockets\n> all all all 127.0.0.0/8 all all - deny # block loopback interface via IPv4\n> \n> Or:\n> \n> postgres_fdw all all all all all authentication=cleartext,md5,sasl\n> allow # allow postgres_fdw with password-ish authentication\n\nI think this style focuses on absolute configuration flexibility at the\nexpense of usability. It obfuscates the common use cases. 
(I have the\nexact same complaint about our HBA and ident configs, so I may be\nfighting uphill.)\n\nHow should a DBA decide what is correct, or audit a configuration they\ninherited from someone else? What makes it obvious why a proxy should\nrequire cleartext auth instead of peer auth (especially since peer auth\nseems to be inherently better, until you've read this thread)?\n\nI'd rather the configuration focus on the pieces of a proxy's identity\nthat can be assumed by a client. For example, if the config has an\noption for \"let a client steal the proxy's user ID\", and it's off by\ndefault, then we've given the problem a name. DBAs can educate\nthemselves on it.\n\nAnd if that option is off, then the implementation knows that\n\n1) If the client has supplied explicit credentials and we can force the\nserver to use them, we're safe.\n2) If the DBA says they're not running an ident server, or we can force\nthe server not to use ident authn, or the DBA pinky-swears that that\nserver isn't using ident authn, all IP connections are additionally safe.\n3) If we have a way to forward the client's \"origin\" and we know that\nthe server will pay attention to it, all UNIX socket connections are\nadditionally safe.\n4) Any *future* authentication method we add later needs to be\nrestricted in the same way.\n\nShould we allow the use of our default client cert? the Kerberos cache?\npasswords from the environment? All these are named and off by default.\nDBAs can look through those options and say \"oh, yeah, that seems like a\nreally bad idea because we have this one server over here...\" And we\n(the experts) now get to make the best decisions we can, based on a\nDBA's declared intent, so the implementation gets to improve over time.\n> Disallowing loopback connections feels quite tricky. You could use\n> 127.anything.anything.anything, but you could also loop back via IPv6,\n> or you could loop back via any interface. 
But you can't use\n> subnet-based ACLs to rule out loop backs through IP/IPv6 interfaces\n> unless you know what all your system's own IPs are. Maybe that's an\n> argument in favor of having a dedicated deny-loopback facility built\n> into the system instead of relying on IP ACLs. But I am not sure that\n> really works either: how sure are we that we can discover all of the\n> local IP addresses?\n\nWell, to follow you down that road a little bit, I think that a DBA that\nhas set up `samehost ... trust` in their HBA is going to expect a\ncorresponding concept here, and it seems important for us to use an\nidentical implementation of samehost and samenet.\n\nBut I don't really want to follow you down that road, because I think\nyou illustrated my point yourself. You're already thinking about making\nDisallowing Loopback Connections a first-class concept, but then you\nimmediately said\n\n> Maybe it doesn't matter anyway, since the point is\n> just to disallow anything that would be likely to use \"trust\" or\n> \"ident\" authentication\n\nI'd rather we enshrine that -- the point -- in the configuration, and\nhave the proxy disable everything that can't provably meet that intent.\n\nThanks,\n--Jacob\n\n[1] https://en.wikipedia.org/wiki/Ambient_authority\n[2] https://en.wikipedia.org/wiki/Confused_deputy_problem\n[3] https://en.wikipedia.org/wiki/Capability-based_security\n[4] https://www.rfc-editor.org/rfc/rfc6265#section-8.2\n\n\n", "msg_date": "Wed, 25 Jan 2023 15:22:02 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Tue, 2023-01-24 at 17:00 -0500, Robert Haas wrote:\n> It seems to me that the relevant\n> question isn't \"are the servers tightly coupled?\" but rather \"could\n> some user make a mess if we let them use any arbitrary connection\n> string?\".\n\nThe split I created is much easier for an admin to answer: is the 
list\nof servers finite, or can users connect to new servers the admin isn't\neven aware of? If it's a finite list, I feel there's a much better\nsolution with both security and UI benefits.\n\nWith your question, I'm not entirely clear if that's a question that we\nalready have an answer for (require a password parameter), or that we\nwill answer in this thread, or that the admin will answer.\n\n> unless you've got users who are really shady \n\nOr compromised. Unfortunately, a role that's creating subscriptions has\na lot of surface area for escalation-of-privilege attacks, because they\nhave to trust all the owners of all the tables the subscriptions write\nto.\n\n\n> I think that you're basically trying to make an argument that some\n> sort of complex outbound connection filtering is mandatory\n\nNo, I'm not asking for the validation to be more complex.\n\nI believe use case (A) is a substantial use case, and I'd like to leave\nspace in the user interface to solve it a much better way than\nconnection string validation can offer. But to solve use case (A), we\nneed to separate the ability to create a subscription from the ability\nto create a connection string.\n\nRight now you see those as the same because they are done at the same\ntime in the same command; but I don't see it that way, because I had\nplans to allow a variant of CREATE SUBSCRIPTION that uses foreign\nservers. That plan would be consistent with dblink and postgres_fdw,\nwhich already allow specifying foreign servers.\n\nI propose that we have two predefined roles: pg_create_subscription,\nand pg_create_connection. If creating a subscription with a connection\nstring, you'd need to be a member of both roles. 
But to create a\nsubscription with a server object, you'd just need to be a member of\npg_create_subscription and have the USAGE privilege on the server\nobject.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 19:45:09 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Jan 25, 2023 at 10:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I propose that we have two predefined roles: pg_create_subscription,\n> and pg_create_connection. If creating a subscription with a connection\n> string, you'd need to be a member of both roles. But to create a\n> subscription with a server object, you'd just need to be a member of\n> pg_create_subscription and have the USAGE privilege on the server\n> object.\n\nI have no issue with that as a long-term plan. However, I think that\nfor right now we should just introduce pg_create_subscription. It\nwould make sense to add pg_create_connection in the same patch that\nadds a CREATE CONNECTION command (or whatever exact syntax we end up\nwith) -- and that patch can also change CREATE SUBSCRIPTION to require\nboth privileges where a connection string is specified directly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 09:43:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, 2023-01-26 at 09:43 -0500, Robert Haas wrote:\n> I have no issue with that as a long-term plan. However, I think that\n> for right now we should just introduce pg_create_subscription. 
It\n> would make sense to add pg_create_connection in the same patch that\n> adds a CREATE CONNECTION command (or whatever exact syntax we end up\n> with) -- and that patch can also change CREATE SUBSCRIPTION to\n> require\n> both privileges where a connection string is specified directly.\n\nI assumed it would be a problem to say that pg_create_subscription was\nenough to create a subscription today, and then later require\nadditional privileges (e.g. pg_create_connection).\n\nIf that's not a problem, then this sounds fine with me.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 09:36:11 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Jan 26, 2023 at 12:36 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2023-01-26 at 09:43 -0500, Robert Haas wrote:\n> > I have no issue with that as a long-term plan. However, I think that\n> > for right now we should just introduce pg_create_subscription. It\n> > would make sense to add pg_create_connection in the same patch that\n> > adds a CREATE CONNECTION command (or whatever exact syntax we end up\n> > with) -- and that patch can also change CREATE SUBSCRIPTION to\n> > require\n> > both privileges where a connection string is specified directly.\n>\n> I assumed it would be a problem to say that pg_create_subscription was\n> enough to create a subscription today, and then later require\n> additional privileges (e.g. pg_create_connection).\n>\n> If that's not a problem, then this sounds fine with me.\n\nWonderful! 
I'm working on a patch, but due to various distractions,\nit's not done yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:55:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > If we already had (or have) that logic someplace else, it would\n> > probably make sense to reuse it\n>\n> We have. See at least postgres_fdw's check_conn_params(), dblink's\n> dblink_connstr_check() and dblink_security_check().\n\nIn the patch I posted previously, I had some other set of checks, more\nor less along the lines suggested by Jeff. I looked into revising that\napproach and making the behavior match exactly what we do in those\nplaces instead. I find that it breaks 027_nosuperuser.pl.\nSpecifically, where without the patch I get \"ok 6 - nosuperuser admin\nwith all table privileges can replicate into unpartitioned\", with the\npatch it goes boom, because the subscription can't connect any more\ndue to the password requirement.\n\nAt first, I found it a bit tempting to see this as a further\nindication that the force-a-password approach is not the right idea,\nbecause the test case clearly memorializes a desire *not* to require a\npassword in this situation. However, the loopback-to-superuser attack\nis just as viable for subscription as it is in other cases, and my\nprevious patch would have done nothing to block it. So what I did\ninstead is add a password_required attribute, just like what\npostgres_fdw has. As in the case of postgres_fdw, the actual rule is\nthat if the attribute is false, a password is not required, and if the\nattribute is true, a password is required unless you are a superuser.\nIf you're a superuser, it still isn't. This is a slightly odd set of\nsemantics but it has precedent and practical advantages. 
Also, as in\nthe case of postgres_fdw, only a superuser can set\npassword_required=false, and a subscription that has that setting can\nonly be modified by a superuser, no matter who owns it.\n\nEven though I hate the require-a-password stuff with the intensity of\na thousand suns, I think this is better than the previous patch,\nbecause it's more consistent with what we do elsewhere and because it\nblocks the loopback-connection-to-superuser attack. I think we\n*really* need to develop a better system for restricting proxied\nconnections (no matter how proxied) and I hope that we do that soon.\nBut inventing something for this purpose that differs from what we do\nelsewhere will make that task harder, not easier.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 27 Jan 2023 14:42:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Jan 25, 2023 at 6:22 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Sure: Ambient authority [1] means that something is granted access based\n> on some aspect of its existence that it can't remove (or even\n> necessarily enumerate). Up above, when you said \"I cannot choose not to\n> be myself,\" that's a clear marker that ambient authority is involved.\n> Examples of ambient authn/z factors might include an originating IP\n> address, the user ID of a connected peer process, the use of a loopback\n> interface, a GPS location, and so on. So 'peer' and 'ident' are ambient\n> authentication methods.\n\nOK.\n\n> 1) Forwarding the original ambient context along with the request, so\n> the server can check it too.\n\nRight, so a protocol extension. Reasonable idea, but a big lift. 
Not\nonly do you need everyone to be running a new enough version of\nPostgreSQL, but existing proxies like pgpool and pgbouncer need\nupdates, too.\n\n> 2) Explicitly combining the request with the proof of authority needed\n> to make it, as in capability-based security [3].\n\nAs far as I can see, that link doesn't address how you'd make this\napproach work across a network.\n\n> 3) Dropping as many implicitly-held privileges as possible before making\n> a request. This doesn't solve the problem but may considerably reduce\n> the practical attack surface.\n\nRight. I definitely don't object to this kind of approach, but I don't\nthink it can ever be sufficient by itself.\n\n> > e.g.\n> >\n> > all all all local all all - deny # block access through UNIX sockets\n> > all all all 127.0.0.0/8 all all - deny # block loopback interface via IPv4\n> >\n> > Or:\n> >\n> > postgres_fdw all all all all all authentication=cleartext,md5,sasl\n> > allow # allow postgres_fdw with password-ish authentication\n>\n> I think this style focuses on absolute configuration flexibility at the\n> expense of usability. It obfuscates the common use cases. (I have the\n> exact same complaint about our HBA and ident configs, so I may be\n> fighting uphill.)\n\nThat's probably somewhat true, but on the other hand, it also is more\npowerful than what you're describing. In your system, is there some\nway the DBA can say \"hey, you can connect to any of the machines on\nthis list of subnets, but nothing else\"? Or equally, \"hey, you may NOT\nconnect to any machine on this list of subnets, but anything else is\nfine\"? Or \"you can connect to these subnets without SSL, but if you\nwant to talk to anything else, you need to use SSL\"? I would feel a\nbit bad saying that those are just use cases we don't care about. Most\npeople likely wouldn't use that kind of flexibility, so maybe it\ndoesn't really matter, but it seems kind of nice to have. 
Your idea\nseems to rely on us being able to identify all of the policies that a\nuser is likely to want and give names to each one, and I don't feel\nvery confident that that's realistic. But maybe I'm misinterpreting\nyour idea?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:08:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 14:42:01 -0500, Robert Haas wrote:\n> At first, I found it a bit tempting to see this as a further\n> indication that the force-a-password approach is not the right idea,\n> because the test case clearly memorializes a desire *not* to require a\n> password in this situation. However, the loopback-to-superuser attack\n> is just as viable for subscription as it in other cases, and my\n> previous patch would have done nothing to block it.\n\nHm, compared to postgres_fdw, the user has far less control over what's\nhappening using that connection. Is there a way a subscription owner can\ntrigger evaluation of near-arbitrary SQL on the publisher side?\n\n\n> So what I did instead is add a password_required attribute, just like what\n> postgres_fdw has. As in the case of postgres_fdw, the actual rule is that if\n> the attribute is false, a password is not required, and if the attribute is\n> true, a password is required unless you are a superuser. If you're a\n> superuser, it still isn't. This is a slightly odd set of semantics but it\n> has precedent and practical advantages. Also, as in the case of\n> postgres_fdw, only a superuser can set password_required=false, and a\n> subscription that has that setting can only be modified by a superuser, no\n> matter who owns it.\n\nI started out asking what benefits it provides to own a subscription one\ncannot modify. 
But I think it is a good capability to have, to restrict the\nset of relations that replication could target. Although perhaps it'd be\nbetter to set the \"replay user\" as a separate property on the subscription?\n\nDoes owning a subscription one isn't allowed to modify useful outside of that?\n\n\n\n> Even though I hate the require-a-password stuff with the intensity of\n> a thousand suns, I think this is better than the previous patch,\n> because it's more consistent with what we do elsewhere and because it\n> blocks the loopback-connection-to-superuser attack. I think we\n> *really* need to develop a better system for restricting proxied\n> connections (no matter how proxied) and I hope that we do that soon.\n> But inventing something for this purpose that differs from what we do\n> elsewhere will make that task harder, not easier.\n> \n> Thoughts?\n\nI think it's reasonable to mirror behaviour from elsewhere, and it'd let us\nhave this feature relatively soon - I think it's a common need to do this as a\nnon-superuser. It's IMO a very good idea to not subscribe as a superuser, even\nif set up by a superuser...\n\nBut I also would understand if you / somebody else chose to focus on\nimplementing a less nasty connection model.\n\n\n> Subject: [PATCH v2] Add new predefined role pg_create_subscriptions.\n\nMaybe a daft question:\n\nHave we considered using a \"normal grant\", e.g. on the database, instead of a\nrole? Could it e.g. 
be useful to grant a user the permission to create a\nsubscription in one database, but not in another?\n\n\n> @@ -1039,6 +1082,16 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,\n> \n> \tsub = GetSubscription(subid, false);\n> \n> +\t/*\n> +\t * Don't allow non-superuser modification of a subscription with\n> +\t * password_required=false.\n> +\t */\n> +\tif (!sub->passwordrequired && !superuser())\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t\t\t errmsg(\"password_required=false is superuser-only\"),\n> +\t\t\t\t\t\t errhint(\"Subscriptions with the password_required option set to false may only be created or modified by the superuser.\")));\n> +\n> \t/* Lock the subscription so nobody else can do anything with it. */\n> \tLockSharedObject(SubscriptionRelationId, subid, 0, AccessExclusiveLock);\n\nThe subscription code already does ownership checks before locking and now\nthere's also the passwordrequired before. Is it possible that this could open\nup some sort of race? Could e.g. 
the user change the ownership to the\nsuperuser in one session, do an ALTER in the other?\n\nIt looks like your change won't increase the danger of that, as the\nsuperuser() check just checks the current users permissions.\n\n\n> @@ -180,6 +180,13 @@ libpqrcv_connect(const char *conninfo, bool logical, const char *appname,\n> \tif (PQstatus(conn->streamConn) != CONNECTION_OK)\n> \t\tgoto bad_connection_errmsg;\n> \n> +\tif (must_use_password && !PQconnectionUsedPassword(conn->streamConn))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),\n> +\t\t\t\t errmsg(\"password is required\"),\n> +\t\t\t\t errdetail(\"Non-superuser cannot connect if the server does not request a password.\"),\n> +\t\t\t\t errhint(\"Target server's authentication method must be changed.\")));\n> +\n\nThe documentation of libpqrcv_connect() says that:\n * Returns NULL on error and fills the err with palloc'ed error message.\n\nand throwing an error like that will at the very least leak the connection,\nfd, fd reservation. Which I just had fixed :). At the very least you'd need to\ncopy the stuff that \"bad_connection:\" does.\n\n\nI did wonder whether we should make libpqrcv_connect() use errsave() to return\nerrors. Or whether we should make libpqrcv register a memory context reset\ncallback that'd close the libpq connection.\n\n\n> /*\n> - * Validate connection info string (just try to parse it)\n> + * Validate connection info string, and determine whether it might cause\n> + * local filesystem access to be attempted.\n> + *\n> + * If the connection string can't be parsed, this function will raise\n> + * an error and will not return. If it can, it will return true if this\n> + * connection string specifies a password and false otherwise.\n> */\n> -static void\n> +static bool\n> libpqrcv_check_conninfo(const char *conninfo)\n\nThat is a somewhat odd API. Why does it throw for some things, but not\nothers? 
Seems a bit cleaner to pass in a parameter indicating whether it\nshould throw when not finding a password? Particularly because you already\npass that to walrcv_connect().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 13:09:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Jan 27, 2023 at 4:09 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm, compared to postgres_fdw, the user has far less control over what's\n> happening using that connection. Is there a way a subscription owner can\n> trigger evaluation of near-arbitrary SQL on the publisher side?\n\nI'm not aware of one, but what I think it would let you do is\nexfiltrate data you're not entitled to see.\n\n> I started out asking what benefits it provides to own a subscription one\n> cannot modify. But I think it is a good capability to have, to restrict the\n> set of relations that replication could target. Although perhaps it'd be\n> better to set the \"replay user\" as a separate property on the subscription?\n\nThat's been proposed previously, but for reasons I don't quite\nremember it seems not to have happened. I don't think it achieved\nconsensus.\n\n> Does owning a subscription one isn't allowed to modify useful outside of that?\n\nUh, possibly that's a question for Mark or Jeff. I don't know. I can't\nsee what they would be, but I just work here.\n\n> Maybe a daft question:\n>\n> Have we considered using a \"normal grant\", e.g. on the database, instead of a\n> role? Could it e.g. be useful to grant a user the permission to create a\n> subscription in one database, but not in another?\n\nPotentially, but I didn't think we'd want to burn through permissions\nbits that fast, even given 7b378237aa805711353075de142021b1d40ff3b0.\nStill, if the consensus is otherwise, I can change it. 
Then I guess\nwe'd end up with GRANT CREATE ON DATABASE and GRANT CREATE\nSUBSCRIPTION ON DATABASE, which I'm sure wouldn't be confusing at all.\n\nOr, another thought, maybe this should be checking for CREATE on the\ncurrent database + also pg_create_subscription. That seems like it\nmight be the right idea, actually.\n\n> The subscription code already does ownership checks before locking and now\n> there's also the passwordrequired before. Is it possible that this could open\n> up some sort of race? Could e.g. the user change the ownership to the\n> superuser in one session, do an ALTER in the other?\n>\n> It looks like your change won't increase the danger of that, as the\n> superuser() check just checks the current users permissions.\n\nI'm not entirely clear whether there's a hazard there. If there is, I\nthink we could fix it by moving the LockSharedObject call up higher,\nabove object_ownercheck. The only problem with that is it lets you\nlock an object on which you have no permissions: see\n2ad36c4e44c8b513f6155656e1b7a8d26715bb94. To really fix that, we'd\nneed an analogue of RangeVarGetRelidExtended.\n\n> and throwing an error like that will at the very least leak the connection,\n> fd, fd reservation. Which I just had fixed :). At the very least you'd need to\n> copy the stuff that \"bad_connection:\" does.\n\nOK.\n\n> I did wonder whether we should make libpqrcv_connect() use errsave() to return\n> errors. Or whether we should make libpqrcv register a memory context reset\n> callback that'd close the libpq connection.\n\nYeah. Using errsave() might be better, but not sure I want to tackle\nthat just now.\n\n> That is a somewhat odd API. Why does it throw for some things, but not\n> others? Seems a bit cleaner to pass in a parameter indicating whether it\n> should throw when not finding a password? 
Particularly because you already\n> pass that to walrcv_connect().\n\nWill look into that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:35:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 16:35:11 -0500, Robert Haas wrote:\n> > Maybe a daft question:\n> >\n> > Have we considered using a \"normal grant\", e.g. on the database, instead of a\n> > role? Could it e.g. be useful to grant a user the permission to create a\n> > subscription in one database, but not in another?\n> \n> Potentially, but I didn't think we'd want to burn through permissions\n> bits that fast, even given 7b378237aa805711353075de142021b1d40ff3b0.\n> Still, if the consensus is otherwise, I can change it.\n\nI don't really have an opinion on what's better. I looked briefly whether\nthere was discussion around this but I didn't see anything.\n\npg_create_subscription feels a bit different than most of the other pg_*\nroles. For most of those there is no schema object to tie permissions to. But\nhere there is.\n\nBut I think there are good arguments against a GRANT approach, too. GRANT ALL ON\nDATABASE would suddenly be dangerous. How does it interact with database\nownership? Etc.\n\n\n> Then I guess we'd end up with GRANT CREATE ON DATABASE and GRANT CREATE\n> SUBSCRIPTION ON DATABASE, which I'm sure wouldn't be confusing at all.\n\nHeh. I guess it could just be GRANT SUBSCRIBE.\n\n\n\n> Or, another thought, maybe this should be checking for CREATE on the\n> current database + also pg_create_subscription. That seems like it\n> might be the right idea, actually.\n\nYes, that seems like a good idea.\n\n\n\n> > The subscription code already does ownership checks before locking and now\n> > there's also the passwordrequired before. Is it possible that this could open\n> > up some sort of race? Could e.g. 
the user change the ownership to the\n> > superuser in one session, do an ALTER in the other?\n> >\n> > It looks like your change won't increase the danger of that, as the\n> > superuser() check just checks the current users permissions.\n> \n> I'm not entirely clear whether there's a hazard there.\n\nI'm not at all either. It's just a code pattern that makes me anxious - I\nsuspect there's a few places it makes us more vulnerable.\n\n\n> If there is, I think we could fix it by moving the LockSharedObject call up\n> higher, above object_ownercheck. The only problem with that is it lets you\n> lock an object on which you have no permissions: see\n> 2ad36c4e44c8b513f6155656e1b7a8d26715bb94. To really fix that, we'd need an\n> analogue of RangeVarGetRelidExtended.\n\nYea, we really should have something like RangeVarGetRelidExtended() for other\nkinds of objects. It'd take a fair bit of work / time to use it widely, but\nit'll take even longer if we start in 5 years ;)\n\nPerhaps the bulk of RangeVarGetRelidExtended() could be generalized by having\na separate name->oid lookup callback?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 14:00:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 27, 2023, at 1:35 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> I started out asking what benefits it provides to own a subscription one\n>> cannot modify. But I think it is a good capability to have, to restrict the\n>> set of relations that replication could target. Although perhaps it'd be\n>> better to set the \"replay user\" as a separate property on the subscription?\n> \n> That's been proposed previously, but for reasons I don't quite\n> remember it seems not to have happened. 
I don't think it achieved\n> consensus.\n> \n>> Does owning a subscription one isn't allowed to modify useful outside of that?\n> \n> Uh, possibly that's a question for Mark or Jeff. I don't know. I can't\n> see what they would be, but I just work here.\n\nIf the owner cannot modify the subscription, then the owner degenerates into a mere \"run-as\" user. Note that this isn't how things work now, and even if we disallowed owners from modifying the connection string, there would still be other attributes the owner could modify, such as the set of publications subscribed.\n\n\nMore generally, my thinking on this thread is that there needs to be two nosuperuser roles: A higher privileged role which can create a subscription, and a lower privileged role serving the \"run-as\" function. Those shouldn't be the same, because the \"run-as\" concept doesn't logically need to have subscription creation power, and likely *shouldn't* have that power. Depending on which sorts of attributes a subscription object has, such as the connection string, the answer differs for whether the owner/\"run-as\" user should get to change those attributes. One advantage of Jeff's idea of using a server object rather than a string is that it becomes more plausibly safe to allow the subscription owner to make changes to that attribute of the subscription.\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 14:56:04 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Jan 27, 2023 at 5:56 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> If the owner cannot modify the subscription, then the owner degenerates into a mere \"run-as\" user. 
Note that this isn't how things work now, and even if we disallowed owners from modifying the connection string, there would still be other attributes the owner could modify, such as the set of publications subscribed.\n\nThe proposed patch blocks every form of ALTER SUBSCRIPTION if\npassword_required false is set and you aren't a superuser. Is there\nsome other DML command that could be used to modify the set of\npublications subscribed?\n\n> More generally, my thinking on this thread is that there needs to be two nosuperuser roles: A higher privileged role which can create a subscription, and a lower privileged role serving the \"run-as\" function. Those shouldn't be the same, because the \"run-as\" concept doesn't logically need to have subscription creation power, and likely *shouldn't* have that power. Depending on which sorts of attributes a subscription object has, such as the connection string, the answer differs for whether the owner/\"run-as\" user should get to change those attributes. One advantage of Jeff's idea of using a server object rather than a string is that it becomes more plausibly safe to allow the subscription owner to make changes to that attribute of the subscription.\n\nThere's some question in my mind about what these different mechanisms\nare intended to accomplish.\n\nOn a technical level, I think that the idea of having a separate\nobjection for the connection string vs. the subscription itself is\nperfectly sound, and to repeat what I said earlier, if someone wants\nto implement that, cool. I also agree that it has the advantage that\nyou specify, namely, that someone can have rights to modify one of\nthose objects but not the other. What that lets you do is define a\nshort list of known systems and say, hey, you can replicate whatever\ntables you want with whatever options you want, but only between these\nsystems. 
I'm not quite sure what problem that solves, though.\n\nFrom my point of view, the two things that the superuser is most\nlikely to want to do are (1) control the replication setup themselves\nand delegate nothing to any non-superuser or (2) give a non-superuser\npretty much complete control over replication with just enough\nrestrictions to avoid letting them do things that would compromise\nsecurity, such as hacking the local superuser account. In other words,\nI expect that delegation of the logical replication configuration is\nusually going to be all or nothing. Jeff's system allows for a\nsituation where you want to delegate some stuff but not everything,\nand specifically where you want to delegate control over the\nsubscription options and the tables being replicated, but not the\nconnection strings. To me, that feels like a bit of an awkward\nconfiguration; I don't really understand in what situation that\ndivision of responsibility would be particularly useful. I trust that\nJeff is proposing it because he knows of such a situation, but I don't\nknow what it is. I feel like, even if I wanted to let people use some\nconnection strings and not others, I'd probably want that control in\nsome form other than listing a specific list of allowable connection\nstrings -- I'd want to say things like \"you have to use SSL\" or \"no\nconnecting back to the local host,\" because that lets me enforce some\ngeneral organizational policy without having to care specifically\nabout how each subscription is being set up.\n\nUnfortunately, I have even less of an idea about what the run-as\nconcept is supposed to accomplish. I mean, at one level, I see it\nquite clearly: the user creating the subscription wants replication to\nhave restricted privileges when it's running, and so they make the\nrun-as user some role with fewer privileges than their own. Brilliant.\nBut then I get stuck: against what kind of attack does that actually\nprotect us? 
If I'm a high privilege user, perhaps even a superuser,\nand it's not safe to have logical replication running as me, then it\nseems like the security model of logical replication is fundamentally\nbusted and we need to fix that. It can't be right to say that if you\nhave 263 users in a database and you want to replicate the whole\ndatabase to some other node, you need 263 different subscriptions with\na different run-as user for each. You need to be able to run all of\nthat logical replication as the superuser or some other high-privilege\nuser and not end up with a security compromise. And if we suppose that\nthat already works and is safe, well then what's the case where I do\nneed a run-as user?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 10:44:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 30, 2023, at 7:44 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> And if we suppose that\n> that already works and is safe, well then what's the case where I do\n> need a run-as user?\n\nA) Alice publishes tables, and occasionally adds new tables to existing publications.\n\nB) Bob manages subscriptions, and periodically runs \"refresh publication\". Bob also creates new subscriptions for people when a row is inserted into the \"please create a subscription for me\" table which Bob owns, using a trigger that Bob created on that table.\n\nC) Alice creates a \"please create a subscription for me\" table on the publishing database, adds lots of malicious requests, and adds that table to the publication.\n\nD) Bob replicates the table, fires the trigger, creates the malicious subscriptions, and starts replicating all that stuff, too.\n\nI think that having Charlie, not Bob, as the \"run-as\" user helps somewhere right around (D). 
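For concreteness, the setup in (B) might look roughly like this -- an untested sketch with hypothetical names, not code from any actual patch. Note that the trigger has to create the subscription WITH (connect = false), since CREATE SUBSCRIPTION cannot connect and create a remote slot inside the trigger's transaction, and the trigger has to be marked ENABLE ALWAYS so that it fires during logical replication apply:

```sql
-- Hypothetical sketch of step (B); all names are made up.
CREATE TABLE sub_requests (
    subname  text NOT NULL,
    conninfo text NOT NULL,
    pubname  text NOT NULL
);

CREATE FUNCTION create_requested_subscription() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- This runs with the apply worker's privileges, i.e. Bob's.  A
    -- malicious row replicated into sub_requests in step (C) becomes
    -- a subscription created with those privileges in step (D).
    -- connect = false is required because CREATE SUBSCRIPTION cannot
    -- create a remote slot inside a transaction block; the slot would
    -- have to be set up in a later step.
    EXECUTE format(
        'CREATE SUBSCRIPTION %I CONNECTION %L PUBLICATION %I WITH (connect = false)',
        NEW.subname, NEW.conninfo, NEW.pubname);
    RETURN NEW;
END;
$$;

CREATE TRIGGER on_sub_request
    AFTER INSERT ON sub_requests
    FOR EACH ROW EXECUTE FUNCTION create_requested_subscription();

-- Without this, the trigger would not fire for rows applied by
-- logical replication (session_replication_role = replica).
ALTER TABLE sub_requests ENABLE ALWAYS TRIGGER on_sub_request;
```

With Charlie as a separate run-as user, the CREATE SUBSCRIPTION above would execute with Charlie's privileges rather than Bob's, which is where the extra daylight comes from.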
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 30 Jan 2023 08:11:03 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 30, 2023 at 11:11 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 30, 2023, at 7:44 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > And if we suppose that\n> > that already works and is safe, well then what's the case where I do\n> > need a run-as user?\n>\n> A) Alice publishes tables, and occasionally adds new tables to existing publications.\n>\n> B) Bob manages subscriptions, and periodically runs \"refresh publication\". Bob also creates new subscriptions for people when a row is inserted into the \"please create a subscription for me\" table which Bob owns, using a trigger that Bob created on that table.\n>\n> C) Alice creates a \"please create a subscription for me\" table on the publishing database, adds lots of malicious requests, and adds that table to the publication.\n>\n> D) Bob replicates the table, fires the trigger, creates the malicious subscriptions, and starts replicating all that stuff, too.\n>\n> I think that having Charlie, not Bob, as the \"run-as\" user helps somewhere right around (D).\n\nI suppose it does, but I have some complaints.\n\nFirst, it doesn't seem to make a lot of sense to have one person\nmanaging the publications and someone else managing the subscriptions,\nand especially if those parties are mutually untrusting. I can't think\nof any real reason to set things up that way. Sure, you could, but why\nwould you? You could, equally, decide that one member of your\nhousehold was going to decide what's for dinner every night, and some\nother member of your household was going to decide what gets purchased\nat the grocery store each week. 
If those two people exercise their\nresponsibilities without tight coordination, or with hostile intent\ntoward each other, things are going to go badly, but that's not an\nargument for putting a combination lock on the flour canister. It's an\nargument for getting along better, or not having such a dumb system in\nthe first place. I don't quite see how the situation you postulate in\n(A) and (B) is any different. Publications and subscriptions are as\nclosely connected as food purchases and meals. The point of a\npublication is for it to connect up to a subscription. In what\ncircumstances would it be reasonable to give responsibility for\nthose objects to different and especially mutually untrusting users?\n\nSecond, in step (B), we may ask why Bob is doing this with a trigger.\nIf he's willing to create any subscription for which Alice asks, we\ncould have just given Alice the authority to do those actions herself.\nPresumably, therefore, Bob is willing to create some subscriptions for\nwhich Alice may ask and not others. Perhaps this whole arrangement is\njust a workaround for the lack of a sensible system for controlling\nwhich connection strings Alice can use, in which case what is really\nneeded here might be something like the separate connection object\nwhich Jeff postulated or my idea of a reverse pg_hba.conf. That kind\nof approach would give a better user interface to Alice, who wouldn't\nhave to rephrase all of her CREATE SUBSCRIPTION commands as insert\nstatements. Conversely, if Alice and Bob are truly dedicated to this\nconvoluted system of creating subscriptions, then Bob needs to put\nlogic into his trigger that's smart enough to block any malicious\nrequests that Alice may make. He really brought this problem on\nhimself by not doing that.\n\nThird, in step (C), it seems to me that whoever set up Alice's\npermissions has really messed up. 
Either the schema Bob is using for\nhis create-me-a-subscription table exists on the primary and Alice has\npermission to create tables in that schema, or else that schema does\nnot exist on the primary and Alice has permission to create it. Either\nway, that's a bad setup. Bob's table should be located in a schema for\nwhich Alice has only USAGE permissions and shouldn't have excess\npermissions on the table, either. Then this step can't happen. This\nstep could also be blocked if, instead of using a table with a\ntrigger, Bob wrote a security definer function or procedure and\ngranted EXECUTE permission on that function or procedure to Alice.\nHe's still going to need sanity checks, though, and if the function or\nprocedure inserts into a logging table or something, he'd better make\nsure that table is adequately secured rather than being, say, a table\nowned by Alice with malicious triggers on it.\n\nSo basically this doesn't really feel like a valid scenario to me.\nWe're supposed to believe that Alice is hostile to Bob, but the\nsuperuser doesn't seem to have thought very carefully about how Bob is\nsupposed to defend himself against Alice, and Bob doesn't even seem to\nbe trying. Maybe we should rename the users to Samson and Delilah? :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 12:26:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 30, 2023, at 9:26 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> First, it doesn't seem to make a lot of sense to have one person\n> managing the publications and someone else managing the subscriptions,\n> and especially if those parties are mutually untrusting. I can't think\n> of any real reason to set things up that way. Sure, you could, but why\n> would you? 
You could, equally, decide that one member of your\n> household was going to decide what's for dinner every night, and some\n> other member of your household was going to decide what gets purchased\n> at the grocery store each week. If those two people exercise their\n> responsibilities without tight coordination, or with hostile intent\n> toward each other, things are going to go badly, but that's not an\n> argument for putting a combination lock on the flour canister. It's an\n> argument for getting along better, or not having such a dumb system in\n> the first place. I don't quite see how the situation you postulate in\n> (A) and (B) is any different. Publications and subscriptions are as\n> closely connected as food purchases and meals. The point of a\n> publication is for it to connect up to a subscription.\n\nI have a grim view of the requirement that publishers and subscribers trust each other. Even when they do trust each other, they can firewall attacks by acting as if they do not.\n\n> In what\n> circumstances would it be reasonable to give responsibility for\n> those objects to different and especially mutually untrusting users?\n\nWhen public repositories of data, such as the IANA whois database, publish their data via postgres publications.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 30 Jan 2023 10:46:07 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 30, 2023, at 9:26 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> So basically this doesn't really feel like a valid scenario to me.\n> We're supposed to believe that Alice is hostile to Bob, but the\n> superuser doesn't seem to have thought very carefully about how Bob is\n> supposed to defend himself against Alice, and Bob doesn't even seem to\n> be trying. 
Maybe we should rename the users to Samson and Delilah? :-)\n\nNo, Atahualpa and Pizarro.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:09:26 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 30, 2023 at 1:46 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I have a grim view of the requirement that publishers and subscribers trust each other. Even when they do trust each other, they can firewall attacks by acting as if they do not.\n\nI think it's OK if the CREATE PUBLICATION user doesn't particularly\ntrust the CREATE SUBSCRIPTION user, because the publication is just a\ngrouping of tables to which somebody can pay attention or not. The\nCREATE PUBLICATION user isn't compromised either way. But, at least as\nthings stand, I don't see how the CREATE SUBSCRIPTION user can get away\nwith not trusting the CREATE PUBLICATION user. CREATE SUBSCRIPTION\nprovides no tools at all for filtering the data that the subscriber\nchooses to send.\n\nNow that can be changed, I suppose, and a run-as user would be one way\nto make progress in that direction. But I'm not sure how viable that\nis, because...\n\n> > In what\n> > circumstances would it be reasonable to give responsibility for\n> > those objects to different and especially mutually untrusting users?\n>\n> When public repositories of data, such as the IANA whois database, publish their data via postgres publications.\n\n... for that to work, IANA would need to set up the database so that\nuntrusted parties can create logical replication slots on their\nPostgreSQL server. And I think that granting REPLICATION privilege on\nyour database to random people on the Internet is not really viable,\nnor intended to be viable. 
As the CREATE ROLE documentation says, \"A\nrole having the REPLICATION attribute is a very highly privileged\nrole.\"\n\nConcretely, this kind of setup would have the problem that you could\nkill the IANA database by just creating a replication slot and then\nnot using it (or replicating from it only very very slowly).\nEventually, the replication slot would either hold back xmin enough\nthat you got a lot of bloat, or cause enough WAL to be retained that\nyou ran out of disk space. Maybe you could protect yourself against\nthat kind of problem by cutting off users who get too far behind, but\nthat also cuts off people who just have an outage for longer than your\ncutoff.\n\nAlso, anyone who can connect to a replication slot can also connect\nto any other replication slot, and drop any replication slot. So if\nIANA did grant REPLICATION privilege to random people on the Internet,\none of them could jump into the system and screw things up for all the\nothers.\n\nThis kind of setup just doesn't seem viable to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 14:30:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Jan 30, 2023, at 11:30 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> CREATE SUBSCRIPTION\n> provides no tools at all for filtering the data that the subscriber\n> chooses to send.\n> \n> Now that can be changed, I suppose, and a run-as user would be one way\n> to make progress in that direction. But I'm not sure how viable that\n> is, because...\n> \n>>> In what\n>>> circumstances would it be reasonable to give responsibility for\n>>> those objects to different and especially mutually untrusting users?\n>> \n>> When public repositories of data, such as the IANA whois database, publish their data via postgres publications.\n> \n> ... 
for that to work, IANA would need to set up the database so that\n> untrusted parties can create logical replication slots on their\n> PostgreSQL server. And I think that granting REPLICATION privilege on\n> your database to random people on the Internet is not really viable,\n> nor intended to be viable.\n\nThat was an aspirational example in which there's infinite daylight between the publisher and subscriber. I, too, doubt that's ever going to be possible. But I still think we should aspire to some extra daylight between the two. Perhaps IANA doesn't publish to the whole world, but instead publishes only to subscribers who have a contract in place, and have agreed to monetary penalties should they abuse the publishing server. Whatever. There's going to be some amount of daylight possible if we design for it, and none otherwise.\n\nMy real argument here isn't against your goal of having non-superusers who can create subscriptions. That part seems fine to me.\n\nGiven that my work last year made it possible for subscriptions to run as somebody other than the subscription creator, it annoys me that you now want the subscription creator's privileges to be what the subscription runs as. That seems to undo what I worked on. In my mental model of a (superuser-creator, non-superuser-owner) pair, it seems you're logically only touching the lefthand side, so you should then have a (nonsuperuser-creator, nonsuperuser-owner) pair. But you don't. You go the apparently needless extra step of just squashing them together. 
I just don't see why it needs to be like that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 30 Jan 2023 12:27:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Jan 27, 2023 at 5:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > Or, another thought, maybe this should be checking for CREATE on the\n> > current database + also pg_create_subscription. That seems like it\n> > might be the right idea, actually.\n>\n> Yes, that seems like a good idea.\n\nDone in this version. I also changed check_conninfo to take an extra\nargument instead of returning a Boolean, as per your suggestion.\n\nI had a long think about what to do with ALTER SUBSCRIPTION ... OWNER\nTO in terms of permissions checks. The previous version required that\nthe new owner have permissions of pg_create_subscription, but there\nseems to be no particular reason for that rule except that it happens\nto be what I made the code do. So I changed it to say that the current\nowner must have CREATE privilege on the database, and must be able to\nSET ROLE to the new owner. This matches the rule for CREATE SCHEMA.\nPossibly we should *additionally* require that the person performing\nthe rename still have pg_create_subscription, but that shouldn't be\nthe only requirement. This change means that you can't just randomly\ngive your subscription to the superuser (with or without concurrently\nattempting some other change as per your other comments) which is good\nbecause you can't do that with other object types either.\n\nThere seems to be a good deal of inconsistency here. If you want to\ngive someone a schema, YOU need CREATE on the database. But if you\nwant to give someone a table, THEY need CREATE on the containing\nschema. 
It makes sense that we check permissions on the containing\nobject, which could be a database or a schema depending on what you're\nrenaming, but it's unclear to me why we sometimes check on the person\nperforming the ALTER command and at other times on the recipient. It's\nalso somewhat unclear to me why we are checking CREATE in the first\nplace, especially on the donor. It might make sense to have a rule\nthat you can't own an object in a place where you couldn't have\ncreated it, but there is no such rule, because you can give someone\nCREATE on a schema, they can create an object, and then you can take\nCREATE away and they still own an object there. So it kind of looks\nto me like we made it up as we went along and that the result isn't\nvery consistent, but I'm inclined to follow CREATE SCHEMA here unless\nthere's some reason to do otherwise.\n\nAnother question around ALTER SUBSCRIPTION ... OWNER TO and also ALTER\nSUBSCRIPTION ... RENAME is whether they ought to fail if you're not a\nsuperuser and password_required false is set. They are separate code\npaths from the rest of the ALTER SUBSCRIPTION cases, so if we want\nthat to be a rule we need dedicated code for it. I'm not quite sure\nwhat's right. There's no comparable case for ALTER USER MAPPING\nbecause a user mapping doesn't have an owner and so can't be\nreassigned to a new owner. I don't see what the harm is, especially\nfor RENAME, but I might be missing something, and it certainly seems\narguable.\n\n> > I'm not entirely clear whether there's a hazard there.\n>\n> I'm not at all either. It's just a code pattern that makes me anxious - I\n> suspect there's a few places it makes us more vulnerable.\n\nIt looks likely to me that it was cut down from the CREATE SCHEMA code, FWIW.\n\n> > If there is, I think we could fix it by moving the LockSharedObject call up\n> > higher, above object_ownercheck. 
The only problem with that is it lets you\n> > lock an object on which you have no permissions: see\n> > 2ad36c4e44c8b513f6155656e1b7a8d26715bb94. To really fix that, we'd need an\n> > analogue of RangeVarGetRelidExtended.\n>\n> Yea, we really should have something like RangeVarGetRelidExtended() for other\n> kinds of objects. It'd take a fair bit of work / time to use it widely, but\n> it'll take even longer if we start in 5 years ;)\n\nWe actually have something sort of like that in the form of\nget_object_address(). It doesn't allow for a callback, but it does\nhave a retry loop.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 30 Jan 2023 15:32:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Jan 27, 2023 at 1:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 1) Forwarding the original ambient context along with the request, so\n> > the server can check it too.\n>\n> Right, so a protocol extension. Reasonable idea, but a big lift. Not\n> only do you need everyone to be running a new enough version of\n> PostgreSQL, but existing proxies like pgpool and pgbouncer need\n> updates, too.\n\nRight.\n\n> > 2) Explicitly combining the request with the proof of authority needed\n> > to make it, as in capability-based security [3].\n>\n> As far as I can see, that link doesn't address how you'd make this\n> approach work across a network.\n\nThe CSRF-token example I gave is one. But that's HTTP-specific\n(stateless, server-driven) and probably doesn't make a lot of sense\nfor our case.\n\nFor our case, assuming that connections have side effects, one\nsolution could be for the client to signal to the server that the\nconnection should use in-band authentication only; i.e. fail the\nconnection if the credentials provided aren't good enough by\nthemselves to authenticate the client. 
(This has some overlap with\nSASL negotiation, maybe.)\n\nBut that still requires server support. I don't know if there's a\nclever way to tie the authentication to the request on the client side\nonly, using existing server implementations. (If connections don't\nhave side effects, require_auth should be sufficient.)\n\n> > 3) Dropping as many implicitly-held privileges as possible before making\n> > a request. This doesn't solve the problem but may considerably reduce\n> > the practical attack surface.\n>\n> Right. I definitely don't object to this kind of approach, but I don't\n> think it can ever be sufficient by itself.\n\nI agree. (But for the record, I think that an outbound proxy filter is\nalso insufficient. Someone, somewhere, is going to want to safely\nproxy through localhost _and_ have peer authentication set up.)\n\n> > I think this style focuses on absolute configuration flexibility at the\n> > expense of usability. It obfuscates the common use cases. (I have the\n> > exact same complaint about our HBA and ident configs, so I may be\n> > fighting uphill.)\n>\n> That's probably somewhat true, but on the other hand, it also is more\n> powerful than what you're describing. In your system, is there some\n> way the DBA can say \"hey, you can connect to any of the machines on\n> this list of subnets, but nothing else\"? Or equally, \"hey, you may NOT\n> connect to any machine on this list of subnets, but anything else is\n> fine\"? Or \"you can connect to these subnets without SSL, but if you\n> want to talk to anything else, you need to use SSL\"?\n\nI guess I didn't call it out explicitly, so it was fair to assume that\nit did not. I don't think we should ignore those cases.\n\nBut if we let the configuration focus on policies instead, and\nsimultaneously improve the confused-deputy problem, then any IP/host\nfilter functionality that we provide becomes an additional safety\nmeasure instead of your only viable line of defense. 
\"I screwed up our\nIP filter, but we're still safe because the proxy refused to forward\nits client cert to the backend.\" Or, \"this other local application\nrequires peer authentication, but it's okay because the proxy\ndisallows those connections by default.\"\n\n> Your idea\n> seems to rely on us being able to identify all of the policies that a\n> user is likely to want and give names to each one, and I don't feel\n> very confident that that's realistic. But maybe I'm misinterpreting\n> your idea?\n\nNo, that's pretty accurate. But I'm used to systems that provide a\nridiculous number of policies [1, 2] via what's basically a scoped\nproperty bag. \"Turn off option 1 and 2 globally. For host A and IP\naddress B, turn on option 1 as an exception.\" And I don't really\nexpect us to need as many options as those systems do.\n\nI think that configuration style evolves well, it focuses on the right\nthings, and it can still handle IP lists intuitively [3], if that's\nthe way a DBA really wants to set up policies.\n\n--Jacob\n\n[1] https://httpd.apache.org/docs/2.4/mod/mod_proxy.html\n[2] https://www.haproxy.com/documentation/hapee/latest/onepage/#4\n[3] https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/\n\n\n", "msg_date": "Mon, 30 Jan 2023 13:12:31 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Mon, Jan 30, 2023 at 3:27 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> That was an aspirational example in which there's infinite daylight between the publisher and subscriber. I, too, doubt that's ever going to be possible. But I still think we should aspire to some extra daylight between the two. 
Perhaps IANA doesn't publish to the whole world, but instead publishes only to subscribers who have a contract in place, and have agreed to monetary penalties should they abuse the publishing server. Whatever. There's going to be some amount of daylight possible if we design for it, and none otherwise.\n>\n> My real argument here isn't against your goal of having non-superusers who can create subscriptions. That part seems fine to me.\n>\n> Given that my work last year made it possible for subscriptions to run as somebody other than the subscription creator, it annoys me that you now want the subscription creator's privileges to be what the subscription runs as. That seems to undo what I worked on. In my mental model of a (superuser-creator, non-superuser-owner) pair, it seems you're logically only touching the lefthand side, so you should then have a (nonsuperuser-creator, nonsuperuser-owner) pair. But you don't. You go the apparently needless extra step of just squashing them together. I just don't see why it needs to be like that.\n\nI feel like you're accusing me of removing functionality that has\nnever existed. A subscription doesn't run as the subscription creator.\nIt runs as the subscription owner. If you or anyone else had added the\ncapability for it to run as someone other than the subscription owner,\nI certainly wouldn't be trying to back that capability out as part of\nthis patch, and because there isn't, I'm not proposing to add that as\npart of this patch. I don't see how that makes me guilty of squashing\nanything together. The current state of affairs, where the run-as user\nis taken from pg_subscription.subowner, the same field that is updated\nby ALTER SUBSCRIPTION ... OWNER TO, is the result of your work, not\nanything that I have done or am proposing to do.\n\nI also *emphatically* disagree with the idea that this undoes what you\nworked on. My patch would be *impossible* without your work. 
Prior to
your work, the run-as user was always, basically, the superuser, and
so the idea of allowing anyone other than a superuser to execute
CREATE SUBSCRIPTION would be flat-out nuts. Because of your work,
that's now a thing that we may be able to reasonably allow, if we can
work through the remaining issues. So I'm grateful to you, and also
sorry to hear that you're annoyed with me. But I still don't think
that the fact that the division you want doesn't exist is somehow my
fault.

I'm kind of curious why you *didn't* make this distinction at the time
that you did the other work in this area. Maybe my memory is
playing tricks on me again, but I seem to recall talking about the
idea with you at the time, and I seem to recall thinking that it
sounded like an OK idea. I seem to vaguely recall us discussing
hazards like: well, what if replication causes code to get executed as
the subscription owner that causes something bad to happen? But I
think the only way that happens is if they put triggers on the tables
that are being replicated, which is their choice, and they can avoid
installing problematic code there if they want. I think there might
have been some other scenarios, too, but I just can't remember. In any
case, I don't think the idea is completely without merit. I think it
could very well be something that we want to have for one reason or
another. But I don't currently understand exactly what those reasons
are, and I don't see any reason why one patch should both split owner
from run-as user and also allow the owner to be a non-superuser.
That\nseems like two different efforts to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:29:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 30, 2023 at 4:12 PM Jacob Champion <jchampion@timescale.com> wrote:\n> For our case, assuming that connections have side effects, one\n> solution could be for the client to signal to the server that the\n> connection should use in-band authentication only; i.e. fail the\n> connection if the credentials provided aren't good enough by\n> themselves to authenticate the client. (This has some overlap with\n> SASL negotiation, maybe.)\n\nI'm not an expert on this stuff, but to me that feels like a weak and\nfuzzy concept. If the client is going to tell the server something,\nI'd much rather have it say something like \"i'm proxying a request\nfrom my local user rhaas, who authenticated using such and such a\nmethod and connected from such and such an IP yadda yadda\". That feels\nto me like really clear communication that the server can then be\nconfigured to something about via pg_hba.conf or similar. Saying \"use\nin-band authentication only\", to me, feels much murkier. As the\nrecipient of that message, I don't know exactly what to do about it,\nand it feels like whatever heuristic I adopt might end up being wrong\nand something bad happens anyway.\n\n> I agree. (But for the record, I think that an outbound proxy filter is\n> also insufficient. Someone, somewhere, is going to want to safely\n> proxy through localhost _and_ have peer authentication set up.)\n\nWell then they're indeed going to need some way to distinguish a\nproxied connection from a non-proxied one. You can't send identical\nconnection requests in different scenarios and get different\nresults....\n\n> I guess I didn't call it out explicitly, so it was fair to assume that\n> it did not. 
I don't think we should ignore those cases.\n\nOK, cool.\n\n> But if we let the configuration focus on policies instead, and\n> simultaneously improve the confused-deputy problem, then any IP/host\n> filter functionality that we provide becomes an additional safety\n> measure instead of your only viable line of defense. \"I screwed up our\n> IP filter, but we're still safe because the proxy refused to forward\n> its client cert to the backend.\" Or, \"this other local application\n> requires peer authentication, but it's okay because the proxy\n> disallows those connections by default.\"\n\nDefense in depth is good.\n\n> > Your idea\n> > seems to rely on us being able to identify all of the policies that a\n> > user is likely to want and give names to each one, and I don't feel\n> > very confident that that's realistic. But maybe I'm misinterpreting\n> > your idea?\n>\n> No, that's pretty accurate. But I'm used to systems that provide a\n> ridiculous number of policies [1, 2] via what's basically a scoped\n> property bag. \"Turn off option 1 and 2 globally. For host A and IP\n> address B, turn on option 1 as an exception.\" And I don't really\n> expect us to need as many options as those systems do.\n>\n> I think that configuration style evolves well, it focuses on the right\n> things, and it can still handle IP lists intuitively [3], if that's\n> the way a DBA really wants to set up policies.\n\nI think what we really need here is an example or three of a proposed\nconfiguration file syntax. 
I think it would be good if we could pick a\nsyntax that doesn't require a super-complicated parser, and that maybe\nhas something in common with our existing configuration file syntaxes.\nBut if we have to invent something new, then we can do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 17:21:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "\n\n> On Jan 30, 2023, at 1:29 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I feel like you're accusing me of removing functionality that has\n> never existed. A subscription doesn't run as the subscription creator.\n> It runs as the subscription owner. If you or anyone else had added the\n> capability for it to run as someone other than the subscription owner,\n> I certainly wouldn't be trying to back that capability out as part of\n> this patch, and because there isn't, I'm not proposing to add that as\n> part of this patch. I don't see how that makes me guilty of squashing\n> anything together. The current state of affairs, where the run-as user\n> is taken from pg_subscription.subowner, the same field that is updated\n> by ALTER SUBSCRIPTION ... OWNER TO, is the result of your work, not\n> anything that I have done or am proposing to do.\n> \n> I also *emphatically* disagree with the idea that this undoes what you\n> worked on. My patch would be *impossible* without your work. Prior to\n> your work, the run-as user was always, basically, the superuser, and\n> so the idea of allowing anyone other than a superuser to execute\n> CREATE SUBSCRIPTION would be flat-out nuts. Because of your work,\n> that's now a thing that we may be able to reasonably allow, if we can\n> work through the remaining issues. So I'm grateful to you, and also\n> sorry to hear that you're annoyed with me. 
But I still don't think\n> that the fact that the division you want doesn't exist is somehow my\n> fault.\n> \n> I'm kind of curious why you *didn't* make this distinction at the time\n> that you were did the other work in this area. Maybe my memory is\n> playing tricks on me again, but I seem to recall talking about the\n> idea with you at the time, and I seem to recall thinking that it\n> sounded like an OK idea. I seem to vaguely recall us discussing\n> hazards like: well, what if replication causes code to get executed as\n> the subscription owner that that causes something bad to happen? But I\n> think the only way that happens is if they put triggers on the tables\n> that are being replicated, which is their choice, and they can avoid\n> installing problematic code there if they want. I think there might\n> have been some other scenarios, too, but I just can't remember. In any\n> case, I don't think the idea is completely without merit. I think it\n> could very well be something that we want to have for one reason or\n> another. But I don't currently understand exactly what those reasons\n> are, and I don't see any reason why one patch should both split owner\n> from run-as user and also allow the owner to be a non-superuser. That\n> seems like two different efforts to me.\n\nI don't have a concrete problem with your patch, and wouldn't object if you committed it. 
My concerns were more how you were phrasing things, but it seems not worth any additional conversation, because it's probably a distinction without a difference.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 03:32:38 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 10:44:29 -0500, Robert Haas wrote:\n> On a technical level, I think that the idea of having a separate\n> objection for the connection string vs. the subscription itself is\n> perfectly sound, and to repeat what I said earlier, if someone wants\n> to implement that, cool. I also agree that it has the advantage that\n> you specify, namely, that someone can have rights to modify one of\n> those objects but not the other. What that lets you do is define a\n> short list of known systems and say, hey, you can replicate whatever\n> tables you want with whatever options you want, but only between these\n> systems. I'm not quite sure what problem that solves, though.\n\nThat does seem somewhat useful, but also fairly limited, at least as\nlong as it's really just a single connection, rather than a \"pattern\" of\nsafe connections.\n\n\n> Unfortunately, I have even less of an idea about what the run-as\n> concept is supposed to accomplish. I mean, at one level, I see it\n> quite clearly: the user creating the subscription wants replication to\n> have restricted privileges when it's running, and so they make the\n> run-as user some role with fewer privileges than their own. Brilliant.\n> But then I get stuck: against what kind of attack does that actually\n> protect us? 
If I'm a high privilege user,
> and it's not safe to have logical replication running as me, then it
> seems like the security model of logical replication is fundamentally
> busted and we need to fix that.

I don't really understand that - the run-as approach seems like a
necessary piece of improving the security model.

I think it's perfectly reasonable to want to replicate from one system
to another, but to not want to allow logical replication to insert into
pg_class or whatnot. So not using superuser to execute the replication
makes sense.

This is particularly the case if you're just replicating a small part of
the tables from one system to another. E.g. in a sharded setup, you may
want to replicate metadata to servers.

Even if all the systems are operated by people you trust (including
possibly even yourself, if you want to go that far), you may want to
reduce the blast radius of privilege escalation, or even just bugs, to a
smaller amount of data.


I think we'll need two things to improve upon the current situation:

1) run-as user, to reduce the scope of potential danger

2) Option to run the database inserts as the owner of the table, with a
   check that the run-as is actually allowed to perform work as the
   owning role. That prevents escalation from table owner (who could add
   default expressions etc) from getting the privs of the
   run-as/replication owner.


I think it makes sense for 1) to be a fairly privileged user, but I
think it's good practice for that user to not be allowed to change the
system configuration etc.


> It can't be right to say that if you have 263 users in a database and
> you want to replicate the whole database to some other node, you need
> 263 different subscriptions with a different run-as user for each.
You\n> need to be able to run all of that logical replication as the\n> superuser or some other high-privilege user and not end up with a\n> security compromise.\n\nI'm not quite following along here - are you thinking of 263 tables\nowned by 263 users? If yes, that's why I am thinking that we need the\noption to perform each table modification as the owner of that table\n(with the same security restrictions we use for REINDEX etc).\n\n\n> And if we suppose that that already works and is safe, well then\n> what's the case where I do need a run-as user?\n\nIt's not at all safe today, IMO. You need to trust that nothing bad will\nbe replicated, otherwise the owner of the subscription has to be\nconsidered compromised.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:01:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Jan 31, 2023 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't really understand that - the run-as approach seems like a\n> necessary piece of improving the security model.\n>\n> I think it's perfectly reasonable to want to replicate from one system\n> in another, but to not want to allow logical replication to insert into\n> pg_class or whatnot. So not using superuser to execute the replication\n> makes sense.\n>\n> This is particularly the case if you're just replicating a small part of\n> the tables from one system to another. E.g. in a sharded setup, you may\n> want to replicate metadata too servers.\n\nI don't think that a system catalog should be considered a valid\nreplication target, no matter who owns the subscription, so ISTM that\nwriting to pg_class should be blocked regardless. 
The thing I'm\nstruggling to understand is: if you only want to replicate into tables\nthat Alice can write, why not just make Alice own the subscription?\nFor a run-as user to make sense, you need a scenario where we want the\nreplication to target only tables that Alice can touch, but we also\ndon't want Alice herself to be able to touch the subscription, so you\nmake Alice the run-as user and yourself the owner, or something like\nthat. But I'm not sure what that scenario is exactly.\n\nMark was postulating a scenario where the publisher and subscriber\ndon't trust each other. I was thinking a little bit more about that. I\nstill maintain that the current system is poorly set up to make that\nwork, but suppose we wanted to do better. We could add filtering on\nthe subscriber side, like you list schemas or specific relations that\nyou are or are not willing to replicate into. Then you could, for\nexample, connect your subscription to a certain remote publication,\nbut with the restriction that you're only willing to replicate into\nthe \"headquarters\" schema. Then we'll replicate whatever tables they\nsend us, but if the dorks at headquarters mess up the publications on\ntheir end (intentionally or otherwise) and add some tables from the\n\"locally_controlled_stuff\" schema, we'll refuse to replicate that into\nour eponymous schema. I don't think this kind of system is well-suited\nto environments where people are totally hostile to each other,\nbecause you still need to have replication slots on the remote side\nand stuff. Also, having the remote side decode stuff and ignoring it\nlocally is expensive, and I bet if we add stuff like this then people\nwill misuse it and be sad. 
But it would make the system easier to
reason about: I know for sure that this subscription will only write
to these places, because that's all I've given it permission to do.

In the sharding scenario you mention, if you want to prevent
accidental writes to unrelated tables due to the publication not being
what we expect, you can either make the subscription owned by the same
role that owns the sharded tables, or a special-purpose role that has
permission to write to exactly the set of tables that you expect to be
touched and no others. Or, if you had something like what I posited in
the last paragraph, you could use that instead. But I don't see how a
separate run-as user helps. If I'm just being super-dense here, I hope
that one of you will explain using short words. :-)

> I think we'll need two things to improve upon the current situation:
>
> 1) run-as user, to reduce the scope of potential danger
>
> 2) Option to run the database inserts as the owner of the table, with a
> check that the run-as is actually allowed to perform work as the
> owning role. That prevents escalation from table owner (who could add
> default expressions etc) from gettng the privs of the
> run-as/replication owner.

I'm not quite sure what we do here now, but I agree that trigger
firing seems like a problem. It might be that we need to worry about
the user on the origin server, too. If Alice inserts a row that causes
a replicated table owned by Bob to fire a trigger or evaluate a
default expression or whatever due to the presence of a subscription
owned by Charlie, there is a risk that Alice might try to attack
either Bob or Charlie, or that Bob might try to attack Charlie.

> > And if we suppose that that already works and is safe, well then
> > what's the case where I do need a run-as user?
>
> It's not at all safe today, IMO.
You need to trust that nothing bad will\n> be replicated, otherwise the owner of the subscription has to be\n> considered compromised.\n\nWhat kinds of things are bad to replicate?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:43:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Feb 1, 2023, at 6:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> The thing I'm\n> struggling to understand is: if you only want to replicate into tables\n> that Alice can write, why not just make Alice own the subscription?\n> For a run-as user to make sense, you need a scenario where we want the\n> replication to target only tables that Alice can touch, but we also\n> don't want Alice herself to be able to touch the subscription, so you\n> make Alice the run-as user and yourself the owner, or something like\n> that. But I'm not sure what that scenario is exactly.\n\nThis \"run-as\" idea came about because we didn't want arbitrary roles to be able to change the subscription's connection string. A competing idea was to have a server object rather than a string, with roles like Alice being able to use the server object if they have been granted usage privilege, and not otherwise. So the \"run-as\" and \"server\" ideas were somewhat competing.\n\n> Mark was postulating a scenario where the publisher and subscriber\n> don't trust each other. I was thinking a little bit more about that. I\n> still maintain that the current system is poorly set up to make that\n> work, but suppose we wanted to do better. We could add filtering on\n> the subscriber side, like you list schemas or specific relations that\n> you are or are not willing to replicate into. 
Then you could, for\n> example, connect your subscription to a certain remote publication,\n> but with the restriction that you're only willing to replicate into\n> the \"headquarters\" schema. Then we'll replicate whatever tables they\n> send us, but if the dorks at headquarters mess up the publications on\n> their end (intentionally or otherwise) and add some tables from the\n> \"locally_controlled_stuff\" schema, we'll refuse to replicate that into\n> our eponymous schema.\n\nThat example is good, though I don't see how \"filters\" are better than roles+privileges. Care to elaborate?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:22:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Jan 30, 2023 at 2:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jan 30, 2023 at 4:12 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > For our case, assuming that connections have side effects, one\n> > solution could be for the client to signal to the server that the\n> > connection should use in-band authentication only; i.e. fail the\n> > connection if the credentials provided aren't good enough by\n> > themselves to authenticate the client. (This has some overlap with\n> > SASL negotiation, maybe.)\n>\n> I'm not an expert on this stuff, but to me that feels like a weak and\n> fuzzy concept. If the client is going to tell the server something,\n> I'd much rather have it say something like \"i'm proxying a request\n> from my local user rhaas, who authenticated using such and such a\n> method and connected from such and such an IP yadda yadda\". That feels\n> to me like really clear communication that the server can then be\n> configured to something about via pg_hba.conf or similar. 
Saying \"use\n> in-band authentication only\", to me, feels much murkier. As the\n> recipient of that message, I don't know exactly what to do about it,\n> and it feels like whatever heuristic I adopt might end up being wrong\n> and something bad happens anyway.\n\nIs it maybe just a matter of terminology? If a proxy tells the server,\n\"This user is logging in. Here's the password I have for them. DO NOT\nauthenticate using anything else,\" and the HBA says to use ident auth\nfor that user, then the server fails the connection. That's what I\nmean by in-band -- the proxy says, \"here are the credentials for this\nconnection.\" That's it.\n\nAlternatively, if you really don't like making this server-side: any\nfuture \"connection side effects\" we add, such as logon triggers, could\neither be opted into by the client or explicitly invoked by the client\nafter it's happy with the authentication exchange. Or it could be\ndisabled at the server side for forms of ambient authn. (This is\ngetting closer to HTTP's method safety concept.)\n\n> > I agree. (But for the record, I think that an outbound proxy filter is\n> > also insufficient. Someone, somewhere, is going to want to safely\n> > proxy through localhost _and_ have peer authentication set up.)\n>\n> Well then they're indeed going to need some way to distinguish a\n> proxied connection from a non-proxied one. You can't send identical\n> connection requests in different scenarios and get different\n> results....\n\nYeah. Most of these solutions require explicitly labelling things that\nwere implicit before.\n\n> I think what we really need here is an example or three of a proposed\n> configuration file syntax. I think it would be good if we could pick a\n> syntax that doesn't require a super-complicated parser\n\nAgreed. The danger from my end is, I'm trained on configuration\nformats that have infinite bells and whistles. 
I don't really want to
go too crazy with it.

> and that maybe
> has something in common with our existing configuration file syntaxes.
> But if we have to invent something new, then we can do that.

Okay. Personally I'd like
- the ability to set options globally (so filters are optional)
- the ability to maintain many options for a specific scope (host? IP
range?) without making my config lines grow without bound
- the ability to audit a configuration without trusting its comments

But getting all of my wishlist into a sane configuration format that
handles all the use cases is the tricky part. I'll think about it.

--Jacob


", "msg_date": "Wed, 1 Feb 2023 12:37:26 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "Hi,

On 2023-02-01 09:43:39 -0500, Robert Haas wrote:
> On Tue, Jan 31, 2023 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:
> > I don't really understand that - the run-as approach seems like a
> > necessary piece of improving the security model.
> >
> > I think it's perfectly reasonable to want to replicate from one system
> > in another, but to not want to allow logical replication to insert into
> > pg_class or whatnot. So not using superuser to execute the replication
> > makes sense.
> >
> > This is particularly the case if you're just replicating a small part of
> > the tables from one system to another. E.g. in a sharded setup, you may
> > want to replicate metadata too servers.
> 
> I don't think that a system catalog should be considered a valid
> replication target, no matter who owns the subscription, so ISTM that
> writing to pg_class should be blocked regardless.

The general point, IMO, is that in many setups you should use a
user with fewer privileges than a superuser. It doesn't really matter
whether we have an ad-hoc restriction for system catalogs.
More often
than not being able to modify other tables will give you a lot of
privileges too.


> The thing I'm struggling to understand is: if you only want to
> replicate into tables that Alice can write, why not just make Alice
> own the subscription?

Because it implies that the replication happens as a user that's
privileged enough to change the configuration of replication.


> Mark was postulating a scenario where the publisher and subscriber
> don't trust each other.

FWIW, I don't think this is mainly about \"trust\", but instead about
layering security / the principle of least privilege. The \"run-as\" user
(i.e. currently owner) is constantly performing work on behalf of a
remote node, including executing code (default clauses etc). To make it
harder to use such a cross-system connection to move from one system to
the next, it's a good idea to execute it in the least privileged context
possible. And I don't see why it'd need the permission to modify the
definition of the subscription and similar \"admin\" tasks.

It's not that such an extra layer would necessarily completely stop an
attacker. But it might delay them and make their attack more noisy.


Similarly, if I were to operate an important production environment
again, I'd not have relations owned by the [pseudo]superuser, but by a
user controlled by the [pseudo]superuser. That way somebody tricking the
superuser into a REINDEX or such only gets the ability to execute code
in a less privileged context.




> I was thinking a little bit more about that. I
> still maintain that the current system is poorly set up to make that
> work, but suppose we wanted to do better.
We could add filtering on\n> the subscriber side, like you list schemas or specific relations that\n> you are or are not willing to replicate into.\n\nIsn't that largely a duplication of the ACLs on relations etc?\n\n\n> > I think we'll need two things to improve upon the current situation:\n> >\n> > 1) run-as user, to reduce the scope of potential danger\n> >\n> > 2) Option to run the database inserts as the owner of the table, with a\n> > check that the run-as is actually allowed to perform work as the\n> > owning role. That prevents escalation from table owner (who could add\n> > default expressions etc) from gettng the privs of the\n> > run-as/replication owner.\n> \n> I'm not quite sure what we do here now, but I agree that trigger\n> firing seems like a problem. It might be that we need to worry about\n> the user on the origin server, too. If Alice inserts a row that causes\n> a replicated table owned by Bob to fire a trigger or evaluate a\n> default expression or whatever due the presence of a subscription\n> owned by Charlie, there is a risk that Alice might try to attack\n> either Bob or Charlie, or that Bob might try to attack Charlie.\n\nThe attack on Bob exists without logical replication too - a REINDEX or\nsuch is executed as the owner of the relation and re-evaluates index\nexpressions, constraints etc. Given our security model I don't think we\ncan protect the relation owner if they trust somebody to insert rows, so\nI don't really know what we can do to protect Charlie against Bob.\n\n\n\n> > > And if we suppose that that already works and is safe, well then\n> > > what's the case where I do need a run-as user?\n> >\n> > It's not at all safe today, IMO. 
You need to trust that nothing bad will
> > be replicated, otherwise the owner of the subscription has to be
> > considered compromised.
> 
> What kinds of things are bad to replicate?

I think that's unfortunately going to be specific to a setup.

Greetings,

Andres Freund


", "msg_date": "Wed, 1 Feb 2023 12:37:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,

On 2023-01-30 15:32:34 -0500, Robert Haas wrote:
> I had a long think about what to do with ALTER SUBSCRIPTION ... OWNER
> TO in terms of permissions checks. The previous version required that
> the new owner have permissions of pg_create_subscription, but there
> seems to be no particular reason for that rule except that it happens
> to be what I made the code do. So I changed it to say that the current
> owner must have CREATE privilege on the database, and must be able to
> SET ROLE to the new owner. This matches the rule for CREATE SCHEMA.
> Possibly we should *additionally* require that the person performing
> the rename still have pg_create_subscription, but that shouldn't be
> the only requirement.

As long as owner and run-as are the same, I think it's strongly
preferable to *not* require pg_create_subscription.


> There seems to be a good deal of inconsistency here. If you want to
> give someone a schema, YOU need CREATE on the database. But if you
> want to give someone a table, THEY need CREATE on the containing
> schema. It makes sense that we check permissions on the containing
> object, which could be a database or a schema depending on what you're
> renaming, but it's unclear to me why we sometimes check on the person
> performing the ALTER command and at other times on the recipient. It's
> also somewhat unclear to me why we are checking CREATE in the first
> place, especially on the donor.
It might make sense to have a rule
> that you can't own an object in a place where you couldn't have
> created it, but there is no such rule, because you can give someone
> CREATE on a schema, they can create an object, and then you can take
> CREATE away and they still own an object there. So it kind of looks
> to me like we made it up as we went along and that the result isn't
> very consistent, but I'm inclined to follow CREATE SCHEMA here unless
> there's some reason to do otherwise.

Yuck. No idea what the best policy around this is.


> Another question around ALTER SUBSCRIPTION ... OWNER TO and also ALTER
> SUBSCRIPTION .. RENAME is whether they ought to fail if you're not a
> superuser and password_required false is set.

I don't really see a benefit in allowing it, so I'm inclined to go for
the more restrictive option. But this is a really weakly held opinion.



> > > If there is, I think we could fix it by moving the LockSharedObject call up
> > > higher, above object_ownercheck. The only problem with that is it lets you
> > > lock an object on which you have no permissions: see
> > > 2ad36c4e44c8b513f6155656e1b7a8d26715bb94. To really fix that, we'd need an
> > > analogue of RangeVarGetRelidExtended.
> >
> > Yea, we really should have something like RangeVarGetRelidExtended() for other
> > kinds of objects. It'd take a fair bit of work / time to use it widely, but
> > it'll take even longer if we start in 5 years ;)
>
> We actually have something sort of like that in the form of
> get_object_address(). It doesn't allow for a callback, but it does
> have a retry loop.

Hm, sure looks like that code doesn't do any privilege checking...


> @@ -1269,13 +1270,19 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
*/\n> +\tmust_use_password = MySubscription->passwordrequired &&\n> +\t\t!superuser_arg(MySubscription->owner);\n\nThere's a few repetitions of this - perhaps worth putting into a helper?\n\n\n> @@ -180,6 +181,13 @@ libpqrcv_connect(const char *conninfo, bool logical, const char *appname,\n> if (PQstatus(conn->streamConn) != CONNECTION_OK)\n> goto bad_connection_errmsg;\n>\n> + if (must_use_password && !PQconnectionUsedPassword(conn->streamConn))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),\n> + errmsg(\"password is required\"),\n> + errdetail(\"Non-superuser cannot connect if the server does not request a password.\"),\n> + errhint(\"Target server's authentication method must be changed. or set password_required=false in the subscription attributes\\\n.\")));\n> +\n> if (logical)\n> {\n> PGresult *res;\n\nThis still leaks the connection on error, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 13:02:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Feb 1, 2023 at 1:09 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > On Feb 1, 2023, at 6:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > The thing I'm\n> > struggling to understand is: if you only want to replicate into tables\n> > that Alice can write, why not just make Alice own the subscription?\n> > For a run-as user to make sense, you need a scenario where we want the\n> > replication to target only tables that Alice can touch, but we also\n> > don't want Alice herself to be able to touch the subscription, so you\n> > make Alice the run-as user and yourself the owner, or something like\n> > that. But I'm not sure what that scenario is exactly.\n>\n> This \"run-as\" idea came about because we didn't want arbitrary roles to be able to change the subscription's connection string. 
A competing idea was to have a server object rather than a string, with roles like Alice being able to use the server object if they have been granted usage privilege, and not otherwise. So the \"run-as\" and \"server\" ideas were somewhat competing.\n\nAs far as not changing the connection string goes, a few more ideas\nhave entered the fray: the current patch uses a password_required\nproperty that is modelled on postgres_fdw, and I've elsewhere proposed\na reverse pg_hba.conf.\n\nIMHO, for the use cases that I can imagine, the reverse pg_hba.conf\nidea feels better than all competitors, because it's the only idea\nthat lets you define a class of acceptable connection strings. Jeff's\nidea of a separate connection object is fine if you have a specific,\nshort list of connection strings and you want to allow those and\ndisallow everything else, and there may be cases where people want\nthat, and that's fine, but my guess is that it's overly restrictive in\na lot of environments. The password_required property has the virtue\nof being compatible with what we do in other places right now, and of\npreventing wraparound-to-superuser attacks effectively, but it's\ntotally unconfigurable and that sucks. The runas user idea gives you\nsome control over who is allowed to set the connection string, but it\ndoesn't help you delegate that to a non-superuser, because the idea\nthere is that you want the non-superuser to be able to set connection\nstrings that are OK with the actual superuser but not others.\n\nI think part of my confusion here is that I thought that the point of\nthe runas user was to defend against logical replication itself\nchanging the connection string, and I don't see how it would do that.\nIt's just moving rows around. If the point is that somebody who can\nlog in as the runas user might change the connection string to\nsomething we don't like, that makes somewhat more sense. 
I think I had\nin my head that you wouldn't use someone's actual login role to run\nlogical replication, but rather some role specifically set up for that\npurpose. In that scenario, nobody's running SQL commands as the runas\nuser, so even if they also own the subscription, there's no way for it\nto get modified.\n\n> > Mark was postulating a scenario where the publisher and subscriber\n> > don't trust each other. I was thinking a little bit more about that. I\n> > still maintain that the current system is poorly set up to make that\n> > work, but suppose we wanted to do better. We could add filtering on\n> > the subscriber side, like you list schemas or specific relations that\n> > you are or are not willing to replicate into. Then you could, for\n> > example, connect your subscription to a certain remote publication,\n> > but with the restriction that you're only willing to replicate into\n> > the \"headquarters\" schema. Then we'll replicate whatever tables they\n> > send us, but if the dorks at headquarters mess up the publications on\n> > their end (intentionally or otherwise) and add some tables from the\n> > \"locally_controlled_stuff\" schema, we'll refuse to replicate that into\n> > our eponymous schema.\n>\n> That example is good, though I don't see how \"filters\" are better than roles+privileges. Care to elaborate?\n\nI'm not sure that they are. Are we assuming that the user who is\ncreating subscriptions is also powerful enough to create roles and\ngive them just the required amount of privilege? If so, it seems like\nthey might as well just do it that way. And maybe we should assume\nthat, because in most cases, a dedicated replication role makes more\nsense to me than running replication under some role that you're also\nusing for other things. On the other hand, I bet a lot of people today\nare just running replication as a superuser, in which case maybe this\ncould be useful? 
This whole idea was mostly just me spitballing to see\nwhat other people thought. I'm not wild about the complexity involved\nfor what you get out of it, so if we don't need it, that's more than\nfine with me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:11:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Feb 1, 2023 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:\n> The general point is that IMO is that in many setups you should use a\n> user with fewer privileges than a superuser. It doesn't really matter\n> whether we have an ad-hoc restriction for system catalogs. More often\n> than not being able to modify other tables will give you a lot of\n> privileges too.\n\nI don't know what you mean by this. DML doesn't confer privileges. If\ncode gets executed and runs with the replication user's credentials,\nthat could lead to privilege escalation, but just moving rows around\ndoesn't, at least not in the database sense. It might confer\nunanticipated real-world benefits, like if you can update your own\nsalary or something, but in the context of replication you have to\nhave had the ability to do that on some other node anyway. If that\nchange wasn't supposed to get replicated to the local node, then why\nare we using replication? Or why is that table in the publication? I'm\nconfused.\n\n> > The thing I'm struggling to understand is: if you only want to\n> > replicate into tables that Alice can write, why not just make Alice\n> > own the subscription?\n>\n> Because it implies that the replication happens as a user that's\n> privileged enough to change the configuration of replication.\n\nBut again, replication is just about inserting, updating, and deleting\nrows. To change the replication configuration, you have to be able to\nparlay that into the ability to execute code. 
That's why I think\ntrigger security is really important. But I'm wondering if there's\nsome way we can handle that that doesn't require us to make a decision\nabout a run-as user. For instance, if firing triggers as the table\nowner is an acceptable solution, then the only thing that the run-as\nuser is actually controlling is which tables we're willing to\nreplicate into in the first place (unless there's some other way that\nlogical replication can run arbitrary code). The name almost becomes a\nmisnomer in that case. It's not a run-as user, it's\nuse-this-user's-permissions-to-see-if-I-should-fail-replication user.\n\n> > I was thinking a little bit more about that. I\n> > still maintain that the current system is poorly set up to make that\n> > work, but suppose we wanted to do better. We could add filtering on\n> > the subscriber side, like you list schemas or specific relations that\n> > you are or are not willing to replicate into.\n>\n> Isn't that largely a duplication of the ACLs on relations etc?\n\nYeah, maybe.\n\n> > I'm not quite sure what we do here now, but I agree that trigger\n> > firing seems like a problem. It might be that we need to worry about\n> > the user on the origin server, too. If Alice inserts a row that causes\n> > a replicated table owned by Bob to fire a trigger or evaluate a\n> > default expression or whatever due the presence of a subscription\n> > owned by Charlie, there is a risk that Alice might try to attack\n> > either Bob or Charlie, or that Bob might try to attack Charlie.\n>\n> The attack on Bob exists without logical replication too - a REINDEX or\n> such is executed as the owner of the relation and re-evaluates index\n> expressions, constraints etc. 
Given our security model I don't think we\n> can protect the relation owner if they trust somebody to insert rows, so\n> I don't really know what we can do to protect Charlie against Bob.\n\nYikes.\n\n> > > > And if we suppose that that already works and is safe, well then\n> > > > what's the case where I do need a run-as user?\n> > >\n> > > It's not at all safe today, IMO. You need to trust that nothing bad will\n> > > be replicated, otherwise the owner of the subscription has to be\n> > > considered compromised.\n> >\n> > What kinds of things are bad to replicate?\n>\n> I think that's unfortunately going to be specific to a setup.\n\nCan you give an example?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:28:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-02-02 09:28:03 -0500, Robert Haas wrote:\n> I don't know what you mean by this. DML doesn't confer privileges. If\n> code gets executed and runs with the replication user's credentials,\n> that could lead to privilege escalation, but just moving rows around\n> doesn't, at least not in the database sense.\n\nExecuting DML ends up executing code. Think predicated/expression\nindexes, triggers, default expressions etc. If a badly written trigger\netc can be tricked to do arbitrary code exec, an attack will be able to\nrun with the privs of the run-as user. How bad that is is influenced to\nsome degree by the amount of privileges that user has.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Feb 2023 00:47:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Feb 1, 2023 at 3:37 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > I'm not an expert on this stuff, but to me that feels like a weak and\n> > fuzzy concept. 
If the client is going to tell the server something,\n> > I'd much rather have it say something like \"i'm proxying a request\n> > from my local user rhaas, who authenticated using such and such a\n> > method and connected from such and such an IP yadda yadda\". That feels\n> > to me like really clear communication that the server can then be\n> > configured to something about via pg_hba.conf or similar. Saying \"use\n> > in-band authentication only\", to me, feels much murkier. As the\n> > recipient of that message, I don't know exactly what to do about it,\n> > and it feels like whatever heuristic I adopt might end up being wrong\n> > and something bad happens anyway.\n>\n> Is it maybe just a matter of terminology? If a proxy tells the server,\n> \"This user is logging in. Here's the password I have for them. DO NOT\n> authenticate using anything else,\" and the HBA says to use ident auth\n> for that user, then the server fails the connection. That's what I\n> mean by in-band -- the proxy says, \"here are the credentials for this\n> connection.\" That's it.\n\nI don't think that's quite the right concept. It seems to me that the\nclient is responsible for informing the server of what the situation\nis, and the server is responsible for deciding whether to allow the\nconnection. In your scenario, the client is not only communicating\ninformation (\"here's the password I have got\") but also making demands\non the server (\"DO NOT authenticate using anything else\"). I like the\nfirst part fine, but not the second part.\n\nConsider the scenario where somebody wants to allow a connection that\nis proxied and does not require a password. For example, maybe I have\na group of three machines that all mutually trust each other and the\nnetwork is locked down so that we need not worry about IP spoofing or\nwhatever. Just to be doubly sure, they all have SSL certificates so that\nthey can verify that an incoming connection is from one of the other\ntrusted machines. 
I, as the administrator, want to configure things so\nthat each machine will proxy connections to the others as long as\nlocal user = remote user. When the remote machine receives the\nconnection, it can trust that the request is legitimate provided that\nthe SSL certificate is successfully verified.\n\nThe way I think this should work is, first, on each machine, in the\nproxy configuration, there should be a rule that says \"only proxy\nconnections where local user = remote user\" (and any other rules I\nwant to enforce). Second, in the HBA configuration, there should be a\nrule that says \"if somebody is trying to proxy a connection, it has to\nbe for one of these IPs and they have to authenticate using an SSL\ncertificate\". In this kind of scenario, the client has no business\ndemanding that the server authenticate using the password rather than\nanything else. The server, not the client, is in charge of deciding\nwhich connections to accept; the client's job is only to decide which\nconnections to proxy. And the human being is responsible for making\nsure that the combination of those two things implements the intended\nsecurity policy.\n\n> Agreed. The danger from my end is, I'm trained on configuration\n> formats that have infinite bells and whistles. I don't really want to\n> go too crazy with it.\n\nYeah. If I remember my math well enough, the time required to\nimplement infinite bells and whistles will also be infinite, and as a\nwise man once said, real artists ship.\n\nIt does seem like a good idea, if we can, to make the configuration\nfile format flexible enough that we can easily extend it with more\nbells and whistles later if we so choose. But realistically most\npeople are going to have very simple configurations.\n\n> > and that maybe\n> > has something in common with our existing configuration file syntaxes.\n> > But if we have to invent something new, then we can do that.\n>\n> Okay. 
Personally I'd like\n> - the ability to set options globally (so filters are optional)\n> - the ability to maintain many options for a specific scope (host? IP\n> range?) without making my config lines grow without bound\n> - the ability to audit a configuration without trusting its comments\n>\n> But getting all of my wishlist into a sane configuration format that\n> handles all the use cases is the tricky part. I'll think about it.\n\nNobody seemed too keen on my proposal of a bunch of tab-separated\nfields; maybe we're all traumatized from pg_hba.conf and should look\nfor something more complex with a real parser. I thought that\ntab-separated fields might be good enough and simple to implement, but\nit doesn't matter how simple it is to implement if nobody likes it. We\ncould do something that looks more like a series of if-then rules,\ne.g.\n\ntarget-host 127.0.0.0/8 => reject\nauthentication-method scram => accept\nreject\n\nBut it's only a hop, skip and a jump from there to something that\nlooks an awful lot like a full-blown programing language, and maybe\nthat's even the right idea, but, oh, the bike-shedding!\n\nCue someone to suggest that it's about time we embed a Lua interpreter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:22:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Fri, Feb 3, 2023 at 3:47 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-02 09:28:03 -0500, Robert Haas wrote:\n> > I don't know what you mean by this. DML doesn't confer privileges. If\n> > code gets executed and runs with the replication user's credentials,\n> > that could lead to privilege escalation, but just moving rows around\n> > doesn't, at least not in the database sense.\n>\n> Executing DML ends up executing code. 
Think predicated/expression\n> indexes, triggers, default expressions etc. If a badly written trigger\n> etc can be tricked to do arbitrary code exec, an attack will be able to\n> run with the privs of the run-as user. How bad that is is influenced to\n> some degree by the amount of privileges that user has.\n\nI spent some time studying this today. I think you're right. What I'm\nconfused about is: why do we consider this situation even vaguely\nacceptable? Isn't this basically an admission that our logical\nreplication security model is completely and totally broken and we\nneed to fix it somehow and file for a CVE number? Like, in released\nbranches, you can't even have a subscription owned by a non-superuser.\nBut any non-superuser can set a default expression or create an enable\nalways trigger and sure enough, if that table is replicated, the\nsystem will run that trigger as the subscription owner, who is a\nsuperuser. Which AFAICS means that if a non-superuser owns a table\nthat is part of a subscription, they can instantly hack superuser.\nWhich seems, uh, extremely bad. Am I missing something?\n\nBased on other remarks you made upthread, it seems like we ought to be\ndoing the actual replication as the table owner, since the table owner\nhas to be prepared for executable code attached to the table to be\nre-run on rows in the table at any time when somebody does a REINDEX.\nAnd then, in master, where there's some provision for non-superuser\nsubscription owners, we maybe need to re-think the privileges required\nto replicate into a table in the first place. 
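[Editorial note: the escalation path discussed above can be sketched concretely. This is a hypothetical illustration with invented object names, assuming a subscriber where the subscription is owned by a superuser and a replicated table is owned by an ordinary user. An ENABLE ALWAYS trigger fires even under the "replica" session_replication_role that the apply worker uses, which is why ordinary triggers are not enough but this one is.]

```sql
-- As the unprivileged owner of a table that is part of a subscription
-- (names "replicated_table", "table_owner", "grab_superuser" are invented):
CREATE FUNCTION grab_superuser() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Runs with the privileges of whoever performs the INSERT -- during
    -- apply, that is the subscription owner (a superuser in this scenario).
    ALTER ROLE table_owner SUPERUSER;
    RETURN NEW;
END;
$$;

CREATE TRIGGER escalate
    BEFORE INSERT ON replicated_table
    FOR EACH ROW EXECUTE FUNCTION grab_superuser();

-- Ordinary triggers are disabled during replication apply; ENABLE ALWAYS
-- makes this one fire for replicated changes too.
ALTER TABLE replicated_table ENABLE ALWAYS TRIGGER escalate;
```

Once the next replicated INSERT for this table arrives, the apply worker executes the trigger function with the subscription owner's privileges, which is the vector being discussed.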
I don't think that\nhaving I/U/D permissions on a table is really sufficient to justify\nperforming those operations *as the table owner*; perhaps the check\nought to be whether you have the privileges of the table owner.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 14:07:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 14:07:39 -0500, Robert Haas wrote:\n> On Fri, Feb 3, 2023 at 3:47 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-02-02 09:28:03 -0500, Robert Haas wrote:\n> > > I don't know what you mean by this. DML doesn't confer privileges. If\n> > > code gets executed and runs with the replication user's credentials,\n> > > that could lead to privilege escalation, but just moving rows around\n> > > doesn't, at least not in the database sense.\n> >\n> > Executing DML ends up executing code. Think predicated/expression\n> > indexes, triggers, default expressions etc. If a badly written trigger\n> > etc can be tricked to do arbitrary code exec, an attack will be able to\n> > run with the privs of the run-as user. How bad that is is influenced to\n> > some degree by the amount of privileges that user has.\n> \n> I spent some time studying this today. I think you're right. What I'm\n> confused about is: why do we consider this situation even vaguely\n> acceptable? Isn't this basically an admission that our logical\n> replication security model is completely and totally broken and we\n> need to fix it somehow and file for a CVE number? Like, in released\n> branches, you can't even have a subscription owned by a non-superuser.\n> But any non-superuser can set a default expression or create an enable\n> always trigger and sure enough, if that table is replicated, the\n> system will run that trigger as the subscription owner, who is a\n> superuser. 
Which AFAICS means that if a non-superuser owns a table\n> that is part of a subscription, they can instantly hack superuser.\n> Which seems, uh, extremely bad. Am I missing something?\n\nIt's decidedly not great, yes. I don't know if it's quite a CVE type issue,\nafter all, the same is true for any other type of query the superuser\nexecutes. But at the very least the documentation needs to be better, with a\nbig red box making sure the admin is aware of the problem.\n\nI think we need some more fundamental ways to deal with this issue, including\nbut not restricted to the replication context. Some potentially relevant\ndiscussion is in this thread:\nhttps://postgr.es/m/75b0dbb55e9febea54c441efff8012a6d2cb5bd7.camel%40j-davis.com\n\nI don't agree with Jeff's proposal, but I think there's some worthwhile ideas\nin the idea + followups.\n\n\n> And then, in master, where there's some provision for non-superuser\n> subscription owners, we maybe need to re-think the privileges required\n> to replicate into a table in the first place. I don't think that\n> having I/U/D permissions on a table is really sufficient to justify\n> performing those operations *as the table owner*; perhaps the check\n> ought to be whether you have the privileges of the table owner.\n\nYes, I think we ought to check role membership, including non-inherited\nmemberships.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:18:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Feb 6, 2023 at 2:18 PM Andres Freund <andres@anarazel.de> wrote:\n> It's decidedly not great, yes. I don't know if it's quite a CVE type issue,\n> after all, the same is true for any other type of query the superuser\n> executes. 
But at the very least the documentation needs to be better, with a\n> big red box making sure the admin is aware of the problem.\n\nI don't think that's the same thing at all. A superuser executing a\nquery interactively can indeed cause all sorts of bad things to\nhappen, but you don't have to log in as superuser and run DML queries\non tables owned by unprivileged users, and you shouldn't.\n\nBut what we're talking about here is -- the superuser comes along and\nsets up logical replication in the configuration in what seems to be\nexactly the way it was intended to be used, and now any user who can\nlog into the subscriber node can become superuser for free whenever\nthey want, without the superuser doing anything at all, even logging\nin. Saying it's \"not ideal\" seems like you're putting it in the same\ncategory as \"the cheese got moldy in the fridge\" but to me it sounds\nmore like \"the fridge exploded and the house is on fire.\"\n\nIf we were to document this, I assume that the warning we would add to\nthe documentation would look like this:\n\n<-- begin documentation text -->\nPretty much don't ever use logical replication. In any normal\nconfiguration, it lets every user on your system escalate to superuser\nwhenever they want. It is possible to make it safe, if you make sure\nall the tables on the replica are owned by the superuser and none of\nthem have any triggers, defaults, expression indexes, or anything else\nassociated with them that might execute any code while replicating.\nBut notice that this makes logical replication pretty much useless for\none of its intended purposes, which is high availability, because if\nyou actually fail over, you're going to then have to change the owners\nof all of those tables and apply any missing triggers, defaults,\nexpression indexes, or anything like that which you may want to have.\nAnd then to fail back you're going to have to remove all of that stuff\nagain and once again make the tables superuser-owned. 
That's obviously\npretty impractical, so you probably shouldn't use logical replication\nat all until we get around to fixing this. You might wonder why we\nimplemented a feature that can't be used in any kind of normal way\nwithout completely and totally breaking your system security -- but\ndon't ask us, we don't know, either!\n<-- end documentation text -->\n\nHonestly, this makes the CREATEROLE exploit that I fixed recently in\nmaster look like a walk in the park. Sure, it's a pain for service\nproviders, who might otherwise use it, but most normal users don't and\nwouldn't no matter how it worked, and really are not going to care.\nBut people do use logical replication, and it seems to me that the\nissue you're describing here means that approximately 100% of those\ninstallations have a vulnerability allowing any local user who owns a\ntable or can create one to escalate to superuser. Far from being not\nquite a CVE issue, that seems substantially more serious than most\nthings we get CVEs for.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 14:40:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Feb 1, 2023 at 4:02 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-30 15:32:34 -0500, Robert Haas wrote:\n> > I had a long think about what to do with ALTER SUBSCRIPTION ... OWNER\n> > TO in terms of permissions checks.\n>\n> As long as owner and run-as are the same, I think it's strongly\n> preferrable to *not* require pg_create_subscription.\n\nOK.\n\n> > Another question around ALTER SUBSCRIPTION ... OWNER TO and also ALTER\n> > SUBSCRIPTION .. RENAME is whether they ought to fail if you're not a\n> > superuser and password_required false is set.\n>\n> I don't really see a benefit in allowing it, so I'm inclined to go for\n> the more restrictive option. 
But this is a really weakly held opinion.\n\nI went back and forth on this and ended up with what you propose here.\nIt's simpler to explain this way.\n\n> > + /* Is the use of a password mandatory? */\n> > + must_use_password = MySubscription->passwordrequired &&\n> > + !superuser_arg(MySubscription->owner);\n>\n> There's a few repetitions of this - perhaps worth putting into a helper?\n\nI don't think so. It's slightly different each time, because it's\npulling data out of different data structures.\n\n> This still leaks the connection on error, no?\n\nI've attempted to fix this in v4, attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Feb 2023 16:56:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 2/6/23 08:22, Robert Haas wrote:\n> I don't think that's quite the right concept. It seems to me that the\n> client is responsible for informing the server of what the situation\n> is, and the server is responsible for deciding whether to allow the\n> connection. In your scenario, the client is not only communicating\n> information (\"here's the password I have got\") but also making demands\n> on the server (\"DO NOT authenticate using anything else\"). I like the\n> first part fine, but not the second part.\n\nFor what it's worth, making a negative demand during authentication is\npretty standard: if you visit example.com and it tells you \"I need your\nOS login password and Social Security Number to authenticate you,\" you\nhave the option of saying \"no thanks\" and closing the tab.\n\nIt's not really about protecting the server at that point; the server\ncan protect itself. It's about protecting *you*. 
Allowing the proxy to\npin a specific set of authentication details to the connection is just a\nway for it to close the tab on a server that would otherwise pull some\nother piece of ambient authority out of it.\n\nIn a hypothetical world where the server presented the client with a\nlist of authentication options before allowing any access, this would\nmaybe be a little less convoluted to solve. For example, a proxy seeing\na SASL list of\n\n- ANONYMOUS\n- EXTERNAL\n\ncould understand that both methods allow the client to assume the\nauthority of the proxy itself. So if its client isn't allowed to do\nthat, the proxy realizes something is wrong (either it, or its target\nserver, has been misconfigured or is under attack), and it can close the\nconnection *before* the server runs login triggers.\n\n> In this kind of scenario, the client has no business\n> demanding that the server authenticate using the password rather than\n> anything else. The server, not the client, is in charge of deciding\n> which connections to accept; the client's job is only to decide which\n> connections to proxy.\n\nThis sounds like a reasonable separation of responsibilities on the\nsurface, but I think it's subtly off. The entire confused-deputy problem\nspace revolves around the proxy being unable to correctly decide which\nconnections to allow unless it also knows why the connections are being\nauthorized.\n\nYou've constructed an example where that's not a concern: everything's\nsymmetrical, all proxies operate with the same authority, and internal\nusers are identical to external users. But the CVE that led to the\npassword requirement, as far as I can tell, dealt with asymmetry. 
The\nproxy had the authority to connect locally to a user, and the clients\nhad the authority to connect to other machines' users, but those users\nweren't the same and were not mutually trusting.\n\n> And the human being is responsible for making\n> sure that the combination of those two things implements the intended\n> security policy.\n\nSure, but upthread it was illustrated how difficult it is for even the\npeople implementing the protocol to reason through what's safe and\nwhat's not.\n\nThe primitives we're providing in the protocol are, IMO, difficult to\nwield safely for more complex use cases. We can provide mitigations, and\ndemand that the DBA reason through every combination, and tell them\n\"don't do that\" when they screw up or come across a situation that our\nmitigations can't paper over. But I think we can solve the root problem\ninstead.\n\n> We\n> could do something that looks more like a series of if-then rules,\n> e.g.\n> \n> target-host 127.0.0.0/8 => reject\n> authentication-method scram => accept\n> reject\n\nYeah, I think something based on allow/deny is going to be the most\nintuitive.\n\n> But it's only a hop, skip and a jump from there to something that\n> looks an awful lot like a full-blown programing language, and maybe\n> that's even the right idea, but, oh, the bike-shedding!\n\nEh. Someone will demand Turing-completeness eventually, but you don't\nhave to listen. :D\n\n--Jacob\n\n\n", "msg_date": "Thu, 9 Feb 2023 13:46:04 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Mon, 2023-02-06 at 14:40 -0500, Robert Haas wrote:\n> On Mon, Feb 6, 2023 at 2:18 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > It's decidedly not great, yes. I don't know if it's quite a CVE\n> > type issue,\n> > after all, the same is true for any other type of query the\n> > superuser\n> > executes. 
But at the very least the documentation needs to be\n> > better, with a\n> > big red box making sure the admin is aware of the problem.\n> \n> I don't think that's the same thing at all. A superuser executing a\n> query interactively can indeed cause all sorts of bad things to\n> happen, but you don't have to log in as superuser and run DML queries\n> on tables owned by unprivileged users, and you shouldn't.\n\nThere are two questions:\n\n1. Is the security situation with logical replication bad? Yes. You\nnicely summarized just how bad.\n\n2. Is it the same situation as accessing a table owned by a user you\ndon't absolutely trust? \n\nRegardless of how the second question is answered, it won't diminish\nyour point that logical replication is in a bad state. If another\nsituation is also bad, we should fix that too.\n\nAnd I think the DML situation is really bad, too. Anyone reading our\ndocumentation would find extensive explanations about GRANT/REVOKE, and\npuzzle over the fine details of exactly how much they trust user foo.\nDo I trust foo enough for WITH GRANT OPTION? Does foo really need to\nsee all of the columns of this table, or just a subset?\n\nBut there's no obvious mention that user foo must trust you absolutely\nin order to exercise the GRANT at all, because you (as table owner) can\ntrivially cause foo to execute arbitrary code. There's no warning or\nhint or suggestion at runtime to know that you are about to execute\nsomeone else's code with your privileges or that it might be dangerous.\n\nIt gets worse. Let's say that user foo figures that out, and they're\nextra cautious to SET SESSION AUTHORIZATION or SET ROLE to drop their\nprivileges before accessing a table. No good: the table owner can just\ncraft their arbitrary code with a \"RESET SESSION AUTHORIZATION\" or a\n\"RESET ROLE\" at the top, and the code will still execute with the\nprivileges of user foo.\n\nSo I don't think \"shouldn't\" is quite good enough. 
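To make the hazard concrete, here is a minimal sketch of that escape (all of the role, table, and function names here are invented for illustration):

```sql
-- The table owner's side: an ordinary-looking trigger. Its function is
-- SECURITY INVOKER by default, so the body runs with the caller's privileges.
CREATE FUNCTION sneaky_audit() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    RESET ROLE;  -- undoes the caller's careful SET ROLE, first thing
    -- ...arbitrary statements now run with foo's full privileges...
    RETURN NEW;
END;
$$;
CREATE TRIGGER t BEFORE INSERT ON owned_table
    FOR EACH ROW EXECUTE FUNCTION sneaky_audit();

-- foo's side:
SET ROLE limited_role;               -- foo tries to drop privileges
INSERT INTO owned_table VALUES (1);  -- trigger fires and regains them
```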
In the first place,\nthe user needs to know that the risk exists. Second, what if they\nactually do want to access a table owned by someone else for whatever\nreason -- how do they do that safely?\n\nI can't resist mentioning that these are all SECURITY INVOKER problems.\nSECURITY INVOKER is insecure unless the invoker absolutely trusts the\ndefiner, and that only really makes sense if the definer is a superuser\n(or something very close). That's why we keep adding exceptions with\nSECURITY_RESTRICTED_OPERATION, which is really just a way to silently\nignore the SECURITY INVOKER label and use SECURITY DEFINER instead.\n\nAt some point we need to ask: \"when is SECURITY INVOKER both safe and\nuseful?\" and contain it to those cases, rather than silently ignoring\nit in an expanding list of cases.\n\nI know that the response here is that SECURITY DEFINER is somehow\nworse. Maybe for superuser-defined functions, it is. But basically, the\nproblems with SECURITY DEFINER all amount to \"the author of the code\nneeds to be careful\", which is a lot more intuitive than the problems\nwith SECURITY INVOKER.\n\nAnother option is having some kind SECURITY NONE that would run the\ncode as a very limited-privilege user that can basically only access\nthe catalog. That would be useful for running default expressions and\nthe like without the definer or invoker needing to be careful.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 09:18:34 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Feb 22, 2023, at 9:18 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> Another option is having some kind SECURITY NONE that would run the\n> code as a very limited-privilege user that can basically only access\n> the catalog. 
That would be useful for running default expressions and\n> the like without the definer or invoker needing to be careful.\n\nAnother option is to execute under the intersection of their privileges, where both the definer and the invoker need the privileges in order for the action to succeed. That would be more permissive than the proposed SECURITY NONE, while still preventing either party from hijacking privileges of the other.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 09:27:19 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-02-22 at 09:27 -0800, Mark Dilger wrote:\n> Another option is to execute under the intersection of their\n> privileges, where both the definer and the invoker need the\n> privileges in order for the action to succeed.  That would be more\n> permissive than the proposed SECURITY NONE, while still preventing\n> either party from hijacking privileges of the other.\n\nInteresting idea, I haven't heard of something like that being done\nbefore. Is there some precedent for that or a use case where it's\nhelpful?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 10:49:42 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "\n\n> On Feb 22, 2023, at 10:49 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2023-02-22 at 09:27 -0800, Mark Dilger wrote:\n>> Another option is to execute under the intersection of their\n>> privileges, where both the definer and the invoker need the\n>> privileges in order for the action to succeed. 
That would be more\n>> permissive than the proposed SECURITY NONE, while still preventing\n>> either party from hijacking privileges of the other.\n> \n> Interesting idea, I haven't heard of something like that being done\n> before. Is there some precedent for that or a use case where it's\n> helpful?\n\nNo current use case comes to mind, but I proposed it for event triggers one or two development cycles ago, to allow for non-superuser event trigger owners. The problems associated with allowing non-superusers to create and own event triggers were pretty similar to the problems being discussed in this thread.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 11:12:05 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 2/22/23 14:12, Mark Dilger wrote:\n>> On Feb 22, 2023, at 10:49 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n>> On Wed, 2023-02-22 at 09:27 -0800, Mark Dilger wrote:\n>>> Another option is to execute under the intersection of their\n>>> privileges, where both the definer and the invoker need the\n>>> privileges in order for the action to succeed. That would be more\n>>> permissive than the proposed SECURITY NONE, while still preventing\n>>> either party from hijacking privileges of the other.\n>> \n>> Interesting idea, I haven't heard of something like that being done\n>> before. Is there some precedent for that or a use case where it's\n>> helpful?\n> > No current use case comes to mind, but I proposed it for event\n> triggers one or two development cycles ago, to allow for\n> non-superuser event trigger owners. 
The problems associated with\n> allowing non-superusers to create and own event triggers were pretty\n> similar to the problems being discussed in this thread.\n\n\nThe intersection of privileges is used, for example, in multi-level \nsecurity contexts where the intersection of the network-allowed levels \nand the subject allowed levels is used to bracket what can be accessed \nand how.\n\nOther examples I found with a quick search:\n\nhttps://docs.oracle.com/javase/8/docs/api/java/security/AccessController.html#doPrivileged-java.security.PrivilegedAction-java.security.AccessControlContext-\n\nhttps://learn.microsoft.com/en-us/dotnet/api/system.security.permissions.dataprotectionpermission.intersect?view=dotnet-plat-ext-7.0\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 15:25:44 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Feb 22, 2023 at 12:18 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> There are two questions:\n>\n> 1. Is the security situation with logical replication bad? Yes. You\n> nicely summarized just how bad.\n>\n> 2. Is it the same situation as accessing a table owned by a user you\n> don't absolutely trust?\n>\n> Regardless of how the second question is answered, it won't diminish\n> your point that logical replication is in a bad state. If another\n> situation is also bad, we should fix that too.\n\nWell said.\n\n> So I don't think \"shouldn't\" is quite good enough. In the first place,\n> the user needs to know that the risk exists. Second, what if they\n> actually do want to access a table owned by someone else for whatever\n> reason -- how do they do that safely?\n\nGood question. 
I don't think we currently have a good answer.\n\n> I can't resist mentioning that these are all SECURITY INVOKER problems.\n> SECURITY INVOKER is insecure unless the invoker absolutely trusts the\n> definer, and that only really makes sense if the definer is a superuser\n> (or something very close). That's why we keep adding exceptions with\n> SECURITY_RESTRICTED_OPERATION, which is really just a way to silently\n> ignore the SECURITY INVOKER label and use SECURITY DEFINER instead.\n\nThat's an interesting way to look at it. I think there are perhaps two\ndifferent possible perspectives here. One possibility is to take the\nview that you've adopted here, and blame it on SECURITY INVOKER. The\nother possibility, at least as I see it, is to blame it on the fact\nthat we have so many places to attach executable code to tables and\nvery few ways for people using those tables to limit their exposure to\nsuch code. Suppose Alice owns a table and attaches a trigger to it. If\nBob inserts into that table, I think we have to run the trigger,\nbecause Alice is entitled to assume that, for example, any BEFORE\ntriggers she might have defined that block certain kinds of inserts\nare actually going to block those inserts; any constraints that she\nhas applied to the table are going to be enforced against all new\nrows; and any default expressions she supplies are actually going to\nwork. I think Bob has to be OK with those things too; otherwise, he\njust shouldn't insert anything into the table.\n\nBut Bob doesn't have to be OK with Alice's code changing the session\nstate, or executing DML or DDL with his permissions. I wonder if\nthat's where we should be trying to insert restrictions here. Right\nnow, we think of SECURITY_RESTRICTED_OPERATION as a way to prevent a\nfunction or procedure that runs under a different user ID than the\nsession user from poisoning the session state. 
But I'm thinking that\nmaybe the problem isn't really with code running under a different\nuser ID. It's with running code *provided by* a different user ID.\nMaybe we should stop thinking about the security context as something\nthat you set when you switch to running as a different user ID, and\nstart thinking about it as something that needs to be set based on the\nrelationship between the user that provided the code and the session\nuser. If they're not the same, some restrictions are probably\nappropriate, except I think in the case where the user who provided\nthe code can become the session user anyway.\n\n> Another option is having some kind SECURITY NONE that would run the\n> code as a very limited-privilege user that can basically only access\n> the catalog. That would be useful for running default expressions and\n> the like without the definer or invoker needing to be careful.\n\nThis might be possible, but I have some doubts about how difficult it\nwould be to get all the details right. We'd need to make sure that\nthis limited-privilege user couldn't ever create a table, or own one,\nor be granted any privileges to do anything other than the minimal set\nof things it's supposed to be able to do, or poison the session state,\netc. And it would have weird results like current_user returning the\nname of the limited-privilege user rather than any of the users\ninvolved in the operation. Maybe that's all OK, but I find it more\nappealing to try to think about what kinds of operations can be\nperformed in what contexts than to invent entirely new users.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 10:45:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2023-02-27 at 10:45 -0500, Robert Haas wrote:\n\n> Suppose Alice owns a table and attaches a trigger to it. 
If\n> Bob inserts into that table, I think we have to run the trigger,\n> because Alice is entitled to assume that, for example, any BEFORE\n> triggers she might have defined that block certain kinds of inserts\n> are actually going to block those inserts; any constraints that she\n> has applied to the table are going to be enforced against all new\n> rows; and any default expressions she supplies are actually going to\n> work.\n\nTrue, but I still find myself suspending my disbelief. Which of these\nuse cases make sense for SECURITY INVOKER?\n\n> I think Bob has to be OK with those things too; otherwise, he\n> just shouldn't insert anything into the table.\n\nRight, but why should Bob's privileges be needed to do any of those\nthings? Any difference in privileges, for those use cases, could only\neither get in the way of achieving Alice's goals, or cause a security\nproblem for Bob.\n\n> But Bob doesn't have to be OK with Alice's code changing the session\n> state, or executing DML or DDL with his permissions.\n\nWhat's left? Should Bob be OK with Alice's code using his permissions\nfor anything?\n\n> I wonder if\n> that's where we should be trying to insert restrictions here. Right\n> now, we think of SECURITY_RESTRICTED_OPERATION as a way to prevent a\n> function or procedure that runs under a different user ID than the\n> session user from poisoning the session state. But I'm thinking that\n> maybe the problem isn't really with code running under a different\n> user ID. It's with running code *provided by* a different user ID.\n> Maybe we should stop thinking about the security context as something\n> that you set when you switch to running as a different user ID, and\n> start thinking about it as something that needs to be set based on\n> the\n> relationship between the user that provided the code and the session\n> user. 
If they're not the same, some restrictions are probably\n> appropriate, except I think in the case where the user who provided\n> the code can become the session user anyway.\n\nI think you are saying that we should still run Alice's code with the\nprivileges of Bob, but somehow make that safe(r) for Bob. Is that\nright?\n\nThat sounds hard, and I'm still stuck at the \"why\" question. Why do we\nwant to run Alice's code with Bob's permissions?\n\nThe answers I have so far are abstract. For instance, maybe it's a\nclever SRF that takes table names as inputs and you want people to only\nbe able to use the clever SRF with tables they have privileges on. But\nthat's not what most functions do, and it's certainly not what most\ndefault expressions do.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 10:25:29 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Mon, 2023-02-27 at 10:45 -0500, Robert Haas wrote:\n> > Suppose Alice owns a table and attaches a trigger to it. If\n> > Bob inserts into that table, I think we have to run the trigger,\n> > because Alice is entitled to assume that, for example, any BEFORE\n> > triggers she might have defined that block certain kinds of inserts\n> > are actually going to block those inserts; any constraints that she\n> > has applied to the table are going to be enforced against all new\n> > rows; and any default expressions she supplies are actually going to\n> > work.\n> \n> True, but I still find myself suspending my disbelief. 
Which of these\n> use cases make sense for SECURITY INVOKER?\n\nI do think there are some use-cases for it, but agree that it'd be\nbetter to encourage more use of SECURITY DEFINER and one approach to\nthat might be to have a way for users to explicitly say \"don't run code\nthat isn't mine or a superuser's with my privileges.\" Of course, we\nneed to make sure it's possible to write safe SECURITY DEFINER functions\nand to be clear about how to do that to avoid the risk in the other\ndirection. We also need to provide some additional functions along the\nlines of \"calling_role()\" or similar (so that the function can know who\nthe actual role is that's running the trigger) for the common case of\nauditing or needing to know the calling role for RLS or similar.\n\nI don't think we'd be able to get away with just getting rid of SECURITY\nINVOKER entirely or even in changing the current way triggers (or\nfunctions in views, etc) are run by default.\n\n> > I think Bob has to be OK with those things too; otherwise, he\n> > just shouldn't insert anything into the table.\n> \n> Right, but why should Bob's privileges be needed to do any of those\n> things? Any difference in privileges, for those use cases, could only\n> either get in the way of achieving Alice's goals, or cause a security\n> problem for Bob.\n> \n> > But Bob doesn't have to be OK with Alice's code changing the session\n> > state, or executing DML or DDL with his permissions.\n> \n> What's left? Should Bob be OK with Alice's code using his permissions\n> for anything?\n\nI don't know about trying to define that X things are ok and Y things\nare not, that seems like it would be more confusing and difficult to\nwork with. Regular SELECT queries that pull data that Bob has access to\nbut Alice doesn't are a security issue too, were Alice to install a\nfunction that Bob calls which writes that data into a place that Alice\ncould then access. 
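Roughly this shape, to make it concrete (the object names are invented for illustration):

```sql
-- Alice's function; SECURITY INVOKER is the default, so when Bob calls
-- it, the body runs with Bob's read privileges:
CREATE FUNCTION alice_helper() RETURNS void LANGUAGE sql AS $$
    INSERT INTO alice_scratch        -- a table Alice can read afterwards
    SELECT * FROM bob_private_data;  -- readable only with Bob's privileges
$$;
-- The moment Bob runs SELECT alice_helper(), his access does the
-- copying for Alice.
```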
Perhaps if we could allow Bob to say \"these\nthings are ok for Alice's code to access\" then it could work ... but if\nthat's what is going on then the code could run with Alice's permissions\nand Bob could use our nice and granular GRANT/RLS system to say what\nAlice is allowed to access.\n\n> > I wonder if\n> > that's where we should be trying to insert restrictions here. Right\n> > now, we think of SECURITY_RESTRICTED_OPERATION as a way to prevent a\n> > function or procedure that runs under a different user ID than the\n> > session user from poisoning the session state. But I'm thinking that\n> > maybe the problem isn't really with code running under a different\n> > user ID. It's with running code *provided by* a different user ID.\n> > Maybe we should stop thinking about the security context as something\n> > that you set when you switch to running as a different user ID, and\n> > start thinking about it as something that needs to be set based on\n> > the\n> > relationship between the user that provided the code and the session\n> > user. If they're not the same, some restrictions are probably\n> > appropriate, except I think in the case where the user who provided\n> > the code can become the session user anyway.\n> \n> I think you are saying that we should still run Alice's code with the\n> privileges of Bob, but somehow make that safe(r) for Bob. Is that\n> right?\n> \n> That sounds hard, and I'm still stuck at the \"why\" question. Why do we\n> want to run Alice's code with Bob's permissions?\n> \n> The answers I have so far are abstract. For instance, maybe it's a\n> clever SRF that takes table names as inputs and you want people to only\n> be able to use the clever SRF with tables they have privileges on. But\n> that's not what most functions do, and it's certainly not what most\n> default expressions do.\n\ncurrent_role / current_user are certainly common as a default\nexpression. 
I agree that that's more of an edge case that would be nice\nto solve in a different way though. I do think there's some other use\ncases for SECURITY INVOKER but not enough folks understand the security\nrisk associated with it and it'd be good for us to improve on that\nsituation.\n\nThanks,\n\nStephen", "msg_date": "Mon, 27 Feb 2023 14:10:02 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Feb 27, 2023 at 1:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I think you are saying that we should still run Alice's code with the\n> privileges of Bob, but somehow make that safe(r) for Bob. Is that\n> right?\n\nYeah. That's the idea I was floating, at least.\n\n> That sounds hard, and I'm still stuck at the \"why\" question. Why do we\n> want to run Alice's code with Bob's permissions?\n>\n> The answers I have so far are abstract. For instance, maybe it's a\n> clever SRF that takes table names as inputs and you want people to only\n> be able to use the clever SRF with tables they have privileges on. But\n> that's not what most functions do, and it's certainly not what most\n> default expressions do.\n\nI guess I have a pretty hard time imagining that we can just\nobliterate SECURITY INVOKER entirely. It seems fundamentally\nreasonable to me that Alice might want to make some code available to\nbe executed in the form of a function or procedure but without\noffering to execute it with her own privileges. But I think maybe\nyou're asking a different question, which is whether when the code is\nattached to a table we ought to categorically switch to the table\nowner before executing it. I'm less sure about the answer to that\nquestion. 
We already take the position that VACUUM always runs as the\ntable owner, and while VACUUM runs index expressions but not for\nexample triggers, why not just be consistent and run all code that is\ntied to the table as the table owner, all the time?\n\nMaybe that's the right thing to do, but I think it would inevitably\nbreak some things for some users. Alice might choose to write her\ntriggers or default expressions in ways that rely on them running with\nBob's permissions in any number of ways. For instance, maybe those\nfunctions issue a SELECT query against an RLS-enabled table, such that\nthe answer depends on whose privileges are used to run the query. More\nsimply, she might refer to CURRENT_ROLE, say to record who inserted\nany particular row into her table, which seems like a totally\nreasonable thing to want to do. If she was feeling really clever, she\nmight even have designed queries that she's using inside those\ntriggers or default expressions to fail if Bob doesn't have enough\npermissions to do some particular modification that he's attempting,\nand thus block certain kinds of access to her own tables. That would\nbe pretty weird and perhaps too clever by half, but the point is that\nthe current behavior is probably known to many, many users and we\nreally can't know what they've done that depends on that. If we change\nany behavior here, some people are going to notice those changes, and\nthey may not like them.\n\nTo put that another way, we're not talking about a black-and-white\nsecurity vulnerability here, like a buffer overrun that allows for\narbitrary code execution. We're talking about a set of semantics that\nseem to be somewhat fragile and vulnerable to spawning security\nproblems. Nobody wants those security problems, for sure. 
But it\ndoesn't follow that nobody is relying on the semantics.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 16:13:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2023-02-27 at 14:10 -0500, Stephen Frost wrote:\n> I do think there are some use-cases for it, but agree that it'd be\n> better to encourage more use of SECURITY DEFINER and one approach to\n> that might be to have a way for users to explicitly say \"don't run\n> code\n> that isn't mine or a superuser's with my privileges.\" \n\nI tried that:\n\nhttps://www.postgresql.org/message-id/75b0dbb55e9febea54c441efff8012a6d2cb5bd7.camel@j-davis.com\n\nbut Andres pointed out some problems with my implementation. They\ndidn't seem easily fixable, but perhaps with more effort it could work\n(run all the expressions as security definer, as well?).\n\n> Of course, we\n> need to make sure it's possible to write safe SECURITY DEFINER\n> functions\n> and to be clear about how to do that to avoid the risk in the other\n> direction.\n\nAgreed. Perhaps we can force search_path to be set for SECURITY\nDEFINER, and require that the temp schema be explicitly included rather\nthan the current \"must be at the end\". We could also provide a way to\nturn public access off in the same statement, so that you don't need to\nuse a transaction block to keep the function private.\n\n> I don't think we'd be able to get away with just getting rid of\n> SECURITY\n> INVOKER entirely or even in changing the current way triggers (or\n> functions in views, etc) are run by default.\n\nI didn't propose anything radical. 
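For comparison, the safe pattern the documentation steers people toward today looks roughly like this (names invented), with the lock-down spread across several statements:

```sql
BEGIN;
CREATE FUNCTION lookup_secret(id int) RETURNS text
LANGUAGE sql
SECURITY DEFINER
SET search_path = pg_catalog, pg_temp  -- temp schema pinned at the end
AS $$ SELECT s.val FROM private.secrets s WHERE s.id = lookup_secret.id $$;

REVOKE EXECUTE ON FUNCTION lookup_secret(int) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION lookup_secret(int) TO trusted_app;
COMMIT;  -- the transaction block keeps the function private throughout
```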
I'm just trying to get some\nagreement that SECURITY INVOKER is central to a lot of our security\nwoes, and that we should be treating it with skepticism on a\nfundamental level.\n\nIndividual proposals for how to get away from SECURITY INVOKER should\nbe evaluated on their merits (i.e. don't break a bunch of stuff).\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 16:03:15 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2023-02-27 at 16:13 -0500, Robert Haas wrote:\n> On Mon, Feb 27, 2023 at 1:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I think you are saying that we should still run Alice's code with\n> > the\n> > privileges of Bob, but somehow make that safe(r) for Bob. Is that\n> > right?\n> \n> Yeah. That's the idea I was floating, at least.\n\nIsn't that a hard problem; maybe impossible?\n\n> \n> I guess I have a pretty hard time imagining that we can just\n> obliterate SECURITY INVOKER entirely.\n\nOf course not.\n\n> It seems fundamentally\n> reasonable to me that Alice might want to make some code available to\n> be executed in the form of a function or procedure but without\n> offering to execute it with her own privileges.\n\nIt also seems fundamentally reasonable that if someone grants you\nprivileges on one of their tables, it might be safe to access it.\n\nI'm sure there are a few use cases for SECURITY INVOKER, but they are\nquite narrow.\n\nPerhaps most frustratingly, even if none of the users on a system has\nany use case for SECURITY INVOKER, they still must all live in fear of\naccessing each others' tables, because at any time a SECURITY INVOKER\nfunction could be attached to one of the tables.\n\nI feel like we are giving up mainstream utility and safety in exchange\nfor contrived or exceptional cases. 
That's not a good trade.\n\n> We already take the position that VACUUM always runs as the\n> table owner, and while VACUUM runs index expressions but not for\n> example triggers, why not just be consistent and run all code that is\n> tied to the table as the table owner, all the time?\n\nI'd also extend this to default expressions and other code that can be\nexecuted implicitly.\n\n> Maybe that's the right thing to do\n\nIf it's the right place to go, then I think we should consider\nreasonable steps to take in that direction that don't cause unnecessary\nbreakage.\n\n> , but I think it would inevitably\n> break some things for some users.\n\nNot all steps would be breaking changes, and a lot of those steps are\nthings we should do anyway. We could make it easier to write safe\nSECURITY DEFINER functions, provide more tools for users to opt-out of\nexecuting SECURITY INVOKER code, provide a way for superusers to safely\ndrop privileges, document the problems with security invoker and what\nto do about them, etc.\n\n> Alice might choose to write her\n> triggers or default expressions in ways that rely on them running\n> with\n> Bob's permissions in any number of ways.\n\nSure, breakage is possible, and we should mitigate it.\n\nBut we also shouldn't exaggerate it -- for instance, others have\nproposed that we run code as the table owner for logical subscriptions,\nand that's going to break things in the same way. 
Arguably, if we are\ngoing to break something, it's better to break it consistently rather\nthan one subsystem at a time.\n\nBack to the $SUBJECT, if we allow non-superusers to run subscriptions,\nand the subscription runs the code as the table owner, that might also\nlead to some weird behavior for triggers that rely on SECURITY INVOKER\nsemantics.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 16:37:26 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Mon, 2023-02-27 at 14:10 -0500, Stephen Frost wrote:\n> > I do think there are some use-cases for it, but agree that it'd be\n> > better to encourage more use of SECURITY DEFINER and one approach to\n> > that might be to have a way for users to explicitly say \"don't run\n> > code\n> > that isn't mine or a superuser's with my privileges.\" \n> \n> I tried that:\n> \n> https://www.postgresql.org/message-id/75b0dbb55e9febea54c441efff8012a6d2cb5bd7.camel@j-davis.com\n> \n> but Andres pointed out some problems with my implementation. They\n> didn't seem easily fixable, but perhaps with more effort it could work\n> (run all the expressions as security definer, as well?).\n\nPresumably. Ultimately, I tend to agree it won't be easy. That doesn't\nmean it's not a worthwhile effort.\n\n> > Of course, we\n> > need to make sure it's possible to write safe SECURITY DEFINER\n> > functions\n> > and to be clear about how to do that to avoid the risk in the other\n> > direction.\n> \n> Agreed. Perhaps we can force search_path to be set for SECURITY\n> DEFINER, and require that the temp schema be explicitly included rather\n> than the current \"must be at the end\". 
We could also provide a way to\n> turn public access off in the same statement, so that you don't need to\n> use a transaction block to keep the function private.\n\nWe do pretty strongly encourage a search_path setting for SECURITY\nDEFINER today.. That said, I'm not against pushing on that harder. The\nissue about temporary schemas is a more difficult issue... but frankly,\nI'd like an option to say \"no temporary schemas should be allowed in my\nsearch path\" when it comes to a security definer function.\n\n> > I don't think we'd be able to get away with just getting rid of\n> > SECURITY\n> > INVOKER entirely or even in changing the current way triggers (or\n> > functions in views, etc) are run by default.\n> \n> I didn't propose anything radical. I'm just trying to get some\n> agreement that SECURITY INVOKER is central to a lot of our security\n> woes, and that we should be treating it with skepticism on a\n> fundamental level.\n\nSure, but if we want to make progress then we have to provide a\ndirection for folks to go in that's both secure and convenient.\n\n> Individual proposals for how to get away from SECURITY INVOKER should\n> be evaluated on their merits (i.e. don't break a bunch of stuff).\n\nOf course. That said ... we don't want to spend a lot of time\ngoing in a direction that won't bear fruit; I'm hopeful that this\ndirection will though.\n\nThanks,\n\nStephen", "msg_date": "Mon, 27 Feb 2023 21:31:47 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> Not all steps would be breaking changes, and a lot of those steps are\n> things we should do anyway. 
We could make it easier to write safe\n> SECURITY DEFINER functions, provide more tools for users to opt-out of\n> executing SECURITY INVOKER code, provide a way for superusers to safely\n> drop privileges, document the problems with security invoker and what\n> to do about them, etc.\n\nAgreed.\n\n> But we also shouldn't exaggerate it -- for instance, others have\n> proposed that we run code as the table owner for logical subscriptions,\n> and that's going to break things in the same way. Arguably, if we are\n> going to break something, it's better to break it consistently rather\n> than one subsystem at a time.\n\nI tend to agree with this.\n\n> Back to the $SUBJECT, if we allow non-superusers to run subscriptions,\n> and the subscription runs the code as the table owner, that might also\n> lead to some weird behavior for triggers that rely on SECURITY INVOKER\n> semantics.\n\nIndeed.\n\nThanks,\n\nStephen", "msg_date": "Mon, 27 Feb 2023 21:38:35 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Feb 27, 2023 at 7:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Yeah. That's the idea I was floating, at least.\n>\n> Isn't that a hard problem; maybe impossible?\n\nIt doesn't seem that hard to me; maybe I'm missing something.\n\nThe existing SECURITY_RESTRICTED_OPERATION flag basically prevents you\nfrom tinkering with the session state. If we also had similar flags\nlike DATABASE_READS_PROHIBITED and DATABASE_WRITES_PROHIBITED (or just\na combined DATABASE_ACCESS_PROHIBITED flag) I think that would be\npretty close to what we need. 
The idea would be that, when a user\nexecutes a function or procedure owned by a user that they don't trust\ncompletely, we'd set\nSECURITY_RESTRICTED_OPERATION|DATABASE_READS_PROHIBITED|DATABASE_WRITES_PROHIBITED.\nAnd we could provide a user with a way to express the degree of trust\nthey have in some other user or perhaps even some specific function,\ne.g.\n\nSET trusted_roles='alice:read';\n\n...could mean that I trust alice to read from the database with my\npermissions, should I happen to run code provided by her in SECURITY\nINVOKER mode.\n\nI'm sure there's some details to sort out here, e.g. around security\nrelated to the trusted_roles GUC itself. But I don't really see a\nfundamental problem. We can invent arbitrary flags that prohibit\nclasses of operations that are of concern, set them by default in\ncases where concern is justified, and then give users who want the\ncurrent behavior some kind of escape hatch that causes those flags to\nnot get set after all. Not only does such a solution not seem\nimpossible, I can possibly even imagine back-patching it, depending on\nexactly what the shape of the final solution is, how important we\nthink it is to get a fix out there, and how brave I'm feeling that\nday.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Feb 2023 08:37:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-02-22 09:18:34 -0800, Jeff Davis wrote:\n> I can't resist mentioning that these are all SECURITY INVOKER problems.\n> SECURITY INVOKER is insecure unless the invoker absolutely trusts the\n> definer, and that only really makes sense if the definer is a superuser\n> (or something very close). 
That's why we keep adding exceptions with\n> SECURITY_RESTRICTED_OPERATION, which is really just a way to silently\n> ignore the SECURITY INVOKER label and use SECURITY DEFINER instead.\n> \n> At some point we need to ask: \"when is SECURITY INVOKER both safe and\n> useful?\" and contain it to those cases, rather than silently ignoring\n> it in an expanding list of cases.\n\nI can only repeat myself in stating that SECURITY DEFINER solves none of the\nrelevant issues. I included several examples of why it doesn't in the recent\nthread about \"blocking SECURITY INVOKER\". E.g. that default arguments of\nSECDEF functions are evaluated with the current user's privileges, not the\nfunction owner's privs:\n\nhttps://postgr.es/m/20230113032943.iyxdu7bnxe4cmbld%40awork3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Feb 2023 11:28:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, 2023-02-28 at 11:28 -0800, Andres Freund wrote:\n> I can only repeat myself in stating that SECURITY DEFINER solves none\n> of the\n> relevant issues. I included several examples of why it doesn't in the\n> recent\n> thread about \"blocking SECURITY INVOKER\". E.g. that default arguments\n> of\n> SECDEF functions are evaluated with the current user's privileges,\n> not the\n> function owner's privs:\n> \n> https://postgr.es/m/20230113032943.iyxdu7bnxe4cmbld%40awork3.anarazel.de\n\nI was speaking a bit loosely, using \"SECURITY DEFINER\" to mean the\nsemantics of executing code as the one who wrote it. 
I didn't\nspecifically mean the function marker, because as you pointed out in\nthe other thread, that's not enough.\n\n From your email it looks like there is still a path forward:\n\n\"The proposal to not trust any expressions controlled by untrusted\nusers at least allows to prevent execution of code, even if it doesn't\nprovide a way to execute the code in a safe manner. Given that we\ndon't have the former, it seems foolish to shoot for the latter.\"\n\nAnd later:\n\n\"I think the combination of\na) a setting that restricts evaluation of any non-trusted expressions,\n independent of the origin\nb) an easy way to execute arbitrary statements within\n SECURITY_RESTRICTED_OPERATION\"\n\nMy takeaway from that thread was that we need a mechanism to deal with\nnon-function code (e.g. default expressions) first; but once we have\nthat, it opens up the design space to better solutions or at least\nmitigations. Is that right?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 28 Feb 2023 12:36:38 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, 2023-02-28 at 08:37 -0500, Robert Haas wrote:\n> The existing SECURITY_RESTRICTED_OPERATION flag basically prevents\n> you\n> from tinkering with the session state.\n\nCurrently, every time we set that flag we also run all the code as the\ntable owner.\n\nYou're suggesting using the SECURITY_RESTRICTED_OPERATION flag, along\nwith the new security flags, but not switch to the table owner, right?\n\n> If we also had a similar flags\n> like DATABASE_READS_PROHIBITED and DATABASE_WRITES_PROHIBITED (or\n> just\n> a combined DATABASE_ACCESS_PROHIBITED flag) I think that would be\n> pretty close to what we need. The idea would be that, when a user\n> executes a function or procedure \n\nOr default expressions, I presume. 
If we at least agree on this point,\nthen I think we should try to find a way to treat these other hunks of\ncode in a secure way (which I think is what Andres was suggesting).\n\n> owned by a user that they don't trust\n> completely, we'd set\n> SECURITY_RESTRICTED_OPERATION|DATABASE_READS_PROHIBITED|DATABASE_WRIT\n> ES_PROHIBITED.\n\nIt seems like you're saying to basically just keep the user ID the\nsame, and maybe keep USAGE privileges, but not be able to do anything\nelse? Might be useful. Kind of like running it as a nobody user but\nwithout the problems you mentioned. Some details to think about, I'm\nsure.\n\n> And we could provide a user with a way to express the degree of trust\n> they have in some other user or perhaps even some specific function,\n> e.g.\n> \n> SET trusted_roles='alice:read';\n> \n> ...could mean that I trust alice to read from the database with my\n> permissions, should I happen to run code provided by her in SECURITY\n> INVOKER mode.\n\nI'm not very excited about inventing a new privilege language inside a\nGUC, but perhaps a simpler form could be a reasonable mitigation (or at\nleast a starting place).\n\n> I'm sure there's some details to sort out here, e.g. around security\n> related to the trusted_roles GUC itself. But I don't really see a\n> fundamental problem. We can invent arbitrary flags that prohibit\n> classes of operations that are of concern, set them by default in\n> cases where concern is justified, and then give users who want the\n> current behavior some kind of escape hatch that causes those flags to\n> not get set after all. 
Not only does such a solution not seem\n> impossible, I can possibly even imagine back-patching it, depending\n> on\n> exactly what the shape of the final solution is, how important we\n> think it is to get a fix out there, and how brave I'm feeling that\n> day.\n\nUnless the trusted roles defaults to '*', then I think it will still\nbreak some things.\n\n\nOne of my key tests for user-facing proposals is whether the\ndocumentation will be reasonable or not. Most of these proposals to\nmake SECURITY INVOKER less bad fail that test.\n\nEach of these ideas and sub-ideas affect the semantics, and should be\ndocumented. But how do we document that some code runs as you, some as\nthe person who wrote it, sometimes we obey SECURITY INVOKER and\nsometimes we ignore it and use DEFINER semantics, some code is outside\na function and always executes as the invoker, some code has some\nsecurity flags, and some code has more security flags, code can change\nbetween the time you look at it and the time it runs, and it's all\nfiltered through GUCs with their own privilege sub-language?\n\nOK, let's assume that we have all of that documented, then how do we\nguide users on what reasonable best practices are for the GUC settings,\netc.? Or do we just say \"this is mechanically how all these parts work,\ngood luck assembling it into a secure system!\". [ Note: I feel like\nthis is the state we are in now. Even if technically we don't have live\nsecurity bugs that I'm aware of, we are setting users up for security\nproblems. ]\n\nOn the other hand, if we focus on executing code as the user who wrote\nit in most places, then the documentation will be something like: \"you\ndefined the table, you wrote the code, it runs as you, here are some\nbest practices for writing secure code\". And we have some different\ndocumentation for writing a cool SECURITY INVOKER function and how to\nget other users to trust you enough to run it. 
That sounds a LOT more\nunderstandable for users.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 28 Feb 2023 13:01:05 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Tue, 2023-02-28 at 08:37 -0500, Robert Haas wrote:\n> > The existing SECURITY_RESTRICTED_OPERATION flag basically prevents\n> > you\n> > from tinkering with the session state.\n> \n> Currently, every time we set that flag we also run all the code as the\n> table owner.\n> \n> You're suggesting using the SECURITY_RESTRICTED_OPERATION flag, along\n> with the new security flags, but not switch to the table owner, right?\n\nI'm having trouble following this too, I have to admit. If we aren't\nchanging who we're running the code under.. but making it so that the\ncode isn't actually able to do anything then that doesn't strike me as\nlikely to actually be useful? Surely things like triggers which are\nused to update another table or insert into another table what happened\non the table with the trigger need to be allowed to modify the database-\nhow do we make that possible while the code runs as the invoker and not\nthe table owner when the table owner is the one who gets to write the\ncode?\n\n> > If we also had a similar flags\n> > like DATABASE_READS_PROHIBITED and DATABASE_WRITES_PROHIBITED (or\n> > just\n> > a combined DATABASE_ACCESS_PROHIBITED flag) I think that would be\n> > pretty close to what we need. The idea would be that, when a user\n> > executes a function or procedure \n> \n> Or default expressions, I presume. 
If we at least agree on this point,\n> then I think we should try to find a way to treat these other hunks of\n> code in a secure way (which I think is what Andres was suggesting).\n\nWould need to apply to functions in views and functions in RLS too,\nalong with default expressions and everything else that could be defined\nby one person and run by another.\n\n> > owned by a user that they don't trust\n> > completely, we'd set\n> > SECURITY_RESTRICTED_OPERATION|DATABASE_READS_PROHIBITED|DATABASE_WRIT\n> > ES_PROHIBITED.\n> \n> It seems like you're saying to basically just keep the user ID the\n> same, and maybe keep USAGE privileges, but not be able to do anything\n> else? Might be useful. Kind of like running it as a nobody user but\n> without the problems you mentioned. Some details to think about, I'm\n> sure.\n\nWhile there's certainly some use-cases where a completely unprivileged\nuser would work, there's certainly an awful lot where it wouldn't.\nHaving that as an option might be interesting for those much more\nlimited use-cases and maybe you could even say \"only run functions which\nare owned by a superuser or X roles\" but it's certainly not a general\nsolution to the problem.\n\n> > And we could provide a user with a way to express the degree of trust\n> > they have in some other user or perhaps even some specific function,\n> > e.g.\n> > \n> > SET trusted_roles='alice:read';\n> > \n> > ...could mean that I trust alice to read from the database with my\n> > permissions, should I happen to run code provided by her in SECURITY\n> > INVOKER mode.\n> \n> I'm not very excited about inventing a new privilege language inside a\n> GUC, but perhaps a simpler form could be a reasonable mitigation (or at\n> least a starting place).\n\nI'm pretty far down the path of \"wow that looks really difficult to work\nwith\", to put it nicely.\n\n> > I'm sure there's some details to sort out here, e.g. around security\n> > related to the trusted_roles GUC itself. 
But I don't really see a\n> > fundamental problem. We can invent arbitrary flags that prohibit\n> > classes of operations that are of concern, set them by default in\n> > cases where concern is justified, and then give users who want the\n> > current behavior some kind of escape hatch that causes those flags to\n> > not get set after all. Not only does such a solution not seem\n> > impossible, I can possibly even imagine back-patching it, depending\n> > on\n> > exactly what the shape of the final solution is, how important we\n> > think it is to get a fix out there, and how brave I'm feeling that\n> > day.\n> \n> Unless the trusted roles defaults to '*', then I think it will still\n> break some things.\n\nDefaulting to an option that is \"don't break anything\" while giving\nusers flexibility to test out other, more secure, options seems like it\nwould be a pretty reasonable way forward, generally. That said.. I\ndon't really think this particular approach ends up being a good\ndirection to go in...\n\n> One of my key tests for user-facing proposals is whether the\n> documentation will be reasonable or not. Most of these proposals to\n> make SECURITY INVOKER less bad fail that test.\n\nand this is certainly a very good point as to why.\n\nThanks,\n\nStephen", "msg_date": "Tue, 28 Feb 2023 21:22:48 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Feb 28, 2023 at 4:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> You're suggesting using the SECURITY_RESTRICTED_OPERATION flag, along\n> with the new security flags, but not switch to the table owner, right?\n\nCorrect.\n\n> Or default expressions, I presume. If we at least agree on this point,\n> then I think we should try to find a way to treat these other hunks of\n> code in a secure way (which I think is what Andres was suggesting).\n\nYeah, or any other expressions. 
Basically impose restrictions when the\nuser running the code is not the same as the user who provided the\ncode.\n\n> It seems like you're saying to basically just keep the user ID the\n> same, and maybe keep USAGE privileges, but not be able to do anything\n> else? Might be useful. Kind of like running it as a nobody user but\n> without the problems you mentioned. Some details to think about, I'm\n> sure.\n\nYep.\n\n> I'm not very excited about inventing a new privilege language inside a\n> GUC, but perhaps a simpler form could be a reasonable mitigation (or at\n> least a starting place).\n\nI'm not very sure about this part, either. I think we need some way of\nshutting off whatever new controls we impose, but the shape of it is\nunclear to me and I think there are a bunch of problems.\n\n> Unless the trusted roles defaults to '*', then I think it will still\n> break some things.\n\nDefinitely. IMHO, it's OK to break some things, certainly in a major\nrelease and maybe even in a minor release. But we don't want to break\nmore things than we really need to break. And as you say, we want the\nrestrictions to be comprehensible.\n\n> Each of these ideas and sub-ideas affect the semantics, and should be\n> documented. But how do we document that some code runs as you, some as\n> the person who wrote it, sometimes we obey SECURITY INVOKER and\n> sometimes we ignore it and use DEFINER semantics, some code is outside\n> a function and always executes as the invoker, some code has some\n> security flags, and some code has more security flags, code can change\n> between the time you look at it and the time it runs, and it's all\n> filtered through GUCs with their own privilege sub-language?\n>\n> OK, let's assume that we have all of that documented, then how do we\n> guide users on what reasonable best practices are for the GUC settings,\n> etc.? Or do we just say \"this is mechanically how all these parts work,\n> good luck assembling it into a secure system!\". 
[ Note: I feel like\n> this is the state we are in now. Even if technically we don't have live\n> security bugs that I'm aware of, we are setting users up for security\n> problems. ]\n>\n> On the other hand, if we focus on executing code as the user who wrote\n> it in most places, then the documentation will be something like: \"you\n> defined the table, you wrote the code, it runs as you, here are some\n> best practices for writing secure code\". And we have some different\n> documentation for writing a cool SECURITY INVOKER function and how to\n> get other users to trust you enough to run it. That sounds a LOT more\n> understandable for users.\n\nWhat I was imagining is that we would document something like: A table\ncan have executable code associated with it in a variety of ways. For\nexample, it can have triggers, default expressions, check constraints,\nor row-level security filters. In most cases, these expressions are\nexecuted with the privileges of the user performing the operation on\nthe table, except when SECURITY DEFINER functions are used. Because\nthese expressions are set by the table owner and executed by the users\naccessing the table, there is a risk that the table owner could\ninclude malicious code that usurps the privileges of the user\naccessing the table. For this reason, these expressions are, by\ndefault, restricted from doing <things>. If you want to allow those\noperations, you can <something>.\n\nI agree that running code as the table owner is helpful in a bunch of\nscenarios, but I also don't think it fixes everything. You earlier\nmentioned that switching to the table owner seems to be just a way of\nturning SECURITY INVOKER into SECURITY DEFINER in random places, or\nmaybe that's not exactly what you said but that's what I took from it.\nAnd I think that's right. 
If we just slather user context switches\neverywhere, I'm not actually very sure that's going to be\ncomprehensible behavior: if my trigger function is SECURITY INVOKER,\nwhy is it getting executed as me, not the inserting user? I also think\nthere are plenty of cases where that could just replace the current\nset of security problems with a new set of security problems. If the\ntrigger function is SECURITY INVOKER, then the user who wrote it\ndoesn't have to worry about securing it against attacks by users\naccessing the table; it's just running with the permissions of the\nuser performing the DML. Maybe there are correctness issues if you\ndon't lock down search_path etc., but there's no security compromise\nbecause there's no user ID switching. As soon as you magically turn\nthat into a SECURITY DEFINER function, you've provided a way for the\nusers performing DML to attack the table owner.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Mar 2023 10:05:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Feb 28, 2023 at 4:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Or default expressions, I presume. If we at least agree on this point,\n> > then I think we should try to find a way to treat these other hunks of\n> > code in a secure way (which I think is what Andres was suggesting).\n> \n> Yeah, or any other expressions. Basically impose restrictions when the\n> user running the code is not the same as the user who provided the\n> code.\n\nWould this have carve-outs for things like \"except if the user providing\nthe code is trusted/superuser\"? 
Seems like that would be necessary for\nthe function to be able to do more-or-less anything, but then I worry\nthat there's superuser-owned code which could leak information or be\nused by a malicious owner as that code would still be running as the\ninvoking user.. Perhaps we could say that the function also has to be\nleakproof, but that isn't quite the same issue and therefore it seems\nlike we'd need to decorate all of the functions with another flag that's\nallowed to be run in this manner.\n\nRandom thought- what if the function has a NOTIFY in it with a payload\nof some kind of sensitive information?\n\n> > Unless the trusted roles defaults to '*', then I think it will still\n> > break some things.\n> \n> Definitely. IMHO, it's OK to break some things, certainly in a major\n> release and maybe even in a minor release. But we don't want to break\n> more things that we really need to break. And as you say, we want the\n> restrictions to be comprehensible.\n\nReally hard to say if whatever this is is OK for back-patching and\nbreaking minor releases without knowing exactly what is getting broken\n... but it'd have to be a very clear edge case of what gets broken for\nit to be sensible for breaking in a minor release without a very clear\nvulnerability or such that's being fixed with a simple work-around.\nJust making all auditing triggers break in a minor release certainly\nwouldn't be acceptable, as an example that I imagine we all agree with.\n\n> > Each of these ideas and sub-ideas affect the semantics, and should be\n> > documented. 
But how do we document that some code runs as you, some as\n> > the person who wrote it, sometimes we obey SECURITY INVOKER and\n> > sometimes we ignore it and use DEFINER semantics, some code is outside\n> > a function and always executes as the invoker, some code has some\n> > security flags, and some code has more security flags, code can change\n> > between the time you look at it and the time it runs, and it's all\n> > filtered through GUCs with their own privilege sub-language?\n> >\n> > OK, let's assume that we have all of that documented, then how do we\n> > guide users on what reasonable best practices are for the GUC settings,\n> > etc.? Or do we just say \"this is mechanically how all these parts work,\n> > good luck assembling it into a secure system!\". [ Note: I feel like\n> > this is the state we are in now. Even if technically we don't have live\n> > security bugs that I'm aware of, we are setting users up for security\n> > problems. ]\n> >\n> > On the other hand, if we focus on executing code as the user who wrote\n> > it in most places, then the documentation will be something like: \"you\n> > defined the table, you wrote the code, it runs as you, here are some\n> > best practices for writing secure code\". And we have some different\n> > documentation for writing a cool SECURITY INVOKER function and how to\n> > get other users to trust you enough to run it. That sounds a LOT more\n> > understandable for users.\n> \n> What I was imagining is that we would document something like: A table\n> can have executable code associated with it in a variety of ways. For\n> example, it can have triggers, default expressions, check constraints,\n> or row-level security filters. In most cases, these expressions are\n> executed with the privileges of the user performing the operation on\n> the table, except when SECURITY DEFINER functions are used. 
Because\n> these expressions are set by the table owner and executed by the users\n> accessing the table, there is a risk that the table owner could\n> include malicious code that usurps the privileges of the user\n> accessing the table. For this reason, these expressions are, by\n> default, restricted from doing <things>. If you want to allow those\n> operations, you can <something>.\n\nWell, one possible answer to 'something' might be 'use SECURITY DEFINER\nfunctions which are owned by a role allowed to do <things>'. Note that\nthat doesn't have to be the table owner though, it could be a much more\nconstrained role. That approach would allow us to leverage the existing\nGRANT/RLS/et al system for what's allowed and avoid having to create new\nthings like a complex permission system inside of a GUC for users to\nhave to understand.\n\n> I agree that running code as the table owner is helpful in a bunch of\n> scenarios, but I also don't think it fixes everything. You earlier\n> mentioned that switching to the table owner seems to be just a way of\n> turning SECURITY INVOKER into SECURITY DEFINER in random places, or\n> maybe that's not exactly what you said but that's what I took from it.\n> And I think that's right. If we just slather user context switches\n> everywhere, I'm not actually very sure that's going to be\n> comprehensible behavior: if my trigger function is SECURITY INVOKER,\n> why is it getting executed as me, not the inserting user? I also think\n> there are plenty of cases where that could just replace the current\n> set of security problems with a new set of security problems. If the\n> trigger function is SECURITY INVOKER, then the user who wrote it\n> doesn't have to worry about securing it against attacks by users\n> accessing the table; it's just running with the permissions of the\n> user performing the DML. 
Maybe there are correctness issues if you\n> don't lock down search_path etc., but there's no security compromise\n> because there's no user ID switching. As soon as you magically turn\n> that into a SECURITY DEFINER function, you've provided a way for the\n> users performing DML to attack the table owner.\n\nI agree that we don't want to just turn \"SECURITY INVOKER function when\nrun as a trigger\" into SECURITY DEFINER, and that SECURITY DEFINER\nfunctions need to be able to be written in a manner that limits the risk\nof them being able to be abused to gain control of the role which owns\nthe function (the latter being something we've worked on but should\ncertainly continue to improve on, independently of any of this..).\n\nAlong the same general vein of \"don't break things\", perhaps an approach\nwould be a GUC that users can enable that says \"don't allow code that\ndoes something dangerous (again, need to figure out how to do that..)\nwhen it's written by someone else to run with my privileges (and\ntherefore isn't a SECURITY DEFINER function)\". The idea here being that\nwe want to encourage users to enable that, maybe we eventually enable it\nby default in a new major version, and push people in the direction of\nwriting secure SECURITY DEFINER functions for the cases where they\nactually need the trigger, or such, to do something beyond whatever we\ndefine as being 'safe'. 
This does still need some supporting functions like 'calling\nrole' or such because there could be many many roles doing an INSERT\ninto a table which runs a trigger and that trigger runs as some other\nrole, but the other role could be one that has more privileges than the\nINSERT'ing role and therefore it needs to implement additional checks on\nthe operation to limit what the INSERT'ing role is allowed to do.\n\nI do worry about asking function authors to effectively rewrite these\nkinds of permission checks and wonder if there's a way we could make it\neasier for them- perhaps a kind of function that's SECURITY DEFINER in\nthat it runs as the owner of the function, but it sets a flag saying\n\"only allow things that the function owner is allowed to do AND the\ncalling user is allowed to do\", similar to the 'intersection of\nprivileges' idea mentioned elsewhere on this thread.\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Mar 2023 10:48:22 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Mar 1, 2023 at 10:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Yeah, or any other expressions. Basically impose restrictions when the\n> > user running the code is not the same as the user who provided the\n> > code.\n>\n> Would this have carve-outs for things like \"except if the user providing\n> the code is trusted/superuser\"? Seems like that would be necessary for\n> the function to be able to do more-or-less anything, but then I worry\n> that there's superuser-owned code which could leak information or be\n> used by a malicious owner as that code would still be running as the\n> invoking user.. 
Perhaps we could say that the function also has to be\n> leakproof, but that isn't quite the same issue and therefore it seems\n> like we'd need to decorate all of the functions with another flag that's\n> allowed to be run in this manner.\n\nYes, I think there can be carve-outs based on the relationship of the\nusers involved -- if the user who provided the code is the superuser\nor some other user who can anyway run whatever they want as the user\nperforming the operation, then there's no point in imposing any\nrestrictions -- and I think there can also be some way of setting\npolicy. I proposed a GUC in an earlier email, and you proposed one\nwith somewhat different semantics in this email, and I'm not sure that\neither of those things in particular is right or that we ought to be\nusing a GUC for this at all. However, there should almost certainly be\nSOME way for the superuser to turn any new restrictions off, and there\nshould probably also be some way for an unprivileged user to say \"you\nknow, I am totally OK with running any code that alice provides --\njust go with it.\"\n\nI don't think we're at a point where we can conclude on what those\nmechanisms should look like just yet, but I think that everyone who\nhas spoken up agrees that they ought to exist, assuming we go in this\ndirection at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Mar 2023 12:33:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-03-01 at 10:05 -0500, Robert Haas wrote:\n> For this reason, these expressions are, by\n> default, restricted from doing <things>.\n\nThe hard part is defining <things> without resorting to a bunch of\nspecial cases, and also in a way that doesn't break a bunch of stuff.\n\n> You earlier\n> mentioned that switching to the table owner seems to be just a way of\n> turning SECURITY INVOKER into 
SECURITY DEFINER in random places, or\n> maybe that's not exactly what you said but that's what I took from\n> it.\n\nYeah, though I glossed over some details, see below.\n\n> If we just slather user context switches\n> everywhere, I'm not actually very sure that's going to be\n> comprehensible behavior: if my trigger function is SECURITY INVOKER,\n> why is it getting executed as me, not the inserting user?\n\nLet's consider other expressions first. The proposal is that all\nexpressions attached to a table should run as the table owner (set\naside compatibility concerns for a moment). If those expressions call a\nSECURITY INVOKER function, the invoker would need to be the table owner\nas well. Users could get confused by that, but I think it's\ndocumentable and understandable; and it's really the only thing that\nmakes sense -- otherwise changing the user is completely useless.\n\nWe should consider triggers as just another expression being executed,\nand the invoker is the table owner. The problem is that it's a little\nannoying to users because they probably defined the function for the\nsole purpose of being a trigger function for a single table, and it\nmight look as though the SECURITY INVOKER label was ignored.\n\nBut there is a difference, which I glossed over before: SECURITY\nINVOKER on a trigger function would still have significance, because\nthe function owner (definer) and table owner (invoker) could still be\ndifferent in the case of a trigger, just like in an expression.\n\nThis goes back to my point that SECURITY INVOKER is more complex for us\nto document and for users to understand. The user *must* understand who\nthe invoker is in various contexts. That's the situation today and\nthere's no escaping it. We aren't making things any worse, at least as\nlong as we can sort out compatibility in a reasonable way.\n\n(Aside: I'm having some serious concerns about how the invoker of a\nfunction called in a view is not the view definer. 
That's another thing\nwe'll need to fix, because it's another way of putting SECURITY INVOKER\ncode in front of someone without them knowing.)\n\n(Aside: We should take a second look at the security invoker views\nbefore we release them. I know that I learned some things during this\ndiscussion and a fresh look might be useful.)\n\n> As soon as you magically turn\n> that into a SECURITY DEFINER function, you've provided a way for the\n> users performing DML to attack the table owner.\n\nI don't think it's magic, as I said above. But I assume that your more\ngeneral point is that if we take some responsibility away from the\ninvoker and place it on the definer, then it creates room for new kinds\nof problems. And I agree.\n\nThe point of moving responsibility to the definer is that the definer\ncan actually do something to protect themselves (nail down search_path,\nrestrict USAGE privs, and avoid dynamic SQL); whereas the invoker is\nnearly helpless.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 01 Mar 2023 10:13:55 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Mar 1, 2023 at 1:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I don't think it's magic, as I said above. But I assume that your more\n> general point is that if we take some responsibility away from the\n> invoker and place it on the definer, then it creates room for new kinds\n> of problems. And I agree.\n>\n> The point of moving responsibility to the definer is that the definer\n> can actually do something to protect themselves (nail down search_path,\n> restrict USAGE privs, and avoid dynamic SQL); whereas the invoker is\n> nearly helpless.\n\nI think there's some truth to that allegation, but I think it largely\ncomes from the fact that we've given very little thought or attention\nto this problem. 
We have a section in the documentation on writing\nSECURITY DEFINER functions safely because we've known for a long time\nthat it's dangerous and we've provided some (imperfect) tools for\ndealing with it, like allowing a SET search_path clause to be attached\nto a function definition. We have no comparable documentation section\nabout SECURITY INVOKER because we haven't historically taken that\nseriously as a security hazard and we have no tools to make it safe.\nBut we could, as with what I'm proposing here, or the user/function\ntrust mechanism previously proposed by Noah, or various other ideas\nthat we might have.\n\nI don't like the idea of saying that we're not going to try to invent\nanything new and just push people into using the stuff we already\nhave. The stuff we have for security SECURITY DEFINER functions is not\nvery good. True, it's better than what we have for protecting against\nthe risks inherent in SECURITY INVOKER, but that's not saying much:\nanything at all is better than nothing. But it's easy to forget a SET\nsearch_path clause on one of your functions, or to include something\nin that search path that's not actually safe, or to have a problem\nthat isn't blocked by just setting search_path. Also, not that it's\nthe most important consideration here, but putting a SET clause on\nyour functions is really kind of expensive if what the function does\nis trivial, which if you're using it in an index expression or a\ndefault expression, will often be the case. I don't want to pretend\nlike I have all the answers here, but I find it really hard to believe\nthat pushing people to do the same kind of nonsense that's currently\nrequired when writing a SECURITY DEFINER function for a lot of their\nother functions as well is going to be a win. I think it will probably\nsuck.\n\nTo be fair, it's possible that there's no solution to this class of\nproblems that *doesn't* suck, but I think we should look a lot harder\nbefore coming to that conclusion. 
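(For concreteness, the existing mitigation being referred to looks roughly like the sketch below; the function, schema, and table names are invented for illustration.)

```sql
-- A SECURITY DEFINER function with search_path nailed down at
-- definition time, so a caller cannot hijack unqualified object
-- references.  Listing pg_temp last keeps temporary objects from
-- capturing lookups.
CREATE FUNCTION audit.log_event(msg text) RETURNS void
LANGUAGE sql
SECURITY DEFINER
SET search_path = audit, pg_catalog, pg_temp
AS $$
    INSERT INTO audit.events (message) VALUES (msg);
$$;
```

The per-call cost mentioned above comes from the fact that a function carrying a SET clause has to save and restore that setting around every invocation, which is noticeable when the function body itself is trivial.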
I've come to agree with your\ncontention that we're not taking the hazards of SECURITY INVOKER\nnearly seriously enough, but I think you're underestimating the\nhazards that SECURITY DEFINER poses, and overestimating how easy it is\nto avoid them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Mar 2023 16:06:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-03-01 at 16:06 -0500, Robert Haas wrote:\n\n> To be fair, it's possible that there's no solution to this class of\n> problems that *doesn't* suck, but I think we should look a lot harder\n> before coming to that conclusion.\n\nFair enough. The situation is bad enough that I'm willing to consider a\npretty wide range of solutions and mitigations that might otherwise be\nunappealing.\n\nI think there might be something promising in your idea to highly\nrestrict the privileges of code attached to a table. A lot of\nexpressions are really simple and don't need much to be both useful and\nsafe. We may not need the exact same solution for both default\nexpressions and triggers. Some details to work through, though.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 01 Mar 2023 14:27:25 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-02-28 12:36:38 -0800, Jeff Davis wrote:\n> On Tue, 2023-02-28 at 11:28 -0800, Andres Freund wrote:\n> > I can only repeat myself in stating that SECURITY DEFINER solves none\n> > of the\n> > relevant issues. I included several examples of why it doesn't in the\n> > recent\n> > thread about \"blocking SECURITY INVOKER\". E.g. 
that default arguments\n> > of\n> > SECDEF functions are evaluated with the current user's privileges,\n> > not the\n> > function owner's privs:\n> > \n> > https://postgr.es/m/20230113032943.iyxdu7bnxe4cmbld%40awork3.anarazel.de\n> \n> I was speaking a bit loosely, using \"SECURITY DEFINER\" to mean the\n> semantics of executing code as the one who wrote it. I didn't\n> specifically mean the function marker, because as you pointed out in\n> the other thread, that's not enough.\n\nOh, ok.\n\n\n> From your email it looks like there is still a path forward:\n> \n> \"The proposal to not trust any expressions controlled by untrusted\n> users at least allows to prevent execution of code, even if it doesn't\n> provide a way to execute the code in a safe manner. Given that we\n> don't have the former, it seems foolish to shoot for the latter.\"\n> \n> And later:\n> \n> \"I think the combination of\n> a) a setting that restricts evaluation of any non-trusted expressions,\n> independent of the origin\n> b) an easy way to execute arbitrary statements within\n> SECURITY_RESTRICTED_OPERATION\"\n> \n> My takeaway from that thread was that we need a mechanism to deal with\n> non-function code (e.g. default expressions) first; but once we have\n> that, it opens up the design space to better solutions or at least\n> mitigations. Is that right?\n\nI doubt it's realistic to change the user for all kinds of expressions\nindividually. A query can involve expressions controlled by many users,\nchanging the current user in a super granular way seems undesirable from a\nperformance and complexity pov.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Mar 2023 16:14:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-02-28 08:37:02 -0500, Robert Haas wrote:\n> On Mon, Feb 27, 2023 at 7:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > > Yeah. 
That's the idea I was floating, at least.\n> >\n> > Isn't that a hard problem; maybe impossible?\n>\n> It doesn't seem that hard to me; maybe I'm missing something.\n>\n> The existing SECURITY_RESTRICTED_OPERATION flag basically prevents you\n> from tinkering with the session state. If we also had a similar flags\n> like DATABASE_READS_PROHIBITED and DATABASE_WRITES_PROHIBITED (or just\n> a combined DATABASE_ACCESS_PROHIBITED flag) I think that would be\n> pretty close to what we need. The idea would be that, when a user\n> executes a function or procedure owned by a user that they don't trust\n> completely, we'd set\n> SECURITY_RESTRICTED_OPERATION|DATABASE_READS_PROHIBITED|DATABASE_WRITES_PROHIBITED.\n> And we could provide a user with a way to express the degree of trust\n> they have in some other user or perhaps even some specific function,\n> e.g.\n\nISTM that this would require annotating most functions in the system. There's\nmany problems besides accessing database contents. Just a few examples:\n\n- dblink functions to access another system / looping back\n- pg_read_file()/pg_file_write() allows almost arbitrary mischief\n- pg_stat_reset[_shared]()\n- creating/dropping logical replication slots\n- use untrusted PL functions\n- many more\n\nA single wrongly annotated function would be sufficient to escape. This\nincludes user defined functions.\n\n\nThis basically proposes that we can implement a safe sandbox for executing\narbitrary code in a privileged context. IMO history suggests that that's a\nhard thing to do.\n\nAm I missing something?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Mar 2023 16:34:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Mar 1, 2023 at 7:34 PM Andres Freund <andres@anarazel.de> wrote:\n> ISTM that this would require annotating most functions in the system. 
There's\n> many problems besides accessing database contents. Just a few examples:\n>\n> - dblink functions to access another system / looping back\n> - pg_read_file()/pg_file_write() allows almost arbitrary mischief\n> - pg_stat_reset[_shared]()\n> - creating/dropping logical replication slots\n> - use untrusted PL functions\n> - many more\n>\n> A single wrongly annotated function would be sufficient to escape. This\n> includes user defined functions.\n>\n> This basically proposes that we can implement a safe sandbox for executing\n> arbitrary code in a privileged context. IMO history suggests that that's a\n> hard thing to do.\n\nYeah, that's true, but I don't think switching users all the time is\ngoing to be great either. And it's not like other people haven't gone\nthis way: that's what plperl (not plperlu) is all about, and\nJavaScript running in your browser, and so on. Those things aren't\nproblem-free, of course, but we're all using them.\n\nWhen I was initially thinking about this, I thought that maybe we\ncould just block access to tables and utility statements. That's got\nproblems in both directions. On the one hand, there are functions like\nthe ones you propose here that have side effects which we might not\nwant to allow, and on the other hand, somebody might have an index\nexpression that does a lookup in a table that they \"never change\". The\nlatter case is problematic for non-security reasons, because there's\nan invisible dump-ordering constraint that must be obeyed for\ndump/restore to work at all, but there's no security issue. Still, I'm\nnot sure this idea is completely dead in the water. It doesn't seem\nunreasonable to me that if you have that kind of case, you have to\nsomehow opt into the behavior: yeah, I know that index functions I'm\nexecuting are going to read from tables, and I consent to that. 
And\nsimilarly, if your index expression calls pg_stat_reset_shared(), that\nprobably ought to be blocked by default too, and if you want to allow\nit, you have to say so. Yes, that does require labelling functions, or\nmaybe putting run-time checks in them:\n\nRequireAvailableCapability(CAP_MODIFY_DATABASE_STATE);\n\nIf that capability isn't available in the present context, the call\nerrors out. That way, it's possible for the required capabilities to\ndepend on the arguments to the function, and we can change markings in\nminor releases without needing catalog changes.\n\nThere's another way of thinking about this problem, which involves\nsupposing that the invoker should only be induced to do things that\nthe definer could also have done. Basically do every privilege check\ntwice, and require that both pass. The problem I have with that is\nthat there are various operations which depend on your identity, not\njust your privileges. For instance, a GRANT statement records a\ngrantor, and a REVOKE statement infers a grantor whose grant is to be\nrevoked. The operation isn't just allowed or disallowed based on who\nperformed it -- it actually does something different depending on who\nperforms it. I believe we have a number of cases like that, and I\nthink that they suggest that that whole model is pretty flawed. Even\nif that were no issue, this also seems extremely complex to implement,\nbecause we have an absolute crap-ton of places that perform privilege\nchecks and getting all of those places to check privileges as a second\nuser seems nightmarish. I also think that it might lead to\nconfusing error messages: alice tried to do X but we're not allowing\nit because bob isn't allowed to do X. Eh, what?
As opposed to the\nsandboxing approach, where I think you get something more like:\n\nERROR: database state cannot be modified now\nDETAIL: The database system is evaluating an index expression.\nHINT: Do $SOMETHING to allow this.\n\nI don't want to press too hard on my idea here. I'm sure it has a\nbunch of problems apart from those already mentioned, and those\nalready mentioned are not trivial. However, I do think there might be\nways to make it work, and I'm not at all convinced that trying to\nswitch users all over the place is going to be be better, either for\nsecurity or usability. Is there some other whole kind of approach we\ncan take here that we haven't discussed yet?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Mar 2023 09:43:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Feb 9, 2023 at 4:46 PM Jacob Champion <jchampion@timescale.com> wrote:\n> On 2/6/23 08:22, Robert Haas wrote:\n> > I don't think that's quite the right concept. It seems to me that the\n> > client is responsible for informing the server of what the situation\n> > is, and the server is responsible for deciding whether to allow the\n> > connection. In your scenario, the client is not only communicating\n> > information (\"here's the password I have got\") but also making demands\n> > on the server (\"DO NOT authenticate using anything else\"). I like the\n> > first part fine, but not the second part.\n>\n> For what it's worth, making a negative demand during authentication is\n> pretty standard: if you visit example.com and it tells you \"I need your\n> OS login password and Social Security Number to authenticate you,\" you\n> have the option of saying \"no thanks\" and closing the tab.\n\nNo, that's the opposite, and exactly the point I'm trying to make. 
In\nthat case, the SERVER says what it's willing to accept, and the CLIENT\ndecides whether or not to provide that. In your proposal, the client\ntells the server which authentication methods to accept.\n\n> In a hypothetical world where the server presented the client with a\n> list of authentication options before allowing any access, this would\n> maybe be a little less convoluted to solve. For example, a proxy seeing\n> a SASL list of\n>\n> - ANONYMOUS\n> - EXTERNAL\n>\n> could understand that both methods allow the client to assume the\n> authority of the proxy itself. So if its client isn't allowed to do\n> that, the proxy realizes something is wrong (either it, or its target\n> server, has been misconfigured or is under attack), and it can close the\n> connection *before* the server runs login triggers.\n\nYep, that totally makes sense to me, but I don't think it's what you proposed.\n\n> This sounds like a reasonable separation of responsibilities on the\n> surface, but I think it's subtly off. The entire confused-deputy problem\n> space revolves around the proxy being unable to correctly decide which\n> connections to allow unless it also knows why the connections are being\n> authorized.\n\nI agree.\n\n> You've constructed an example where that's not a concern: everything's\n> symmetrical, all proxies operate with the same authority, and internal\n> users are identical to external users. But the CVE that led to the\n> password requirement, as far as I can tell, dealt with asymmetry. The\n> proxy had the authority to connect locally to a user, and the clients\n> had the authority to connect to other machines' users, but those users\n> weren't the same and were not mutually trusting.\n\nYeah, agreed. 
So, I think the point here is that the proxy\nconfiguration (and pg_hba.conf) need to be sufficiently powerful that\neach user can permit the things that make sense in their environment\nand block the things that don't.\n\nI don't think we're really very far apart here, but for some reason\nthe terminology seems to be giving us some trouble. Of course, there's\nalso the small problem of actually finding the time to do some\nmeaningful work on this stuff, rather than just talking....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Mar 2023 14:04:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Tue, Mar 7, 2023 at 11:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 9, 2023 at 4:46 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > On 2/6/23 08:22, Robert Haas wrote:\n> > > I don't think that's quite the right concept. It seems to me that the\n> > > client is responsible for informing the server of what the situation\n> > > is, and the server is responsible for deciding whether to allow the\n> > > connection. In your scenario, the client is not only communicating\n> > > information (\"here's the password I have got\") but also making demands\n> > > on the server (\"DO NOT authenticate using anything else\"). I like the\n> > > first part fine, but not the second part.\n> >\n> > For what it's worth, making a negative demand during authentication is\n> > pretty standard: if you visit example.com and it tells you \"I need your\n> > OS login password and Social Security Number to authenticate you,\" you\n> > have the option of saying \"no thanks\" and closing the tab.\n>\n> No, that's the opposite, and exactly the point I'm trying to make. In\n> that case, the SERVER says what it's willing to accept, and the CLIENT\n> decides whether or not to provide that. 
In your proposal, the client\n> tells the server which authentication methods to accept.\n\nAh, that's a (the?) sticking point. In my example, the client doesn't\ntell the server which methods to accept. The client tells the server\nwhich method the *client* has the ability to use. (Or, implicitly,\nwhich methods it refuses to use.)\n\nThat shouldn't lose any power, security-wise, because the server is\nlooking for an intersection of the two sets. And the client already\nhas the power to do that for almost every form of authentication,\nexcept the ambient methods.\n\nI don't think I necessarily like that option better than SASL-style,\nbut hopefully that clarifies it somewhat?\n\n> I don't think we're really very far apart here, but for some reason\n> the terminology seems to be giving us some trouble.\n\nAgreed.\n\n> Of course, there's\n> also the small problem of actually finding the time to do some\n> meaningful work on this stuff, rather than just talking....\n\nAgreed :)\n\n--Jacob\n\n\n", "msg_date": "Wed, 8 Mar 2023 11:30:15 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Wed, Mar 8, 2023 at 2:30 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > No, that's the opposite, and exactly the point I'm trying to make. In\n> > that case, the SERVER says what it's willing to accept, and the CLIENT\n> > decides whether or not to provide that. In your proposal, the client\n> > tells the server which authentication methods to accept.\n>\n> Ah, that's a (the?) sticking point. In my example, the client doesn't\n> tell the server which methods to accept. The client tells the server\n> which method the *client* has the ability to use. (Or, implicitly,\n> which methods it refuses to use.)\n>\n> That shouldn't lose any power, security-wise, because the server is\n> looking for an intersection of the two sets. 
And the client already\n> has the power to do that for almost every form of authentication,\n> except the ambient methods.\n>\n> I don't think I necessarily like that option better than SASL-style,\n> but hopefully that clarifies it somewhat?\n\nHmm, yeah, I guess that's OK. I still don't love it, though. It feels\nmore solid to me if the proxy can actually block the connections\nbefore they even happen, without having to rely on a server\ninteraction to figure out what is permissible.\n\nI don't know what you mean by SASL-style, exactly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Mar 2023 14:40:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "Hi,\n\nOn 2023-02-07 16:56:55 -0500, Robert Haas wrote:\n> On Wed, Feb 1, 2023 at 4:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > > + /* Is the use of a password mandatory? */\n> > > + must_use_password = MySubscription->passwordrequired &&\n> > > + !superuser_arg(MySubscription->owner);\n> >\n> > There's a few repetitions of this - perhaps worth putting into a helper?\n> \n> I don't think so. 
It's slightly different each time, because it's\n> pulling data out of different data structures.\n> \n> > This still leaks the connection on error, no?\n> \n> I've attempted to fix this in v4, attached.\n\nHm - it still feels wrong that we error out in case of failure, despite the\ncomment to the function saying:\n * Returns NULL on error and fills the err with palloc'ed error message.\n\nOther than this, the change looks ready to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Mar 2023 11:47:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Mar 8, 2023 at 11:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Mar 8, 2023 at 2:30 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > I don't think I necessarily like that option better than SASL-style,\n> > but hopefully that clarifies it somewhat?\n>\n> Hmm, yeah, I guess that's OK.\n\nOkay, cool.\n\n> I still don't love it, though. It feels\n> more solid to me if the proxy can actually block the connections\n> before they even happen, without having to rely on a server\n> interaction to figure out what is permissible.\n\nSure. I don't see a way for the proxy to figure that out by itself,\nthough, going back to my asymmetry argument from before. Only the\nserver truly knows, at time of HBA processing, whether the proxy\nitself has authority. If the proxy knew, it wouldn't be confused.\n\n> I don't know what you mean by SASL-style, exactly.\n\nThat's the one where the server explicitly names all forms of\nauthentication, including the ambient ones (ANONYMOUS, EXTERNAL,\netc.), and requires the client to choose one before running any\nactions on their behalf. 
That lets the require_auth machinery work for\nthis case, too.\n\n--Jacob\n\n\n", "msg_date": "Wed, 8 Mar 2023 14:44:26 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Wed, Mar 8, 2023 at 5:44 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Sure.
But the oracle doesn't reduce the\nconfusion, and DBAs aren't perfect.\n\nIf you want to add a Sheriff Andy to hold Barney Fife's hand [1], that\nwill absolutely make Barney less of a problem, and I'd like to have\nAndy around regardless. But Barney still doesn't know what's going on,\nand when Andy makes a mistake, there will still be trouble. I'd like\nto teach Barney some useful stuff.\n\n> but if there's an intrinsic reason it can't be\n> smarter, I don't understand what it is.\n\nWell... I'm not well-versed enough in this to prove non-existence of a\nsolution. Can you find a solution, using the current protocol, that\ndoesn't make use of perfect out-of-band knowledge? We have a client\nthat will authenticate using any method the server asks it to, even if\nits user intended to use something else. And we have a server that can\neagerly skip client authentication, and then eagerly run code on its\nbehalf, without first asking the client what it's even trying to do.\nThat would be an inherently hostile environment for *any* proxy, not\njust ours.\n\nThanks,\n--Jacob\n\n[1] https://en.wikipedia.org/wiki/The_Andy_Griffith_Show#Premise_and_characters\n\n\n", "msg_date": "Fri, 10 Mar 2023 16:00:19 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Fri, Mar 10, 2023 at 7:00 PM Jacob Champion <jchampion@timescale.com> wrote:\n> On Thu, Mar 9, 2023 at 6:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > That seems like a circular argument. If you call the problem the\n> > confused deputy problem then the issue must indeed be that the deputy\n> > is confused, and needs to talk to someone else to get un-confused. But\n> > why is the deputy necessarily confused in the first place? 
Our deputy\n> > is confused because our code to decide whether to proxy a connection\n> > or not is super-dumb,\n>\n> No, I think our proxy is confused because it doesn't know what power\n> it has, and it can't tell the server what power it wants to use. That\n> problem is independent of the decision to proxy. You're suggesting\n> strengthening the code that makes that decision -- adding an oracle\n> (in the form of a DBA) that knows about the confusion and actively\n> mitigates it. That's guaranteed to work if the oracle is perfect,\n> because \"perfect\" is somewhat tautologically defined as \"whatever\n> ensures secure operation\". But the oracle doesn't reduce the\n> confusion, and DBAs aren't perfect.\n\nI think this is the root of our disagreement. My understanding of the\nprevious discussion is that people think that the major problem here\nis the wraparound-to-superuser attack. That is, in general, we expect\nthat when we connect to a database over the network, we expect it to\ndo some kind of active authentication, like asking us for a password,\nor asking us for an SSL certificate that isn't just lying around for\nanyone to use. However, in the specific case of a local connection, we\nhave a reliable way of knowing who the remote user is without any kind\nof active authentication, namely 'peer' authentication or perhaps even\n'trust' if we trust all the local users, and so we don't judge it\nunreasonable to allow local connections without any form of active\nauthentication. There can be some scenarios where even over a network\nwe can know the identity of the person connecting with complete\ncertainty, e.g. 
if endpoints are locked down such that the source IP\naddress is a reliable indicator of who is initiating the connection,\nbut in general when there's a network involved you don't know who the\nperson making the connection is and need to do something extra to\nfigure it out.\n\nIf you accept this characterization of the problem, then I don't think\nthe oracle is that hard to design. We simply set it up not to allow\nwraparound connections, or maybe even more narrowly to not allow\nwraparound connections to superuser. If the DBA has some weird network\ntopology where that's not the correct rule, either because they want\nto allow wraparound connections or they want to disallow other things,\nthen yeah they have to tell us what to allow, but I don't really see\nwhy that's an unreasonable expectation. I'd expect the correct\nconfiguration of the proxy facility to fall naturally out of what's\nallowed in pg_hba.conf. If machine A is configured to accept\nconnections from machines B and C based on environmental factors, then\nmachines B and C should be configured not to proxy connections to A.\nIf machines B and C aren't under our control such that we can\nconfigure them that way, then the configuration is fundamentally\ninsecure in a way that we can't really fix.\n\nI think that what you're proposing is that B and C can just be allowed\nto proxy to A and A can say \"hey, by the way, I'm just gonna let you\nin without asking for anything else\" and B and C can, when proxying,\nreact to that by disconnecting before the connection actually goes\nthrough. That's simpler, in a sense. It doesn't require us to set up\nthe proxy configuration on B and C in a way that matches what\npg_hba.conf allows on A. Instead, B and C can automatically deduce\nwhat connections they should refuse to proxy. I guess that's nice, but\nit feels pretty magical to me.
It encourages the DBA not to think\nabout what B and C should actually be allowed to proxy, and instead\njust trust that the automatics are going to prevent any security\ndisasters. I'm not sure that they always will, and I fear cultivating\ntoo much reliance on them. I think that if you're setting up a network\ntopology where the correct rule is something more complex than \"don't\nallow wraparound connections to superuser,\" maybe you ought to be\nforced to spell that rule out instead of letting the system deduce one\nthat you hope will be right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Mar 2023 12:32:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Wed, Mar 8, 2023 at 2:47 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm - it still feels wrong that we error out in case of failure, despite the\n> comment to the function saying:\n> * Returns NULL on error and fills the err with palloc'ed error message.\n\nI've amended the comment so that it explains why it's done that way.\n\n> Other than this, the change looks ready to me.\n\nWell, it still needed documentation changes and pg_dump changes. I've\nadded those in the attached version.\n\nIf nobody's too unhappy with the idea, I plan to commit this soon,\nboth because I think that the feature is useful, and also because I\nthink it's an important security improvement. Since replication is\ncurrently run as the subscription owner, any table owner can\ncompromise the subscription owner's account, which is really bad, but\nif the subscription owner can be a non-superuser, it's a little bit\nless bad. From a security point of view, I think the right thing to do\nand what would improve security a lot more is to run replication as\nthe table owner rather than the subscription owner.
I've posted a\npatch for that at\nhttp://postgr.es/m/CA+TgmoaSCkg9ww9oppPqqs+9RVqCexYCE6Aq=UsYPfnOoDeFkw@mail.gmail.com\nand AFAICT everyone agrees with the idea, even if the patch itself\nhasn't yet attracted any code reviews. But although the two patches\nare fairly closely related, this seems to be a good idea whether that\nmoves forward or not, and that seems to be a good idea whether this\nmoves forward or not. As this one has had more review and discussion,\nmy current thought is to try to get this one committed first.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Mar 2023 12:16:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-03-22 at 12:16 -0400, Robert Haas wrote:\n> If nobody's too unhappy with the idea, I plan to commit this soon,\n> both because I think that the feature is useful, and also because I\n> think it's an important security improvement.\n\nIs there any chance I can convince you to separate the privileges of\nusing a connection string and creating a subscription, as I\nsuggested[1] earlier?\n\nIt would be useful for dblink, and I also plan to propose CREATE\nSUBSCRIPTION ... SERVER for v17 (it was too late for 16), for which it\nwould also be useful to make the distinction.\n\nYou seemed to generally think it was a reasonable idea, but wanted to\nwait for the other patch.
I think it's the right breakdown of\nprivileges even now, and I don't see a reason to give ourselves a\nheadache later trying to split up the privileges later.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/fa1190c117c2455f2dd968a1a09f796ccef27b29.camel@j-davis.com\n\n\n", "msg_date": "Wed, 22 Mar 2023 12:53:18 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, Mar 22, 2023 at 3:53 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Is there any chance I can convince you to separate the privileges of\n> using a connection string and creating a subscription, as I\n> suggested[1] earlier?\n\nWhat would this amount to concretely? Also adding a\npg_connection_string predefined role and requiring both that and\npg_create_subscription in all cases until your proposed changes get\nmade?\n\nIf so, I don't think that's a good idea.
Maybe for some reason your\n> proposed changes won't end up happening, and then we've just got a\n> useless extra thing that makes things confusing.\n\nEven if my changes don't happen, I would find it less confusing and\nmore likely that users understand what they're doing.\n\nTo most users, the consequences of allowing users to write connection\nstrings on the server are far from obvious. Even we, as developers,\nneeded to spend a lot of time discussing the nuances.\n\nSomeone merely granting the ability to CREATE SUBSCRIPTION would read\nthat page in the docs, which is dominated by the mechanics of a\nsubscription and says little about the connection string, let alone the\nsecurity nuances of using it on a server.\n\nBut if there is also a separate connection string privilege required,\nwe can document it better and they are more likely to find it and\nunderstand.\n\nBeyond that, the connection string and the mechanics of the\nsubscription are really different concepts.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 23 Mar 2023 10:41:05 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Mar 23, 2023 at 1:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Even if my changes don't happen, I would find it less confusing and\n> more likely that users understand what they're doing.\n\nI respectfully but firmly disagree. I think having two predefined\nroles that are both required to create a subscription and neither of\nwhich allows you to do anything other than create a subscription is\nintrinsically confusing.
I'm not willing to commit a patch that works\nlike that, and I will object if someone else wants to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Mar 2023 15:39:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wed, 2023-03-22 at 12:16 -0400, Robert Haas wrote:\n> I've posted a\n> patch for that at\n> http://postgr.es/m/CA+TgmoaSCkg9ww9oppPqqs+9RVqCexYCE6Aq=UsYPfnOoDeFkw@mail.gmail.com\n> and AFAICT everyone agrees with the idea, even if the patch itself\n> hasn't yet attracted any code reviews. But although the two patches\n> are fairly closely related, this seems to be a good idea whether that\n> moves forward or not, and that seems to be a good idea whether this\n> moves forward or not. As this one has had more review and discussion,\n> my current thought is to try to get this one committed first.\n\nThe current patch (non-superuser-subscriptions) is the most user-facing\naspect, and it seems wrong to commit it before we have the security\nmodel in a reasonable place. As you pointed out[1], it's not in a\nreasonable place now, so encouraging more use seems like a bad idea.\n\nThe other patch you posted seems like it makes a lot of progress in\nthat direction, and I think that should go in first.
That was one of\nthe items I suggested previously[2], so thank you for working on that.\n\nRegards,\n\tJeff Davis\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoavSQVcvEW3ZgZ7a1Q-TJ-fp0%2BNt7W3D7FCawArtTCBCQ%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/27c557b12a590067c5e00588009447bb5bb2dd42.camel@j-davis.com\n\n\n", "msg_date": "Fri, 24 Mar 2023 00:17:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Mar 24, 2023 at 3:17 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> The current patch (non-superuser-subscriptions) is the most user-facing\n> aspect, and it seems wrong to commit it before we have the security\n> model in a reasonable place. As I said earlier, that's why I backed away\n> from trying to do non-superuser subscriptions as a documented feature:\n> it feels like we need to settle some of the underlying pieces first.\n\nI certainly agree that the security model isn't in a reasonable place\nright now. However, I feel that:\n\n(1) adding an extra predefined role doesn't really help, because it\ndoesn't actually do anything in and of itself, it only prepares for\nfuture work, and\n\n(2) even adding the connection string security stuff that you're\nproposing doesn't really help, because (2a) connection string security\nis almost completely separate from the internal security\nconsiderations addressed in the message to which you linked, and (2b)\nin my opinion, there will be a lot of people who won't use that\nconnection string security stuff even if we had it, possibly even a\nlarge majority of people won't use it, because it responds to a\nspecific use case which I think a lot of people don't have, and\n\n(3) I don't agree either that this patch would encourage more use of\nlogical replication or that it would be bad if it did.
I mean, there\ncould be someone who knows about this patch and will hesitate to\ndeploy logical replication if it doesn't get committed, or maybe\nslightly more likely, won't be able to do so if this patch doesn't get\ncommitted because they're running in a cloud environment. But probably\nnot. Cloud providers are already hacking around this problem,\nMicrosoft included. As a community, we're better off having a standard\nsolution in core than having every vendor hack it their own way. And\noutside of a cloud environment, there's not really any reason for the\nlack of this patch to make a potential user hesitate. Also, features\ngetting used is a thing that I think we should all want. If logical\nreplication is in such a bad state that we think people shouldn't be\nusing it, we should rip it out until the issues are fixed. I don't\nthink anyone would seriously propose that such a course of action is\nadvisable. So the alternative is to make it better.\n\nTo reiterate what I think the most important point here is, both Azure\nand AWS already let you do this. EDB's own cloud offering is also\ngoing to let you do this, whether this change goes in or not. But if\nthis patch gets committed, then eventually all of those vendors and\nwhatever others are out there will let you do this in the same way,\ni.e. pg_create_subscription, instead of every vendor having their own\npatch to the code that does what this patch does through some method\nthat is specific to that cloud vendor. That sort of fragmentation of\nthe ecosystem is not good for anyone, AFAICS.\n\n> The other patch you posted seems like it makes a lot of progress in\n> that direction, and I think that should go in first.
That was one of\n> the items I suggested previously[2], so thank you for working on that.\n\nPerhaps you could review that work?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Mar 2023 09:24:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Mar 24, 2023 at 9:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The other patch you posted seems like it makes a lot of progress in\n> > that direction, and I think that should go in first. That was one of\n> > the items I suggested previously[2], so thank you for working on that.\n>\n> Perhaps you could review that work?\n\nAh, you already did. Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Mar 2023 10:47:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 3/20/23 09:32, Robert Haas wrote:\n> I think this is the root of our disagreement.\n\nAgreed.\n\n> My understanding of the\n> previous discussion is that people think that the major problem here\n> is the wraparound-to-superuser attack. That is, in general, we expect\n> that when we connect to a database over the network, we expect it to\n> do some kind of active authentication, like asking us for a password,\n> or asking us for an SSL certificate that isn't just lying around for\n> anyone to use. However, in the specific case of a local connection, we\n> have a reliable way of knowing who the remote user is without any kind\n> of active authentication, namely 'peer' authentication or perhaps even\n> 'trust' if we trust all the local users, and so we don't judge it\n> unreasonable to allow local connections without any form of active\n> authentication.
There can be some scenarios where even over a network\n> we can know the identity of the person connecting with complete\n> certainty, e.g. if endpoints are locked down such that the source IP\n> address is a reliable indicator of who is initiating the connection,\n> but in general when there's a network involved you don't know who the\n> person making the connection is and need to do something extra to\n> figure it out.\n\nOkay, but this is walking back from the network example you just\ndescribed upthread. Do you still consider that in scope, or...?\n\n> If you accept this characterization of the problem,\n\nI'm not going to say yes or no just yet, because I don't understand your\nrationale for where to draw the lines.\n\nIf you just want the bare minimum thing that will solve the localhost\ncase, require_auth landed this week. Login triggers are not yet a thing,\nso `require_auth=password,md5,scram-sha-256` ensures active\nauthentication. You don't even have to disallow localhost connections,\nas far as I can tell; they'll work as intended.\n\nIf you think login triggers will get in for PG16, my bigger proposal\ncan't help in time. But if you're drawing the line at \"environmental\nHBAs are fundamentally unsafe and you shouldn't use them if you have a\nproxy,\" why can't I instead draw the line at \"login triggers are\nfundamentally unsafe and you shouldn't use them if you have a proxy\"?\n\nAnd if you want to handle the across-the-network case, too, then I don't\naccept the characterization of the problem.\n\n> then I don't think\n> the oracle is that hard to design. We simply set it up not to allow\n> wraparound connections, or maybe even more narrowly to not allow\n> wraparound connections to superuser.
If the DBA has some weird network\n> topology where that's not the correct rule, either because they want\n> to allow wraparound connections or they want to disallow other things,\n> then yeah they have to tell us what to allow, but I don't really see\n> why that's an unreasonable expectation.\n\nThis seems like a security model that has been carefully gerrymandered\naround the existing implementation. My argument is that the \"weird\nnetwork topology\" isn't weird at all, and it's only dangerous because of\ndecisions we made (and can unmake).\n\nI feel pretty strongly that the design arrow needs to be pointed in the\nopposite direction. The model needs to be chosen first, to prevent us\nfrom saying, \"We defend against whatever the implementation lets us\ndefend against today. Good luck, DBAs.\"\n\n> If machines B and C aren't under our control such that we can\n> configure them that way, then the configuration is fundamentally\n> insecure in a way that we can't really fix.\n\nHere's probably our biggest point of contention. You're unlikely to\nconvince me that this is the DBA's fault.\n\nIf machines B and C aren't under our control, then our *protocol* is\nfundamentally insecure in a way that we have the ability to fix, in a\nway that's already been characterized in security literature.\n\n> I think that what you're proposing is that B and C can just be allowed\n> to proxy to A and A can say \"hey, by the way, I'm just gonna let you\n> in without asking for anything else\" and B and C can, when proxying,\n> react to that by disconnecting before the connection actually goes\n> through. That's simpler, in a sense. It doesn't require us to set up\n> the proxy configuration on B and C in a way that matches what\n> pg_hba.conf allows on A. Instead, B and C can automatically deduce\n> what connections they should refuse to proxy.\n\nRight.
It's meant to take the \"localhost/wraparound connection\" out of a\nclass of special things we have to worry about, and make it completely\nboring.\n\n> I guess that's nice, but\n> it feels pretty magical to me. It encourages the DBA not to think\n> about what B and C should actually be allowed to proxy, and instead\n> just trust that the automatics are going to prevent any security\n> disasters.\n\nI agree magical behavior is dangerous, if what you think it can do\ndoesn't match up with what it can actually do. Bugs are always possible,\nand maybe I'm just not seeing a corner case yet, because I'm talking too\nmuch and not coding it -- but is this really a case where I'm\noverpromising? Or does it just feel magical because it's meant to fix\nthe root issue?\n\n(Remember, I'm not arguing against your proxy filter; I just want both.\nThey complement each other.)\n\n> I'm not sure that they always will, and I fear cultivating\n> too much reliance on them.\n\nI can't really argue against this... but I'm not really sure anyone could.\n\nMy strawman rephrasing of that is, \"we have to make the feature crappy\nenough that we can blame the DBA when things go wrong.\" And even that\nstrawman could be perfectly reasonable, in situations where the DBA\nnecessarily has more information than the machine. In this case, though,\nit seems to me that the two machines have all the information necessary\nto make a correct decision between them.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Fri, 24 Mar 2023 14:47:40 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Fri, 2023-03-24 at 09:24 -0400, Robert Haas wrote:\n> I certainly agree that the security model isn't in a reasonable place\n> right now.
However, I feel that:\n> \n> (1) adding an extra predefined role\n\n> (2) even adding the connection string security stuff\n\nI don't see how these points are related to the question of whether you\nshould commit your non-superuser-subscription-owners patch or logical-\nrepl-as-table-owner patch first.\n\n\nMy perspective is that logical replication is an unfinished feature\nwith an incomplete design. As I said earlier, that's why I backed away\nfrom trying to do non-superuser subscriptions as a documented feature:\nit feels like we need to settle some of the underlying pieces first.\n\nThere are some big issues, like the security model for replaying\nchanges. And some smaller issues like feature gaps (RLS doesn't work,\nif I remember correctly, and maybe something with partitioning). There\nare potential clashes with other proposals, like the CREATE\nSUBSCRIPTION ... SERVER, which I hope can be sorted out later. And I\ndon't feel like I have a good handle on the publisher security model\nand threats, which hopefully is just a matter of documenting some best\npractices.\n\nEach time we dig into one of these issues I learn something, and I\nthink others do, too. If we skip past that process and start adding new\nfeatures on top of this unfinished design, then I think we are setting\nourselves up for trouble that is going to be harder to fix later.\n\nI don't mean to say all of the above issues are blockers or that they\nshould all be resolved in my favor. But there are enough issues and\nsome of those issues are serious enough that I feel like it's premature\nto just go ahead with the non-superuser subscriptions and the\npredefined role.\n\nThere are already users, which complicates things. And you make a good\npoint that some important users may be already working around the\nflaws. But there's already a patch and discussion going on for some\nsecurity model improvements (thanks to you), so let's try to get that\none in first.
If we can't, it's probably because we learned something\nimportant.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Sat, 25 Mar 2023 12:16:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi,\n\nOn 2023-03-25 12:16:35 -0700, Jeff Davis wrote:\n> On Fri, 2023-03-24 at 09:24 -0400, Robert Haas wrote:\n> > I certainly agree that the security model isn't in a reasonable place\n> > right now. However, I feel that:\n> > \n> > (1) adding an extra predefined role\n> \n> > (2) even adding the connection string security stuff\n> \n> I don't see how these points are related to the question of whether you\n> should commit your non-superuser-subscription-owners patch or logical-\n> repl-as-table-owner patch first.\n> \n> \n> My perspective is that logical replication is an unfinished feature\n> with an incomplete design.\n\nI agree with that much.\n\n\n> As I said earlier, that's why I backed away from trying to do non-superuser\n> subscriptions as a documented feature: it feels like we need to settle some\n> of the underlying pieces first.\n\nI don't agree. The patch allows to use logical rep in a far less dangerous\nfashion than now. The alternative is to release 16 without a real way to use\nlogical rep less insanely.
Which I think is worse.\n\n\n> There are some big issues, like the security model for replaying\n> changes.\n\nThat seems largely unrelated.\n\n\n> And some smaller issues like feature gaps (RLS doesn't work,\n> if I remember correctly, and maybe something with partitioning).\n\nEntirely unrelated?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Mar 2023 10:46:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Mar 25, 2023 at 3:16 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2023-03-24 at 09:24 -0400, Robert Haas wrote:\n> > I certainly agree that the security model isn't in a reasonable place\n> > right now. However, I feel that:\n> >\n> > (1) adding an extra predefined role\n>\n> > (2) even adding the connection string security stuff\n>\n> I don't see how these points are related to the question of whether you\n> should commit your non-superuser-subscription-owners patch or logical-\n> repl-as-table-owner patch first.\n\nI thought you were asking for those changes to be made before this\npatch got committed, so that's what I was responding to. If you're\nasking for it not to be committed at all, that's a different\ndiscussion.\n\n> My perspective is that logical replication is an unfinished feature\n> with an incomplete design. As I said earlier, that's why I backed away\n> from trying to do non-superuser subscriptions as a documented feature:\n> it feels like we need to settle some of the underlying pieces first.\n\nI kind of agree with you about the feature itself. Even though the\nbasic feature works quite well and does something people really want,\nthere are a lot of loose ends to sort out, and not just about\nsecurity. But I also want to make some progress. If there are problems\nwith what I'm proposing that will make us regret committing things\nright before feature freeze, then we shouldn't.
But waiting a whole\nadditional year to see any kind of improvement is not free; these\nissues are serious.\n\n> I don't mean to say all of the above issues are blockers or that they\n> should all be resolved in my favor. But there are enough issues and\n> some of those issues are serious enough that I feel like it's premature\n> to just go ahead with the non-superuser subscriptions and the\n> predefined role.\n>\n> There are already users, which complicates things. And you make a good\n> point that some important users may be already working around the\n> flaws. But there's already a patch and discussion going on for some\n> security model improvements (thanks to you), so let's try to get that\n> one in first. If we can't, it's probably because we learned something\n> important.\n\nI think this patch is a lot better-baked and less speculative than\nthat one. I think that patch is more important, so if they were\nequally mature, I'd favor getting that one committed first. But that's\nnot the case.\n\nAlso, I don't really understand how we could end up not wanting this\npatch. I mean there's a lot of things I don't understand that are\nstill true anyway, so the mere fact that I don't understand how we\ncould not end up wanting this patch doesn't mean that it couldn't\nhappen. But like, the current state of play is that subscription\nowners are always going to be superusers at the time the subscription\nis created, and literally nobody thinks that's a good idea. Some\npeople (like me) think that we ought to assume that subscription\nowners will be and need to be high-privilege users like superusers,\nbut to my knowledge every such person thinks that it's OK for the\nsubscription owner to be a non-superuser if they have adequate\nprivileges. I just think that's a high amount of privileges, not that\nit has to be all the privileges i.e. superuser.
Other people (like\nyou, AIUI) think that we ought to try to set things up so that\nsubscription owners can be low-privilege users, in which case we, once\nagain, don't want the user who owns the subscription to start out a\nsuperuser. I actually can't imagine anyone defending the idea of\nhaving the subscription owner always be a superuser at the time they\nfirst own the subscription. That's a weird rule that can only serve to\nreduce security. Nor can I imagine anyone saying that forcing\nsubscriptions to be created only by superusers improves security. I\ndon't think anyone thinks that.\n\nIf we're going to delay this patch, probably for a full year, because\nof other ongoing discussions, it should be because there is some\noutcome of those discussions that would involve deciding that this\npatch isn't needed or should be significantly redesigned. If this\npatch is going to end up being desirable no matter how those\ndiscussions turn out, and if it's not going to change significantly no\nmatter how those discussions turn out, then those discussions aren't a\nreason not to get it into this release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Mar 2023 14:06:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2023-03-27 at 10:46 -0700, Andres Freund wrote:\n> > There are some big issues, like the security model for replaying\n> > changes.\n> \n> That seems largely unrelated.\n\nThey are self-evidently related in a fundamental way.
The behavior of\nthe non-superuser-subscription patch depends on the presence of the\napply-as-table-owner patch.\n\nI think I'd like to understand the apply-as-table-owner patch better to\nunderstand the interaction.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Mar 2023 12:21:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, 2023-03-27 at 14:06 -0400, Robert Haas wrote:\n> I thought you were asking for those changes to be made before this\n> patch got committed, so that's what I was responding to. If you're\n> asking for it not to be committed at all, that's a different\n> discussion.\n\nI separately had a complaint (in a separate subthread) about the scope\nof the predefined role you are introducing, which I think encompasses\ntwo concepts that should be treated differently and I think that may\nneed to be revisited later. If you ignore this complaint it wouldn't be\nthe end of the world.\n\nThis subthread is about the order in which the patches get committed\n(which is a topic you brought up), not whether they are ever to be\ncommitted.\n\n> \n> I kind of agree with you about the feature itself. Even though the\n> basic feature works quite well and does something people really want,\n> there are a lot of loose ends to sort out, and not just about\n> security. But I also want to make some progress. If there are\n> problems\n> with what I'm proposing that will make us regret committing things\n> right before feature freeze, then we shouldn't. But waiting a whole\n> additional year to see any kind of improvement is not free; these\n> issues are serious.\n\nThe non-superuser-subscription-owner patch without the apply-as-table-\nowner patch feels like a facade to me, at least right now.
Perhaps I\ncan be convinced otherwise, but that's what it looks like to me.\n\n> \n> I think this patch is a lot better-baked and less speculative than\n> that one. I think that patch is more important, so if they were\n> equally mature, I'd favor getting that one committed first. But\n> that's\n> not the case.\n\nYou explicitly asked about the order of the patches, which made me\nthink it was more of an option?\n\nIf the apply-as-table-owner patch gets held up for whatever reason, we\nmight have to make a difficult decision. I'd prefer to focus on the apply-\nas-table-owner patch briefly, and now that it's getting some review\nattention, we might find out how ready it is quite soon.\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Mar 2023 15:17:27 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, 2023-03-24 at 00:17 -0700, Jeff Davis wrote:\n> The other patch you posted seems like it makes a lot of progress in\n> that direction, and I think that should go in first. That was one of\n> the items I suggested previously[2], so thank you for working on\n> that.\n\nThe above is not a hard objection.\n\nI still hold the opinion that the non-superuser subscriptions work\nfeels premature without the apply-as-table-owner work. It would be\ngreat if the other patch ends up ready quickly, which would moot the\ncommit-ordering question.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 28 Mar 2023 10:52:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Mar 24, 2023 at 5:47 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Okay, but this is walking back from the network example you just\n> described upthread.
Do you still consider that in scope, or...?\n\nSorry, I don't know which example you mean.\n\n> > If machines B and C aren't under our control such that we can\n> > configure them that way, then the configuration is fundamentally\n> > insecure in a way that we can't really fix.\n>\n> Here's probably our biggest point of contention. You're unlikely to\n> convince me that this is the DBA's fault.\n>\n> If machines B and C aren't under our control, then our *protocol* is\n> fundamentally insecure in a way that we have the ability to fix, in a\n> way that's already been characterized in security literature.\n\nI guess I wouldn't have a problem blaming the DBA here, but you seem\nto be telling me that the security literature has settled on another\nkind of approach, and I'm not in a position to dispute that. It still\nfeels weird to me, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Mar 2023 08:58:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Tue, Mar 28, 2023 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2023-03-24 at 00:17 -0700, Jeff Davis wrote:\n> > The other patch you posted seems like it makes a lot of progress in\n> > that direction, and I think that should go in first. 
That was one of\n> > the items I suggested previously[2], so thank you for working on\n> > that.\n>\n> The above is not a hard objection.\n\nThe other patch is starting to go in a direction that is going to have\nsome conflicts with this one, so I went ahead and committed this one\nto avoid rebasing pain.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Mar 2023 12:04:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (jchampion@timescale.com) wrote:\n> On 3/20/23 09:32, Robert Haas wrote:\n> > I think this is the root of our disagreement.\n> \n> Agreed.\n\nI've read all the way back to the $SUBJECT change to try and get an\nunderstanding of the questions here, and it's not been easy, in part, I\nthink, due to the verbiage, but also due to perhaps the lack of concrete\nexamples and the reliance instead on references to other systems and protocols.\n\n> > My understanding of the\n> > previous discussion is that people think that the major problem here\n> > is the wraparound-to-superuser attack. That is, in general, we expect\n> > that when we connect to a database over the network, we expect it to\n> > do some kind of active authentication, like asking us for a password,\n> > or asking us for an SSL certificate that isn't just lying around for\n> > anyone to use. However, in the specific case of a local connection, we\n> > have a reliable way of knowing who the remote user is without any kind\n> > of active authentication, namely 'peer' authentication or perhaps even\n> > 'trust' if we trust all the local users, and so we don't judge it\n> > unreasonable to allow local connections without any form of active\n> > authentication. There can be some scenarios where even over a network\n> > we can know the identity of the person connecting with complete\n> > certainty, e.g.
if endpoints are locked down such that the source IP\n> > address is a reliable indicator of who is initiating the connection,\n> > but in general when there's a network involved you don't know who the\n> > person making the connection is and need to do something extra to\n> > figure it out.\n> \n> Okay, but this is walking back from the network example you just\n> described upthread. Do you still consider that in scope, or...?\n\nThe concern around the network certainly needs to be in-scope overall.\n\n> > If you accept this characterization of the problem,\n> \n> I'm not going to say yes or no just yet, because I don't understand your\n> rationale for where to draw the lines.\n> \n> If you just want the bare minimum thing that will solve the localhost\n> case, require_auth landed this week. Login triggers are not yet a thing,\n> so `require_auth=password,md5,scram-sha-256` ensures active\n> authentication. You don't even have to disallow localhost connections,\n> as far as I can tell; they'll work as intended.\n\nI do think require_auth helps us move in a positive direction. As I\nmentioned elsewhere, I don't think we highlight it nearly enough in the\npostgres_fdw documentation. Let's look at that in a bit more depth with\nconcrete examples and perhaps everyone will be able to get a bit more\nunderstanding of the issues.\n\nClient is psql\nProxy is some PG server that's got postgres_fdw\nTarget is another PG server, that is being connected to from Proxy\nAuthentication is via GSS/Kerberos with proxied credentials\n\nWhat do we want to require the user to configure to make this secure?\n\nProxy's pg_hba configured to require GSS auth from Client.\nTarget's pg_hba configured to require GSS auth from Proxy.\n\nWho are we trusting with what? 
In particular, I'd argue that the user\nwho is able to install the postgres_fdw extension and the user who is\nable to issue the CREATE SERVER are largely trusted; at least in so far\nas the user doing CREATE SERVER is allowed to create the server and\nthrough that allowed to make outbound connections from the Proxy.\n\nTherefore, the Proxy is configured with postgres_fdw and with a trusted\nuser performing the CREATE SERVER.\n\nWhat doesn't this handle today? Connection side-effects are one\nproblem- once the CREATE SERVER is done, any user with USAGE rights on\nthe server can create a USER MAPPING for themselves, either with a\npassword or without one (if they're able to proxy GSS credentials to the\nsystem). They aren't able to set password_required though, which\ndefaults to true. However, without having require_auth set, they're\nable to cause the Proxy to reach an authentication stage with the Target\nthat might not match what credentials they're supposed to be providing.\n\nWe attempt to address this by checking, post-auth to the Target, that we\nused the credentials to connect that we expected to- if GSS credentials were\nproxied, then we expect to use those. If a password was provided then\nwe expect to use a password to auth (only checked after we see if GSS\ncredentials were proxied and used). The issue here is the 'post-auth' bit:\nwe'd prefer to fail the connection pre-auth if it isn't what we're\nexpecting. Should we then explicitly set require_auth=gss when GSS\ncredentials are proxied? Also, if a password is provided, then\nexplicitly set require_auth=scram-sha-256? Or default to these, at\nleast, and allow the CREATE SERVER user to override our choices? Or\nshould it be a USER MAPPING option that's restricted?
Or not?\n\n> > I think that what you're proposing is that B and C can just be allowed\n> > to proxy to A and A can say \"hey, by the way, I'm just gonna let you\n> > in without asking for anything else\" and B and C can, when proxying,\n> > react to that by disconnecting before the connection actually goes\n> > through. That's simpler, in a sense. It doesn't require us to set up\n> > the proxy configuration on B and C in a way that matches what\n> > pg_hba.conf allows on A. Instead, B and C can automatically deduce\n> > what connections they should refuse to proxy.\n> \n> Right. It's meant to take the \"localhost/wraparound connection\" out of a\n> class of special things we have to worry about, and make it completely\n> boring.\n\nAgain, trying to get at a more concrete example- the concern here is a\nuser with CREATE SERVER ability could leverage that access to become a\nsuperuser if the system is configured with 'peer' access, right? A\nnon-superuser is already prevented from being able to set\n\"password_required=false\", perhaps we shouldn't allow them to set\n\"require_auth=none\" (or have that effect) either? Perhaps the system\nshould simply forcibly set require_auth based on the credentials\nprovided in the USER MAPPING or on the connection and have require_auth\notherwise restricted to superuser (who could override it if they'd\nreally like to)? 
Perhaps if password_required=false we implicitly\nun-set require_auth, to avoid having to make superusers change their\nexisting configurations where they've clearly already accepted that\ncredential-less connections are allowed.\n\nAutomatically setting require_auth and restricting the ability of it to\nbe set on user mappings to superusers doesn't strike me as terribly\ndifficult to do and seems like it'd prevent this concern.\n\nJust to make sure I'm following- Robert's up-thread suggestion of an\n'outbound pg_hba' would be an additional restriction when it comes to\nwhat a user who can use CREATE SERVER is allowed to do? I'm not against\nthe idea of having a way to lock that down.. but it's another level of\ncomplication certainly and I'm not sure that some external config file\nor such is the best way to try and deal with that, though I do see how\nit can have some appeal for certain environments. It does overall\nstrike me as something we've not tried to address in any way thus far\nand a pretty large effort that's not likely to make it into PG16, unlike\nthe possibility of auto-setting require_auth, now that it exists.\n\nThanks!\n\nStephen", "msg_date": "Thu, 30 Mar 2023 14:13:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Friday, March 31, 2023 12:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Tue, Mar 28, 2023 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\r\n> > On Fri, 2023-03-24 at 00:17 -0700, Jeff Davis wrote:\r\n> > > The other patch you posted seems like it makes a lot of progress in\r\n> > > that direction, and I think that should go in first. 
That was one of\r\n> > > the items I suggested previously[2], so thank you for working on\r\n> > > that.\r\n> >\r\n> > The above is not a hard objection.\r\n> \r\n> The other patch is starting to go in a direction that is going to have some\r\n> conflicts with this one, so I went ahead and committed this one to avoid\r\n> rebasing pain.\r\n\r\nI noticed that the BF[1] reported a core dump after this commit.\r\n\r\n#0  0xfd581864 in _lwp_kill () from /usr/lib/libc.so.12\r\n#0  0xfd581864 in _lwp_kill () from /usr/lib/libc.so.12\r\n#1  0xfd5817dc in raise () from /usr/lib/libc.so.12\r\n#2  0xfd581c88 in abort () from /usr/lib/libc.so.12\r\n#3  0x01e6c8d4 in ExceptionalCondition (conditionName=conditionName@entry=0x2007758 \"IsTransactionState()\", fileName=fileName@entry=0x20565c4 \"catcache.c\", lineNumber=lineNumber@entry=1208) at assert.c:66\r\n#4  0x01e4e404 in SearchCatCacheInternal (cache=0xfd21e500, nkeys=nkeys@entry=1, v1=v1@entry=28985, v2=v2@entry=0, v3=v3@entry=0, v4=v4@entry=0) at catcache.c:1208\r\n#5  0x01e4eea0 in SearchCatCache1 (cache=<optimized out>, v1=v1@entry=28985) at catcache.c:1162\r\n#6  0x01e66e34 in SearchSysCache1 (cacheId=cacheId@entry=11, key1=key1@entry=28985) at syscache.c:825\r\n#7  0x01e98c40 in superuser_arg (roleid=28985) at superuser.c:70\r\n#8  0x01c657bc in ApplyWorkerMain (main_arg=<optimized out>) at worker.c:4552\r\n#9  0x01c1ceac in StartBackgroundWorker () at bgworker.c:861\r\n#10 0x01c23be0 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5762\r\n#11 maybe_start_bgworkers () at postmaster.c:5986\r\n#12 0x01c2459c in process_pm_pmsignal () at postmaster.c:5149\r\n#13 ServerLoop () at postmaster.c:1770\r\n#14 0x01c26cdc in PostmasterMain (argc=argc@entry=4, argv=argv@entry=0xffffe0e4) at postmaster.c:1463\r\n#15 0x01ee2c8c in main (argc=4, argv=0xffffe0e4) at main.c:200\r\n\r\nIt looks like the superuser check runs outside of a transaction. I haven't checked why\r\nit only failed on one BF animal, but it seems we can put the
check into the\r\ntransaction like the following:\r\n\r\ndiff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\r\nindex 6fd674b5d6..08f10fc331 100644\r\n--- a/src/backend/replication/logical/worker.c\r\n+++ b/src/backend/replication/logical/worker.c\r\n@@ -4545,12 +4545,13 @@ ApplyWorkerMain(Datum main_arg)\r\n \t\treplorigin_session_setup(originid, 0);\r\n \t\treplorigin_session_origin = originid;\r\n \t\torigin_startpos = replorigin_session_get_progress(false);\r\n-\t\tCommitTransactionCommand();\r\n \r\n \t\t/* Is the use of a password mandatory? */\r\n \t\tmust_use_password = MySubscription->passwordrequired &&\r\n \t\t\t!superuser_arg(MySubscription->owner);\r\n \r\n+\t\tCommitTransactionCommand();\r\n+\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-30%2019%3A41%3A08\r\n\r\nBest Regards,\r\nHou Zhijie\r\n", "msg_date": "Fri, 31 Mar 2023 01:49:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Mar 30, 2023 at 9:49 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> It looks like the super user check is out of a transaction, I haven't checked why\n> it only failed on one BF animal, but it seems we can put the check into the\n> transaction like the following:\n\nThat looks like a reasonable fix but I can't reproduce the problem\nlocally. I thought the reason why that machine sees the problem might\nbe that it uses -DRELCACHE_FORCE_RELEASE, but I tried that option here\nand the tests still pass. 
Any ideas how to reproduce?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 31 Mar 2023 16:00:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Saturday, April 1, 2023 4:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Thu, Mar 30, 2023 at 9:49 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > It looks like the super user check is out of a transaction, I haven't\r\n> > checked why it only failed on one BF animal, but it seems we can put\r\n> > the check into the transaction like the following:\r\n> \r\n> That looks like a reasonable fix but I can't reproduce the problem locally. I\r\n> thought the reason why that machine sees the problem might be that it uses\r\n> -DRELCACHE_FORCE_RELEASE, but I tried that option here and the tests still pass.\r\n> Any ideas how to reproduce?\r\n\r\nI think it's a timing problem: the superuser_arg() function caches the\r\nroleid that was passed in last time, so it might not search the syscache and hit the\r\nAssert() check each time. And in the regression test, the roleid cache happened\r\nto be invalidated before the superuser_arg() call by some concurrent ROLE change\r\n(maybe in subscription.sql and publication.sql).\r\n\r\nI can reproduce it by using gdb and starting another session to change the ROLE.\r\n\r\nWhen the apply worker starts, use gdb to block the apply worker in the\r\ntransaction before the superuser check. Then start another session and run ALTER\r\nROLE to invalidate the roleid cache in superuser_arg(), which will cause the\r\napply worker to search the syscache and hit the Assert().\r\n\r\n--\r\n\t\torigin_startpos = replorigin_session_get_progress(false);\r\nB*\t\tCommitTransactionCommand();\r\n\r\n\t\t/* Is the use of a password mandatory? */\r\n\t\tmust_use_password = MySubscription->passwordrequired &&\r\n\t\t\t!
superuser_arg(MySubscription->owner);\r\n--\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Sat, 1 Apr 2023 01:24:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Non-superuser subscription owners" }, { "msg_contents": "Hello Robert,\n\n31.03.2023 23:00, Robert Haas wrote:\n> That looks like a reasonable fix but I can't reproduce the problem\n> locally. I thought the reason why that machine sees the problem might\n> be that it uses -DRELCACHE_FORCE_RELEASE, but I tried that option here\n> and the tests still pass. Anyone ideas how to reproduce?\n\nI've managed to reproduce it using the following script:\nfor ((i=1;i<=10;i++)); do\necho \"iteration $i\"\necho \"\nCREATE ROLE sub_user;\nCREATE SUBSCRIPTION testsub CONNECTION 'dbname=db'\n   PUBLICATION testpub WITH (connect = false);\nALTER SUBSCRIPTION testsub ENABLE;\nDROP SUBSCRIPTION testsub;\nSELECT pg_sleep(0.001);\nDROP ROLE sub_user;\n\" | psql\npsql -c \"ALTER SUBSCRIPTION testsub DISABLE;\"\npsql -c \"ALTER SUBSCRIPTION testsub SET (slot_name = NONE);\"\npsql -c \"DROP SUBSCRIPTION testsub;\"\ngrep 'TRAP' server.log && break\ndone\n\niteration 3\nCREATE ROLE\n...\nALTER SUBSCRIPTION\nWARNING:  terminating connection because of crash of another server process\nDETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because ano\nther server process exited abnormally and possibly corrupted shared memory.\nHINT:  In a moment you should be able to reconnect to the database and repeat your command.\nserver closed the connection unexpectedly\n        This probably means the server terminated abnormally\n        before or while processing the request.\nconnection to server was lost\nTRAP: failed Assert(\"IsTransactionState()\"), File: \"catcache.c\", Line: 1208, PID: 1001242\n\nBest regards,\nAlexander
", "msg_date": "Sat, 1 Apr 2023 19:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "Hi, \n\nOn April 1, 2023 9:00:00 AM PDT, Alexander Lakhin <exclusion@gmail.com> wrote:\n>Hello Robert,\n>\n>31.03.2023 23:00, Robert Haas wrote:\n>> That looks like a reasonable fix but I can't
reproduce the problem\n>> locally. I thought the reason why that machine sees the problem might\n>> be that it uses -DRELCACHE_FORCE_RELEASE, but I tried that option here\n>> and the tests still pass. Anyone ideas how to reproduce?\n>\n>I've managed to reproduce it using the following script:\n>for ((i=1;i<=10;i++)); do\n>echo \"iteration $i\"\n>echo \"\n>CREATE ROLE sub_user;\n>CREATE SUBSCRIPTION testsub CONNECTION 'dbname=db'\n>  PUBLICATION testpub WITH (connect = false);\n>ALTER SUBSCRIPTION testsub ENABLE;\n>DROP SUBSCRIPTION testsub;\n>SELECT pg_sleep(0.001);\n>DROP ROLE sub_user;\n>\" | psql\n>psql -c \"ALTER SUBSCRIPTION testsub DISABLE;\"\n>psql -c \"ALTER SUBSCRIPTION testsub SET (slot_name = NONE);\"\n>psql -c \"DROP SUBSCRIPTION testsub;\"\n>grep 'TRAP' server.log && break\n>done\n>\n>iteration 3\n>CREATE ROLE\n>...\n>ALTER SUBSCRIPTION\n>WARNING:  terminating connection because of crash of another server process\n>DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because ano\n>ther server process exited abnormally and possibly corrupted shared memory.\n>HINT:  In a moment you should be able to reconnect to the database and repeat your command.\n>server closed the connection unexpectedly\n>       This probably means the server terminated abnormally\n>       before or while processing the request.\n>connection to server was lost\n>TRAP: failed Assert(\"IsTransactionState()\"), File: \"catcache.c\", Line: 1208, PID: 1001242\n\nErrors like that are often easier to reproduce with clobber caches (or whatever the name is these days) enabled.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Sat, 01 Apr 2023 16:07:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Apr 1, 2023 at 12:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> I've managed to reproduce it using the following script:\n> for ((i=1;i<=10;i++)); do\n> echo \"iteration $i\"\n> echo \"\n> CREATE ROLE sub_user;\n> CREATE SUBSCRIPTION testsub CONNECTION 'dbname=db'\n> PUBLICATION testpub WITH (connect = false);\n> ALTER SUBSCRIPTION testsub ENABLE;\n> DROP SUBSCRIPTION testsub;\n> SELECT pg_sleep(0.001);\n> DROP ROLE sub_user;\n> \" | psql\n> psql -c \"ALTER SUBSCRIPTION testsub DISABLE;\"\n> psql -c \"ALTER SUBSCRIPTION testsub SET (slot_name = NONE);\"\n> psql -c \"DROP SUBSCRIPTION testsub;\"\n> grep 'TRAP' server.log && break\n> done\n\nAfter a bit of experimentation this repro worked for me -- I needed\n-DRELCACHE_FORCE_RELEASE as well, and a bigger iteration count. I\nverified that the patch fixed it, and committed the patch with the\naddition of a comment.\n\nThanks very much for this repro, and likewise many thanks to Hou\nZhijie for the report and patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 Apr 2023 13:56:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Mar 30, 2023 at 9:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 28, 2023 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > On Fri, 2023-03-24 at 00:17 -0700, Jeff Davis wrote:\n> > > The other patch you posted seems like it makes a lot of progress in\n> > > that direction, and I think that should go in first. 
That was one of\n> > > the items I suggested previously[2], so thank you for working on\n> > > that.\n> >\n> > The above is not a hard objection.\n>\n> The other patch is starting to go in a direction that is going to have\n> some conflicts with this one, so I went ahead and committed this one\n> to avoid rebasing pain.\n>\n\nDo we need to have a check for this new option \"password_required\" in\nmaybe_reread_subscription() where we \"Exit if any parameter that\naffects the remote connection was changed.\"? This new option is\nrelated to the remote connection so I thought it is worth considering\nwhether we want to exit and restart the apply worker when this option\nis changed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Apr 2023 11:04:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Sat, Apr 8, 2023 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we need to have a check for this new option \"password_required\" in\n> maybe_reread_subscription() where we \"Exit if any parameter that\n> affects the remote connection was changed.\"? This new option is\n> related to the remote connection so I thought it is worth considering\n> whether we want to exit and restart the apply worker when this option\n> is changed.\n\nHmm, good question. I think that's probably a good idea. If the\ncurrent connection is already working, the only possible result of\ngetting rid of it and trying to create a new one is that it might now\nfail instead, but someone might want that behavior. 
Otherwise, they'd\ninstead find the failure at a later, maybe less convenient, time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Apr 2023 11:45:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Apr 10, 2023 at 9:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Apr 8, 2023 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we need to have a check for this new option \"password_required\" in\n> > maybe_reread_subscription() where we \"Exit if any parameter that\n> > affects the remote connection was changed.\"? This new option is\n> > related to the remote connection so I thought it is worth considering\n> > whether we want to exit and restart the apply worker when this option\n> > is changed.\n>\n> Hmm, good question. I think that's probably a good idea. If the\n> current connection is already working, the only possible result of\n> getting rid of it and trying to create a new one is that it might now\n> fail instead, but someone might want that behavior. Otherwise, they'd\n> instead find the failure at a later, maybe less convenient, time.\n>\n\nI think, additionally, we should check that the new owner of the\nsubscription is not a superuser; otherwise, this parameter is\nignored anyway. Please find attached a patch to add this check.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 11 Apr 2023 15:23:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Apr 11, 2023 at 5:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think, additionally, we should check that the new owner of the\n> subscription is not a superuser; otherwise, this parameter is\n> ignored anyway. Please find attached a patch to add this check.\n\nI don't see why we should check that. It makes this different from all
It makes this different from all\nthe other cases and I don't see any benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Apr 2023 10:51:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Apr 11, 2023 at 8:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 11, 2023 at 5:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think additionally, we should check that the new owner of the\n> > subscription is not a superuser, otherwise, anyway, this parameter is\n> > ignored. Please find the attached to add this check.\n>\n> I don't see why we should check that. It makes this different from all\n> the other cases and I don't see any benefit.\n>\n\nI thought it would be better if we don't restart the worker unless it\nis required. In case, the subscription's owner is a superuser, the\n'password_required' is ignored, so why restart the apply worker when\nsomebody changes it in such a case? I understand that there may not be\na need to change the 'password_required' option when the\nsubscription's owner is the superuser but one may first choose to\nchange the password_required flag and then the owner of a subscription\nto a non-superuser. Anyway, I don't think as such there is any problem\nwith restarting the worker even when the subscription owner is a\nsuperuser, so adjusted the check accordingly.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 12 Apr 2023 08:26:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Apr 11, 2023 at 10:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Anyway, I don't think as such there is any problem\n> with restarting the worker even when the subscription owner is a\n> superuser, so adjusted the check accordingly.\n\nLGTM. 
I realize we could do more sophisticated things here, but I\nthink it's better to keep the code simple.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Apr 2023 08:19:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On 3/30/23 05:58, Robert Haas wrote:\n> On Fri, Mar 24, 2023 at 5:47 PM Jacob Champion <jchampion@timescale.com> wrote:\n>> Okay, but this is walking back from the network example you just\n>> described upthread. Do you still consider that in scope, or...?\n> \n> Sorry, I don't know which example you mean.\n\nThe symmetrical proxy situation you described, where all the proxies are\nmutually trusting. While it's easier to secure that setup than the\nasymmetrical ones, it's also not a localhost-only situation anymore, and\nthe moment you open up to other machines is where I think your\ncharacterization runs into trouble.\n\n> I guess I wouldn't have a problem blaming the DBA here, but you seem\n> to be telling me that the security literature has settled on another\n> kind of approach, and I'm not in a position to dispute that. It still\n> feels weird to me, though.\n\nIf it helps, [1] is a paper that helped me wrap my head around some of\nit. It's focused on capability systems and an academic audience, but the\n\"Avoiding Confused Deputy Problems\" section starting on page 11 is a\ngood place to jump to for the purposes of this discussion.\n\n--Jacob\n\n[1] https://srl.cs.jhu.edu/pubs/SRL2003-02.pdf\n\n\n", "msg_date": "Wed, 12 Apr 2023 11:23:01 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On 3/30/23 11:13, Stephen Frost wrote:\n>> Okay, but this is walking back from the network example you just\n>> described upthread. 
Do you still consider that in scope, or...?\n> \n> The concern around the network certainly needs to be in-scope overall.\n\nSounds good!\n\n> Who are we trusting with what? In particular, I'd argue that the user\n> who is able to install the postgres_fdw extension and the user who is\n> able to issue the CREATE SERVER are largely trusted; at least in so far\n> as the user doing CREATE SERVER is allowed to create the server and\n> through that allowed to make outbound connections from the Proxy.\n> \n> Therefore, the Proxy is configured with postgres_fdw and with a trusted\n> user performing the CREATE SERVER.\n> \n> What doesn't this handle today? Connection side-effects are one\n> problem- once the CREATE SERVER is done, any user with USAGE rights on\n> the server can create a USER MAPPING for themselves, either with a\n> password or without one (if they're able to proxy GSS credentials to the\n> system). They aren't able to set password_required though, which\n> defaults to true. However, without having require_auth set, they're\n> able to cause the Proxy to reach an authentication stage with the Target\n> that might not match what credentials they're supposed to be providing.\n> \n> We attempt to address this by checking post-auth to Target that we used\n> the credentials to connect that we expected to- if GSS credentials were\n> proxied, then we expect to use those. If a password was provided then\n> we expect to use a password to auth (only checked after we see if GSS\n> credentials were proxied and used). The issue here is 'post-auth' bit,\n> we'd prefer to fail the connection pre-auth if it isn't what we're\n> expecting.\n\nRight. Keep in mind that require_auth is post-auth, though; it can't fix\nthat issue, so it doesn't fix any connection side-effect problems at all.\n\n> Should we then explicit set require_auth=gss when GSS\n> credentials are proxied? Also, if a password is provided, then\n> explicitly set require_auth=scram-sha-256? 
Or default to these, at\n> least, and allow the CREATE SERVER user to override our choices? Or\n> should it be a USER MAPPING option that's restricted? Or not?\n\nIMO, yes -- whatever credentials the proxy is forwarding from the user,\nthe proxy should be checking that the server has actually used them. The\nperson with the ability to create a USER MAPPING should probably not\nhave the ability to override that check.\n\n>>> I think that what you're proposing is that B and C can just be allowed\n>>> to proxy to A and A can say \"hey, by the way, I'm just gonna let you\n>>> in without asking for anything else\" and B and C can, when proxying,\n>>> react to that by disconnecting before the connection actually goes\n>>> through. That's simpler, in a sense. It doesn't require us to set up\n>>> the proxy configuration on B and C in a way that matches what\n>>> pg_hba.conf allows on A. Instead, B and C can automatically deduce\n>>> what connections they should refuse to proxy.\n>>\n>> Right. It's meant to take the \"localhost/wraparound connection\" out of a\n>> class of special things we have to worry about, and make it completely\n>> boring.\n> \n> Again, trying to get at a more concrete example- the concern here is a\n> user with CREATE SERVER ability could leverage that access to become a\n> superuser if the system is configured with 'peer' access, right?\n\nOr 'trust localhost', or 'ident [postgres user]', yes.\n\n> A\n> non-superuser is already prevented from being able to set\n> \"password_required=false\", perhaps we shouldn't allow them to set\n> \"require_auth=none\" (or have that effect) either?\n\nI think that sounds reasonable.\n\n> Perhaps the system\n> should simply forcibly set require_auth based on the credentials\n> provided in the USER MAPPING or on the connection and have require_auth\n> otherwise restricted to superuser (who could override it if they'd\n> really like to)? 
Perhaps if password_required=false we implicitly\n> un-set require_auth, to avoid having to make superusers change their\n> existing configurations where they've clearly already accepted that\n> credential-less connections are allowed.\n\nMm, I think I like the first idea better. If you've set a password,\nwouldn't you like to know if the server ignored it? If password_required\nis false, *and* you don't have a password, then we can drop require_auth\nwithout issue.\n\n> Automatically setting require_auth and restricting the ability of it to\n> be set on user mappings to superusers doesn't strike me as terribly\n> difficult to do and seems like it'd prevent this concern.\n> \n> Just to make sure I'm following- Robert's up-thread suggestion of an\n> 'outbound pg_hba' would be an additional restriction when it comes to\n> what a user who can use CREATE SERVER is allowed to do?\n\nYes. That can provide additional safety in the case where you really\nneed to take the require_auth checks away for whatever reason. I think\nit's just a good in-depth measure, and if we don't extend the protocol\nin some way to do a pre-auth check, it's also the way for the DBA to\nbless known-good connection paths.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 12 Apr 2023 11:24:33 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw, dblink, and CREATE SUBSCRIPTION security" }, { "msg_contents": "On Wed, Apr 12, 2023 at 5:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 11, 2023 at 10:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Anyway, I don't think as such there is any problem\n> > with restarting the worker even when the subscription owner is a\n> > superuser, so adjusted the check accordingly.\n>\n> LGTM.\n>\n\nThanks. 
I am away for a few days so can push it only next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Apr 2023 08:02:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Apr 13, 2023 at 8:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 12, 2023 at 5:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Apr 11, 2023 at 10:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Anyway, I don't think as such there is any problem\n> > > with restarting the worker even when the subscription owner is a\n> > > superuser, so adjusted the check accordingly.\n> >\n> > LGTM.\n> >\n>\n> Thanks. I am away for a few days so can push it only next week.\n>\n\nPushed. I noticed that we didn't display this new subscription option\n'password_required' in \\dRs+:\n\npostgres=# \\dRs+\n\n List of subscriptions\n Name | Owner | Enabled | Publication | Binary | Streaming |\nTwo-phase commit | Disable on error | Origin | Run as Owner? |\nSynchronous commit | Conninfo | Skip LSN\n\nIs that intentional? Sorry, if it was discussed previously because I\nhaven't followed this discussion in detail.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Apr 2023 10:38:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Apr 20, 2023 at 1:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Pushed. I noticed that we didn't display this new subscription option\n> 'password_required' in \\dRs+:\n>\n> postgres=# \\dRs+\n>\n> List of subscriptions\n> Name | Owner | Enabled | Publication | Binary | Streaming |\n> Two-phase commit | Disable on error | Origin | Run as Owner? |\n> Synchronous commit | Conninfo | Skip LSN\n>\n> Is that intentional? 
Sorry, if it was discussed previously because I\n> haven't followed this discussion in detail.\n\nNo, I don't think that's intentional. I just didn't think about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Apr 2023 16:18:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, 21 Apr 2023 at 01:49, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 20, 2023 at 1:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Pushed. I noticed that we didn't display this new subscription option\n> > 'password_required' in \\dRs+:\n> >\n> > postgres=# \\dRs+\n> >\n> > List of subscriptions\n> > Name | Owner | Enabled | Publication | Binary | Streaming |\n> > Two-phase commit | Disable on error | Origin | Run as Owner? |\n> > Synchronous commit | Conninfo | Skip LSN\n> >\n> > Is that intentional? Sorry, if it was discussed previously because I\n> > haven't followed this discussion in detail.\n>\n> No, I don't think that's intentional. I just didn't think about it.\n\nHere is a patch to display Password required with \\dRs+ command. Also\nadded one test to describe subscription when password_required is\nfalse, as all the existing tests were there only for password_required\nas true.\n\nRegards,\nVignesh", "msg_date": "Fri, 21 Apr 2023 12:29:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Apr 21, 2023 at 12:30 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 21 Apr 2023 at 01:49, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Apr 20, 2023 at 1:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Pushed. 
I noticed that we didn't display this new subscription option\n> > > 'password_required' in \\dRs+:\n> > >\n> > > postgres=# \\dRs+\n> > >\n> > > List of subscriptions\n> > > Name | Owner | Enabled | Publication | Binary | Streaming |\n> > > Two-phase commit | Disable on error | Origin | Run as Owner? |\n> > > Synchronous commit | Conninfo | Skip LSN\n> > >\n> > > Is that intentional? Sorry, if it was discussed previously because I\n> > > haven't followed this discussion in detail.\n> >\n> > No, I don't think that's intentional. I just didn't think about it.\n>\n> Here is a patch to display Password required with \\dRs+ command. Also\n> added one test to describe subscription when password_required is\n> false, as all the existing tests were there only for password_required\n> as true.\n>\n\nLGTM. Let's see if Robert or others have any comments, otherwise, I'll\npush this early next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 Apr 2023 17:48:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Apr 21, 2023 at 8:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> LGTM. Let's see if Robert or others have any comments, otherwise, I'll\n> push this early next week.\n\nLGTM too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 08:51:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Fri, Apr 21, 2023 at 6:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 21, 2023 at 8:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > LGTM. 
Let's see if Robert or others have any comments, otherwise, I'll\n> > push this early next week.\n>\n> LGTM too.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Apr 2023 11:28:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tuesday, April 4, 2023 1:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> \r\n> On Sat, Apr 1, 2023 at 12:00 PM Alexander Lakhin <exclusion@gmail.com>\r\n> wrote:\r\n> > I've managed to reproduce it using the following script:\r\n> > for ((i=1;i<=10;i++)); do\r\n> > echo \"iteration $i\"\r\n> > echo \"\r\n> > CREATE ROLE sub_user;\r\n> > CREATE SUBSCRIPTION testsub CONNECTION 'dbname=db'\r\n> > PUBLICATION testpub WITH (connect = false); ALTER SUBSCRIPTION\r\n> > testsub ENABLE; DROP SUBSCRIPTION testsub; SELECT pg_sleep(0.001);\r\n> > DROP ROLE sub_user; \" | psql psql -c \"ALTER SUBSCRIPTION testsub\r\n> > DISABLE;\"\r\n> > psql -c \"ALTER SUBSCRIPTION testsub SET (slot_name = NONE);\"\r\n> > psql -c \"DROP SUBSCRIPTION testsub;\"\r\n> > grep 'TRAP' server.log && break\r\n> > done\r\n> \r\n> After a bit of experimentation this repro worked for me -- I needed\r\n> -DRELCACHE_FORCE_RELEASE as well, and a bigger iteration count. I verified\r\n> that the patch fixed it, and committed the patch with the addition of a\r\n> comment.\r\n\r\nThanks for pushing!\r\n\r\nWhile testing this, I found a similar problem in table sync worker,\r\nas we also invoke superuser_arg() in table sync worker which is not in a\r\ntransaction.\r\n\r\nLogicalRepSyncTableStart\r\n...\r\n\t/* Is the use of a password mandatory? 
*/\r\n\tmust_use_password = MySubscription->passwordrequired &&\r\n\t\t!superuser_arg(MySubscription->owner);\r\n\r\n#0  0x00007f18bb55aaff in raise () from /lib64/libc.so.6\r\n#1  0x00007f18bb52dea5 in abort () from /lib64/libc.so.6\r\n#2  0x0000000000b69a22 in ExceptionalCondition (conditionName=0xda4338 \"IsTransactionState()\", fileName=0xda403e \"catcache.c\", lineNumber=1208) at assert.c:66\r\n#3  0x0000000000b4842a in SearchCatCacheInternal (cache=0x27cab80, nkeys=1, v1=10, v2=0, v3=0, v4=0) at catcache.c:1208\r\n#4  0x0000000000b48329 in SearchCatCache1 (cache=0x27cab80, v1=10) at catcache.c:1162\r\n#5  0x0000000000b630c7 in SearchSysCache1 (cacheId=11, key1=10) at syscache.c:825\r\n#6  0x0000000000b982e3 in superuser_arg (roleid=10) at superuser.c:70\r\n\r\nI can reproduce this via gdb following similar steps in [1].\r\n\r\nI think we need to move this call into a transaction as well and here is an attempt\r\nto do that.\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716E596E4FB83DE46F592FE948C9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 12 May 2023 09:58:31 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Non-superuser subscription owners" }, { "msg_contents": "On Fri, May 12, 2023 at 3:28 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> I can reproduce this via gdb following similar steps in [1].\n>\n> I think we need to move this call into a transaction as well and here is an attempt\n> to do that.\n>\n\nI am able to reproduce this issue following the steps mentioned by you\nand the proposed patch to fix the issue looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Jun 2023 14:25:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Jun 13, 2023 at 2:25 PM Amit 
Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 3:28 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> >\n> > I can reproduce this via gdb following similar steps in [1].\n> >\n> > I think we need to move this call into a transaction as well and here is an attempt\n> > to do that.\n> >\n>\n> I am able to reproduce this issue following the steps mentioned by you\n> and the proposed patch to fix the issue looks good to me.\n>\n\nI'll push this tomorrow unless there are any suggestions or comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Jun 2023 14:29:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Tue, Jun 13, 2023 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 3:28 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> >\n> > I can reproduce this via gdb following similar steps in [1].\n> >\n> > I think we need to move this call into a transaction as well and here is an attempt\n> > to do that.\n> >\n>\n> I am able to reproduce this issue following the steps mentioned by you\n> and the proposed patch to fix the issue looks good to me.\n>\n\nToday, again looking at the patch, it seems to me that it would be\nbetter if we can fix this without starting a new transaction. Won't it\nbe better if we move this syscall to a place where we are fetching\nrelstate (GetSubscriptionRelState()) a few lines above? 
I understand\nby doing that in some cases like when copy_data = false, we may do\nthis syscall unnecessarily but OTOH, starting a new transaction just\nfor a syscall (superuser_arg()) also doesn't seem like a good idea to\nme.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Jun 2023 07:41:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Wednesday, June 14, 2023 10:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Jun 13, 2023 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, May 12, 2023 at 3:28 PM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > >\r\n> > > I can reproduce this via gdb following similar steps in [1].\r\n> > >\r\n> > > I think we need to move this call into a transaction as well and\r\n> > > here is an attempt to do that.\r\n> > >\r\n> >\r\n> > I am able to reproduce this issue following the steps mentioned by you\r\n> > and the proposed patch to fix the issue looks good to me.\r\n> >\r\n> \r\n> Today, again looking at the patch, it seems to me that it would be better if we\r\n> can fix this without starting a new transaction. Won't it be better if we move this\r\n> syscall to a place where we are fetching relstate (GetSubscriptionRelState()) a\r\n> few lines above? 
I understand by doing that in some cases like when copy_data\r\n> = false, we may do this syscall unnecessarily but OTOH, starting a new\r\n> transaction just for a syscall (superuser_arg()) also doesn't seem like a good\r\n> idea to me.\r\n\r\nMakes sense to me, here is the updated patch which does the same.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 14 Jun 2023 03:53:54 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Non-superuser subscription owners" }, { "msg_contents": "On 2023-Jun-13, Amit Kapila wrote:\n\n> I'll push this tomorrow unless there are any suggestions or comments.\n\nNote the proposed commit message is wrong about which commit is to blame\nfor the original problem -- it mentions e7e7da2f8d57 twice, but one of\nthem is actually c3afe8cf5a1e.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 15 Jun 2023 19:48:44 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Thu, Jun 15, 2023 at 11:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jun-13, Amit Kapila wrote:\n>\n> > I'll push this tomorrow unless there are any suggestions or comments.\n>\n> Note the proposed commit message is wrong about which commit is to blame\n> for the original problem -- it mentions e7e7da2f8d57 twice, but one of\n> them is actually c3afe8cf5a1e.\n>\n\nRight, I also noticed this and changed it before pushing, See\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b5c517379a40fa1af84c0852aa3730a5875a6482\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Jun 2023 07:37:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-superuser subscription owners" }, { "msg_contents": "On Mon, Feb 27, 2023 at 7:37 PM Jeff Davis <pgsql@j-davis.com> 
wrote:\n> On Mon, 2023-02-27 at 16:13 -0500, Robert Haas wrote:\n> > On Mon, Feb 27, 2023 at 1:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > > I think you are saying that we should still run Alice's code with\n> > > the\n> > > privileges of Bob, but somehow make that safe(r) for Bob. Is that\n> > > right?\n> >\n> > Yeah. That's the idea I was floating, at least.\n>\n> Isn't that a hard problem; maybe impossible?\n\nI want to flesh out the ideas I previously articulated in this area a bit more.\n\nAs a refresher, the scenario I'm talking about is any one in which one\nuser, who I'll call Bob, does something that results in executing code\nprovided by another user, who I'll call Alice. The most obvious way\nthat this can happen is if Bob performs some operation that targets a\ntable owned by Alice. That operation might be DML, like an INSERT or\nUPDATE; or it might be some other kind of maintenance command that can\ncause code execution, like REINDEX, which can evaluate index\nexpressions. The code being executed might be run either as Alice or\nas Bob, depending on how it's been attached to the table and what\noperation is being performed and maybe whether some function or\nprocedure that might contain it is SECURITY INVOKER or SECURITY\nDEFINER. Regardless of the details, our concern is that Alice's code\nmight do something that Bob does not like. This is a particularly\nlively concern if the code happens to be running with the privileges\nof Bob, because then Alice might try to do something like access\nobjects for which Bob has permissions and Alice does not. But the\nproblems don't completely go away if the code is being run as Alice,\nbecause even then, Alice could try to manipulate the session state in\nsome way that will cause Bob to hose himself later on. 
The existing\nSECURITY_RESTRICTED_OPERATION flag defends against some scenarios of\nthis type, but at present we also rely heavily on Bob being *very*\ncareful, as Jeff has highlighted rather compellingly.\n\nI think we can do better, both in the case where Bob is running code\nprovided by Alice using his own permissions, and also in the case\nwhere Bob is running code provided by Alice using Alice's permissions.\nTo that end, I'd like to define a few terms. First, let's define the\nprovider of a piece of code as either (a) the owner of the function or\nprocedure that contains it or (b) the owner of the object to which\nit's directly attached or (c) the session user, for code directly\nentered at top level. For example, if Alice owns a table T1 and\napplies a default expression which uses a function provided by\nCharlie, and Bob then inserts into T1, then Bob provides the insert\nstatement, Alice provides the default expression, and Charlie provides\nthe code inside the function. I assert that in every context where\nPostgreSQL evaluates expressions or runs SQL statements, there's a\nwell-defined provider for the expression or statement, and we can make\nthe system track it if we want to. Second, I'd like to define trust.\nUsers trust themselves, and they also trust users who have a superset\nof their permissions, a category that most typically just includes\nsuperusers but could include others if role grants are in use. A user\ncan also declare through some mechanism or other that they trust\nanother user even if that other user does not have a superset of their\npermissions. Such a declaration carries the risk that the trusted user\ncould hijack the trusting user's permissions; we would document and\ndisclaim this risk.\n\nFinally, let's define sandboxing. When code is sandboxed, the set of\noperations that it is allowed to perform is restricted. 
Sandboxing\nisn't necessarily all or nothing; there can be different categories of\noperations and we can allow some and deny others, if we wish.\nObviously this is quite a bit of work to implement, but I don't think\nit's unmanageable. YMMV. To keep things simple for purposes of\ndiscussion, I'm going to just define two levels of sandboxing for the\nmoment; I think we might want more. If code is fully sandboxed, it can\nonly do the following things:\n\n1. Compute stuff. There's no restriction on the permissible amount of\ncompute; if you call untrusted code, nothing prevents it from running\nforever.\n2. Call other code. This may be done by a function call or a command\nsuch as CALL or DO, all subject to the usual permissions checks but no\nfurther restrictions.\n3. Access the current session state, without modifying it. For\nexample, executing SHOW or current_setting() is fine.\n4. Transiently modify the current session state in ways that are\nnecessarily reversed before returning to the caller. For example, an\nEXCEPTION block or a configuration change driven by proconfig is fine.\n5. Produce messages at any log level. This includes any kind of ERROR.\n\nFully sandboxed code can't access or modify data beyond what gets\npassed to it, with the exception of the session state mentioned above.\nThis includes data inside of PostgreSQL, like tables or statistics, as\nwell as data outside of PostgreSQL, like files that it might try to\nread by calling pg_read_file(). If it tries, an ERROR occurs.\n\nPartially sandboxed code is much less restricted. Partially sandboxed\ncode can do almost anything that unsandboxed code can do, but with one\nimportant exception: it can't modify the session state. This means it\ncan't run commands like CLOSE, DEALLOCATE, DECLARE, DISCARD, EXECUTE,\nFETCH, LISTEN, MOVE, PREPARE, or UNLISTEN. Nor can it try to COMMIT or\nROLLBACK the current transaction or set up a SAVEPOINT or ROLLBACK TO\nSAVEPOINT. 
Nor can it use SET or set_config() to change a parameter\nvalue.\n\nWith those definitions in hand, I think it's possible to propose a\nmeaningful security model:\n\nRule #1: If the current user does not trust the provider, the code is\nfully sandboxed.\nRule #2: If the session user does not trust the provider either of the\ncurrently-running code or of any other code that's still on the call\nstack, the code is partially sandboxed.\n\nLet's take a few examples. First, suppose Alice has a table and it has\nsome associated code for which the provider is always Alice. That is,\nshe may have default expressions or index expressions for which she is\nnecessarily the provider, and she may have triggers, but in this\nexample she owns the functions or procedures called by those triggers\nand is thus the provider for those as well. Now, Bob, who does not\ntrust Alice, does something to Alice's table. The code might run as\nBob (by default) and then it will be fully sandboxed because of rule\n#1. Or there might be a SECURITY DEFINER function or procedure\ninvolved causing the code to run as Alice, in which case the code will\nbe partially sandboxed because of rule #2. I argue that Bob is pretty\nsafe here. Alice can't make any durable changes to Bob's session state\nno matter what she does, and if she provides code that runs as Bob it\ncan only do innocuous things like calculating x+y or x || y or running\ngenerate_series() or examining current_role. Yes, it could go into a\nloop, but that doesn't compromise Bob's account: he can hit ^C or set\nstatement_timeout. If she provides code that runs as herself it can\nmake use of her privileges (but not Bob's) as long as it doesn't try\nto touch the session state. So Bob is pretty safe.\n\nNow, suppose instead that Bob has a table but some code that is\nattached to it can call a function that is owned by Alice. In this\ncase, as long as everything on the call stack is provided by Bob,\nthere are no restrictions. 
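The two rules above can be sketched as a small predicate over the call stack. This is a hypothetical illustration of the proposed model only, not PostgreSQL code: `Frame`, `trusts`, the "superuser" shorthand, and the level names are all invented here.

```python
from dataclasses import dataclass

FULL = "fully sandboxed"          # rule #1 applies
PARTIAL = "partially sandboxed"   # rule #2 applies
UNRESTRICTED = "unrestricted"

@dataclass
class Frame:
    provider: str  # who provided this code: function owner, table owner, or session user

def trusts(user: str, provider: str, grants=frozenset()) -> bool:
    # Users trust themselves, anyone with a superset of their permissions
    # (modeled here as the literal name "superuser"), and anyone they have
    # explicitly declared trust in.
    return provider == user or provider == "superuser" or (user, provider) in grants

def sandbox_level(current_user: str, session_user: str, stack, grants=frozenset()) -> str:
    # Rule #1: the current user must trust the provider of the running code.
    if not trusts(current_user, stack[-1].provider, grants):
        return FULL
    # Rule #2: the session user must trust the providers of everything
    # still on the call stack, including the running code.
    if not all(trusts(session_user, f.provider, grants) for f in stack):
        return PARTIAL
    return UNRESTRICTED

# Everything on the stack is provided by Bob himself: no restrictions.
assert sandbox_level("bob", "bob", [Frame("bob")]) == UNRESTRICTED
# Bob's statement reaches a SECURITY INVOKER function owned by Alice:
# Bob, the current user, does not trust Alice, so rule #1 fully sandboxes it.
assert sandbox_level("bob", "bob", [Frame("bob"), Frame("alice")]) == FULL
# The same function as SECURITY DEFINER runs as Alice, who trusts herself,
# but the session user Bob still distrusts a provider on the stack: rule #2.
assert sandbox_level("alice", "bob", [Frame("bob"), Frame("alice")]) == PARTIAL
# If Bob has explicitly declared trust in Alice, no sandboxing applies.
assert sandbox_level("bob", "bob", [Frame("bob"), Frame("alice")],
                     grants={("bob", "alice")}) == UNRESTRICTED
```

The third assertion mirrors the SECURITY DEFINER case in the first example: the code passes rule #1 because Alice trusts herself, but rule #2 still partially sandboxes it because the session user, Bob, does not trust Alice.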
But as soon as we enter Alice's function,\nthe code is fully sandboxed unless it arranges to switch to Alice's\npermissions using SECURITY DEFINER, in which case it's still partially\nsandboxed. Again, it's hard to see how Alice can get any leverage\nhere.\n\nFinally, suppose Alice has a table and attaches a trigger to it that\ncalls a function provided by Charlie. Bob now does something to this\ntable that results in the execution of this trigger. If the current\nuser -- which will be either Alice or Bob depending on whether the\nfunction is SECURITY DEFINER -- does not trust Charlie, the code\ninside the trigger is going to run fully sandboxed because of rule #1.\nBut even if the current user does trust Charlie, the code inside the\ntrigger is still going to be partially sandboxed unless Bob trusts\nBOTH Alice AND Charlie because of rule #2. This seems appropriate,\nbecause in this situation, either Alice or Charlie could be trying to\nfool Bob into taking some action he doesn't intend to take by\ntinkering with his session.\n\nIn general if we have a great big call stack that involves calling a\nwhole bunch of functions either as SECURITY INVOKER or as SECURITY\nDEFINER, changing the session state is blocked unless the session user\ntrusts the owners of all of those functions. And if we got to any of\nthose functions by means of code attached directly to tables, like an\nindex expression or default expression, changing the session state is\nblocked unless the session user also trusts the owners of those\ntables.\n\nI see a few obvious objections to this line of attack that someone\nmight raise, and I'd like to address them now. First, somebody might\nargue that this is too hard to implement. I don't think so, because a\nlot of things can be blocked from common locations. However, it will\nbe necessary to go through all the functions we ship and add checks in\na bunch of places to individual functions. 
That's a pain, but it's not\nthat different from what we've already done with PARALLEL { SAFE |\nRESTRICTED | UNSAFE } or LEAKPROOF. Those weren't terribly enjoyable\nexercises for me and I made some mistakes categorizing some things,\nbut the job got done and those mechanisms are accepted infrastructure\nnow. Second, somebody might argue that full sandboxing is such a\ndraconian set of restrictions that it will inconvenience users greatly\nor that it's pointless to even allow anything to be executed or\nsomething along those lines. I think that argument has some merit, but\nI think the restrictions sound worse than they actually are in\ncontext. For instance, have you ever written a default expression for\na column that would fail under full sandboxing? I wouldn't be\nsurprised if you have, but I also bet it's a fairly small percentage\nof cases. I think a lot of things that people want to do as a\npractical matter will be totally fine. I can think of exceptions, most\nobviously reading from a random-number generator that has a\nuser-controllable seed, which technically qualifies as tinkering with\nthe session state. But a lot of things are going to work fine, and the\nthings that do fall afoul of a mechanism like this probably deserve\nsome study and examination. If you're writing index expressions that\ndo anything more than simple calculation, it's probably fine for the\nsystem to raise an eyebrow about that. Even if they do something as\nsimple as reading from another table, that's not necessarily going to\ndump and restore properly, even if it's secure, because the table\nordering dependencies won't be clear to pg_dump.\n\nAnd that brings me to another point, which is that we might think of\nsandboxing some operations, either by default or unconditionally, for\nreasons other than trust or the lack of it. There's a lot of things\nthat you COULD do in an index expression that you really SHOULD NOT\ndo. 
As mentioned, even reading a table is pretty sketchy, but should a\nfunction called from an index expression ever be allowed to execute\nDDL? Is it reasonable if such a function wants to execute CREATE\nTABLE? Even a temporary table is dubious, and a non-temporary table is\nreally dubious. What if such a function wants to ALTER ROLE ...\nSUPERUSER? I think that's bonkers and should almost certainly be\ncategorically denied. Probably someone is trying to hack something,\nand even if they aren't, it's still nuts. So I would argue that in a\ncontext like an index expression, some amount of sandboxing -- not\nnecessarily corresponding to either of the levels described above --\nis probably a good idea, not based on the relationship between\nwhatever users are involved, but based rather on the context. There's\nroom for a lot of bikeshedding here and I don't think this kind of\nthing is necessarily the top priority, but I think it's worth thinking\nabout.\n\nFinally, I'd like to note that partial sandboxing can be viewed as a\nstrengthening of restrictions that we already have in the form of\nSECURITY_RESTRICTED_OPERATION. I can't claim to be an authority on the\nevolution of that flag, but I think that up to this point the general\nphilosophy has been to insert the smallest possible plug in the dike.\nWhen a definite security problem is discovered, somebody tries to\nblock just enough stuff to make it not demonstrably insecure. However,\nI feel that the surface area for things to go wrong is rather large,\nand we'd be better off with a more comprehensive series of\nrestrictions. We likely have some security issues that haven't been\nfound yet, and even something we wouldn't classify as a security\nvulnerability can still be a pitfall for the unwary. 
I imagine that\nSECURITY_RESTRICTED_OPERATION might end up getting subsumed into what\nI'm here calling partial sandboxing, but I'm not quite sure about that\nbecause right now this is just a theoretical description of a system,\nnot something for which I've written any code.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Aug 2023 11:25:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "sandboxing untrusted code" }, { "msg_contents": "On Thu, 2023-08-31 at 11:25 -0400, Robert Haas wrote:\n> As a refresher, the scenario I'm talking about is any one in which\n> one\n> user, who I'll call Bob, does something that results in executing\n> code\n> provided by another user, who I'll call Alice. The most obvious way\n> that this can happen is if Bob performs some operation that targets a\n> table owned by Alice. That operation might be DML, like an INSERT or\n> UPDATE; or it might be some other kind of maintenance command that\n> can\n> cause code execution, like REINDEX, which can evaluate index\n> expressions.\n\nREINDEX executes index expressions as the table owner. (You are correct\nthat INSERT executes index expressions as the inserting user.)\n\n> The code being executed might be run either as Alice or\n> as Bob, depending on how it's been attached to the table and what\n> operation is being performed and maybe whether some function or\n> procedure that might contain it is SECURITY INVOKER or SECURITY\n> DEFINER. Regardless of the details, our concern is that Alice's code\n> might do something that Bob does not like. This is a particularly\n> lively concern if the code happens to be running with the privileges\n> of Bob, because then Alice might try to do something like access\n> objects for which Bob has permissions and Alice does not.\n\nAgreed.\n\n\n> 1. Compute stuff. 
There's no restriction on the permissible amount of
> compute; if you call untrusted code, nothing prevents it from running
> forever.
> 2. Call other code. This may be done by a function call or a command
> such as CALL or DO, all subject to the usual permissions checks but
> no
> further restrictions.
> 3. Access the current session state, without modifying it. For
> example, executing SHOW or current_setting() is fine.
> 4. Transiently modify the current session state in ways that are
> necessarily reversed before returning to the caller. For example, an
> EXCEPTION block or a configuration change driven by proconfig is
> fine.
> 5. Produce messages at any log level. This includes any kind of
> ERROR.

Nothing in that list really exercises privileges (except #2?). If those
are the allowed set of things a sandboxed function can do, is a
sandboxed function equivalent to a function running with no privileges
at all?

Please explain #2 in a bit more detail. Whose EXECUTE privileges would
be used (I assume it depends on SECURITY DEFINER/INVOKER)? Would the
called code also be sandboxed?

> In general if we have a great big call stack that involves calling a
> whole bunch of functions either as SECURITY INVOKER or as SECURITY
> DEFINER, changing the session state is blocked unless the session
> user
> trusts the owners of all of those functions.

That clarifies the earlier mechanics you described, thank you.

> And if we got to any of
> those functions by means of code attached directly to tables, like an
> index expression or default expression, changing the session state is
> blocked unless the session user also trusts the owners of those
> tables.
> 
> I see a few obvious objections to this line of attack that someone
> might raise, and I'd like to address them now. 
First, somebody might\n> argue that this is too hard to implement.\n\nThat seems to be a response to my question above: \"Isn't that a hard\nproblem; maybe impossible?\".\n\nLet me qualify that: if the function is written by Alice, and the code\nis able to really exercise the privileges of the caller (Bob), then it\nseems really hard to make it safe for the caller.\n\nIf the function is sandboxed such that it's not really using Bob's\nprivileges (it's just nominally running as Bob) that's a much more\ntractable problem.\n\nI believe there's some nuance to your proposal where some of Bob's\nprivileges could be used safely, but I'm not clear on exactly which\nones. The difficulty of the implementation would depend on these\ndetails.\n\n> Second, somebody might argue that full sandboxing is such a\n> draconian set of restrictions that it will inconvenience users\n> greatly\n> or that it's pointless to even allow anything to be executed or\n> something along those lines. I think that argument has some merit,\n> but\n> I think the restrictions sound worse than they actually are in\n> context.\n\n+100. We should make typical cases easy to secure.\n\n> Even if they do something as\n> simple as reading from another table, that's not necessarily going to\n> dump and restore properly, even if it's secure, because the table\n> ordering dependencies won't be clear to pg_dump.\n\nA good point. A lot of these extraordinary cases are either incredibly\nfragile or already broken.\n\n> What if such a function wants to ALTER ROLE ...\n> SUPERUSER? 
I think that's bonkers and should almost certainly be\n> categorically denied.\n\n...also agreed, a lot of these extraordinary cases are really just\nsurface area for attack with no legitimate use case.\n\n\n\n\nOne complaint (not an objection, because I don't think we have\nthe luxury of objecting to viable proposals when it comes to improving\nour security model):\n\nAlthough your proposal sounds like a good security backstop, it feels\nlike it's missing the point that there are different _kinds_ of\nfunctions. We already have the IMMUTABLE marker and we already have\nruntime checks to make sure that immutable functions can't CREATE\nTABLE; why not build on that mechanism or create new markers?\n\nDeclarative markers are nice because they are easier to test: if Alice\nwrites a function and declares it as IMMUTABLE, she can test it before\neven using it in an index expression and it will fail whatever runtime\nprotections IMMUTABLE offers. If we instead base it on the session user\nand call stack, Alice wouldn't be able to test it effectively, only Bob\ncan test it.\n\nIn other words, there are some consistency aspects to how we run code\nthat go beyond pure security. A function author typically has\nassumptions about the execution context of a function (the user, the\nsandbox restrictions, the search_path, etc.) and guiding users towards\na consistent execution context in typical cases is a good thing.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 31 Aug 2023 17:57:25 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: sandboxing untrusted code" }, { "msg_contents": "On Thu, Aug 31, 2023 at 8:57 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > As a refresher, the scenario I'm talking about is any one in which\n> > one\n> > user, who I'll call Bob, does something that results in executing\n> > code\n> > provided by another user, who I'll call Alice. 
The most obvious way\n> > that this can happen is if Bob performs some operation that targets a\n> > table owned by Alice. That operation might be DML, like an INSERT or\n> > UPDATE; or it might be some other kind of maintenance command that\n> > can\n> > cause code execution, like REINDEX, which can evaluate index\n> > expressions.\n>\n> REINDEX executes index expressions as the table owner. (You are correct\n> that INSERT executes index expressions as the inserting user.)\n\nI was speaking here of who provided the code, rather than whose\ncredentials were used to execute it. The index expressions are\nprovided by the table owner no matter who evaluates them in a\nparticular case.\n\n> > 1. Compute stuff. There's no restriction on the permissible amount of\n> > compute; if you call untrusted code, nothing prevents it from running\n> > forever.\n> > 2. Call other code. This may be done by a function call or a command\n> > such as CALL or DO, all subject to the usual permissions checks but\n> > no\n> > further restrictions.\n> > 3. Access the current session state, without modifying it. For\n> > example, executing SHOW or current_setting() is fine.\n> > 4. Transiently modify the current session state in ways that are\n> > necessarily reversed before returning to the caller. For example, an\n> > EXCEPTION block or a configuration change driven by proconfig is\n> > fine.\n> > 5. Produce messages at any log level. This includes any kind of\n> > ERROR.\n>\n> Nothing in that list really exercises privileges (except #2?). If those\n> are the allowed set of things a sandboxed function can do, is a\n> sandboxed function equivalent to a function running with no privileges\n> at all?\n\nClose but not quite. As you say, #2 does exercise privileges. Also,\neven if no privileges are exercised, you could still refer to\nCURRENT_ROLE, and I think you could also call a function like\nhas_table_privilege. 
Your identity hasn't changed, but you're
restricted from exercising some of your privileges. Really, you still
have them, but they're just not available to you in that situation.

> Please explain #2 in a bit more detail. Whose EXECUTE privileges would
> be used (I assume it depends on SECURITY DEFINER/INVOKER)? Would the
> called code also be sandboxed?

Nothing in this proposed system has any impact on whose privileges are
used in any particular context, so any privilege checks conducted
pursuant to #2 are performed as the same user who would perform them
today. Whether the called code would be sandboxed depends on how the
rules I articulated in the previous email would apply to it. Since
those rules depend on the user IDs, if the called code is owned by the
same user as the calling code and is SECURITY INVOKER, then those
rules apply in the same way and the same level of sandboxing will
apply. But if the called function is owned by a different user or is
SECURITY DEFINER, then the rules might apply differently to the called
code than the calling code. It's possible this isn't quite good enough
and that some adjustments to the rules are necessary; I'm not sure.

> Let me qualify that: if the function is written by Alice, and the code
> is able to really exercise the privileges of the caller (Bob), then it
> seems really hard to make it safe for the caller.
>
> If the function is sandboxed such that it's not really using Bob's
> privileges (it's just nominally running as Bob) that's a much more
> tractable problem.

Agreed.

> One complaint (not an objection, because I don't think we have
> the luxury of objecting to viable proposals when it comes to improving
> our security model):
>
> Although your proposal sounds like a good security backstop, it feels
> like it's missing the point that there are different _kinds_ of
> functions. 
We already have the IMMUTABLE marker and we already have\n> runtime checks to make sure that immutable functions can't CREATE\n> TABLE; why not build on that mechanism or create new markers?\n\nI haven't ruled that out completely, but there's some subtlety here\nthat doesn't exist in those other cases. If the owner of a function\nmarks it wrongly in terms of volatility or parallel safety, then they\nmight make queries run more slowly than they should, or they might\nmake queries return wrong answers, or error out, or even end up with\nmessed-up indexes. But none of that threatens the stability of the\nsystem in any very deep way, or the security of the system. It's no\ndifferent than putting a CHECK (false) constraint on a table, or\nsomething like that: it might make the system not work, and if that\nhappens, then you can fix it. Here, however, we can't trust the owners\nof functions to label those functions accurately. It won't do for\nAlice to create a function and then apply the NICE_AND_SAFE marker to\nit. That defeats the whole point. We need to know the real behavior of\nAlice's function, not the behavior that Alice says it has.\n\nNow, in the case of a C function, things are a bit different. We can't\ninspect the generated machine code and know what the function does,\nbecause of that pesky halting problem. We could handle that either\nthrough function labeling, since only superusers can create C\nfunctions, or by putting checks directly in the C code. I was somewhat\ninclined toward the latter approach, but I'm not completely sure yet\nwhat makes sense. Thinking about your comments here made me realize\nthat there are other procedural languages to worry about, too, like\nPL/python or PL/perl or PL/sh. Whatever we do for the C functions will\nhave to be extended to those cases somehow as well. 
If we label\nfunctions, then we'll have to allow superusers only to label functions\nin these languages as well and make the default label \"this is\nunsafe.\" If we put checks in the C code then I guess any given PL\nneeds to certify that it knows about sandboxing or have all of its\nfunctions treated as unsafe. I think doing this at the C level would\nbe better, strictly speaking, because it's more granular. Imagine a\nfunction that only conditionally does some prohibited action - it can\nbe allowed to work in the cases where it does not attempt the\nprohibited operation, and blocked when it does. Labeling is\nall-or-nothing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Sep 2023 09:12:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sandboxing untrusted code" }, { "msg_contents": "On Fri, 2023-09-01 at 09:12 -0400, Robert Haas wrote:\n> Close but not quite. As you say, #2 does exercise privileges. Also,\n> even if no privileges are exercised, you could still refer to\n> CURRENT_ROLE, and I think you could also call a function like\n> has_table_privilege.  Your identity hasn't changed, but you're\n> restricted from exercising some of your privileges. Really, you still\n> have them, but they're just not available to you in that situation.\n\nWhich privileges are available in a sandboxed environment, exactly? Is\nit kind of like masking away all privileges except EXECUTE, or are\nother privileges available, like SELECT?\n\nAnd the distinction that you are drawing between having the privileges\nbut them (mostly) not being available, versus not having the privileges\nat all, is fairly subtle. Some examples showing why that distinction is\nimportant would be helpful.\n\n> \n> > Although your proposal sounds like a good security backstop, it\n> > feels\n> > like it's missing the point that there are different _kinds_ of\n> > functions. 
We already have the IMMUTABLE marker and we already have\n> > runtime checks to make sure that immutable functions can't CREATE\n> > TABLE; why not build on that mechanism or create new markers?\n\n...\n\n> Here, however, we can't trust the owners\n> of functions to label those functions accurately.\n\nOf course, but observe:\n\n =# CREATE FUNCTION f(i INT) RETURNS INT IMMUTABLE LANGUAGE plpgsql AS\n $$\n BEGIN\n CREATE TABLE x(t TEXT);\n RETURN 42 + i;\n END;\n $$;\n\n =# SELECT f(2);\n ERROR: CREATE TABLE is not allowed in a non-volatile function\n CONTEXT: SQL statement \"CREATE TABLE x(t TEXT)\"\n PL/pgSQL function f(integer) line 3 at SQL statement\n\nThe function f() is called at the top level, not as part of any index\nexpression or other special context. But it fails to CREATE TABLE\nsimply because that's not an allowed thing for an IMMUTABLE function to\ndo. That tells me right away that my function isn't going to work, and\nI can rewrite it rather than waiting for some other user to say that it\nfailed when run in a sandbox.\n\n> It won't do for\n> Alice to create a function and then apply the NICE_AND_SAFE marker to\n> it.\n\nYou can if you always execute NICE_AND_SAFE functions in a sandbox. The\ndifference is that it's always executed in a sandbox, rather than\nsometimes, so it will fail consistently.\n\n> Now, in the case of a C function, things are a bit different. We\n> can't\n> inspect the generated machine code and know what the function does,\n> because of that pesky halting problem. We could handle that either\n> through function labeling, since only superusers can create C\n> functions, or by putting checks directly in the C code. I was\n> somewhat\n> inclined toward the latter approach, but I'm not completely sure yet\n> what makes sense. Thinking about your comments here made me realize\n> that there are other procedural languages to worry about, too, like\n> PL/python or PL/perl or PL/sh. 
Whatever we do for the C functions\n> will\n> have to be extended to those cases somehow as well. If we label\n> functions, then we'll have to allow superusers only to label\n> functions\n> in these languages as well and make the default label \"this is\n> unsafe.\" If we put checks in the C code then I guess any given PL\n> needs to certify that it knows about sandboxing or have all of its\n> functions treated as unsafe. I think doing this at the C level would\n> be better, strictly speaking, because it's more granular. Imagine a\n> function that only conditionally does some prohibited action - it can\n> be allowed to work in the cases where it does not attempt the\n> prohibited operation, and blocked when it does. Labeling is\n> all-or-nothing.\n\nHere I'm getting a little lost in what you mean by \"prohibited\noperation\". Most languages mostly use SPI, and whatever sandboxing\nchecks you do should work there, too. Are you talking about completely\nseparate side effects like writing files or opening sockets?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 01 Sep 2023 14:27:07 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: sandboxing untrusted code" }, { "msg_contents": "On Fri, Sep 1, 2023 at 5:27 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Which privileges are available in a sandboxed environment, exactly? Is\n> it kind of like masking away all privileges except EXECUTE, or are\n> other privileges available, like SELECT?\n\nI think I've more or less answered this already -- fully sandboxed\ncode can't make reference to external data sources, from which it\nfollows that it can't exercise SELECT (and most other privileges).\n\n> And the distinction that you are drawing between having the privileges\n> but them (mostly) not being available, versus not having the privileges\n> at all, is fairly subtle. 
Some examples showing why that distinction is
> important would be helpful.

I view it like this: when Bob tries to insert or update or delete
Alice's table, and Alice has some code attached to it, Alice is
effectively asking Bob to execute that code with his own privileges.
In general, I think we can reasonably expect that Bob WILL be willing
to do this: if he didn't want to modify Alice's table, he
wouldn't have executed a DML statement against it, and executing the
code that Alice has attached to that table is a precondition of being
allowed to perform that modification. It's Alice's table and she gets
to set the rules. However, Bob is also allowed to protect himself. If
he's running Alice's code and it wants to do something with which Bob
isn't comfortable, he can change his mind and refuse to execute it
after all.

I always find it helpful to consider real world examples with similar
characteristics. Let's say that Bob is renting a VRBO from Alice.
Alice leaves behind, in the VRBO, a set of rules which Bob must follow
as a condition of being allowed to rent the VRBO. Those rules include
things that Bob must do at checkout time, like washing all of his
dishes. As a matter of routine, Bob will follow Alice's checkout
instructions. But if Alice includes in the checkout instructions
\"Leave your driver's license and social security card on the dining
room table after checkout, plus a record of all of your bank account
numbers,\" the security systems in Bob's brain should activate and
prevent those instructions from getting followed.

A major difference between that situation (a short term rental of
someone else's house) and the in-database case (a DML statement
against someone else's table) is that when Bob is following Alice's
VRBO checkout instructions, he knows exactly what actions he is
performing. 
When he executes a DML statement against Alice's table,\nBob the human being does not actually know what Alice's triggers or\nindex expressions or whatever are causing him to do. As I see it, the\npurpose of this system is to prevent Bob from doing things that he\ndidn't intend to do. He's cool with adding 2 and 2 or concatenating\nsome strings or whatever, but probably not with reading data and\nhanding it over to Alice, and definitely not handing all of his\nprivileges over to Alice. Full sandboxing has to block that kind of\nstuff, and it needs to do so precisely because *Bob would not allow\nthose operations if he knew about them*.\n\nNow, it is not going to be possible to get that perfectly right.\nPostgreSQL can not know the state of Bob's human mind, and it cannot\nbe expected to judge with perfect accuracy what actions Bob would or\nwould not approve. However, it can make some conservative guesses. If\nBob wants to override those guesses by saying \"I trust Alice, do\nwhatever she says\" that's fine. This system attempts to prevent Bob\nfrom accidentally giving away his permissions to an adversary who has\nburied malicious code in some unexpected place. But, unlike the\nregular permissions system, it is not there to prevent Bob from doing\nthings that he isn't allowed to do. It's there to prevent Bob from\ndoing things that he didn't intend to do.\n\nAnd that's where I see the distinction between *having* permissions\nand those permissions being *available* in a particular context. Bob\nhas permission to give Alice an extra $1000 or whatever if he has the\nmoney and wishes to do so. But those permissions are probably not\n*available* in the context where Bob is following a set of\ninstructions from Alice. If Bob's brain spontaneously generated the\nidea \"let's give Alice a $1000 tip because her vacation home was\nabsolutely amazing and I am quite rich,\" he would probably go right\nahead and act on that idea and that is completely fine. 
But when Bob\nencounters that same idea *on a list of instructions provided by\nAlice*, the same operation is blocked *because it came from Alice*. If\nthe list of instructions from Alice said to sweep the parlor, Bob\nwould just go ahead and do it. Alice has permission to induce Bob to\nsweep the parlor, but does not have permission to induce Bob to give\nher a bunch of extra money.\n\nAnd in the database context, I think it's fine if Alice induces Bob to\ncompute some values or look at the value of work_mem, but I don't\nthink it's OK if Alice induces Bob to make her a superuser. Unless Bob\ndeclares that he trusts Alice completely, in which case it's fine if\nshe does that.\n\n> Here I'm getting a little lost in what you mean by \"prohibited\n> operation\". Most languages mostly use SPI, and whatever sandboxing\n> checks you do should work there, too. Are you talking about completely\n> separate side effects like writing files or opening sockets?\n\nYeah.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Sep 2023 12:25:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sandboxing untrusted code" }, { "msg_contents": "On Tue, 2023-09-05 at 12:25 -0400, Robert Haas wrote:\n> I think I've more or less answered this already -- fully sandboxed\n> code can't make reference to external data sources, from which it\n> follows that it can't exercise SELECT (and most other privileges).\n\nBy what principle are we allowing EXECUTE but not SELECT? In theory, at\nleast, a function could hold secrets in the code, e.g.:\n\n CREATE FUNCTION answer_to_ultimate_question() RETURNS INT\n LANGUAGE plpgsql AS $$ BEGIN RETURN 42; END; $$;\n\nObviously that's a bad idea in plpgsql, because anyone can just read\npg_proc. 
And maybe C would be handled differently somehow, so maybe it\nall works.\n\nBut it feels like something is wrong there: it's fine to execute the\nanswer_to_ultimate_question() not because Bob has an EXECUTE privilege,\nbut because the sandbox renders any security concerns with *anyone*\nexecuting the function moot. So why bother checking the EXECUTE\nprivilege at all?\n\n> And that's where I see the distinction between *having* permissions\n> and those permissions being *available* in a particular context. Bob\n> has permission to give Alice an extra $1000 or whatever if he has the\n> money and wishes to do so. But those permissions are probably not\n> *available* in the context where Bob is following a set of\n> instructions from Alice. If Bob's brain spontaneously generated the\n> idea \"let's give Alice a $1000 tip because her vacation home was\n> absolutely amazing and I am quite rich,\" he would probably go right\n> ahead and act on that idea and that is completely fine. But when Bob\n> encounters that same idea *on a list of instructions provided by\n> Alice*, the same operation is blocked *because it came from Alice*.\n> If\n> the list of instructions from Alice said to sweep the parlor, Bob\n> would just go ahead and do it. Alice has permission to induce Bob to\n> sweep the parlor, but does not have permission to induce Bob to give\n> her a bunch of extra money.\n\nIn the real world example, sweeping the parlor has a (slight) cost to\nthe person doing it and it (slightly) matters who does it. 
In Postgres,\nwe don't do any CPU accounting per user, and it's all executed under\nthe same PID, so it really doesn't matter.\n\nSo it raises the question: why would we not simply say that this list\nof instructions should be executed by the person who wrote it, in which\ncase the existing privilege mechanism would work just fine?\n\n> And in the database context, I think it's fine if Alice induces Bob\n> to\n> compute some values or look at the value of work_mem, but I don't\n> think it's OK if Alice induces Bob to make her a superuser.\n\nIf all the code can do is compute some values or look at work_mem,\nperhaps the function needs no privileges at all (or some minimal\nprivileges)?\n\nYou explained conceptually where you're coming from, but I still don't\nsee much of a practical difference between having privileges but being\nin a context where they won't be used, and dropping the privileges\nentirely during that time. I suppose the answer is that the EXECUTE\nprivilege will still be used, but as I said above, that doesn't\nentirely make sense to me, either.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 05 Sep 2023 15:20:53 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: sandboxing untrusted code" } ]
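The distinction debated in the thread above — privileges Bob holds but that are merely unavailable in a sandboxed context, versus privileges he lacks outright — can be caricatured in a few lines of C. This is a toy model only, with invented names; it is not PostgreSQL code. The choice to leave only EXECUTE available under sandboxing mirrors the asymmetry being questioned:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model (not PostgreSQL code): Bob *has* a set of privileges, but in
 * a sandboxed context only some of them are *available*.  All names here
 * are invented for illustration.
 */
typedef uint32_t PrivSet;

#define PRIV_SELECT   (1u << 0)
#define PRIV_INSERT   (1u << 1)
#define PRIV_EXECUTE  (1u << 2)
#define PRIV_ALTER    (1u << 3)

/* In a full sandbox, only EXECUTE remains available. */
#define SANDBOX_MASK  PRIV_EXECUTE

static bool
priv_available(PrivSet held, bool sandboxed, PrivSet wanted)
{
	/* The held set never changes; the sandbox only masks it at check time. */
	PrivSet avail = sandboxed ? (held & SANDBOX_MASK) : held;

	return (avail & wanted) == wanted;
}
```

In this toy, EXECUTE survives sandboxing while SELECT does not, which is roughly the asymmetry Jeff is questioning: the held set is untouched, but most of it cannot be exercised.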
[ { "msg_contents": "Hi,\n\nI wonder if we really need signals to implement interrupts. Given\nthat they are non-preemptive/cooperative (work happens at the next\nCFI()), why not just use shared memory flags and latches? That skips\na bunch of code, global variables and scary warnings about programming\nin signal handlers.\n\nI sketched out some code to try that a few months back, while\nspeculating about bite-sized subproblems that would come up if each\nbackend is, one day, a thread.\n\nThere are several other conditions that are also handled by\nCHECK_FOR_INTERRUPTS(), but are not triggered by other backends\nsending signals, or are set by other signal handlers (SIGALRM,\nSIGQUIT). One idea is to convert those into \"procsignals\" too, for\nconsistency. In the attached, they can be set (ie by the same\nbackend) with ProcSignalRaise(), but it's possible that in future we\nmight have a reason for another backend to set them too, so it seems\nlike a good idea to have a single system, effectively merging the\nconcepts of \"procsignals\" and \"interrupts\".\n\nThere are still a few more ad hoc (non-ProcSignal) uses of SIGUSR1 in\nthe tree. For one thing, we don't allow the postmaster to set\nlatches; if we gave up on that rule, we wouldn't need the bgworker\nplease-signal-me thing. Also the logical replication launcher does\nthe same sort of thing for no apparent reason. Changed in the\nattached -- mainly so I could demonstrate that check-world passes with\nSIGUSR1 ignored.\n\nThe attached is only experiment grade code: in particular, I didn't\nquite untangle the recovery conflict flags properly. 
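As a toy illustration of the scheme sketched above — invented names, not the actual patch — the sender just sets a shared-memory flag and then the recipient's latch, and the recipient polls the flags at its next CHECK_FOR_INTERRUPTS():

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Toy sketch only (not the patch): an "interrupt" is a flag in shared
 * memory; no signal handler is involved on this path.  Names invented.
 */
typedef struct ToyProc
{
	atomic_bool pending[4];		/* one slot per toy interrupt type */
	atomic_bool latch_is_set;	/* stand-in for a real Latch */
} ToyProc;

enum { TOY_DIE, TOY_QUERY_CANCEL, TOY_BARRIER, TOY_LOG_MEMORY };

/* Sender side: set the flag, then wake the recipient if it's sleeping. */
static void
toy_send_interrupt(ToyProc *proc, int reason)
{
	atomic_store(&proc->pending[reason], true);
	atomic_store(&proc->latch_is_set, true);
}

/* Recipient side: what a CHECK_FOR_INTERRUPTS() might poll and clear. */
static bool
toy_consume_interrupt(ToyProc *proc, int reason)
{
	return atomic_exchange(&proc->pending[reason], false);
}
```

A real version would of course still need the latch machinery for sleeping (WaitLatch, and the memory barrier taken before going to sleep), but the delivery path itself involves no signals.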
It's also doing\nfunction calls where some kind of fast inlined magic is probably\nrequired, and I probably have a few other details wrong, but I figured\nit was good enough to demonstrate the concept.", "msg_date": "Thu, 21 Oct 2021 07:55:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Interrupts vs signals" }, { "msg_contents": "Hi,\n\nOn 2021-10-21 07:55:54 +1300, Thomas Munro wrote:\n> I wonder if we really need signals to implement interrupts. Given\n> that they are non-preemptive/cooperative (work happens at the next\n> CFI()), why not just use shared memory flags and latches? That skips\n> a bunch of code, global variables and scary warnings about programming\n> in signal handlers.\n\nDepending on how you implement them, one difference could be whether / when\n\"slow\" system calls (recv, poll, etc) are interrupted.\n\nAnother is that that signal handling provides a memory barrier in the\nreceiving process. For things that rarely change (like most interrupts), it\ncan be more efficient to move the cost of that out-of-line, instead of\nincurring them on every check.\n\n\nOne nice thing of putting the state variables into shared memory would be that\nthat'd allow to see the pending interrupts of other backends for debugging\npurposes.\n\n\n> One idea is to convert those into \"procsignals\" too, for\n> consistency. In the attached, they can be set (ie by the same\n> backend) with ProcSignalRaise(), but it's possible that in future we\n> might have a reason for another backend to set them too, so it seems\n> like a good idea to have a single system, effectively merging the\n> concepts of \"procsignals\" and \"interrupts\".\n\nThis seems a bit confusing to me. For one, we need to have interrupts working\nbefore we have a proc, IIRC. But leaving details like that aside, it just\nseems a bit backwards to me. 
I'm on board with other backends directly setting\ninterrupt flags, but it seems to me that the procsignal stuff should be\n\"client\" of the process-local interrupt infrastructure, rather than the other\nway round.\n\n\n> +bool\n> +ProcSignalAnyPending(void)\n> +{\n> +\tvolatile ProcSignalSlot *slot = MyProcSignalSlot;\n> \n> -\tif (CheckProcSignal(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN))\n> -\t\tRecoveryConflictInterrupt(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> +\t/* XXX make this static inline? */\n> +\t/* XXX point to a dummy entry instead of using NULL to avoid a branch */\n> +\treturn slot && slot->pss_signaled;\n> +}\n\nISTM it might be easier to make this stuff efficiently race-free if we made\nthis a count of pending operations.\n\n\n> @@ -3131,12 +3124,13 @@ ProcessInterrupts(void)\n> \t/* OK to accept any interrupts now? */\n> \tif (InterruptHoldoffCount != 0 || CritSectionCount != 0)\n> \t\treturn;\n> -\tInterruptPending = false;\n> +\tProcSignalClearAnyPending();\n> +\n> +\tpg_read_barrier();\n> \n> -\tif (ProcDiePending)\n> +\tif (ProcSignalConsume(PROCSIG_DIE))\n> \t{\n\nI think making all of these checks into function calls isn't great. How about\nmaking the set of pending signals a bitmask? That'd allow to efficiently check\na bunch of interrupts together and even where not, it'd just be a single test\nof the mask, likely already in a register.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Oct 2021 12:27:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Thu, Oct 21, 2021 at 8:27 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-10-21 07:55:54 +1300, Thomas Munro wrote:\n> > I wonder if we really need signals to implement interrupts. Given\n> > that they are non-preemptive/cooperative (work happens at the next\n> > CFI()), why not just use shared memory flags and latches? 
That skips\n> > a bunch of code, global variables and scary warnings about programming\n> > in signal handlers.\n>\n> Depending on how you implement them, one difference could be whether / when\n> \"slow\" system calls (recv, poll, etc) are interrupted.\n\nHopefully by now all such waits are implemented with latch.c facilities?\n\n> Another is that that signal handling provides a memory barrier in the\n> receiving process. For things that rarely change (like most interrupts), it\n> can be more efficient to move the cost of that out-of-line, instead of\n> incurring them on every check.\n\nAgreed, but in this experiment I was trying out the idea that a memory\nbarrier is not really needed at all, unless you're about to go to\nsleep. We already insert one of those before a latch wait. That is,\nif we see !set->latch->is_set, we do pg_memory_barrier() and check\nagain, before sleeping, so your next CFI must see the flag. For\ncomputation loops (sort, hash, query execution, ...), I speculate that\na relaxed read of memory is fine... you'll see the flag pretty soon,\nand you certainly won't be allowed to finish your computation and go\nto sleep.\n\n> One nice thing of putting the state variables into shared memory would be that\n> that'd allow to see the pending interrupts of other backends for debugging\n> purposes.\n\n+1\n\n> > One idea is to convert those into \"procsignals\" too, for\n> > consistency. In the attached, they can be set (ie by the same\n> > backend) with ProcSignalRaise(), but it's possible that in future we\n> > might have a reason for another backend to set them too, so it seems\n> > like a good idea to have a single system, effectively merging the\n> > concepts of \"procsignals\" and \"interrupts\".\n>\n> This seems a bit confusing to me. For one, we need to have interrupts working\n> before we have a proc, IIRC. But leaving details like that aside, it just\n> seems a bit backwards to me. 
I'm on board with other backends directly setting\n> interrupt flags, but it seems to me that the procsignal stuff should be\n> \"client\" of the process-local interrupt infrastructure, rather than the other\n> way round.\n\nHmm. Yeah, I see your point. But I can also think of some arguments\nfor merging the concepts of local and shared interrupts; see below.\n\nIn this new sketch, I tried doing it the other way around. That is,\ncompletely removing the concept of \"ProcSignal\", leaving only\n\"Interrupts\". Initially, MyPendingInterrupts points to something\nprivate, and once you're connected to shared memory it points to\nMyProc->pending_interrupts.\n\n> > +bool\n> > +ProcSignalAnyPending(void)\n> > +{\n> > + volatile ProcSignalSlot *slot = MyProcSignalSlot;\n> >\n> > - if (CheckProcSignal(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN))\n> > - RecoveryConflictInterrupt(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> > + /* XXX make this static inline? */\n> > + /* XXX point to a dummy entry instead of using NULL to avoid a branch */\n> > + return slot && slot->pss_signaled;\n> > +}\n>\n> ISTM it might be easier to make this stuff efficiently race-free if we made\n> this a count of pending operations.\n\nHmm, with a unified interrupt system and a bitmap it's not necessary\nto have a separate flag/counter at all.\n\n> > @@ -3131,12 +3124,13 @@ ProcessInterrupts(void)\n> > /* OK to accept any interrupts now? */\n> > if (InterruptHoldoffCount != 0 || CritSectionCount != 0)\n> > return;\n> > - InterruptPending = false;\n> > + ProcSignalClearAnyPending();\n> > +\n> > + pg_read_barrier();\n> >\n> > - if (ProcDiePending)\n> > + if (ProcSignalConsume(PROCSIG_DIE))\n> > {\n>\n> I think making all of these checks into function calls isn't great. How about\n> making the set of pending signals a bitmask? 
That'd allow to efficiently check\n> a bunch of interrupts together and even where not, it'd just be a single test\n> of the mask, likely already in a register.\n\n+1.\n\nSome assorted notes:\n\n1. Aside from doing interrupts in this new way, I also have the\npostmaster setting latches (!) instead of sending ad hoc SIGUSR1 here\nand there. My main reason for doing that was to be able to chase out\nall reasons to register a SIGUSR1 handler, so I could prove that\ncheck-world passes. I like it, though. But maybe it's really a\nseparate topic.\n\n2. I moved this stuff into interrupt.{h,c}. There is nothing left in\nprocsignal.c except code relating to ProcSignalBarrier. I guess that\nthing could use another name, anyway. It's a ...\nSystemInterruptBarrier?\n\n3. Child-level SIGINT and SIGTERM handlers probably aren't really\nnecessary, either: maybe the sender could do\nInterruptSend(INTERRUPT_{DIE,QUERY_CANCEL}, pgprocno) instead? But\nperhaps people are attached to being able to send those signals from\nexternal programs directly to backends.\n\n4. Like the above, a SIGALRM handler might need to do eg\nInterruptRaise(INTERRUPT_STATEMENT_TIMEOUT). That's a problem for\nsystems using spinlocks (self-deadlock against user context in\nInterruptRaise()), so I'd need to come up with some flag protocol for\ndinosaurs to make that safe, OR revert to having these \"local only\"\ninterrupts done with separate flags, as you were getting at earlier.\n\n5. The reason I prefer to put currently \"local only\" interrupts into\nthe same atomic system is that I speculate that ultimately all of the\nbackend-level signal handlers won't be needed. They all fall into\nthree categories: (1) could be replaced with these interrupts\ndirectly, (2) could be replaced by the new timer infrastructure that\nmultithreaded postgres would need to have to deliver interrupts to the\nright recipients, (3) are quickdie and can be handled at the\ncontaining process level. 
Then the only signal handlers left are top\nlevel external ones.\n\nBut perhaps you're right and I should try reintroducing separate local\ninterrupts for now. I dunno, I like the simplicity of the unified\nsystem; if only it weren't for those spinlock-backed atomics.", "msg_date": "Thu, 11 Nov 2021 18:27:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Thu, Nov 11, 2021 at 12:27 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Depending on how you implement them, one difference could be whether / when\n> > \"slow\" system calls (recv, poll, etc) are interrupted.\n>\n> Hopefully by now all such waits are implemented with latch.c facilities?\n\nDo read(), write(), etc. count? Because we certainly have raw calls to\nthose functions in lots of places.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Nov 2021 09:06:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "Hi,\n\nOn 2021-11-11 09:06:01 -0500, Robert Haas wrote:\n> On Thu, Nov 11, 2021 at 12:27 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Depending on how you implement them, one difference could be whether / when\n> > > \"slow\" system calls (recv, poll, etc) are interrupted.\n> >\n> > Hopefully by now all such waits are implemented with latch.c facilities?\n> \n> Do read(), write(), etc. count? Because we certainly have raw calls to\n> those functions in lots of places.\n\nThey can count, but only when used for network sockets or pipes (\"slow\ndevices\" or whatever the posix language is). Disk IO doesn't count as that. 
So\nI don't think it'd be a huge issue.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Nov 2021 11:50:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Thu, Nov 11, 2021 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> They can count, but only when used for network sockets or pipes (\"slow\n> devices\" or whatever the posix language is). Disk IO doesn't count as that. So\n> I don't think it'd be a huge issue.\n\nSomehow the idea that the network is a slow device and the disk a fast\none does not seem like it's necessarily accurate on modern hardware,\nbut I guess the spec is what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Nov 2021 15:24:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Fri, Nov 12, 2021 at 9:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Nov 11, 2021 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> > They can count, but only when used for network sockets or pipes (\"slow\n> > devices\" or whatever the posix language is). Disk IO doesn't count as that. So\n> > I don't think it'd be a huge issue.\n>\n> Somehow the idea that the network is a slow device and the disk a fast\n> one does not seem like it's necessarily accurate on modern hardware,\n> but I guess the spec is what it is.\n\n[Somehow I managed to reply to Robert only; let me try that again,\nthis time to the list...]\n\nNetwork filesystems have in the past been confusing because they're\nboth disk-like and network-like, and also slow as !@#$, which is why\nthere have been mount point options like \"intr\", \"nointr\" (now ignored\non Linux) to control what happens if you receive an async signal\nduring a sleepy read/write. 
But even if you had some kind of\nDeathstation 9000 that had a switch on the front panel that ignores\nSA_RESTART and produces EINTR for disk I/O when a signal arrives,\nPostgreSQL already doesn't work today. Our pread() and pwrite() paths\nfor data and WAL don't have EINTR retry loops or\nCHECK_FOR_INTERRUPTS() (we just can't take interrupts in the middle of\neg a synchronous write), so I think we'd produce an ERROR or PANIC.\nSo I think disk I/O is irrelevant, and network/pipe I/O is already\nhandled everywhere via latch.c facilities.\n\nIf there are any eg blocking reads on a socket in PostgreSQL, we\nshould fix that to use latch.c non-blocking techniques, because such a\nplace is already a place that ignores postmaster death and interrupts.\nTo be more precise: such a place could of course wake up for EINTR on\nSIGUSR1 from procsignal.c, and that would no longer happen with my\npatch, but if we're relying on that anywhere, it's dangerous and\nunreliable. If SIGUSR1 is delivered right before you enter a blocking\nread(), you'll sleep waiting for the socket or whatever. That's\nprecisely the problem that latch.c solves, and why it's already a bug\nif there are such places.\n\n\n",
    "msg_date": "Fri, 12 Nov 2021 09:57:38 +1300",
    "msg_from": "Thomas Munro <thomas.munro@gmail.com>",
    "msg_from_op": true,
    "msg_subject": "Re: Interrupts vs signals"
  },
  {
    "msg_contents": "Here's an updated version of this patch.\n\nThe main idea is that SendProcSignal(pid, PROCSIGNAL_XXX, procno)\nbecomes SendInterrupt(INTERRUPT_XXX, procno), and all the pending\ninterrupt global variables and pss_procsignalFlags[] go away, along\nwith the SIGUSR1 handler. The interrupts are compressed into a single\nbitmap. See commit message for more details.\n\nThe patch is failing on Windows CI for reasons I haven't debugged yet,\nbut I wanted to share what I have so far. 
Work in progress!\n\nHere is my attempt to survey the use of signals and write down what I\nthink we might do about them all so far, to give the context for this\npatch:\n\nhttps://wiki.postgresql.org/wiki/Signals\n\nComments, corrections, edits very welcome.", "msg_date": "Mon, 8 Jul 2024 14:56:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 08/07/2024 05:56, Thomas Munro wrote:\n> Here's an updated version of this patch.\n> \n> The main idea is that SendProcSignal(pid, PROCSIGNAL_XXX, procno)\n> becomes SendInterrupt(INTERRUPT_XXX, procno), and all the pending\n> interrupt global variables and pss_procsignalFlags[] go away, along\n> with the SIGUSR1 handler. The interrupts are compressed into a single\n> bitmap. See commit message for more details.\n> \n> The patch is failing on Windows CI for reasons I haven't debugged yet,\n> but I wanted to share what I have so far. Work in progress!\n> \n> Here is my attempt to survey the use of signals and write down what I\n> think we might do about them all so far, to give the context for this\n> patch:\n> \n> https://wiki.postgresql.org/wiki/Signals\n> \n> Comments, corrections, edits very welcome.\n\nNice, thanks!\n\n> Background worker state notifications are also changed from raw\n> kill(SIGUSR1) to SetLatch(). That means that SetLatch() is now called\n> from the postmaster. 
The main purpose of including that change is to be\n> able to remove procsignal_sigusr1_handler completely and set SIGUSR1 to\n> SIG_IGN, and show the system working.\n> \n> XXX Do we need to invent SetLatchRobust() that doesn't trust anything in\n> shared memory, to be able to set latches from the postmaster?\n\nThe patch actually does both: it still does kill(SIGUSR1) and also sets \nthe latch.\n\nI think it would be nice if RegisterDynamicBackgroundWorker() had a \n\"bool notify_me\" argument, instead of requiring the caller to set \n\"bgw_notify_pid = MyProcPid\" before the call. That's a \nbackwards-compatibility break, but maybe we should bite the bullet and \ndo it. Or we could do this in RegisterDynamicBackgroundWorker():\n\nif (worker->bgw_notify_pid == MyProcPid)\n worker->bgw_notify_pgprocno = MyProcNumber;\n\nI think we can forbid setting bgw_notify_pid to anything other than 0 or \nMyProcPid.\n\nA SetLatchRobust would be nice. Looking at SetLatch(), I don't think it \ncan do any damage if you called it on a pointer to garbage, except if \nthe pointer itself is bogus, then just dereferencing it can cause a \nsegfault. So it would be nice to have a version specifically designed \nwith that in mind. For example, it could assume that the Latch's pid is \nnever legally equal to MyProcPid, because postmaster cannot own any latches.\n\nAnother approach would be to move the responsibility of background \nworker state notifications out of postmaster completely. When a new \nbackground worker is launched, the worker process itself could send the \nnotification that it has started. And similarly, when a worker exits, it \ncould send the notification just before exiting. There's a little race \ncondition with exiting: if a process is waiting for the bgworker to \nexit, and launches a new worker immediately when the old one exits, \nthere will be a brief period when the old and new process are alive at \nthe same time. 
The old worker wouldn't be doing anything interesting \nanymore since it's exiting, but it still counts towards \nmax_worker_processes, so launching the new process might fail because of \nhitting the limit. Maybe we should just bump up max_worker_processes. Or \npostmaster could check PMChildFlags and not count processes that have \nalready deregistered from PMChildFlags towards the limit.\n\n> -volatile uint32 InterruptHoldoffCount = 0;\n> -volatile uint32 QueryCancelHoldoffCount = 0;\n> -volatile uint32 CritSectionCount = 0;\n> +uint32 InterruptHoldoffCount = 0;\n> +uint32 QueryCancelHoldoffCount = 0;\n> +uint32 CritSectionCount = 0;\n\nI wondered if these are used in PG_TRY-CATCH blocks in a way that would \nstill require volatile. I couldn't find any such usage by some quick \ngrepping, so I think we're good, but I thought I'd mention it.\n\n> +/*\n> + * The set of \"standard\" interrupts that CHECK_FOR_INTERRUPTS() and\n> + * ProcessInterrupts() handle. These perform work that is safe to run whenever\n> + * interrupts are not \"held\". Other kinds of interrupts are only handled at\n> + * more restricted times.\n> + */\n> +#define INTERRUPT_STANDARD_MASK\t\t\t\t\t\t\t \\\n\nSome interrupts are missing from this mask:\n\n- INTERRUPT_PARALLEL_APPLY_MESSAGE\n- INTERRUPT_IDLE_STATS_UPDATE_TIMEOUT\n- INTERRUPT_SINVAL_CATCHUP\n- INTERRUPT_NOTIFY\n\nIs that on purpose?\n\n> -/*\n> - * Because backends sitting idle will not be reading sinval events, we\n> - * need a way to give an idle backend a swift kick in the rear and make\n> - * it catch up before the sinval queue overflows and forces it to go\n> - * through a cache reset exercise. This is done by sending\n> - * PROCSIG_CATCHUP_INTERRUPT to any backend that gets too far behind.\n> - *\n> - * The signal handler will set an interrupt pending flag and will set the\n> - * processes latch. 
Whenever starting to read from the client, or when\n> - * interrupted while doing so, ProcessClientReadInterrupt() will call\n> - * ProcessCatchupEvent().\n> - */\n> -volatile sig_atomic_t catchupInterruptPending = false;\n\nWould be nice to move that comment somewhere else rather than remove it \ncompletely.\n\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -444,6 +444,10 @@ InitProcess(void)\n> \tOwnLatch(&MyProc->procLatch);\n> \tSwitchToSharedLatch();\n> \n> +\t/*We're now ready to accept interrupts from other processes. */\n> +\tpg_atomic_init_u32(&MyProc->pending_interrupts, 0);\n> +\tSwitchToSharedInterrupts();\n> +\n> \t/* now that we have a proc, report wait events to shared memory */\n> \tpgstat_set_wait_event_storage(&MyProc->wait_event_info);\n> \n> @@ -611,6 +615,9 @@ InitAuxiliaryProcess(void)\n> \tOwnLatch(&MyProc->procLatch);\n> \tSwitchToSharedLatch();\n> \n> +\t/* We're now ready to accept interrupts from other processes. */\n> +\tSwitchToSharedInterrupts();\n> +\n> \t/* now that we have a proc, report wait events to shared memory */\n> \tpgstat_set_wait_event_storage(&MyProc->wait_event_info);\n> \n\nIs there a reason for the different initialization between a regular \nbackend and aux process?\n\n> +/*\n> + * Switch to shared memory interrupts. Other backends can send interrupts\n> + * to this one if they know its ProcNumber.\n> + */\n> +void\n> +SwitchToSharedInterrupts(void)\n> +{\n> +\tpg_atomic_fetch_or_u32(&MyProc->pending_interrupts, pg_atomic_read_u32(MyPendingInterrupts));\n> +\tMyPendingInterrupts = &MyProc->pending_interrupts;\n> +}\n\nHmm, I think there's a race condition here (and similarly in \nSwitchToLocalInterrupts), if the signal handler runs between the \npg_atomic_fetch_or_u32, and changing MyPendingInterrupts. 
Maybe \nsomething like this instead:\n\nMyPendingInterrupts = &MyProc->pending_interrupts;\npg_memory_barrier();\npg_atomic_fetch_or_u32(&MyProc->pending_interrupts, \npg_atomic_read_u32(LocalPendingInterrupts));\n\nAnd perhaps this should also clear LocalPendingInterrupts, just to be tidy.\n\n> @@ -138,7 +139,8 @@\n> typedef struct ProcState\n> {\n> \t/* procPid is zero in an inactive ProcState array entry. */\n> -\tpid_t\t\tprocPid;\t\t/* PID of backend, for signaling */\n> +\tpid_t\t\tprocPid;\t\t/* pid of backend */\n> +\tProcNumber\tpgprocno;\t\t/* for sending interrupts */\n> \t/* nextMsgNum is meaningless if procPid == 0 or resetState is true. */\n> \tint\t\t\tnextMsgNum;\t\t/* next message number to read */\n> \tbool\t\tresetState;\t\t/* backend needs to reset its state */\n\nWe can easily remove procPid altogether now that we have pgprocno here. \nSimilarly with the pid/pgprocno in ReplicationSlot and WalSndState.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:38:22 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Mon, Jul 8, 2024 at 5:38 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Another approach would be to move the responsibility of background\n> worker state notifications out of postmaster completely. When a new\n> background worker is launched, the worker process itself could send the\n> notification that it has started. And similarly, when a worker exits, it\n> could send the notification just before exiting. There's a little race\n> condition with exiting: if a process is waiting for the bgworker to\n> exit, and launches a new worker immediately when the old one exits,\n> there will be a brief period when the old and new process are alive at\n> the same time. 
The old worker wouldn't be doing anything interesting\n> anymore since it's exiting, but it still counts towards\n> max_worker_processes, so launching the new process might fail because of\n> hitting the limit. Maybe we should just bump up max_worker_processes. Or\n> postmaster could check PMChildFlags and not count processes that have\n> already deregistered from PMChildFlags towards the limit.\n\nI can testify that the current system is the result of a lot of trial\nand error. I'm not saying it can't be made better, but my initial\nattempts at getting this to work (back in the 9.4 era) resembled what\nyou proposed here, were consequently a lot simpler than what we have\nnow, and also did not work. Race conditions like you mention here were\npart of that. Another consideration is that fork() can fail, and in\nthat case, the process that tried to register the new background\nworker needs to find out that the background worker won't ever be\nstarting. Yet another problem is that, even if fork() succeeds, the\nnew process might fail before it executes any of our code e.g. because\nit seg faults very early, a case that actually happened to me -\ninadvertently - while I was testing these facilities. I ended up\ndeciding that we can't rely on the new process to do anything until\nit's given us some signal that it is alive and able to carry out its\nduties. 
If it dies before telling us that, or never starts in the\nfirst place, we have to have some other way of finding that out, and\nit's difficult to see how that can happen without postmaster\ninvolvement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 16:18:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Mon, Jul 8, 2024 at 9:38 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> The patch actually does both: it still does kill(SIGUSR1) and also sets\n> the latch.\n\nYeah, I had some ideas about supporting old extension code that really\nwanted a SIGUSR1, but on reflection, the only reason anyone ever wants\nthat is so that sigusr1_handler can SetLatch(), which pairs with\nWaitLatch() in WaitForBackgroundWorker*(). Let's go all the way and\nassume that.\n\n> I think it would be nice if RegisterDynamicBackgroundWorker() had a\n> \"bool notify_me\" argument, instead of requiring the caller to set\n> \"bgw_notify_pid = MyProcPid\" before the call. That's a\n> backwards-compatibility break, but maybe we should bite the bullet and\n> do it. Or we could do this in RegisterDynamicBackgroundWorker():\n>\n> if (worker->bgw_notify_pid == MyProcPid)\n> worker->bgw_notify_pgprocno = MyProcNumber;\n>\n> I think we can forbid setting pgw_notify_pid to anything else than 0 or\n> MyProcPid.\n\nAnother idea: we could keep the bgw_notify_pid field around for a\nwhile, documented as unused and due to be removed in future. We could\nautomatically capture the caller's proc number. So you get latch\nwakeups by default, which I expect many people want, and most people\ncould cope with even if they don't want them. If you really really\ndon't want them, you could set a new flag BGW_NO_NOTIFY.\n\nI have now done this part of the change in a new first patch. 
This\nparticular use of kill(SIGUSR1) is separate from the ProcSignal\nremoval, it's just that it relies on ProcSignal's handler's default\naction of calling SetLatch(). It's needed so the ProcSignal-ectomy\ncan fully delete sigusr1_handler(), but it's not directly the same\nthing, so it seemed good to split the patch.\n\n> A SetLatchRobust would be nice. Looking at SetLatch(), I don't think it\n> can do any damage if you called it on a pointer to garbage, except if\n> the pointer itself is bogus, then just dereferencing it an cause a\n> segfault. So it would be nice to have a version specifically designed\n> with that in mind. For example, it could assume that the Latch's pid is\n> never legally equal to MyProcPid, because postmaster cannot own any latches.\n\nYeah I'm starting to think that all we need to do here is range-check\nthe proc number. Here's a version that adds:\nProcSetLatch(proc_number). Another idea would be for SetLatch(latch)\nto sanitise the address of a latch, ie its offset and range.\n\nWhat the user really wants to be able to do with this bgworker API, I\nthink, is wait for a given a handle, which could find a condition\nvariable + generation in the slot, so that we don't have to register\nany proc numbers anywhere until we're actually waiting. But *clearly*\nthe postmaster can't use the condition variable API without risking\nfollowing corrupted pointers in shared memory. 
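
As a self-contained toy model of that "range-check before you dereference" idea (all names here are invented for illustration; the real code works on PGPROC and Latch, not these stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_MAX_PROCS 8             /* stand-in for the real proc array bound */

typedef struct ToyLatch
{
    bool        is_set;
} ToyLatch;

static ToyLatch toy_latches[TOY_MAX_PROCS];

/*
 * Set the latch identified by proc_number, but refuse to dereference
 * anything if the number is out of range, rather than trusting a
 * possibly-corrupted index read from shared memory.
 */
static bool
toy_proc_set_latch(int proc_number)
{
    if (proc_number < 0 || proc_number >= TOY_MAX_PROCS)
        return false;           /* don't touch shared state at all */
    toy_latches[proc_number].is_set = true;
    return true;
}
```

The point being that the only thing the caller has to trust is a bound it knows independently of shared memory contents, not any pointer stored there.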
Whereas AFAICT\nProcSetLatch() from the patched postmaster can't really be corrupted\nin any new way that it couldn't already be corrupted in master (where\nit runs in the target process), if we're just a bit paranoid about how\nwe find our way to the latch.\n\nReceiving latch wakeups in the postmaster might be another question,\nbut I don't think we need to confront that question just yet.\n\n> > -volatile uint32 InterruptHoldoffCount = 0;\n> > -volatile uint32 QueryCancelHoldoffCount = 0;\n> > -volatile uint32 CritSectionCount = 0;\n> > +uint32 InterruptHoldoffCount = 0;\n> > +uint32 QueryCancelHoldoffCount = 0;\n> > +uint32 CritSectionCount = 0;\n>\n> I wondered if these are used in PG_TRY-CATCH blocks in a way that would\n> still require volatile. I couldn't find any such usage by some quick\n> grepping, so I think we're good, but I thought I'd mention it.\n\nHmm. Still thinking about this.\n\n> > +/*\n> > + * The set of \"standard\" interrupts that CHECK_FOR_INTERRUPTS() and\n> > + * ProcessInterrupts() handle. These perform work that is safe to run whenever\n> > + * interrupts are not \"held\". Other kinds of interrupts are only handled at\n> > + * more restricted times.\n> > + */\n> > +#define INTERRUPT_STANDARD_MASK \\\n>\n> Some interrupts are missing from this mask:\n>\n> - INTERRUPT_PARALLEL_APPLY_MESSAGE\n\nOops, that one ^ is a rebasing mistake. I wrote the ancestor of this\npatch in 2021, and that new procsignal arrived in 2023, and I put the\ncode in to handle it, but I forgot to add it to the mask, which gives\nme an idea (see below)...\n\n> - INTERRUPT_IDLE_STATS_UPDATE_TIMEOUT\n> - INTERRUPT_SINVAL_CATCHUP\n> - INTERRUPT_NOTIFY\n>\n> Is that on purpose?\n\nINTERRUPT_SINVAL_CATCHUP and INTERRUPT_NOTIFY are indeed handled\ndifferently on purpose. 
In master, they don't set InterruptPending,\nand they are not handled by regular CHECK_FOR_INTERRUPTS() sites, but\nin the patch they still need a bit in pending_interrupts, and that is\nwhat that mask hides from CHECK_FOR_INTERRUPTS(). They are checked\nexplicitly in ProcessClientReadInterrupt(). I think the idea is that\nwe can't handle sinval at random places because that might create\ndangling pointers to cached objects where we don't expect them, and we\ncan't emit NOTIFY-related protocol messages at random times either.\n\nThere is something a little funky about _IDLE_STATS_UPDATE_TIMEOUT,\nthough. It has a different scheme for running only when idle, where\nif it opts not to do anything, it doesn't consume the interrupt (a\nlater CFI() will, but the latch is not set so we might sleep). I was\nconfused by that. I think I have changed to be more faithful to\nmaster's behaviour now.\n\nHmm, a better terminology for the interupts that CFI handles would be\ns/standard/regular/, so I have changed that.\n\nNew idea: it would be less error-prone if we instead had a mask of\nthese special cases, of which there are now only two. Tried that way!\n\n> > -/*\n> > - * Because backends sitting idle will not be reading sinval events, we\n> > - * need a way to give an idle backend a swift kick in the rear and make\n> > - * it catch up before the sinval queue overflows and forces it to go\n> > - * through a cache reset exercise. This is done by sending\n> > - * PROCSIG_CATCHUP_INTERRUPT to any backend that gets too far behind.\n> > - *\n> > - * The signal handler will set an interrupt pending flag and will set the\n> > - * processes latch. 
Whenever starting to read from the client, or when\n> > - * interrupted while doing so, ProcessClientReadInterrupt() will call\n> > - * ProcessCatchupEvent().\n> > - */\n> > -volatile sig_atomic_t catchupInterruptPending = false;\n>\n> Would be nice to move that comment somewhere else rather than remove it\n> completely.\n\nOK, I moved it to the top of ProcessCatchupInterrupt().\n\n> > --- a/src/backend/storage/lmgr/proc.c\n> > +++ b/src/backend/storage/lmgr/proc.c\n> > @@ -444,6 +444,10 @@ InitProcess(void)\n> > OwnLatch(&MyProc->procLatch);\n> > SwitchToSharedLatch();\n> >\n> > + /*We're now ready to accept interrupts from other processes. */\n> > + pg_atomic_init_u32(&MyProc->pending_interrupts, 0);\n> > + SwitchToSharedInterrupts();\n> > +\n> > /* now that we have a proc, report wait events to shared memory */\n> > pgstat_set_wait_event_storage(&MyProc->wait_event_info);\n> >\n> > @@ -611,6 +615,9 @@ InitAuxiliaryProcess(void)\n> > OwnLatch(&MyProc->procLatch);\n> > SwitchToSharedLatch();\n> >\n> > + /* We're now ready to accept interrupts from other processes. */\n> > + SwitchToSharedInterrupts();\n> > +\n> > /* now that we have a proc, report wait events to shared memory */\n> > pgstat_set_wait_event_storage(&MyProc->wait_event_info);\n> >\n>\n> Is there a reason for the different initialization between a regular\n> backend and aux process?\n\nNo. But I thought about something else to fix here. Really we don't\nwant to switch to shared interrupts until we are ready for CFI() to do\nstuff. I think that should probably be at the places where master\nunblocks signals. Otherwise, for example, if someone sends you an\ninterrupt while you're starting up, something as innocent as\nelog(DEBUG, ...), which reaches CFI(), might try to do things for\nwhich the infrastructure is not yet fully set up, for example\nINTERRUPT_BARRIER.\n\nNot done yet, but wanted to share this new version.\n\n> > +/*\n> > + * Switch to shared memory interrupts. 
Other backends can send interrupts\n> > + * to this one if they know its ProcNumber.\n> > + */\n> > +void\n> > +SwitchToSharedInterrupts(void)\n> > +{\n> > + pg_atomic_fetch_or_u32(&MyProc->pending_interrupts, pg_atomic_read_u32(MyPendingInterrupts));\n> > + MyPendingInterrupts = &MyProc->pending_interrupts;\n> > +}\n>\n> Hmm, I think there's a race condition here (and similarly in\n> SwitchToLocalInterrupts), if the signal handler runs between the\n> pg_atomic_fetch_or_u32, and changing MyPendingInterrupts. Maybe\n> something like this instead:\n>\n> MyPendingInterrupts = &MyProc->pending_interrupts;\n> pg_memory_barrier();\n> pg_atomic_fetch_or_u32(&MyProc->pending_interrupts,\n> pg_atomic_read_u32(LocalPendingInterrupts));\n\nYeah, right, done.\n\n> And perhaps this should also clear LocalPendingInterrupts, just to be tidy.\n\nI used atomic_exchange() to read and clear the bits in one go.\n\n> > @@ -138,7 +139,8 @@\n> > typedef struct ProcState\n> > {\n> > /* procPid is zero in an inactive ProcState array entry. */\n> > - pid_t procPid; /* PID of backend, for signaling */\n> > + pid_t procPid; /* pid of backend */\n> > + ProcNumber pgprocno; /* for sending interrupts */\n> > /* nextMsgNum is meaningless if procPid == 0 or resetState is true. */\n> > int nextMsgNum; /* next message number to read */\n> > bool resetState; /* backend needs to reset its state */\n>\n> We can easily remove procPid altogether now that we have pgprocno here.\n\nSince other things access those values, I propose to remove them in\nseparate patches.\n\n> Similarly with the pid/pgprocno in ReplicationSlot and WalSndState.\n\nSame. Those pids show up in user interfaces, so I think they should\nbe handled in separate patches.\n\nNote to self: I need to change some pgprocno to proc_number...\n\nThe next problems to remove are, I think, the various SIGUSR2, SIGINT,\nSIGTERM signals sent by the postmaster. These should clearly become\nSendInterrupt() or ProcSetLatch(). 
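
For anyone skimming, here is a tiny self-contained model of the send/consume semantics, written with plain C11 atomics instead of pg_atomic_* and with invented names, so a sketch of the idea rather than code from the patch:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* A few invented interrupt bits, standing in for the real INTERRUPT_XXX. */
typedef enum ToyInterrupt
{
    TOY_INTERRUPT_DIE = 1u << 0,
    TOY_INTERRUPT_QUERY_CANCEL = 1u << 1,
    TOY_INTERRUPT_BARRIER = 1u << 2
} ToyInterrupt;

/* Stand-in for one process's pending-interrupts word in shared memory. */
static _Atomic uint32_t toy_pending_interrupts;

/* Sender side: atomically set a bit (the real thing would also set the latch). */
static void
toy_send_interrupt(ToyInterrupt reason)
{
    atomic_fetch_or(&toy_pending_interrupts, (uint32_t) reason);
}

/* Receiver side: atomically test-and-clear one bit. */
static bool
toy_consume_interrupt(ToyInterrupt reason)
{
    return (atomic_fetch_and(&toy_pending_interrupts,
                             ~(uint32_t) reason) & reason) != 0;
}
```

Consuming with fetch-and rather than a plain load-then-store is what makes it safe for a sender to set a different bit concurrently without either update being lost.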
The problem here is that the\npostmaster doesn't have the proc numbers yet. One idea is to teach\nthe postmaster to assign them! Not explored yet.\n\nThis version is passing on Windows. I'll create a CF entry. Still\nwork in progress!", "msg_date": "Wed, 10 Jul 2024 18:48:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 10/07/2024 09:48, Thomas Munro wrote:\n> On Mon, Jul 8, 2024 at 9:38 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> The patch actually does both: it still does kill(SIGUSR1) and also sets\n>> the latch.\n> \n> Yeah, I had some ideas about supporting old extension code that really\n> wanted a SIGUSR1, but on reflection, the only reason anyone ever wants\n> that is so that sigusr1_handler can SetLatch(), which pairs with\n> WaitLatch() in WaitForBackgroundWorker*(). Let's go all the way and\n> assume that.\n\n+1\n\n>> I think it would be nice if RegisterDynamicBackgroundWorker() had a\n>> \"bool notify_me\" argument, instead of requiring the caller to set\n>> \"bgw_notify_pid = MyProcPid\" before the call. That's a\n>> backwards-compatibility break, but maybe we should bite the bullet and\n>> do it. Or we could do this in RegisterDynamicBackgroundWorker():\n>>\n>> if (worker->bgw_notify_pid == MyProcPid)\n>> worker->bgw_notify_pgprocno = MyProcNumber;\n>>\n>> I think we can forbid setting pgw_notify_pid to anything else than 0 or\n>> MyProcPid.\n> \n> Another idea: we could keep the bgw_notify_pid field around for a\n> while, documented as unused and due to be removed in future. We could\n> automatically capture the caller's proc number. So you get latch\n> wakeups by default, which I expect many people want, and most people\n> could cope with even if they don't want them. If you really really\n> don't want them, you could set a new flag BGW_NO_NOTIFY.\n\nOk. I was going to say that it feels excessive to change the default \nlike that. 
However, searching for RegisterDynamicBackgroundWorker() in \ngithub, I can't actually find any callers that don't set pg_notify_pid. \nSo yeah, make sense.\n\n> I have now done this part of the change in a new first patch. This\n> particular use of kill(SIGUSR1) is separate from the ProcSignal\n> removal, it's just that it relies on ProcSignal's handler's default\n> action of calling SetLatch(). It's needed so the ProcSignal-ectomy\n> can fully delete sigusr1_handler(), but it's not directly the same\n> thing, so it seemed good to split the patch.\n\nPostmasterMarkPIDForWorkerNotify() is now unused. Which means that \nbgworker_notify is never set, and BackgroundWorkerStopNotifications() is \nnever called either.\n\n>> A SetLatchRobust would be nice. Looking at SetLatch(), I don't think it\n>> can do any damage if you called it on a pointer to garbage, except if\n>> the pointer itself is bogus, then just dereferencing it an cause a\n>> segfault. So it would be nice to have a version specifically designed\n>> with that in mind. For example, it could assume that the Latch's pid is\n>> never legally equal to MyProcPid, because postmaster cannot own any latches.\n> \n> Yeah I'm starting to think that all we need to do here is range-check\n> the proc number. Here's a version that adds:\n> ProcSetLatch(proc_number). Another idea would be for SetLatch(latch)\n> to sanitise the address of a latch, ie its offset and range.\n\nHmm, I don't think postmaster should trust ProcGlobal->allProcCount either.\n\n> The next problems to remove are, I think, the various SIGUSR2, SIGINT,\n> SIGTERM signals sent by the postmaster. These should clearly become\n> SendInterrupt() or ProcSetLatch().\n\n+1\n\n> The problem here is that the\n> postmaster doesn't have the proc numbers yet. One idea is to teach\n> the postmaster to assign them! 
Not explored yet.\n\nI've been thinking that we should:\n\na) assign every child process a PGPROC entry, and make postmaster \nresponsible for assigning them like you suggest. We'll need more PGPROC \nentries, because currently a process doesn't reserve one until \nauthentication has happened. Or we change that behavior.\n\nor\n\nb) Use the pmsignal.c slot numbers for this, instead of ProcNumbers. \nPostmaster already assigns those.\n\nI'm kind of leaning towards b) for now, because it feels like a much \nsmaller patch. In the long run, it would be nice if every child process \nhad a ProcNumber, though. It was a nice simplification in v17 that we \ndon't have separate BackendId and ProcNumber anymore; similarly it would \nbe nice to not have separate PMChildSlot and ProcNumber concepts.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 15:57:58 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Wed, Jul 24, 2024 at 8:58 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> a) assign every child process a PGPROC entry, and make postmaster\n> responsible for assigning them like you suggest. We'll need more PGPROC\n> entries, because currently a process doesn't reserve one until\n> authentication has happened. Or we change that behavior.\n\nI wonder how this works right now. Is there something that limits the\nnumber of authentication requests that can be in flight concurrently,\nor is it completely uncapped (except by machine resources)?\n\nMy first thought when I read this was that it would be bad to have to\nput a limit on something that's currently unlimited. But then I\nstarted thinking that, even if it is currently unlimited, that might\nbe a bug rather than a feature. 
If you have hundreds of pending\nauthentication requests, that just means you're using a lot of machine\nresources on something that doesn't really help anybody. A machine\nwith hundreds of authentication-pending connections is possibly\ngetting DDOS'd and probably getting buried. You'd be better off\nfocusing the machine's limited resources on the already-established\nconnections and a more limited number of new connection attempts. If\nyou accept so many connection attempts that you don't actually have\nenough memory/CPU/kernel scheduling firepower to complete the\nauthentication process with any of them, it does nobody any good.\n\nI'm not sure what's best to do here; just thinking out loud.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 12:56:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 29/07/2024 19:56, Robert Haas wrote:\n> On Wed, Jul 24, 2024 at 8:58 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> a) assign every child process a PGPROC entry, and make postmaster\n>> responsible for assigning them like you suggest. We'll need more PGPROC\n>> entries, because currently a process doesn't reserve one until\n>> authentication has happened. Or we change that behavior.\n> \n> I wonder how this works right now. Is there something that limits the\n> number of authentication requests that can be in flight concurrently,\n> or is it completely uncapped (except by machine resources)?\n\n> My first thought when I read this was that it would be bad to have to\n> put a limit on something that's currently unlimited. But then I\n> started thinking that, even if it is currently unlimited, that might\n> be a bug rather than a feature. If you have hundreds of pending\n> authentication requests, that just means you're using a lot of machine\n> resources on something that doesn't really help anybody. 
A machine\n> with hundreds of authentication-pending connections is possibly\n> getting DDOS'd and probably getting buried. You'd be better off\n> focusing the machine's limited resources on the already-established\n> connections and a more limited number of new connection attempts. If\n> you accept so many connection attempts that you don't actually have\n> enough memory/CPU/kernel scheduling firepower to complete the\n> authentication process with any of them, it does nobody any good.\n> \n> I'm not sure what's best to do here; just thinking out loud.\n\nYes, there's a limit, roughly 2x max_connections. see \nMaxLivePostmasterChildren().\n\nThere's another issue with that that I was about to post in another \nthread, but here goes: the MaxLivePostmasterChildren() limit is shared \nby all regular backends, bgworkers and autovacuum workers. If you open a \nlot of TCP connections to postmaster and don't send anything to the \nserver, you exhaust those slots, and the server won't be able to start \nany autovacuum workers or background workers either. That's not great. I \nstarted to work on approach b), with separate pools of slots for \ndifferent kinds of child processes, which fixes that. Stay tuned for a \npatch.\n\nIn addition to that, you can have an unlimited number of \"dead-end\" \nbackends, which are doomed to just respond with \"sorry, too many \nclients\" error. The only limit on those is the amount of resources \nneeded for all the processes and a little memory to track them.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 20:18:22 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I wonder how this works right now. 
Is there something that limits the\n> number of authentication requests that can be in flight concurrently,\n> or is it completely uncapped (except by machine resources)?\n\nThe former. IIRC, the postmaster won't spawn more than 2X max_connections\nsubprocesses (don't recall the exact limit, but it's around there).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2024 13:24:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Mon, Jul 29, 2024 at 1:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I wonder how this works right now. Is there something that limits the\n> > number of authentication requests that can be in flight concurrently,\n> > or is it completely uncapped (except by machine resources)?\n>\n> The former. IIRC, the postmaster won't spawn more than 2X max_connections\n> subprocesses (don't recall the exact limit, but it's around there).\n\nHmm. Not to sidetrack this thread too much, but multiplying by two\ndoesn't really sound like the right idea to me. The basic idea\narticulated in the comment for canAcceptConnections() makes sense:\nsome backends might fail authentication, or might be about to exit, so\nit makes sense to allow for some slop. But 2X is a lot of slop even on\na machine with the default max_connections=100, and people with\nconnection management problems are likely to be running with\nmax_connections=500 or max_connections=900 or even (insanely)\nmax_connections=2000. Continuing with a connection attempt because we\nthink that hundreds or thousands of connections that are ahead of us\nin the queue might clear out of the way before we need a PGPROC is not\na good bet.\n\nI wonder if we ought to restrict this to a small, flat value, like say\n50, or by a new GUC that defaults to such a value if a constant seems\nproblematic. Maybe it doesn't really matter. 
I'm not sure how much\nwork we'd save by booting out the doomed connection attempt earlier.\n\nThe unlimited number of dead-end backends doesn't sound too great\neither. I don't have another idea, but presumably resisting DDOS\nattacks and/or preserving resources for things that still have a\nchance of working ought to take priority over printing a nicer error\nmessage from a connection that's doomed to fail anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 14:05:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 10/07/2024 09:48, Thomas Munro wrote:\n> The next problems to remove are, I think, the various SIGUSR2, SIGINT,\n> SIGTERM signals sent by the postmaster. These should clearly become\n> SendInterrupt() or ProcSetLatch(). The problem here is that the\n> postmaster doesn't have the proc numbers yet. One idea is to teach\n> the postmaster to assign them! Not explored yet.\n\nWith my latest patches on the \"Refactoring postmaster's code to cleanup \nafter child exit\" thread [1], every postmaster child process is assigned \na slot in the pmsignal.c array, including all the aux processes. If we \nmoved 'pending_interrupts' and the process Latch to the pmsignal.c \narray, then you could send an interrupt also to a process that doesn't \nhave a PGPROC entry. That includes dead-end backends, backends that are \nstill in authentication, and the syslogger.\n\nThat would also make it so that the postmaster would never need to poke \ninto the procarray. pmsignal.c is already designated as the limited \npiece of shared memory that is accessed by the postmaster \n(BackgroundWorkerSlots is the other exception), so it would be kind of \nnice if all the information that the postmaster needs to send an \ninterrupt was there. 
That would mean that where you currently use a \nProcNumber to identify a process, you'd use an index into the \nPMSignalState array instead.\n\nI don't insist on changing that right now, I think this patch is OK as \nit is, but that might be a good next step later.\n\n[1] \nhttps://www.postgresql.org/message-id/8f2118b9-79e3-4af7-b2c9-bd5818193ca4%40iki.fi\n\nI'm also wondering about the relationship between interrupts and \nlatches. Currently, SendInterrupt sets a latch to wake up the target \nprocess. I wonder if it should be the other way 'round? Move all the \nwakeup code, with the signalfd, the self-pipe etc to interrupt.c, and in \nSetLatch, call SendInterrupt to wake up the target process? Somehow that \nfeels more natural to me, I think.\n\n> This version is passing on Windows. I'll create a CF entry. Still\n> work in progress!\n\nSome comments on the patch details:\n\n> \t\tereport(WARNING,\n> \t\t\t\t(errmsg(\"NOTIFY queue is %.0f%% full\", fillDegree * 100),\n> -\t\t\t\t (minPid != InvalidPid ?\n> -\t\t\t\t errdetail(\"The server process with PID %d is among those with \nthe oldest transactions.\", minPid)\n> +\t\t\t\t (minPgprocno != INVALID_PROC_NUMBER ?\n> +\t\t\t\t errdetail(\"The server process with pgprocno %d is among those \nwith the oldest transactions.\", minPgprocno)\n> \t\t\t\t : 0),\n> -\t\t\t\t (minPid != InvalidPid ?\n> +\t\t\t\t (minPgprocno != INVALID_PROC_NUMBER ?\n> \t\t\t\t errhint(\"The NOTIFY queue cannot be emptied until that process \nends its current transaction.\")\n> \t\t\t\t : 0)));\n\nThis makes the message less useful to the user, as the ProcNumber isn't \nexposed to users. 
With the PID, you can do \"pg_terminate_backend(pid)\"\n\n> diff --git a/src/backend/optimizer/util/pathnode.c \nb/src/backend/optimizer/util/pathnode.c\n> index c42742d2c7b..bfb89049020 100644\n> --- a/src/backend/optimizer/util/pathnode.c\n> +++ b/src/backend/optimizer/util/pathnode.c\n> @@ -18,6 +18,7 @@\n>\n> #include \"foreign/fdwapi.h\"\n> #include \"miscadmin.h\"\n> +#include \"postmaster/interrupt.h\"\n> #include \"nodes/extensible.h\"\n> #include \"optimizer/appendinfo.h\"\n> #include \"optimizer/clauses.h\"\n\nmisordered\n\n> +\t * duplicated interrupts later if we switch backx.\n\ntypo: backx -> back\n\n> -\tif (IdleInTransactionSessionTimeoutPending)\n> +\tif (ConsumeInterrupt(INTERRUPT_IDLE_TRANSACTION_TIMEOUT))\n> \t{\n> \t\t/*\n> \t\t * If the GUC has been reset to zero, ignore the signal. This is\n> @@ -3412,7 +3361,6 @@ ProcessInterrupts(void)\n> \t\t * interrupt. We need to unset the flag before the injection point,\n> \t\t * otherwise we could loop in interrupts checking.\n> \t\t */\n> -\t\tIdleInTransactionSessionTimeoutPending = false;\n> \t\tif (IdleInTransactionSessionTimeout > 0)\n> \t\t{\n> \t\t\tINJECTION_POINT(\"idle-in-transaction-session-timeout\");\n\nThe \"We need to unset the flag..\" comment is a bit out of place now, \nsince the flag was already cleared by ConsumeInterrupt(). Same in the \nINTERRUPT_TRANSACTION_TIMEOUT and INTERRUPT_IDLE_SESSION_TIMEOUT \nhandling after this.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 7 Aug 2024 17:59:23 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 07/08/2024 17:59, Heikki Linnakangas wrote:\n> I'm also wondering about the relationship between interrupts and \n> latches. Currently, SendInterrupt sets a latch to wake up the target \n> process. I wonder if it should be the other way 'round? 
Move all the \n> wakeup code, with the signalfd, the self-pipe etc to interrupt.c, and in \n> SetLatch, call SendInterrupt to wake up the target process? Somehow that \n> feels more natural to me, I think.\n\nI explored that a little, see attached patch set. It's going towards the \nsame end state as your patches, I think, but it starts from a different \nangle. In a nutshell:\n\nRemove Latch as an abstraction, and replace all use of Latches with \nInterrupts. When I originally created the Latch abstraction, I imagined \nthat we would have different latches for different purposes, but in \nreality, almost all code just used the general-purpose \"process latch\". \nThis patch accepts that reality and replaces the Latch struct directly \nwith the interrupt mask in PGPROC.\n\nThis initially defines only two interrupts. INTERRUPT_GENERAL_WAKEUP is \nthe main one; sending that interrupt to a process replaces setting the \nprocess's generic process latch in PGPROC:\n\n* SetLatch(MyLatch) -> RaiseInterrupt(INTERRUPT_GENERAL_WAKEUP)\n\n* SetLatch(&ProcGlobal->allProcs[procno].procLatch) -> \nSendInterrupt(procno, INTERRUPT_GENERAL_WAKEUP)\n\n* ResetLatch(MyLatch) -> ClearInterrupt(INTERRUPT_GENERAL_WAKEUP)\n\n* WaitLatch(MyLatch) -> WaitInterrupt(1 << INTERRUPT_GENERAL_WAKEUP)\n\nThere was only one extra Latch in addition to the process's generic \nprocLatch, the recoveryWakeupLatch. It's replaced by the second \ninterrupt bit, INTERRUPT_RECOVERY_WAKEUP.\n\nThis is complementary or preliminary work to your patch set. All the \nchanges to replace ProcSignals with different interrupt bits could go on \ntop of this.\n\nThis patch set is work in progress; I'd love to hear your thoughts on \nthis before I spend more time on this. 
(Haven't tested on Windows for \nexample).\n\nPatches 0001 - 0006 are just little cleanups and minor refactorings that \nI think make sense even without the rest of the work, though.\n\n0007 is the main patch.\n\nPatch 0010 addresses the problem discussed at \nhttps://www.postgresql.org/message-id/CALDaNm01_KEgHM1tKtgXkCGLJ5209SMSmGw3UmhZbOz365_%3DeA%40mail.gmail.com. \nOther solutions are discussed on that thread, but while working on \nthese, I realized that with these new interrupts, it's pretty \nstraightforward to fix by introducing one more interrupt reason.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 24 Aug 2024 20:17:47 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Sun, Aug 25, 2024 at 5:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 07/08/2024 17:59, Heikki Linnakangas wrote:\n> > I'm also wondering about the relationship between interrupts and\n> > latches. Currently, SendInterrupt sets a latch to wake up the target\n> > process. I wonder if it should be the other way 'round? Move all the\n> > wakeup code, with the signalfd, the self-pipe etc to interrupt.c, and in\n> > SetLatch, call SendInterrupt to wake up the target process? Somehow that\n> > feels more natural to me, I think.\n>\n> I explored that a little, see attached patch set. It's going towards the\n> same end state as your patches, I think, but it starts from different\n> angle. In a nutshell:\n>\n> Remove Latch as an abstraction, and replace all use of Latches with\n> Interrupts. 
When I originally created the Latch abstraction, I imagined\n> that we would have different latches for different purposes, but in\n> reality, almost all code just used the general-purpose \"process latch\".\n> this patch accepts that reality and replaces the Latch struct directly\n> with the interrupt mask in PGPROC.\n\nSome very initial reactions:\n\n* I like it!\n\n* This direction seems to fit quite nicely with future ideas about\nasynchronous network I/O. That may sound unrelated, but imagine that\na future version of WaitEventSet is built on Linux io_uring (or\nWindows iorings, or Windows IOCP, or kqueue), and waits for the kernel\nto tell you that network data has been transferred directly into a\nuser space buffer. You could wait for the interrupt word to change at\nthe same time by treating it as a futex[1]. Then all that other stuff\n-- signalfd, is_set, maybe_sleeping -- just goes away, and all we have\nleft is one single word in memory. (That it is possible to do that is\nnot really a coincidence, as our own Mr Freund asked Mr Axboe to add\nit[2]. The existing latch implementation techniques could be used as\nfallbacks, but when looked at from the right angle, once you squish\nall the wakeup reasons into a single word, it's all just an\nimplementation of a multiplexable futex with extra steps.)\n\n* Speaking of other problems in other threads that might be solved by\nthis redesign, I think I see the outline of some solutions to the\nproblem of different classes of wakeup which you can handle at\ndifferent times, using masks. There is a tension in a few places\nwhere we want to handle some kind of interrupts but not others in\nlocalised wait points, which we sort of try to address by holding\ninterrupts or holding cancel interrupts, but it's not satisfying and\nthere are some places where it doesn't work well. 
Needs a lot more\nthought, but a basic step would be: after old_interrupt_vector =\npg_atomic_fetch_or_u32(interrupt_vector, new_bits), if\n(old_interrupt_vector & new_bits) == new_bits, then you didn't\nactually change any bits, so you probably don't really need to wake\nthe other backend. If someone is currently unable to handle that type\nof interrupt (has ignored, ie not cleared, those bits) or is already\nin the process of handling it (is currently being rescheduled but\nhasn't cleared those bits yet), then you don't bother to wake it up.\nConcretely, it could mean that we avoid some of the useless wakeup\nstorm problems we see in vacuum delays or while executing a query and\nnot in a good place to handle sinval wakeups, etc. These are just\nsome raw thoughts; I am not sure about the bigger picture of that\ntopic yet.\n\n* Archeological note on terminology: the reason almost every relational\ndatabase and all the literature uses the term \"latch\" for something\nlike our LWLocks seems to be that latches were/are one of the kinds of\nsystem-provided mutex on IBM System/370 and modern descendants ie\nz/OS. Oracle and other systems that started as knock-offs of the IBM\nSystem R (the original SQL system, of which DB2 is the modern heir)\ncontinued that terminology, even though they ran on VMS or Unix or\nwhatever. 
I would not be sad if we removed our unusual use of the\nterm latch.\n\n[1] https://man7.org/linux/man-pages/man3/io_uring_prep_futex_wait.3.html\n[2] https://lore.kernel.org/lkml/20230720221858.135240-1-axboe@kernel.dk/\n\n\n", "msg_date": "Mon, 26 Aug 2024 11:05:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On 26/08/2024 02:05, Thomas Munro wrote:\n> On Sun, Aug 25, 2024 at 5:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 07/08/2024 17:59, Heikki Linnakangas wrote:\n>>> I'm also wondering about the relationship between interrupts and\n>>> latches. Currently, SendInterrupt sets a latch to wake up the target\n>>> process. I wonder if it should be the other way 'round? Move all the\n>>> wakeup code, with the signalfd, the self-pipe etc to interrupt.c, and in\n>>> SetLatch, call SendInterrupt to wake up the target process? Somehow that\n>>> feels more natural to me, I think.\n>>\n>> I explored that a little, see attached patch set. It's going towards the\n>> same end state as your patches, I think, but it starts from different\n>> angle. In a nutshell:\n>>\n>> Remove Latch as an abstraction, and replace all use of Latches with\n>> Interrupts. When I originally created the Latch abstraction, I imagined\n>> that we would have different latches for different purposes, but in\n>> reality, almost all code just used the general-purpose \"process latch\".\n>> this patch accepts that reality and replaces the Latch struct directly\n>> with the interrupt mask in PGPROC.\n> \n> Some very initial reactions:\n> \n> * I like it!\n\nOk, here's a new version, with a bunch of bugs and FIXMEs fixed, \ncomments and other polish. A few things remain that I'm not sure about:\n\n- Terminology. We now have storage/interrupt.h and \npostmaster/interrupt.h. 
They're not the same thing, but there's also \nsome overlap, especially after your patch set is applied.\n\nAside from filenames, \"interrupt\" now means at least two things: a \nwakeup that you can send to a backend with SendInterrupt, and the thing \nthat you check with CHECK_FOR_INTERRUPTS(). They're related, but \ndifferent. To showcase that confusion, the patch now contains this gem, \nwhich was a result of mechanically replacing \"latch\" with \"interrupt\":\n\n* We process interrupts whenever the interrupt has been set, so\n* cancel/die interrupts are processed quickly.\n\n\n- Fujii: this replaces the \"recoveryWakeupLatch\" with a separate \ninterrupt type. There was an earlier attempt at replacing \nrecoveryWakeupLatch with the process's regular latch which was reverted, \ncommits ac22929a26 and 00f690a239. I think this solution doesn't suffer \nfrom the same problems as that earlier attempt, but if you have a chance \nto review this, I would appreciate that. In a nutshell, \nthe INTERRUPT_RECOVERY_CONTINUE interrupt is now used instead of \nrecoveryWakeupLatch, but the places that previously waited just on \nrecoveryWakeupLatch now wait on both INTERRUPT_GENERAL_WAKEUP, which is \nequivalent to waiting on the process latch, and \nINTERRUPT_RECOVERY_CONTINUE. So those loops should now react more \nquickly to signals like SIGHUP.\n\n- Backwards-compatibility and extensions. This will break any extensions \nthat use Latches. Extensions that just used WaitLatch(MyLatch) or \nsimilar are easy to convert to use interrupts instead. Or we could keep \naround some backwards-compatibility macros like 0008 here does, to avoid \nthe code churn. 
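Purely to illustrate the shape such shims could take (a toy single-process model I'm making up here, not the actual 0008 patch), the old spellings could forward to the new API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy single-process model of such compatibility shims; the interrupt
 * side here is invented for illustration.
 */
enum { INTERRUPT_GENERAL_WAKEUP = 0 };

static uint32_t toy_pending;	/* stands in for the process's interrupt word */

static void RaiseInterrupt(int i) { toy_pending |= UINT32_C(1) << i; }
static void ClearInterrupt(int i) { toy_pending &= ~(UINT32_C(1) << i); }
static int	InterruptIsPending(int i) { return (toy_pending >> i) & 1; }

/* Old spellings kept for extensions that only used the process latch. */
#define SetLatch(latch)		RaiseInterrupt(INTERRUPT_GENERAL_WAKEUP)
#define ResetLatch(latch)	ClearInterrupt(INTERRUPT_GENERAL_WAKEUP)
```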
However, if an extension is creating its own latches or \ndoing more complicated stuff with them, it gets harder to maintain \nsource-code compatibility for them.\n\nAside from the backwards-compatibility aspect, should we reserve a few \nINTERRUPT_* values for extensions?\n\n\n\n> * This direction seems to fit quite nicely with future ideas about\n> asynchronous network I/O. That may sound unrelated, but imagine that\n> a future version of WaitEventSet is built on Linux io_uring (or\n> Windows iorings, or Windows IOCP, or kqueue), and waits for the kernel\n> to tell you that network data has been transferred directly into a\n> user space buffer. You could wait for the interrupt word to change at\n> the same time by treating it as a futex[1]. Then all that other stuff\n> -- signalfd, is_set, maybe_sleeping -- just goes away, and all we have\n> left is one single word in memory. (That it is possible to do that is\n> not really a coincidence, as our own Mr Freund asked Mr Axboe to add\n> it[2]. The existing latch implementation techniques could be used as\n> fallbacks, but when looked at from the right angle, once you squish\n> all the wakeup reasons into a single word, it's all just an\n> implementation of a multiplexable futex with extra steps.)\n\nCool\n\n> * Speaking of other problems in other threads that might be solved by\n> this redesign, I think I see the outline of some solutions to the\n> problem of different classes of wakeup which you can handle at\n> different times, using masks. There is a tension in a few places\n> where we want to handle some kind of interrupts but not others in\n> localised wait points, which we sort of try to address by holding\n> interrupts or holding cancel interrupts, but it's not satisfying and\n> there are some places where it doesn't work well. 
Needs a lot more\n> thought, but a basic step would be: after old_interrupt_vector =\n> pg_atomic_fetch_or_u32(interrupt_vector, new_bits), if\n> (old_interrupt_vector & new_bits) == new_bits, then you didn't\n> actually change any bits, so you probably don't really need to wake\n> the other backend. If someone is currently unable to handle that type\n> of interrupt (has ignored, ie not cleared, those bits) or is already\n> in the process of handling it (is currently being rescheduled but\n> hasn't cleared those bits yet), then you don't bother to wake it up.\n> Concretely, it could mean that we avoid some of the useless wakeup\n> storm problems we see in vacuum delays or while executing a query and\n> not in a good place to handle sinval wakeups, etc. These are just\n> some raw thoughts, I am not sure about the bigger picture of that\n> topic yet.\n\nYeah, I expect this work to help with those issues, but also not sure of \nthe details yet.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 31 Aug 2024 01:17:40 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Interrupts vs signals" }, { "msg_contents": "On Sat, Aug 31, 2024 at 10:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > * This direction seems to fit quite nicely with future ideas about\n> > asynchronous network I/O. That may sound unrelated, but imagine that\n> > a future version of WaitEventSet is built on Linux io_uring (or\n> > Windows iorings, or Windows IOCP, or kqueue), and waits for the kernel\n> > to tell you that network data has been transferred directly into a\n> > user space buffer. You could wait for the interrupt word to change at\n> > the same time by treating it as a futex[1]. Then all that other stuff\n> > -- signalfd, is_set, maybe_sleeping -- just goes away, and all we have\n> > left is one single word in memory. 
(That it is possible to do that is\n> > not really a coincidence, as our own Mr Freund asked Mr Axboe to add\n> > it[2]. The existing latch implementation techniques could be used as\n> > fallbacks, but when looked at from the right angle, once you squish\n> > all the wakeup reasons into a single word, it's all just an\n> > implementation of a multiplexable futex with extra steps.)\n>\n> Cool\n\nJust by the way, speaking of future tricks and the connections between\nthis code and other problems in other threads, I wanted to mention\nthat the above thought is also connected to CF #3998. When I started\nworking on this, in parallel I had an experimental patch set using\nfutexes[1] (back then, to try out futexes, I had to patch my OS[2]\nbecause Linux couldn't multiplex them yet, and macOS/*BSD had\nsomething sort of vaguely similar but effectively only usable between\nthreads in one process). I planned to switch to waiting directly on\nthe interrupt vector as a futex when bringing that idea together with\nthe one in this thread, but I guess I assumed we had to keep latches\ntoo since they seemed like such a central concept in PostgreSQL. Your\nidea seems much better, the more I think about it, but maybe only the\ninventor of latches could have the idea of blowing them up :-)\nAnyway, in that same experiment I realised I could wake multiple\nbackends in one system call, which led to more discoveries about the\nnegative interactions between latches and locks, and begat CF #3998\n(SetLatches()). By way of excuse, unfortunately I got blocked in my\nprogress on interrupt vectors for a couple of release cycles by the\nrecovery conflict system, a set of procsignals that were not like the\nothers, and turned out to be broken more or less as a result. 
That\nwas tricky to fix (CF #3615), leading to journeys into all kinds of\nstrange places like the regex code...\n\n[1] https://github.com/macdice/postgres/commits/kqueue-usermem/\n[2] https://reviews.freebsd.org/D37102\n\n\n", "msg_date": "Sat, 31 Aug 2024 12:12:23 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interrupts vs signals" } ]
[ { "msg_contents": "Attached is a series of patches that address some performance and\ncorrectness problems we've recently identified in pg_dump.\n\nPatch 0001 refactors the way pg_dump keeps track of which \"components\"\n(definition, data, comment, ACL, etc) of a dumpable object need to be\ndumped. The problem addressed here is that the current coding breaks\nthe intended optimization that we don't run pg_dump's secondary data\ncollection queries for objects we're not actually going to dump.\nThat's because we initialize DumpableObject.dump to DUMP_COMPONENT_ALL\n(0xFFFF), and then we're not very careful about clearing meaningless\nbits out of that. That leads to the bitmask not being zero when it's\ntested at the start of a per-object dump function such as dumpFunc(),\neven though DUMP_COMPONENT_DEFINITION might be clear and the object\nmight not have any comment, security label, or ACL. An example\nof the problem here is that we'll end up running through dumpFunc()\nfor every function defined in an extension, even though the only\nones we'd print anything for are those with modified ACLs. In a\ndatabase with a lot of extensions, that results in a lot of useless\nqueries and consequent performance problems [1].\n\nThere are a couple of ways we could rewrite this, but what seemed\nto me to be the clearest and most robust is to separate\nDumpableObject.dump into two bitmasks, one recording the components\nwe've requested to dump and the other recording the components we've\nactually found the object to possess. (The existing logic conflates\nthese purposes by setting and later clearing bits, which I think is\nconfusing and also bug-prone.) 
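In outline, the separation looks like this (the DUMP_COMPONENT_* names are the existing ones, but the bit values and struct layout here are simplified for illustration, not the actual pg_dump declarations):

```c
#include <assert.h>
#include <stdint.h>

/* Component bits; the names exist in pg_dump, the values here are toy. */
#define DUMP_COMPONENT_NONE			0x0000
#define DUMP_COMPONENT_DEFINITION	0x0001
#define DUMP_COMPONENT_DATA			0x0002
#define DUMP_COMPONENT_COMMENT		0x0004
#define DUMP_COMPONENT_ACL			0x0008
#define DUMP_COMPONENT_ALL			0xFFFF

typedef struct ToyDumpableObject
{
	uint32_t	dump;			/* components requested to be dumped */
	uint32_t	components;		/* components the object actually has */
} ToyDumpableObject;

/* The set of things we will really dump for this object. */
static uint32_t
toy_components_to_dump(const ToyDumpableObject *obj)
{
	return obj->dump & obj->components;
}
```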
Then, when we reach the point where we\nneed to decide if there's anything to do, we can AND those two masks\ntogether to derive the set of things we are really going to dump.\n\nSo the patch adds a \"components\" field with the same bits that \"dump\"\nhas, and arranges to set appropriate bits of that field as we discover\nthat objects have comments, ACLs, etc. (I wonder if we should rename\n\"dump\", perhaps to \"requests\" or the like. But I didn't do so here.)\nIt's straightforward other than that, with just a few things maybe\nworthy of note:\n1. collectComments and collectSecLabels are now run unconditionally,\nnot on-demand, because we need those bit-setting actions to happen\nbefore we start dumping objects.\n2. I had to add collection of pg_class.reltype so that when we find\na comment or seclabel for a composite type's column, we can redirect\nthe bit-setting action from the pg_class entry to the pg_type entry.\n3. Testing for no-components-to-dump is now centralized in\ndumpDumpableObject(), instead of each dumpXXX function doing it.\n\n\nPatch 0002 rethinks the way we handle dumping of ACLs. My main\ngoal here was to get rid of the expensive sub-selects that the\npg_init_privs patch added. The core idea is to drop all of those\nin favor of just reading pg_init_privs once and loading the\ninformation into the DumpableObjects, much the same way as we\nhandle comments and seclabels. I also realized that that patch's\ninsistence on doing assembly/disassembly of ACLs in SQL was a big\nperformance loser. We can calculate the delta between two ACLs\nright in buildACLCommands, just by doing string comparisons after\nwe've disassembled the aclitems array into elements, so it's really\npretty cheap.\n\nThis also led me to the conclusion that it was a bad idea to have\npreserved the \"old\" (pre-9.6) logic in buildACLCommands. 
The new\napproach of calculating \"add\" and \"remove\" lists as deltas from an\nacldefault() value works perfectly well before 9.6, at least for\nbranches back to 9.2 where acldefault() was added. For older servers,\nI thought briefly about putting hard-wired knowledge of the older\nbranches' default ACLs into dumputils.c, but rejected that in favor of\nemitting REVOKE ALL and then emitting all the ACL's items when we are\nlacking acldefault() results. This turns out to be just as efficient,\nand sometimes more so, as what the old code was doing. For example,\ncomparing pg_dump of the 9.1 regression database between old and new\nlogic, the only difference is\n\n@@ -60,9 +60,7 @@\n -- Name: SCHEMA public; Type: ACL; Schema: -; Owner: postgres\n --\n \n-REVOKE ALL ON SCHEMA public FROM PUBLIC;\n-REVOKE ALL ON SCHEMA public FROM postgres;\n-GRANT ALL ON SCHEMA public TO postgres;\n+REVOKE USAGE ON SCHEMA public FROM PUBLIC;\n GRANT ALL ON SCHEMA public TO PUBLIC;\n\nfor a net savings of two commands. The REVOKE USAGE appears because\nof the hacks Noah added in a7a7be1f2 and b073c3ccd to cause pg_dump\nto believe that schema public has a specific initprivs value even\nwhen it doesn't. In HEAD, that code only triggers for dumps from\nservers >= 9.6, but this patch makes it apply before that too, which\nseems correct to me. A target server >= v15 is going to have that\nACL for the public schema, no matter what the source version is.\n\n(BTW, to replicate that behavior I ended up having to write some\nclient-side functions to construct the text form of an aclitems array.\nThe duplication of logic with aclitemout() is a bit annoying. 
Not\nsure if it's worth trying to refactor to combine code.)\n\n0002 does not yet incorporate any logic change corresponding to\nthe bug fix under discussion at [2], but it will be easy to add.\nI left that out for now so as not to change any test results.\n\n\nLastly, patch 0003 addresses the concern I raised at [3] that it's\nunsafe to call pg_get_partkeydef() and pg_get_expr(relpartbound)\nin getTables(). Looking closer I realized that we can't cast\npg_class.reloftype to regtype at that point either, since regtypeout\nis going to notice if the type has been concurrently dropped.\n\nIn [3] I'd imagined that we could just delay those calls to a second\nquery in getTables(), but that doesn't work at all: if we apply\nthese functions to every row of pg_class, we still risk failure\nagainst any relation that we didn't lock. So there seems little\nalternative but to push these functions out to secondary queries\nexecuted later.\n\nArguably, 0003 is a bug fix that we should consider back-patching.\nHowever, I've not heard field reports of the problems it fixes,\nso maybe there's no need to bother.\n\n\nPutting all of this together, I did some performance measurements\ncomparing HEAD pg_dump to pg_dump with these three patches. On\nmy machine, using the current regression database as a test case,\nI find that \"pg_dump -s regression >/dev/null\" requires 1.21 sec\non HEAD and 0.915 sec with these patches, or about 24% faster.\nEven more importantly, the time to execute getTables' query drops\nfrom 129 ms to 16 ms. 
Since that's the window before we can start\nto acquire table locks, making it as short as possible is useful.\n\nTo have another data point, I also experimented with a database\nconstructed like this:\n\ndo $$\nbegin\n for i in 1..2500 loop\n execute 'create table tst' || i || ' (f1 int primary key, f2 text)';\n end loop;\nend $$;\n\nFor that, \"pg_dump -s\" required 4.94 sec vs 4.38 sec, or about 11%\nfaster, and the getTables query dropped from 316 ms to 41 ms.\n\nThere's some other micro-optimizations I'm thinking about, but this\nprobably gets most of the available win, and it seems like a coherent\nset of changes to present at once.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1414363.1630341759%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/CAA3qoJnr2%2B1dVJObNtfec%3DqW4Z0nz%3DA9%2Br5bZKoTSy5RDjskMw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/1462940.1634496313%40sss.pgh.pa.us", "msg_date": "Wed, 20 Oct 2021 17:14:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Assorted improvements in pg_dump" } ]
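As a rough sketch of the two-bitmask scheme patch 0001 introduces — here in Python rather than C, with flag names that merely stand in for pg_dump's DUMP_COMPONENT_* constants — the decision of what to dump becomes a simple AND of the requested and discovered components:

```python
# Illustrative component flags; pg_dump's real DUMP_COMPONENT_* constants
# live in pg_dump.h and include more bits (this is not the actual C code).
COMP_DEFINITION = 1 << 0
COMP_DATA = 1 << 1
COMP_COMMENT = 1 << 2
COMP_SECLABEL = 1 << 3
COMP_ACL = 1 << 4
COMP_ALL = (COMP_DEFINITION | COMP_DATA | COMP_COMMENT |
            COMP_SECLABEL | COMP_ACL)

def components_to_dump(requested: int, present: int) -> int:
    """Dump only the components that were both requested ("dump")
    and actually found on the object ("components")."""
    return requested & present

# An extension member function: its definition is not dumped, but it has
# a modified ACL, so only the ACL component should survive the AND.
requested = COMP_ALL & ~COMP_DEFINITION
present = COMP_DEFINITION | COMP_ACL  # discovered while scanning catalogs
assert components_to_dump(requested, present) == COMP_ACL

# An empty result means the object can be skipped entirely, so no
# secondary per-object queries need to run for it.
assert components_to_dump(COMP_ALL, 0) == 0
```

Centralizing the empty-mask test in one place (dumpDumpableObject() in the patch) is what lets the per-object dumpXXX functions stop re-checking it individually.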
[ { "msg_contents": "Remove unused wait events.\n\nCommit 464824323e introduced the wait events which were neither used by\nthat commit nor by follow-up commits for that work.\n\nAuthor: Masahiro Ikeda\nBackpatch-through: 14, where it was introduced\nDiscussion: https://postgr.es/m/ff077840-3ab2-04dd-bbe4-4f5dfd2ad481@oss.nttdata.com\n\nBranch\n------\nREL_14_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/671eb8f34404d24c8f16ae40e94becb38afd93bb\n\nModified Files\n--------------\ndoc/src/sgml/monitoring.sgml | 16 ----------------\nsrc/backend/utils/activity/wait_event.c | 12 ------------\nsrc/include/utils/wait_event.h | 6 +-----\n3 files changed, 1 insertion(+), 33 deletions(-)", "msg_date": "Thu, 21 Oct 2021 02:52:04 +0000", "msg_from": "Amit Kapila <akapila@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Remove unused wait events." }, { "msg_contents": "On Wed, Oct 20, 2021 at 10:52 PM Amit Kapila <akapila@postgresql.org> wrote:\n> Remove unused wait events.\n>\n> Commit 464824323e introduced the wait events which were neither used by\n> that commit nor by follow-up commits for that work.\n\nThis commit forces a recompile of every extension that knows about the\ninteger values assigned to the enums in WaitEventIO. I know of 2\nextensions that are affected by this. I think that it should be\nreverted in v14 and kept only in master.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:17:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Oct 20, 2021 at 10:52 PM Amit Kapila <akapila@postgresql.org> wrote:\n>> Remove unused wait events.\n\n> This commit forces a recompile of every extension that knows about the\n> integer values assigned to the enums in WaitEventIO. I know of 2\n> extensions that are affected by this. 
I think that it should be\n> reverted in v14 and kept only in master.\n\nUm ... the removed symbols were at the end of the WaitEventIO enum,\nso is there really an ABI break? I suppose if an extension contains\nactual references to the removed symbols, it would fail to recompile,\nwhich'd be annoying for a released branch.\n\nOn the whole, I agree that this patch had no business being committed\nto v14.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:38:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "On Mon, Oct 25, 2021 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um ... the removed symbols were at the end of the WaitEventIO enum,\n> so is there really an ABI break? I suppose if an extension contains\n> actual references to the removed symbols, it would fail to recompile,\n> which'd be annoying for a released branch.\n\nI think that you're right. I believe one of the two extensions I know\nabout hopes that values won't be renumbered or become invalid across\nminor releases, and the other contains specific references to these\nparticular constants.\n\nNow of course it is always arguable whether or not anything that some\nextension is doing ought to be deemed an acceptable use of the\nfacilities provided by core, and how much stability ought to be\nguaranteed. But while I agree it's good to remove unused stuff in the\nmaster, it doesn't seem like we really need to back-patch it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:10:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... But while I agree it's good to remove unused stuff in the\n> master, it doesn't seem like we really need to back-patch it.\n\nYeah, exactly. 
I don't see any benefit that's commensurate with\neven a small risk of breaking extensions --- and apparently, in\nthis case that's not a risk but a certainty.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:18:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "> On 25 Oct 2021, at 19:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... But while I agree it's good to remove unused stuff in the\n>> master, it doesn't seem like we really need to back-patch it.\n> \n> Yeah, exactly. I don't see any benefit that's commensurate with\n> even a small risk of breaking extensions --- and apparently, in\n> this case that's not a risk but a certainty.\n\nSince this will cause integer values to have different textual enum value\nrepresentations in 14 and 15+, do we want to skip two numbers by assigning the\nnext wait event the integer value of WAIT_EVENT_WAL_WRITE incremented by three?\nOr enum integer reuse not something we guarantee against across major versions?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 19:34:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Since this will cause integer values to have different textual enum value\n> representations in 14 and 15+, do we want to skip two numbers by assigning the\n> next wait event the integer value of WAIT_EVENT_WAL_WRITE incremented by three?\n> Or enum integer reuse not something we guarantee against across major versions?\n\nWe require a recompile across major versions. 
I don't see a reason why\nthis particular enum needs more stability than any other one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:39:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "On 2021-10-25 13:39:44 -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > Since this will cause integer values to have different textual enum value\n> > representations in 14 and 15+, do we want to skip two numbers by assigning the\n> > next wait event the integer value of WAIT_EVENT_WAL_WRITE incremented by three?\n> > Or enum integer reuse not something we guarantee against across major versions?\n> \n> We require a recompile across major versions. I don't see a reason why\n> this particular enum needs more stability than any other one.\n\n+1. That'd end up pushing us to be more conservative about defining new wait\nevents, which I think would be bad tradeoff.\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:01:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "> On 25 Oct 2021, at 20:01, Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2021-10-25 13:39:44 -0400, Tom Lane wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> Since this will cause integer values to have different textual enum value\n>>> representations in 14 and 15+, do we want to skip two numbers by assigning the\n>>> next wait event the integer value of WAIT_EVENT_WAL_WRITE incremented by three?\n>>> Or enum integer reuse not something we guarantee against across major versions?\n>> \n>> We require a recompile across major versions. I don't see a reason why\n>> this particular enum needs more stability than any other one.\n> \n> +1. 
That'd end up pushing us to be more conservative about defining new wait\n> events, which I think would be bad tradeoff.\n\nFair enough, makes sense.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 20:03:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "On Mon, Oct 25, 2021 at 01:18:26PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... But while I agree it's good to remove unused stuff in the\n>> master, it doesn't seem like we really need to back-patch it.\n> \n> Yeah, exactly. I don't see any benefit that's commensurate with\n> even a small risk of breaking extensions --- and apparently, in\n> this case that's not a risk but a certainty.\n\n+1.\n--\nMichael", "msg_date": "Tue, 26 Oct 2021 10:19:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "On Tue, Oct 26, 2021 at 6:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 25, 2021 at 01:18:26PM -0400, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> ... But while I agree it's good to remove unused stuff in the\n> >> master, it doesn't seem like we really need to back-patch it.\n> >\n> > Yeah, exactly. I don't see any benefit that's commensurate with\n> > even a small risk of breaking extensions --- and apparently, in\n> > this case that's not a risk but a certainty.\n>\n> +1.\n>\n\nI agree with the points raised here and will revert this for v14.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Oct 2021 07:50:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." 
}, { "msg_contents": "On 26.10.21 04:20, Amit Kapila wrote:\n> I agree with the points raised here and will revert this for v14.\n\nThanks, Amit. I appreciate the revert.\n\nNote that the removed events were almost at the end of WaitEventIO enum, \nexcept for one last entry: WAIT_EVENT_WAL_WRITE.\n\nJust as a data point: Our BDR extension indeed references the wait \nevents in question (or at least it used to do so up until that commit).\n\n-- \nMarkus Wanner\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Oct 2021 16:02:22 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." }, { "msg_contents": "On Tue, Oct 26, 2021 at 7:34 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 26.10.21 04:20, Amit Kapila wrote:\n> > I agree with the points raised here and will revert this for v14.\n>\n> Thanks, Amit. I appreciate the revert.\n>\n> Note that the removed events were almost at the end of WaitEventIO enum,\n> except for one last entry: WAIT_EVENT_WAL_WRITE.\n>\n> Just as a data point: Our BDR extension indeed references the wait\n> events in question (or at least it used to do so up until that commit).\n>\n\nThanks for the relevant information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Oct 2021 07:30:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove unused wait events." } ]
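The ABI point argued in this thread — removing members from an enum leaves the values of *earlier* members untouched, renumbers any members that *followed* them, and breaks source code that names a removed member — can be sketched like this (Python's IntEnum stands in for the C enum; the member names are invented for the sketch, not the real WaitEventIO events):

```python
from enum import IntEnum

class WaitEventIOBefore(IntEnum):
    # hypothetical layout mirroring the thread: two unused events sat
    # near the end, with one last entry after them
    SOME_READ = 0
    UNUSED_EVENT_A = 1  # removed by the commit
    UNUSED_EVENT_B = 2  # removed by the commit
    LAST_WRITE = 3      # the one entry that followed the removed ones

class WaitEventIOAfter(IntEnum):
    SOME_READ = 0
    LAST_WRITE = 1

# Members defined before the removed ones keep their integer values, so
# a compiled extension that only stores those integers keeps working...
assert int(WaitEventIOBefore.SOME_READ) == int(WaitEventIOAfter.SOME_READ)

# ...but a member that followed the removed ones is renumbered, which is
# why the same integer can map to different textual names across versions.
assert int(WaitEventIOBefore.LAST_WRITE) != int(WaitEventIOAfter.LAST_WRITE)

# And source that names a removed member no longer builds (here, the
# analogue is an attribute lookup failure instead of a compile error).
assert not hasattr(WaitEventIOAfter, "UNUSED_EVENT_A")
```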
[ { "msg_contents": "Hi,\n\nI'd like to propose to add new wait event reported while archiver process\nis executing archive_command. This would be helpful to observe\nwhat archiver is doing and check whether it has some troubles or not.\nThought? PoC patch attached.\n\nAlso how about adding wait events for other commands like\narchive_cleanup_command, restore_command and recovery_end_command?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 21 Oct 2021 22:57:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "wait event and archive_command" }, { "msg_contents": "On Thu, Oct 21, 2021 at 7:28 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I'd like to propose to add new wait event reported while archiver process\n> is executing archive_command. This would be helpful to observe\n> what archiver is doing and check whether it has some troubles or not.\n> Thought? PoC patch attached.\n>\n> Also how about adding wait events for other commands like\n> archive_cleanup_command, restore_command and recovery_end_command?\n\n+1 for the wait event.\n\nThe following activitymsg that are being set to ps display in\nXLogFileRead and pgarch_archiveXlog have come up for one of our\ninternal team discussions recently:\n\n snprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n xlogfname);\n set_ps_display(activitymsg);\n\n snprintf(activitymsg, sizeof(activitymsg), \"recovering %s\",\n xlogfname);\n set_ps_display(activitymsg);\n\n snprintf(activitymsg, sizeof(activitymsg), \"archiving %s\", xlog);\n set_ps_display(activitymsg);\n\nThe ps display info might be useful if we run postgres on a stand\nalone box and there's someone monitoring at the ps output, but it\ndoesn't help debugging after an issue has occurred. 
How about we have\nthe following statements which will be useful for someone to look at\nthe server logs and know what was/is happening during the recovery and\narchiving. IMO, we should also have the elog statement.\n\nelog(LOG, \"waiting for %s\", xlogfname);\nelog(LOG, \"recovering %s\"\", xlogfname);\nelog(LOG, \"archiving %s\", xlog);\n\nAnother idea could be to have a hook emitting the above info to\noutside components, but a hook just for this purpose isn't a great\nidea IMO.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 21 Oct 2021 20:25:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Thu, Oct 21, 2021 at 10:57:50PM +0900, Fujii Masao wrote:\n> Also how about adding wait events for other commands like\n> archive_cleanup_command, restore_command and recovery_end_command?\n\n+1 to add something for all of them as we track the startup process in\npg_stat_activity. Thinking with a larger picture, this comes down to\nthe usage of system(). 
We could introduce a small wrapper of system()\nthat takes as argument a wait event for the backend.\n--\nMichael", "msg_date": "Fri, 22 Oct 2021 18:32:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On 2021/10/21 23:55, Bharath Rupireddy wrote:\n>> Also how about adding wait events for other commands like\n>> archive_cleanup_command, restore_command and recovery_end_command?\n> \n> +1 for the wait event.\n\nThanks!\nI added the wait events for also restore_command, etc into the patch.\nI attached that updated version of the patch.\n\n\n> The following activitymsg that are being set to ps display in\n> XLogFileRead and pgarch_archiveXlog have come up for one of our\n> internal team discussions recently:\n> \n> snprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n> xlogfname);\n> set_ps_display(activitymsg);\n> \n> snprintf(activitymsg, sizeof(activitymsg), \"recovering %s\",\n> xlogfname);\n> set_ps_display(activitymsg);\n> \n> snprintf(activitymsg, sizeof(activitymsg), \"archiving %s\", xlog);\n> set_ps_display(activitymsg);\n> \n> The ps display info might be useful if we run postgres on a stand\n> alone box and there's someone monitoring at the ps output, but it\n> doesn't help debugging after an issue has occurred. How about we have\n> the following statements which will be useful for someone to look at\n> the server logs and know what was/is happening during the recovery and\n> archiving.\n\nIf an issue occurs while the command is executing,\nthe error message is logged, isn't it? 
Isn't that enough for your case?\n\n\n> IMO, we should also have the elog statement.\n> \n> elog(LOG, \"waiting for %s\", xlogfname);\n> elog(LOG, \"recovering %s\"\", xlogfname);\n> elog(LOG, \"archiving %s\", xlog);\n\nI'm afraid that some people think that it's noisy to always log those messages.\n\n\n> Another idea could be to have a hook emitting the above info to\n> outside components, but a hook just for this purpose isn't a great\n> idea IMO.\n\nYes, this idea sounds overkill to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 1 Nov 2021 18:01:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "\n\nOn 2021/10/22 18:32, Michael Paquier wrote:\n> On Thu, Oct 21, 2021 at 10:57:50PM +0900, Fujii Masao wrote:\n>> Also how about adding wait events for other commands like\n>> archive_cleanup_command, restore_command and recovery_end_command?\n> \n> +1 to add something for all of them as we track the startup process in\n> pg_stat_activity. Thinking with a larger picture, this comes down to\n> the usage of system(). 
We could introduce a small wrapper of system()\n> that takes as argument a wait event for the backend.\n\nThat's an idea, but as far as I implemented the patch, introducing such a wrapper\nfunction doesn't seem to simplify or improve the source code.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 1 Nov 2021 18:04:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Mon, Nov 1, 2021 at 2:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/21 23:55, Bharath Rupireddy wrote:\n> >> Also how about adding wait events for other commands like\n> >> archive_cleanup_command, restore_command and recovery_end_command?\n> >\n> > +1 for the wait event.\n>\n> Thanks!\n> I added the wait events for also restore_command, etc into the patch.\n> I attached that updated version of the patch.\n\nThanks for the patch. It looks good to me other than the following comment:\n\n1) Can't we determine the wait event type based on commandName in\nExecuteRecoveryCommand instead of passing it as an extra param?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 16:49:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "Great, so great. Thank you\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote on Wed.,\n10. Nov. 
2021, 12:20:\n\n> On Mon, Nov 1, 2021 at 2:31 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> > On 2021/10/21 23:55, Bharath Rupireddy wrote:\n> > >> Also how about adding wait events for other commands like\n> > >> archive_cleanup_command, restore_command and recovery_end_command?\n> > >\n> > > +1 for the wait event.\n> >\n> > Thanks!\n> > I added the wait events for also restore_command, etc into the patch.\n> > I attached that updated version of the patch.\n>\n> Thanks for the patch. It looks good to me other than the following comment:\n>\n> 1) Can't we determine the wait event type based on commandName in\n> ExecuteRecoveryCommand instead of passing it as an extra param?\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n>", "msg_date": "Wed, 10 Nov 2021 12:25:58 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, 
{ "msg_contents": "On Mon, Nov 1, 2021 at 2:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > The following activitymsg that are being set to ps display in\n> > XLogFileRead and pgarch_archiveXlog have come up for one of our\n> > internal team discussions recently:\n> >\n> > snprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n> > xlogfname);\n> > set_ps_display(activitymsg);\n> >\n> > snprintf(activitymsg, sizeof(activitymsg), \"recovering %s\",\n> > xlogfname);\n> > set_ps_display(activitymsg);\n> >\n> > snprintf(activitymsg, sizeof(activitymsg), \"archiving %s\", xlog);\n> > set_ps_display(activitymsg);\n> >\n> > The ps display info might be useful if we run postgres on a stand\n> > alone box and there's someone monitoring at the ps output, but it\n> > doesn't help debugging after an issue has occurred. How about we have\n> > the following statements which will be useful for someone to look at\n> > the server logs and know what was/is happening during the recovery and\n> > archiving.\n>\n> If an issue occurs while the command is executing,\n> the error message is logged, isn't it? Isn't that enough for your case?\n\nYou are right when an issue occurs. However, these messages will be\nuseful 1) if the recovery or archiving is taking a lot of time and one\nwould want to understand how it is progressing. 2) if these commands\npass but an issue occurs in some other area of the code. 
IMO, we\nshould have these as LOG messages instead of just setting in the ps\ndisplay for temporary purposes which doesn't work well with the\npostgres on cloud where users/admins/developers don't get to see the\nps display.\n\n> > IMO, we should also have the elog statement.\n> >\n> > elog(LOG, \"waiting for %s\", xlogfname);\n> > elog(LOG, \"recovering %s\"\", xlogfname);\n> > elog(LOG, \"archiving %s\", xlog);\n>\n> I'm afraid that some people think that it's noisy to always log those messages.\n\nI don't think these are noisy messages at all. In fact, they will be\nuseful to answer (if not exact answers, but an approximation) some of\nthe customer queries like \"what is happening in my server during the\nrecovery/archiving phase? how much more time recovery might take?\".\nToday, the server emits lot of LOGs, adding these will not blow up the\nserver logs at all if the log rotation policy is configured\nappropriately.\n\nHaving said the above, I plan to discuss these things in a separate thread.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 17:00:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Wed, Nov 10, 2021 at 5:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 1, 2021 at 2:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > The following activitymsg that are being set to ps display in\n> > > XLogFileRead and pgarch_archiveXlog have come up for one of our\n> > > internal team discussions recently:\n> > >\n> > > snprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n> > > xlogfname);\n> > > set_ps_display(activitymsg);\n> > >\n> > > snprintf(activitymsg, sizeof(activitymsg), \"recovering %s\",\n> > > xlogfname);\n> > > set_ps_display(activitymsg);\n> > >\n> > > snprintf(activitymsg, sizeof(activitymsg), \"archiving %s\", xlog);\n> > > 
set_ps_display(activitymsg);\n> > >\n> > > The ps display info might be useful if we run postgres on a stand\n> > > alone box and there's someone monitoring at the ps output, but it\n> > > doesn't help debugging after an issue has occurred. How about we have\n> > > the following statements which will be useful for someone to look at\n> > > the server logs and know what was/is happening during the recovery and\n> > > archiving.\n> >\n> > If an issue occurs while the command is executing,\n> > the error message is logged, isn't it? Isn't that enough for your case?\n>\n> You are right when an issue occurs. However, these messages will be\n> useful 1) if the recovery or archiving is taking a lot of time and one\n> would want to understand how it is progressing. 2) if these commands\n> pass but an issue occurs in some other area of the code. IMO, we\n> should have these as LOG messages instead of just setting in the ps\n> display for temporary purposes which doesn't work well with the\n> postgres on cloud where users/admins/developers don't get to see the\n> ps display.\n>\n> > > IMO, we should also have the elog statement.\n> > >\n> > > elog(LOG, \"waiting for %s\", xlogfname);\n> > > elog(LOG, \"recovering %s\"\", xlogfname);\n> > > elog(LOG, \"archiving %s\", xlog);\n> >\n> > I'm afraid that some people think that it's noisy to always log those messages.\n>\n> I don't think these are noisy messages at all. In fact, they will be\n> useful to answer (if not exact answers, but an approximation) some of\n> the customer queries like \"what is happening in my server during the\n> recovery/archiving phase? 
how much more time recovery might take?\".\n> Today, the server emits lot of LOGs, adding these will not blow up the\n> server logs at all if the log rotation policy is configured\n> appropriately.\n>\n> Having said the above, I plan to discuss these things in a separate thread.\n\nJust for the records - I've started a new thread for the above\ndiscussion - https://www.postgresql.org/message-id/CALj2ACUfMU%3Dahxivfy%2BZmpVZccd5PASG-_10mLpM55_Y_h4-VA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 23:10:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "\n\nOn 2021/11/10 20:19, Bharath Rupireddy wrote:\n> Thanks for the patch. It looks good to me other than the following comment:\n\nThanks for the review!\n\n\n> 1) Can't we determine the wait event type based on commandName in\n> ExecuteRecoveryCommand instead of passing it as an extra param?\n\nYes, that's possible. But isn't it uglier to make ExecuteRecoveryCommand() have\nthe map of command name and wait event? So I feel inclined to avoid adding\nsomething like the following code into the function... Thought?\n\nif (strcmp(commandName, \"recovery_end_command\") == 0)\n wait_event_info = WAIT_EVENT_RECOVERY_END_COMMAND;\nelse if (strcmp(commandName, \"archive_command_command\") == 0)\n...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Nov 2021 11:23:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Thu, Nov 18, 2021 at 11:23:17AM +0900, Fujii Masao wrote:\n> Yes, that's possible. But isn't it uglier to make ExecuteRecoveryCommand() have\n> the map of command name and wait event? 
So I feel inclined to avoid adding\n> something like the following code into the function... Thought?\n\nFWIW, I find cleaner, and less bug-prone, the approach taken by\nFujii-san's patch to have the wait event set as an argument of the\nfunction rather than trying to guess it from the command data.\n--\nMichael", "msg_date": "Thu, 18 Nov 2021 12:43:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Thu, Nov 18, 2021 at 7:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > 1) Can't we determine the wait event type based on commandName in\n> > ExecuteRecoveryCommand instead of passing it as an extra param?\n>\n> Yes, that's possible. But isn't it uglier to make ExecuteRecoveryCommand() have\n> the map of command name and wait event? So I feel inclined to avoid adding\n> something like the following code into the function... Thought?\n>\n> if (strcmp(commandName, \"recovery_end_command\") == 0)\n> wait_event_info = WAIT_EVENT_RECOVERY_END_COMMAND;\n> else if (strcmp(commandName, \"archive_command_command\") == 0)\n\nYeah let's not do that. I'm fine with the\nwait_event_for_archive_command_v2.patch as is.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 18 Nov 2021 10:04:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "On Thu, Nov 18, 2021 at 10:04:57AM +0530, Bharath Rupireddy wrote:\n> Yeah let's not do that. 
I'm fine with the\n> wait_event_for_archive_command_v2.patch as is.\n\nSwitched the patch as RfC, then.\n--\nMichael", "msg_date": "Fri, 19 Nov 2021 16:54:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "\n\nOn 2021/11/19 16:54, Michael Paquier wrote:\n> On Thu, Nov 18, 2021 at 10:04:57AM +0530, Bharath Rupireddy wrote:\n>> Yeah let's not do that. I'm fine with the\n>> wait_event_for_archive_command_v2.patch as is.\n> \n> Switched the patch as RfC, then.\n\nThanks! Barring any objection, I will commit the patch.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 20 Nov 2021 00:19:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wait event and archive_command" }, { "msg_contents": "\n\nOn 2021/11/20 0:19, Fujii Masao wrote:\n> \n> \n> On 2021/11/19 16:54, Michael Paquier wrote:\n>> On Thu, Nov 18, 2021 at 10:04:57AM +0530, Bharath Rupireddy wrote:\n>>> Yeah let's not do that. I'm fine with the\n>>> wait_event_for_archive_command_v2.patch as is.\n>>\n>> Switched the patch as RfC, then.\n> \n> Thanks! Barring any objection, I will commit the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:31:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wait event and archive_command" } ]
[ { "msg_contents": "One problem I've seen in multiple databases and is when a table has a\nmixture of data sets within it. E.g. A queue table where 99% of the\nentries are \"done\" but most queries are working with the 1% that are\n\"new\" or in other states. Often the statistics are skewed by the\n\"done\" entries and give bad estimates for query planning when the\nquery is actually looking at the other rows.\n\nWe've always talked about this as a \"skewed distribution\" or\n\"intercolumn correlation\" problem. And we've developed some tools for\ndealing with those issues. But I've been thinking that's not the only\nproblem with these cases.\n\nThe problem I'm finding is that the distribution of these small\nsubsets can swing quickly. And understanding intercolumn correlations\neven if we could do it perfectly would be no help at all.\n\nConsider a table with millions of rows that are \"done\" but none that\nare \"pending\". Inserting just a few hundred or thousand new pending\nrows makes any estimates based on the existing statistics entirely\nincorrect. Even if we had perfect statistics capable of making perfect\nestimates they would be entirely wrong once a few inserts of pending\nrows are done.\n\nWorse, this is kind of true for even n_dead_tup, n_mod_since_analyze,\netc are kind of affected by this. It's easy (at least on older\nversions, maybe Peter's work has fixed this for btree) to get severe\nindex bloat because vacuum doesn't run for a long time relative to the\nsize of the busy portion of a table.\n\nI'm imagining to really tackle this we should be doing something like\nnoticing when inserts, updates, deletes are affecting key values that\nare \"rare\" according to the statistics and triggering autovacuum\nANALYZE statements that use indexes to only update the statistics for\nthe relevant key ranges.\n\nObviously this could get complex quickly. Perhaps it should be\nsomething users could declare. 
Some kind of \"partitioned statistics\"\nwhere you declare a where clause and we generate statistics for the\ntable where that where clause is true. Then we could fairly easily\nalso count things like n_mod_since_analyze for that where clause too.\n\nAnd yes, partitioning the table could be a solution to this in some\ncases. I think there are reasons why it isn't always going to work for\nthese issues though, not least that users will likely have other ways\nthey want to partition the data already.\n\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 21 Oct 2021 17:12:38 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Thinking about ANALYZE stats and autovacuum and large non-uniform\n tables" }, { "msg_contents": "On Fri, Oct 22, 2021 at 10:13 AM Greg Stark <stark@mit.edu> wrote:\n> Obviously this could get complex quickly. Perhaps it should be\n> something users could declare. Some kind of \"partitioned statistics\"\n> where you declare a where clause and we generate statistics for the\n> table where that where clause is true. Then we could fairly easily\n> also count things like n_mod_since_analyze for that where clause too.\n\nIt's a different thing, but somehow related and maybe worth\nmentioning, that in DB2 you can declare a table to be VOLATILE. In\nthat case, by some unspecified different heuristics, it'll prefer\nindex scans over table scans, and it's intended to give stable\nperformance for queue-like tables by defending against automatically\nscheduled stats being collected at a bad time. 
It's been a while\nsince I ran busy queue-like workloads on DB2 but I seem to recall it\nwas more about the dangers of tables that sometimes have say 10 rows\nand something 42 million, rather than the case of 42 million DONE rows\nand 0-10 PENDING rows, but not a million miles off.\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:42:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Thinking about ANALYZE stats and autovacuum and large non-uniform\n tables" }, { "msg_contents": "On Thu, Oct 21, 2021 at 2:13 PM Greg Stark <stark@mit.edu> wrote:\n> The problem I'm finding is that the distribution of these small\n> subsets can swing quickly. And understanding intercolumn correlations\n> even if we could do it perfectly would be no help at all.\n>\n> Consider a table with millions of rows that are \"done\" but none that\n> are \"pending\". Inserting just a few hundred or thousand new pending\n> rows makes any estimates based on the existing statistics entirely\n> incorrect. Even if we had perfect statistics capable of making perfect\n> estimates they would be entirely wrong once a few inserts of pending\n> rows are done.\n\nI am very sympathetic to this view of things. Because this asymmetry\nobviously exists, and matters. There is no getting around that.\n\n> Worse, this is kind of true for even n_dead_tup, n_mod_since_analyze,\n> etc are kind of affected by this. It's easy (at least on older\n> versions, maybe Peter's work has fixed this for btree) to get severe\n> index bloat because vacuum doesn't run for a long time relative to the\n> size of the busy portion of a table.\n\nMy work (especially in 14) has definitely helped a great deal with\nindex bloat, by cleaning it up in a targeted fashion, based on\npage-level considerations. This is just the only thing that can work;\nwe can never expect VACUUM to be able to deal with that, no matter\nwhat. 
Simply because it's totally normal and expected for index bloat\nto grow at an uneven rate over time.\n\nI do still think that there is an unsolved issue here, which leads to\nproblems with index bloat when there isn't \"B-Tree keyspace\nconcentration\" of garbage index tuples. That problem is with the\nstatistics that drive VACUUM themselves; they just don't work very\nwell in certain cases [1]. Statistics that drive autovacuum usually\ncome from ANALYZE, of course. The entire intellectual justification\nfor database statistics doesn't really carry over to VACUUM. There are\ncertain \"physical database\" implementation details that bleed into the\nway ANALYZE counts dead rows. For example, most dead tuples are\nusually LP_DEAD stub line pointers (not even tuples). They're only 4\nbytes, whereas live tuples are about 30 bytes at a minimum (depending\non how you count it). This leads to the ANALYZE block-based sampling\nbecoming confused.\n\nThis confusion seems related to the fact that ANALYZE is really a\n\"logical database\" thing. It's slightly amazing that statistics from\nANALYZE work as well as they do for query planning, so we shouldn't be\ntoo surprised.\n\nNote that the TPC-C issue I describe in [1] involves a table that's a\nlittle bit like a queue table, but with lots of non-HOT updates (lots\noverall, but only one update per logical row, ever). This might tie\nthings to what Thomas just said about DB2 and queue tables.\n\n> I'm imagining to really tackle this we should be doing something like\n> noticing when inserts, updates, deletes are affecting key values that\n> are \"rare\" according to the statistics and triggering autovacuum\n> ANALYZE statements that use indexes to only update the statistics for\n> the relevant key ranges.\n\nI'm not sure. I tend to think that the most promising approaches all\ninvolve some kind of execution time smarts about the statistics, and\ntheir inherent unreliability. 
Somehow query execution itself should\nbecome less gullible, at least in cases where we really can have high\nconfidence in the statistics being wrong at this exact time, for this\nexact key space.\n\n[1] https://postgr.es/m/CAH2-Wz=9R83wcwZcPUH4FVPeDM4znzbzMvp3rt21+XhQWMU8+g@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 21 Oct 2021 14:45:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Thinking about ANALYZE stats and autovacuum and large non-uniform\n tables" } ]
[ { "msg_contents": "Hi,\nw.r.t. 0001-Partial-aggregates-push-down-v03.patch\n\nFor partial_agg_ok(),\n\n+ if (agg->aggdistinct || agg->aggvariadic || agg->aggkind !=\nAGGKIND_NORMAL || agg->aggorder != NIL)\n+ ok = false;\n\nSince SearchSysCache1() is not called yet, you can return false directly.\n\n+ if (aggform->aggpartialpushdownsafe != true)\n\nThe above can be written as:\n\n if (!aggform->aggpartialpushdownsafe)\n\nFor build_conv_list():\n\n+ Oid converter_oid = InvalidOid;\n+\n+ if (IsA(tlentry->expr, Aggref))\n...\n+ }\n+ convlist = lappend_oid(convlist, converter_oid);\n\nDo you intend to append InvalidOid to convlist (when tlentry->expr is\nnot Aggref) ?\n\nCheers", "msg_date": "Thu, 21 Oct 2021 14:43:38 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: Partial aggregates pushdown" }, { "msg_contents": "Zhihong Yu писал 2021-10-22 00:43:\n> Hi,\n> w.r.t. 
0001-Partial-aggregates-push-down-v03.patch\n> \n\nHi.\n\n> For partial_agg_ok(),\n> \n> + if (agg->aggdistinct || agg->aggvariadic || agg->aggkind !=\n> AGGKIND_NORMAL || agg->aggorder != NIL)\n> + ok = false;\n> \n> Since SearchSysCache1() is not called yet, you can return false\n> directly.\n\nFixed.\n\n> \n> + if (aggform->aggpartialpushdownsafe != true)\n> \n> The above can be written as:\n> \n> if (!aggform->aggpartialpushdownsafe)\n\nFixed.\n\n> \n> For build_conv_list():\n> \n> + Oid converter_oid = InvalidOid;\n> +\n> + if (IsA(tlentry->expr, Aggref))\n> \n> ...\n> + }\n> + convlist = lappend_oid(convlist, converter_oid);\n> \n> Do you intend to append InvalidOid to convlist (when tlentry->expr is\n> not Aggref) ?\n\nYes, for each tlist member (which matches fpinfo->grouped_tlist in case \nwhen foreignrel is UPPER_REL) we need to find corresponding converter.\nIf we don't append InvalidOid, we can't find convlist member, \ncorresponding to tlist member. Added comments to build_conv_list.\n\nAlso fixed error in pg_dump.c (we selected '0' when \naggpartialconverterfn was not defined in schema, but checked for '-').\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Fri, 22 Oct 2021 09:26:50 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Partial aggregates pushdown" } ]
[ { "msg_contents": "Today, pg_dump does a lot of internal lookups via binary search\nin presorted arrays. I thought it might improve matters\nto replace those binary searches with hash tables, theoretically\nconverting O(log N) searches into O(1) searches. So I tried making\na hash table indexed by CatalogId (tableoid+oid) with simplehash.h,\nand replacing as many data structures as I could with that.\n\nThis makes the code shorter and (IMO anyway) cleaner, but\n\n(a) the executable size increases by a few KB --- apparently, even\nthe minimum subset of simplehash.h's functionality is code-wasteful.\n\n(b) I couldn't measure any change in performance at all. I tried\nit on the regression database and on a toy DB with 10000 simple\ntables. Maybe on a really large DB you'd notice some difference,\nbut I'm not very optimistic now.\n\nSo this experiment feels like a failure, but I thought I'd post\nthe patch and results for the archives' sake. Maybe somebody\nwill think of a way to improve matters. Or maybe it's worth\ndoing just to shorten the code?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 21 Oct 2021 18:27:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Experimenting with hash tables inside pg_dump" }, { "msg_contents": "On 10/21/21, 3:29 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> (b) I couldn't measure any change in performance at all. I tried\r\n> it on the regression database and on a toy DB with 10000 simple\r\n> tables. Maybe on a really large DB you'd notice some difference,\r\n> but I'm not very optimistic now.\r\n\r\nI wonder how many tables you'd need to start seeing a difference.\r\nThere are certainly databases out there with many more than 10,000\r\ntables. 
I'll look into this...\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 21 Oct 2021 23:13:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-21 18:27:25 -0400, Tom Lane wrote:\n> Today, pg_dump does a lot of internal lookups via binary search\n> in presorted arrays. I thought it might improve matters\n> to replace those binary searches with hash tables, theoretically\n> converting O(log N) searches into O(1) searches. So I tried making\n> a hash table indexed by CatalogId (tableoid+oid) with simplehash.h,\n> and replacing as many data structures as I could with that.\n\nThat does sound like a good idea in theory...\n\n\n> This makes the code shorter and (IMO anyway) cleaner, but\n> \n> (a) the executable size increases by a few KB --- apparently, even\n> the minimum subset of simplehash.h's functionality is code-wasteful.\n\nHm. Surprised a bit by that. In an optimized build the difference is a\nsmaller, at least.\n\noptimized:\n text\t data\t bss\t dec\t hex\tfilename\n 448066\t 7048\t 1368\t 456482\t 6f722\tsrc/bin/pg_dump/pg_dump\n 447530\t 7048\t 1496\t 456074\t 6f58a\tsrc/bin/pg_dump/pg_dump.orig\n\ndebug:\n text\t data\t bss\t dec\t hex\tfilename\n 516883\t 7024\t 1352\t 525259\t 803cb\tsrc/bin/pg_dump/pg_dump\n 509819\t 7024\t 1480\t 518323\t 7e8b3\tsrc/bin/pg_dump/pg_dump.orig\n\nThe fact that optimization plays such a role makes me wonder if a good chunk\nof the difference is the slightly more complicated find{Type,Func,...}ByOid()\nfunctions.\n\n\n> (b) I couldn't measure any change in performance at all. I tried\n> it on the regression database and on a toy DB with 10000 simple\n> tables. Maybe on a really large DB you'd notice some difference,\n> but I'm not very optimistic now.\n\nDid you measure runtime of pg_dump, or how much CPU it used? 
I think a lot of\nthe time the backend is a bigger bottleneck than pg_dump...\n\nFor the regression test DB the majority of the time seems to be spent below\ntwo things:\n1) libpq\n2) sortDumpableObjects().\n\nI don't think 2) hits the binary search / hashtable path?\n\n\nIt does seem interesting that a substantial part of the time is spent in/below\nPQexec() and PQfnumber(). Especially the latter shouldn't be too hard to\noptimize away...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:37:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "On 10/21/21, 4:14 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 10/21/21, 3:29 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n>> (b) I couldn't measure any change in performance at all. I tried\r\n>> it on the regression database and on a toy DB with 10000 simple\r\n>> tables. Maybe on a really large DB you'd notice some difference,\r\n>> but I'm not very optimistic now.\r\n>\r\n> I wonder how many tables you'd need to start seeing a difference.\r\n> There are certainly databases out there with many more than 10,000\r\n> tables. I'll look into this...\r\n\r\nWell, I tested with 200,000 tables and saw no difference with this.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 21 Oct 2021 23:59:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Did you measure runtime of pg_dump, or how much CPU it used?\n\nI was looking mostly at wall-clock runtime, though I did notice\nthat the CPU time looked about the same too.\n\n> I think a lot of\n> the time the backend is a bigger bottleneck than pg_dump...\n\nYeah, that. 
I tried doing a system-wide \"perf\" measurement, and soon\nrealized that a big fraction of the time for a \"pg_dump -s\" run is\nbeing spent in the planner :-(. I'm currently experimenting with\nPREPARE'ing pg_dump's repetitive queries, and it's looking very\npromising. More later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 20:22:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-21 16:37:57 -0700, Andres Freund wrote:\n> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:\n> > (a) the executable size increases by a few KB --- apparently, even\n> > the minimum subset of simplehash.h's functionality is code-wasteful.\n> \n> Hm. Surprised a bit by that. In an optimized build the difference is a\n> smaller, at least.\n> \n> optimized:\n> text\t data\t bss\t dec\t hex\tfilename\n> 448066\t 7048\t 1368\t 456482\t 6f722\tsrc/bin/pg_dump/pg_dump\n> 447530\t 7048\t 1496\t 456074\t 6f58a\tsrc/bin/pg_dump/pg_dump.orig\n> \n> debug:\n> text\t data\t bss\t dec\t hex\tfilename\n> 516883\t 7024\t 1352\t 525259\t 803cb\tsrc/bin/pg_dump/pg_dump\n> 509819\t 7024\t 1480\t 518323\t 7e8b3\tsrc/bin/pg_dump/pg_dump.orig\n> \n> The fact that optimization plays such a role makes me wonder if a good chunk\n> of the difference is the slightly more complicated find{Type,Func,...}ByOid()\n> functions.\n\nIt's not that.\n\nIn a debug build a good chunk of it is due to a bunch of Assert()s. Another\npart is that trivial helper functions like SH_PREV() don't get inlined.\n\nThe increase for an optimized build seems to boil down to pg_log_error()\ninvocations. 
If I replace those with an exit(1), the resulting binaries are\nwithin 100 byte.\n\nIf I prevent the compiler from inlining findObjectByCatalogId() in all the\nfind*ByOid() routines, your version is smaller than master even without other\nchanges.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Oct 2021 17:47:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-21 20:22:56 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> Yeah, that. I tried doing a system-wide \"perf\" measurement, and soon\n> realized that a big fraction of the time for a \"pg_dump -s\" run is\n> being spent in the planner :-(.\n\nA trick for seeing the proportions of this easily in perf is to start both\npostgres and pg_dump pinned to a specific CPU, and profile that cpu. That gets\nrid of most of the noise of other programs etc.\n\n\n\n> I'm currently experimenting with\n> PREPARE'ing pg_dump's repetitive queries, and it's looking very\n> promising. More later.\n\nGood idea.\n\nI wonder though if for some of them we should instead replace the per-object\nqueries with one query returning the information for all objects of a type. It\ndoesn't make all that much sense that we build and send one query for each\ntable and index.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Oct 2021 18:09:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder though if for some of them we should instead replace the per-object\n> queries with one query returning the information for all objects of a type. 
It\n> doesn't make all that much sense that we build and send one query for each\n> table and index.\n\nThe trick is the problem I alluded to in another thread: it's not safe to\ndo stuff like pg_get_expr() on tables we don't have lock on.\n\nI've thought about doing something like\n\nSELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)\n\nbut in cases with tens of thousands of tables, it seems unlikely that\nthat's going to behave all that nicely.\n\nThe *real* fix, I suppose, would be to fix all those catalog-inspection\nfunctions so that they operate with respect to the query's snapshot.\nBut that's not a job I'm volunteering for. Besides which, pg_dump\nstill has to cope with back-rev servers where it wouldn't be safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Oct 2021 22:13:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-21 22:13:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder though if for some of them we should instead replace the per-object\n> > queries with one query returning the information for all objects of a type. It\n> > doesn't make all that much sense that we build and send one query for each\n> > table and index.\n> \n> The trick is the problem I alluded to in another thread: it's not safe to\n> do stuff like pg_get_expr() on tables we don't have lock on.\n\nI was looking at getTableAttrs() - sending one query instead of #tables\nqueries yields a quite substantial speedup in a quick prototype. 
And I don't\nthink it changes anything around locking semantics.\n\n\n> I've thought about doing something like\n> \n> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)\n> \n> but in cases with tens of thousands of tables, it seems unlikely that\n> that's going to behave all that nicely.\n\nThat's kinda what I'm doing in the quick hack. But instead of using IN(...) I\nmade it unnest('{oid, oid, ...}'), that scales much better.\n\nA pg_dump --schema-only of the regression database goes from\n\nreal\t0m0.675s\nuser\t0m0.039s\nsys\t0m0.029s\n\nto\n\nreal\t0m0.477s\nuser\t0m0.037s\nsys\t0m0.020s\n\nwhich isn't half-bad.\n\nThere's a few more cases like this I think. But most are harder because the\ndumping happens one-by-one from dumpDumpableObject(). The relatively easy but\nsubstantial cases I could find quickly were getIndexes(), getConstraints(),\ngetTriggers()\n\n\nTo see where it's worth putting in time it'd be useful if getSchemaData() in\nverbose mode printed timing information...\n\n\n> The *real* fix, I suppose, would be to fix all those catalog-inspection\n> functions so that they operate with respect to the query's snapshot.\n> But that's not a job I'm volunteering for. Besides which, pg_dump\n> still has to cope with back-rev servers where it wouldn't be safe.\n\nYea, that's not a small change :(. 
I suspect that we'd need a bunch of new\ncaching infrastructure to make that reasonably performant, since this\npresumably couldn't use syscache etc.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 21 Oct 2021 22:59:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-21 22:13:22 -0400, Tom Lane wrote:\n>> I've thought about doing something like\n>> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)\n>> but in cases with tens of thousands of tables, it seems unlikely that\n>> that's going to behave all that nicely.\n\n> That's kinda what I'm doing in the quick hack. But instead of using IN(...) I\n> made it unnest('{oid, oid, ...}'), that scales much better.\n\nI'm skeptical of that, mainly because it doesn't work in old servers,\nand I really don't want to maintain two fundamentally different\nversions of getTableAttrs(). I don't think you actually need the\nmulti-array form of unnest() here --- we know the TableInfo array\nis in OID order --- but even the single-array form only works\nback to 8.4.\n\nHowever ... looking through getTableAttrs' main query, it seems\nlike the only thing there that's potentially unsafe is the\n\"format_type(t.oid, a.atttypmod)\" call. I wonder if it could be\nsane to convert it into a single query that just scans all of\npg_attribute, and then deal with creating the formatted type names\nseparately, perhaps with an improved version of getFormattedTypeName\nthat could cache the results for non-default typmods. The main\nknock on this approach is the temptation for somebody to stick some\nunsafe function into the query in future. 
We could stick a big fat\nwarning comment into the code, but lately I despair of people reading\ncomments.\n\n> To see where it's worth putting in time it'd be useful if getSchemaData() in\n> verbose mode printed timing information...\n\nI've been running test cases with log_min_duration_statement = 0,\nwhich serves well enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:53:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-22 10:53:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-10-21 22:13:22 -0400, Tom Lane wrote:\n> >> I've thought about doing something like\n> >> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)\n> >> but in cases with tens of thousands of tables, it seems unlikely that\n> >> that's going to behave all that nicely.\n> \n> > That's kinda what I'm doing in the quick hack. But instead of using IN(...) I\n> > made it unnest('{oid, oid, ...}'), that scales much better.\n> \n> I'm skeptical of that, mainly because it doesn't work in old servers,\n> and I really don't want to maintain two fundamentally different\n> versions of getTableAttrs(). I don't think you actually need the\n> multi-array form of unnest() here --- we know the TableInfo array\n> is in OID order --- but even the single-array form only works\n> back to 8.4.\n\nI think we can address that, if we think it's overall a promising approach to\npursue. E.g. if we don't need the indexes, we can make it = ANY().\n\n\n> However ... 
looking through getTableAttrs' main query, it seems\n> like the only thing there that's potentially unsafe is the\n> \"format_type(t.oid, a.atttypmod)\" call.\n\nI assume the default expression bit would also be unsafe?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Oct 2021 08:21:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-22 10:53:31 -0400, Tom Lane wrote:\n>> I'm skeptical of that, mainly because it doesn't work in old servers,\n\n> I think we can address that, if we think it's overall a promising approach to\n> pursue. E.g. if we don't need the indexes, we can make it = ANY().\n\nHmm ... yeah, I guess we could get away with that. It might not scale\nas nicely to a huge database, but probably dumping a huge database\nfrom an ancient server isn't all that interesting.\n\nI'm inclined to think that it could be sane to make getTableAttrs\nand getIndexes use this style, but we probably still want functions\nand such to use per-object queries. In those other catalogs there\nare many built-in objects that we don't really care about. 
The\nprepared-queries hack I was working on last night is probably plenty\ngood enough there, and it's a much less invasive patch.\n\nWere you planning to pursue this further, or did you want me to?\nI'd want to layer it on top of the work I did at [1], else there's\ngoing to be lots of merge conflicts.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2273648.1634764485%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 22 Oct 2021 11:54:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:\n>>> (a) the executable size increases by a few KB --- apparently, even\n>>> the minimum subset of simplehash.h's functionality is code-wasteful.\n\n> If I prevent the compiler from inlining findObjectByCatalogId() in all the\n> find*ByOid() routines, your version is smaller than master even without other\n> changes.\n\nHmm ... seems to depend a lot on which compiler you use.\n\nI was originally looking at it with gcc 8.4.1 (RHEL8 default),\nx86_64. On that, adding pg_noinline to findObjectByCatalogId\nhelps a little, but it's still 3.6K bigger than HEAD.\n\nI then tried gcc 11.2.1/x86_64, finding that the patch adds\nabout 2K and pg_noinline makes no difference.\n\nI also tried it on Apple's clang 13.0.0, both x86_64 and ARM\nversions. On that, the change seems to be a wash or slightly\nsmaller, with maybe a little benefit from pg_noinline but not\nmuch. 
It's hard to tell for sure because size(1) seems to be\nrounding off to a page boundary on that platform.\n\nAnyway, these are all sub-one-percent changes in the code\nsize, so probably we should not sweat that much about it.\nI'm kind of leaning now towards pushing the patch, just\non the grounds that getting rid of all those single-purpose\nindex arrays (and likely future need for more of them)\nis worth it from a maintenance perspective.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 13:32:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi, \n\nOn October 22, 2021 8:54:13 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2021-10-22 10:53:31 -0400, Tom Lane wrote:\n>>> I'm skeptical of that, mainly because it doesn't work in old servers,\n>\n>> I think we can address that, if we think it's overall a promising approach to\n>> pursue. E.g. if we don't need the indexes, we can make it = ANY().\n>\n>Hmm ... yeah, I guess we could get away with that. It might not scale\n>as nicely to a huge database, but probably dumping a huge database\n>from an ancient server isn't all that interesting.\n\nI think compared to the overhead of locking that many tables and sending O(N) queries it shouldn't be a huge factor.\n\nOne thing that looks like it might be worth doing, and not hard, is to use single row mode. No need to materialize all that data twice in memory.\n\n\nAt a later stage it might be worth sending the array separately as a parameter. Perhaps even binary encoded.\n\n\n>I'm inclined to think that it could be sane to make getTableAttrs\n>and getIndexes use this style, but we probably still want functions\n>and such to use per-object queries. In those other catalogs there\n>are many built-in objects that we don't really care about. 
The\n>prepared-queries hack I was working on last night is probably plenty\n>good enough there, and it's a much less invasive patch.\n\nYes, that seems reasonable. I think the triggers query would benefit from the batch approach though - I see that taking a long time in aggregate on a test database with many tables I had around (partially due to the self join), and we already materialize it.\n\n\n>Were you planning to pursue this further, or did you want me to?\n\nIt seems too nice an improvement to drop on the floor. That said, I don't really have the mental bandwidth to pursue this beyond the POC stage - it seemed complicated enough that suggestion accompanied by a prototype was a good idea. So I'd be happy for you to incorporate this into your other changes.\n\n\n>I'd want to layer it on top of the work I did at [1], else there's\n>going to be lots of merge conflicts.\n\nMakes sense. Even if nobody else were doing anything in the area I'd probably want to split it into one commit creating the query once, and then separately implement the batching.\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 22 Oct 2021 11:30:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On October 22, 2021 8:54:13 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Were you planning to pursue this further, or did you want me to?\n\n> It seems too nice an improvement to drop on the floor. That said, I don't really have the mental bandwidth to pursue this beyond the POC stage - it seemed complicated enough that suggestion accompanied by a prototype was a good idea. 
So I'd be happy for you to incorporate this into your other changes.\n\nCool, I'll see what I can do with it, as long as I'm poking around\nin the area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 14:36:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi, \n\nOn October 22, 2021 10:32:30 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>>> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:\n>>>> (a) the executable size increases by a few KB --- apparently, even\n>>>> the minimum subset of simplehash.h's functionality is code-wasteful.\n>\n>> If I prevent the compiler from inlining findObjectByCatalogId() in all the\n>> find*ByOid() routines, your version is smaller than master even without other\n>> changes.\n>\n>Hmm ... seems to depend a lot on which compiler you use.\n\nInline heuristics change a lot over time, so that'd make sense.\n\nI see some win by marking pg_log_error cold. That might be useful more generally too.\n\n\nWhich made me look at the code invoking it from simplehash. I think the patch that made simplehash work in frontend code isn't quite right, because pg_log_error() returns...\n\n\nWonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.\n\n\n>Anyway, these are all sub-one-percent changes in the code\n>size, so probably we should not sweat that much about it.\n>I'm kind of leaning now towards pushing the patch, just\n>on the grounds that getting rid of all those single-purpose\n>index arrays (and likely future need for more of them)\n>is worth it from a maintenance perspective.\n\n+1\n\nThe only thought I had wrt the patch is that I'd always create the hash table. 
That way the related branches can be removed, which is a win code size wise (as well as speed presumably, but I think we're far away from that mattering).\n\n\nThis type of code is where I most wish for a language with proper generic data types/containers...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 22 Oct 2021 11:44:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Which made me look at the code invoking it from simplehash. I think the patch that made simplehash work in frontend code isn't quite right, because pg_log_error() returns...\n\nIndeed, that's broken. I guess we want pg_log_fatal then exit(1).\n\n> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.\n\nSeems plausible --- you want me to go change that?\n\n> The only thought I had wrt the patch is that I'd always create the hash\n> table.\n\nThat'd require adding an explicit init function and figuring out where to\ncall it, which we could do but I didn't (and don't) think it's worth the\ntrouble. One more branch here isn't going to matter, especially given\nthat we can't even measure the presumed macro improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 15:41:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.\n\n> Seems plausible --- you want me to go change that?\n\nHmm, harder than it sounds. 
If I remove \"inline\" from SH_SCOPE then\nthe compiler complains about unreferenced static functions, while\nif I leave it there then adding pg_noinline causes a complaint about\nconflicting options. Seems like we need a less quick-and-dirty\napproach to dealing with unnecessary simplehash support functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 16:32:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nThanks for pushing the error handling cleanup etc!\n\nOn 2021-10-22 16:32:39 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.\n>\n> > Seems plausible --- you want me to go change that?\n>\n> Hmm, harder than it sounds. If I remove \"inline\" from SH_SCOPE then\n> the compiler complains about unreferenced static functions, while\n> if I leave it there then adding pg_noinline causes a complaint about\n> conflicting options.\n\nThe easy way out would be to not declare SH_GROW inside SH_DECLARE - that'd\ncurrently work, because there aren't any calls to grow from outside of\nsimplehash.h. The comment says:\n * ... But resizing to the exact input size can be advantageous\n * performance-wise, when known at some point.\n\nBut perhaps that's sufficiently served by creating the table with the correct\nsize immediately?\n\nIf we were to go for that, we'd just put SH_GROW in the SH_DEFINE section, not\nuse SH_SCOPE, but just static. That works here, and I have some hope it'd not\ncause warnings on other compilers either, because there'll be references from\nthe other inline functions. 
Even if there's a SH_SCOPE=static inline\nsimplehash use inside a header and there aren't any callers in a TU, there'd\nstill be static inline references to it.\n\n\nAnother alternative would be to use __attribute__((unused)) or such on\nnon-static-inline functions that might or might not be used.\n\n\n> Seems like we need a less quick-and-dirty approach to dealing with\n> unnecessary simplehash support functions.\n\nI don't think the problem is unnecessary ones? It's \"cold\" functions we don't\nwant to have inlined into larger functions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:50:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-22 16:32:39 -0400, Tom Lane wrote:\n>> Hmm, harder than it sounds. If I remove \"inline\" from SH_SCOPE then\n>> the compiler complains about unreferenced static functions, while\n>> if I leave it there than adding pg_noinline causes a complaint about\n>> conflicting options.\n\n> The easy way out would be to to not declare SH_GROW inside SH_DECLARE - that'd\n> currently work, because there aren't any calls to grow from outside of\n> simplehash.h.\n\nSeems like a reasonable approach. If somebody wanted to call that\nfrom outside, I'd personally feel they were getting way too friendly\nwith the implementation.\n\n>> Seems like we need a less quick-and-dirty approach to dealing with\n>> unnecessary simplehash support functions.\n\n> I don't think the problem is unnecessary ones?\n\nI was thinking about the stuff like SH_ITERATE, which you might or\nmight not have use for in any particular file. In the case at hand\nhere, a file that doesn't call SH_INSERT would be at risk of getting\nunused-function complaints about SH_GROW. 
But as you say, if we do\nfind that happening, __attribute__((unused)) would probably be\nenough to silence it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:58:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 13:58:06 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> Seems like we need a less quick-and-dirty approach to dealing with\n> >> unnecessary simplehash support functions.\n> \n> > I don't think the problem is unnecessary ones?\n> \n> I was thinking about the stuff like SH_ITERATE, which you might or\n> might not have use for in any particular file. In the case at hand\n> here, a file that doesn't call SH_INSERT would be at risk of getting\n> unused-function complaints about SH_GROW. But as you say, if we do\n> find that happening, __attribute__((unused)) would probably be\n> enough to silence it.\n\nI was hoping that a reference from a static inline function ought to be\nsufficient to prevent warning about an unused-static-not-inline function, even\nif the referencing static inline function is unused... It does work that way\nwith at least the last few versions of gcc (tested 8-11) and clang (tested 6.0\nto 13).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:39:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Experimenting with hash tables inside pg_dump" } ]
[ { "msg_contents": "I returned to the 1995 paper \"Efficient Search of Multidimensional\nB-Trees\" [1] as part of the process of reviewing v39 of the skip scan\npatch, which was posted back in May. It's a great paper, and anybody\ninvolved in the skip scan effort should read it thoroughly (if they\nhaven't already). It's easy to see why people get excited about skip\nscan [2]. But there is a bigger picture here.\n\nI don't necessarily expect to come away from this discussion with a\nmuch better high level architecture for the patch, or any kind of\ndeeper insight, or even a frame of reference for further discussion. I\njust think that we ought to *try* to impose some order on this stuff.\n\nLike many difficult patches, the skip scan patch is not so much\ntroubled by problems with the implementation as it is troubled by\n*ambiguity* about the design. Particularly concerning how skip scan\nmeshes with existing designs, as well as future designs --\nparticularly designs for other MDAM techniques. I've started this\nthread to have a big picture conversation about how to think about\nthese things. Many other MDAM techniques also seem highly appealing.\nMuch of the MDAM stuff is for data warehousing use-cases, while skip\nscan/loose index scan is seen as more of an OLTP thing. But they are\nstill related, clearly.\n\nI'd like to also talk about another patch, that ISTM had that same\nquality -- it was also held back by high level design uncertainty. Back in 2018,\nTom abandoned a patch that transformed a star-schema style query with\nleft outer joins on dimension tables with OR conditions, into an\nequivalent query that UNIONs together 2 distinct queries [3][4].\n\nBelieve it or not, I am now reminded of that patch by the example of\n\"IN() Lists\", from page 5 of the paper. 
We see this example SQL query:\n\nSELECT date, item_class, store, sum(total_sales)\nFROM sales\nWHERE date between '06/01/95' and '06/30/95' and\nitem_class IN (20,35,50) and\nstore IN (200,250)\nGROUP BY dept, date, item_class, store;\n\nGranted, this SQL might not seem directly relevant to Tom's patch at\nfirst -- there is no join for the optimizer to even try to eliminate,\nwhich was the whole basis of Jim Nasby's original complaint, which is\nwhat spurred Tom to write the patch in the first place. But hear me\nout: there is still a fact table (the sales table) with some\ndimensions (the 'D' from 'MDAM') shown in the predicate. Moreover, the\ntable (and this SQL query) drives discussion of an optimization\ninvolving transforming a predicate with many ORs (which is explicitly\nsaid to be logically/semantically equivalent to the IN() lists from\nthe query). They transform the query into a bunch of disjunct clauses\nthat can easily be independently executed, and combined at the end\n(see also \"General OR Optimization\" on page 6 of the paper).\n\nAlso...I'm not entirely sure that the intended underlying \"physical\nplan\" is truly free of join-like scans. If you squint just right, you\nmight see something that you could think of as a \"physical join\" (at\nleast very informally). 
The whole point of this particular \"IN()\nLists\" example is that we get to the following, for each distinct\n\"dept\" and \"date\" in the table:\n\ndept=1, date='06/04/95', item_class=20, store=200\ndept=1, date='06/04/95', item_class=20, store=250\ndept=1, date='06/04/95', item_class=35, store=200\ndept=1, date='06/04/95', item_class=35, store=250\ndept=1, date='06/04/95', item_class=50, store=200\ndept=1, date='06/04/95', item_class=50, store=250\n\nThere are 2400 such accesses in total after transformation -- imagine\nadditional lines like these, for every distinct combination of dept\nand date (only for those dates that actually had sales, which they\nenumerate up-front), for store 200 and 250, and item_class 20, 35, and\n50. This adds up to 2400 lines in total. Even 2400 index probes will\nbe much faster than a full table scan, given that this is a large fact\ntable. The \"sales\" table is a clustered index whose keys are on the\ncolumns \"(dept, date, item_class, store)\", per note at the top of page\n4. The whole point is to avoid having any secondary indexes on this\nfact table, without getting a full scan. We can just probe the primary\nkey 2400 times instead, following this transformation. No need for\nsecondary indexes.\n\nThe plan can be thought of as a DAG, at least informally. This is also\nsomewhat similar to what Tom was thinking about back in 2018. Tom had\nto deduplicate rows during execution (IIRC using a UNION style ad-hoc\napproach that sorted on TIDs), whereas I think that they can get away\nwith skipping that extra step. 
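To make the arithmetic above concrete, here's a quick sketch in Python (the 40 departments and 10 distinct sale dates are made-up numbers, chosen only so the totals match the 2400 accesses; the item_class and store lists come straight from the query):

```python
from itertools import product

# Made-up cardinalities: 40 depts x 10 distinct sale dates = 400 distinct
# (dept, date) prefixes actually present in the table.  MDAM enumerates
# those up front, then crosses them with the IN() list values to build
# the ordered list of primary-key probes.
distinct_prefixes = [(dept, "06/%02d/95" % day)
                     for dept in range(1, 41)
                     for day in range(1, 11)]
item_classes = [20, 35, 50]
stores = [200, 250]

# One probe per line of the listing above: 400 x 3 x 2 = 2400.
probes = [prefix + (item_class, store)
          for prefix, item_class, store
          in product(distinct_prefixes, item_classes, stores)]

print(len(probes))               # 2400 index probes instead of a full scan
print(probes == sorted(probes))  # True: the probes come out in index order
```

Note that the probe list is generated already sorted in index order, which is what lets MDAM preserve "Maintenance of Index Order" without any post-hoc sort.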
Page 7 says \"MDAM removes duplicates\nbefore reading the data, so it does not have to do any post read\noperations to accomplish duplicate elimination (a common problem with\nOR optimization)\".\n\nMy general concern is that the skip scan patch may currently be\nstructured in a way that paints us into a corner, MDAM-wise.\n\nNote that the MDAM paper treats skipping a prefix of columns as a case\nwhere the prefix is handled by pretending that there is a clause that\nlooks like this: \"WHERE date between -inf AND +inf\" -- which is not so\ndifferent from the original sales SQL query example that I have\nhighlighted. We don't tend to think of queries like this (like my\nsales query) as in any way related to skip-scan, because we don't\nimagine that there is any skipping going on. But maybe we should\nrecognize the similarities.\n\nBTW, these imaginary -inf/+inf values seem to me to be just like the\nsentinel values already used inside nbtree, for pivot tuples -- they\nhave explicit -inf values for truncated suffix key columns, and you\ncan think of a rightmost page as having a +inf high key, per the L&Y\npaper. Wearing my B-Tree hat, I don't see much difference between\nimaginary -inf/+inf values, and values from the BETWEEN \"date\" range\nfrom the example SQL query. I have in the past wondered if\n_bt_get_endpoint() should have been implemented that way -- we could\ngo through _bt_search() instead, and get rid of that code. All we need\nis insertion scan keys that can explicitly contain the same -inf/+inf\nsentinel values. Maybe this also allows us to get rid of\nBTScanInsertData.nextkey semantics (not sure offhand).\n\nAnother more concrete concern about the patch series comes from the\nbackwards scan stuff. This is added by a later patch in the patch\nseries, \"v39-0004-Extend-amskip-implementation-for-Btree.patch\". 
It\nstrikes me as a bad thing that we cannot just do leaf-page-at-a-time\nprocessing, without usually needing to hold a pin on the leaf page.\nAfter all, ordinary backwards scans manage to avoid that today, albeit\nby using trickery inside _bt_walk_left(). MDAM-style \"Maintenance of\nIndex Order\" (as described on page 8) seems like a good goal for us\nhere. I don't like the idea of doing ad-hoc duplicate TID elimination\ninside nbtree, across calls made from the executor (whether it's\nduring backwards skip scans, or at any other time). Not because it\nseems to go against the approach taken by the MDAM paper (though it\ndoes); just because it seems kludgy. (I think that Tom felt the same\nway about the TID deduplication stuff in his own patch back in 2018,\ntoo.)\n\nOpen question: What does all of this MDAM business mean for\nScalarArrayOpExpr, if anything?\n\nI freely admit that I could easily be worrying over nothing here. But\nif I am, I'd really like to know *why* that's the case.\n\n[1] http://vldb.org/conf/1995/P710.PDF\n[2] https://blog.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/\n[3] https://www.postgresql.org/message-id/flat/7f70bd5a-5d16-e05c-f0b4-2fdfc8873489%40BlueTreble.com\n[4] https://www.postgresql.org/message-id/flat/14593.1517581614%40sss.pgh.pa.us#caf373b36332f25acb7673bff331c95e\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 21 Oct 2021 19:16:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Great to see some interest in the skip scan patch series again!\r\n\r\n> Like many difficult patches, the skip scan patch is not so much troubled by\r\n> problems with the implementation as it is troubled by\r\n> *ambiguity* about the design. Particularly concerning how skip scan meshes\r\n> with existing designs, as well as future designs -- particularly designs for\r\n> other MDAM techniques. 
I've started this thread to have a big picture\r\n> conversation about how to think about these things. Many other MDAM\r\n> techniques also seem highly appealing.\r\n\r\nI think it is good to have this discussion. In my opinion, Postgres could make really good use of some of the described MDAM techniques.\r\n\r\n> Much of the MDAM stuff is for data warehousing use-cases, while skip\r\n> scan/loose index scan is seen as more of an OLTP thing. But they are still\r\n> related, clearly.\r\n\r\nFWIW I think skip scan is very much data warehousing use-case related - hence why the TimescaleDB people in your [2] reference implemented a simple form of it already for their extension. Skip scan is a really useful feature for large data sets. However, I agree it is only one part of the bigger MDAM picture.\r\n\r\n> \r\n> My general concern is that the skip scan patch may currently be structured in\r\n> a way that paints us into a corner, MDAM-wise.\r\n> \r\n\r\nOne of the concerns I raised before was that the patch may be thinking too simplistically about some things, which would make it difficult to adopt more complex optimizations in the future. One concrete example can be illustrated by a different query on the sales table of the paper's example:\r\n\r\nSELECT DISTINCT dept, date FROM sales WHERE item_class = 100\r\n\r\nThis should skip with a prefix of (dept, date). Suppose we're at (dept, date) = (1, 2021-01-01) and it's skipping to the next prefix: the patch just implements what the MDAM paper describes as the 'probing' step. It finds the beginning of the next prefix. This could be for example (dept, date, item_class) = (1, 2021-01-02, 1). From there onwards, it would just scan the index until it finds item_class=100. What it should do, however, is first 'probe' for the next prefix value and then skip directly to (1, 2021-01-02, 100) (skipping item_class 1-99 altogether). 
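A toy Python model of that two-step skip, over a sorted list of tuples standing in for the btree (all names and data here are mine, purely for illustration; a real implementation would loop on to the following prefix when the equality value is missing):

```python
from bisect import bisect_left, bisect_right

# Sorted (dept, date, item_class) tuples standing in for the index, with
# item_class values 1..200 under every (dept, date) prefix.
index = sorted((dept, "2021-01-%02d" % day, item_class)
               for dept in (1, 2)
               for day in (1, 2, 3)
               for item_class in range(1, 201))

def next_match(index, cur_prefix, item_class):
    # Step 1, the 'probe': find where the next distinct (dept, date)
    # prefix begins, i.e. the first entry past everything with cur_prefix.
    pos = bisect_right(index, cur_prefix + (float("inf"),))
    if pos == len(index):
        return None
    next_prefix = index[pos][:2]
    # Step 2: instead of scanning item_class 1, 2, 3, ... within that
    # prefix, descend again directly to (next_prefix..., item_class).
    pos = bisect_left(index, next_prefix + (item_class,))
    if pos < len(index) and index[pos] == next_prefix + (item_class,):
        return index[pos]
    return None

print(next_match(index, (1, "2021-01-01"), 100))  # (1, '2021-01-02', 100)
```

The point is the second descent: item_class 1-99 under the new prefix are never visited at all.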
The problem if it doesn't support this is that skip scan could have quite unpredictable performance, because sometimes it'll end up going through most of the index where it should be skipping.\r\n\r\nA while ago, I spent quite some time trying to come up with an implementation that would work in this more general case. The nice thing is that with such a more generic implementation, you get almost all the features from the MDAM paper almost for free. I focused on the executor code, not on the planner code - the planner code for the DISTINCT skip part is very similar to the original patch, and I hacked in a way to make it choose a 'skip scan' also for non-DISTINCT queries for testing purposes. For this discussion about MDAM, the planner part is less relevant though. There's still a lot of discussion and work on the planner side too, but I think the two are completely independent.\r\n\r\nI originally posted the more generic patch in [1], together with some technical considerations. That was quite a while ago, so it obviously doesn't apply anymore on master. Therefore, I've attached a rebased version. Unfortunately, it's still based on an older version of the UniqueKeys patch - but since that patch is all planner machinery as well, it doesn't matter so much for the discussion about the executor code either.\r\n\r\nI believe if we want something that fits better with future MDAM use cases, we should take a closer look at the executor code of this patch to drive this discussion. The logic is definitely more complex than in the original patch, but I believe it is also more flexible and future-proof.\r\n\r\n> \r\n> Another more concrete concern about the patch series comes from the\r\n> backwards scan stuff. This is added by a later patch in the patch series, \"v39-\r\n> 0004-Extend-amskip-implementation-for-Btree.patch\". 
It strikes me as a bad\r\n> thing that we cannot just do leaf-page-at-a-time processing, without usually\r\n> needing to hold a pin on the leaf page.\r\n> After all, ordinary backwards scans manage to avoid that today, albeit by\r\n> using trickery inside _bt_walk_left(). MDAM-style \"Maintenance of Index\r\n> Order\" (as described on page 8) seems like a good goal for us here. I don't\r\n> like the idea of doing ad-hoc duplicate TID elimination inside nbtree, across\r\n> calls made from the executor (whether it's during backwards skip scans, or at\r\n> any other time). Not because it seems to go against the approach taken by\r\n> the MDAM paper (though it does); just because it seems kludgy. (I think that\r\n> Tom felt the same way about the TID deduplication stuff in his own patch\r\n> back in 2018,\r\n> too.)\r\n\r\nIt's good to mention that the patch I attached does proper 'leaf-page-at-a-time' processing, so it avoids the problem you describe with v39. It is implemented instead in the same way as a \"regular\" index scan - we process the full leaf page and store the matched tuples in the local state. If a DISTINCT scan wants to do a skip, we first check our local state to see whether that skipping is possible with the matched tuples from the current page. That way we avoid double work, as well as the need to look at the same page again.\r\n\r\n> Open question: What does all of this MDAM business mean for\r\n> ScalarArrayOpExpr, if anything?\r\n> \r\n\r\nThis is a really interesting combination actually. 
I think, ideally, you'd probably get rid of it and provide full support for that with the 'skip' based approach (essentially the ScalarArrayOpExpr seems to do some form of skipping already - it transforms x IN (1,2,3) into 3 separate index scans for x).\r\nHowever, even without doing any work on it, it actually interacts nicely with the skip based approach.\r\n\r\nAs an example, here are some queries based on the 'sales' table of the paper with some data in it (18M rows total, see sales_query.sql attachment for the full example):\r\n\r\n-- terminology from paper: \"intervening range predicate\"\r\nselect date, sum(total_sales)\r\nfrom sales\r\nwhere dept between 2 and 3 and date between '2021-01-05' and '2021-01-10' and item_class=20 and store=250\r\ngroup by dept, date\r\n;\r\nPatch: Execution Time: 0.368 ms\r\nMaster: Execution Time: 416.792 ms\r\n\r\n-- terminology from paper: \"missing key predicate\"\r\nselect date, sum(total_sales)\r\nfrom sales\r\nwhere date between '2021-01-05' and '2021-01-10' and item_class=20 and store=250\r\ngroup by dept, date\r\n;\r\nPatch: Execution Time: 0.667 ms\r\nMaster: Execution Time: 654.684 ms\r\n\r\n-- terminology from paper: \"IN lists\" \r\n-- this is similar to the query from your example with an IN list\r\n-- in the current patch, this query is done with a skip scan with skip prefix (dept, date) and then the ScalarArrayOpExpr for item_class=(20,30,50)\r\nselect date, sum(total_sales)\r\nfrom sales\r\nwhere date between '2021-01-05' and '2021-01-10' and item_class in (20, 30, 50) and store=250\r\ngroup by dept, date\r\n;\r\nPatch: Execution Time: 1.767 ms\r\nMaster: Execution Time: 629.792 ms\r\n\r\nThe other mentioned MDAM optimizations in the paper (NOT =, general OR) are not implemented but don't seem to conflict at all with the current implementation - they seem to be just a matter of transforming the expressions into the right form.\r\n\r\nFrom the patch series above, v9-0001/v9-0002 is the UniqueKeys patch series, 
which focuses on the planner. v2-0001 is Dmitry's patch that extends it to make it possible to use UniqueKeys for the skip scan. Both of these are unfortunately still older patches, but because they are planner machinery they are less relevant to the discussion about the executor here.\r\nPatch v2-0002 contains most of my work and introduces all the executor logic for the skip scan and hooks up the planner for DISTINCT queries to use the skip scan.\r\nPatch v2-0003 is a planner hack that makes the planner pick a skip scan on virtually every possibility. This also enables the skip scan on the queries above that don't have a DISTINCT but where it could be useful.\r\n\r\n\r\n-Floris\r\n\r\n[1] https://www.postgresql.org/message-id/c5c5c974714a47f1b226c857699e8680@opammb0561.comp.optiver.com", "msg_date": "Sat, 23 Oct 2021 19:30:47 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": false, "msg_subject": "RE: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Thu, Oct 21, 2021 at 07:16:00PM -0700, Peter Geoghegan wrote:\n>\n> My general concern is that the skip scan patch may currently be\n> structured in a way that paints us into a corner, MDAM-wise.\n>\n> Note that the MDAM paper treats skipping a prefix of columns as a case\n> where the prefix is handled by pretending that there is a clause that\n> looks like this: \"WHERE date between -inf AND +inf\" -- which is not so\n> different from the original sales SQL query example that I have\n> highlighted. We don't tend to think of queries like this (like my\n> sales query) as in any way related to skip-scan, because we don't\n> imagine that there is any skipping going on. But maybe we should\n> recognize the similarities.\n\nTo avoid potential problems with extensibility in this sense, the\nimplementation needs to explicitly work with sets of disjoint intervals\nof values instead of simple prefix size, one set of intervals per scan\nkey. 
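For concreteness, a minimal Python sketch of what one-interval-set-per-scan-key could look like (entirely illustrative - none of these names or structures come from the patch):

```python
import math

NEG_INF, POS_INF = -math.inf, math.inf

# One list of disjoint, sorted closed (lo, hi) intervals per scan key.
# A skipped prefix column is just the full interval, an IN() list is a
# set of point intervals, and a BETWEEN is a single range.
scan_keys = [
    [(NEG_INF, POS_INF)],            # dept: no predicate -> "skipped" column
    [(5, 10)],                       # date BETWEEN 5 AND 10 (days, simplified)
    [(20, 20), (35, 35), (50, 50)],  # item_class IN (20, 35, 50)
]

def matches(tup, scan_keys):
    # A tuple qualifies if every column falls in one of its key's intervals.
    return all(any(lo <= v <= hi for lo, hi in ivs)
               for v, ivs in zip(tup, scan_keys))

print(matches((7, 6, 35), scan_keys))  # True
print(matches((7, 6, 34), scan_keys))  # False
```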
An interesting idea, doesn't seem to be a big change in terms of\nthe patch itself.\n\n\n", "msg_date": "Sun, 24 Oct 2021 04:44:01 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Hi Peter,\n\nOn 10/21/21 22:16, Peter Geoghegan wrote:\n> I returned to the 1995 paper \"Efficient Search of Multidimensional\n> B-Trees\" [1] as part of the process of reviewing v39 of the skip scan\n> patch, which was posted back in May. It's a great paper, and anybody\n> involved in the skip scan effort should read it thoroughly (if they\n> haven't already). It's easy to see why people get excited about skip\n> scan [2]. But there is a bigger picture here.\n> \n\nThanks for starting this thread !\n\nThe Index Skip Scan patch could affect a lot of areas, so keeping MDAM \nin mind is definitely important.\n\nHowever, I think the key part to progress on the \"core\" functionality \n(B-tree related changes) is to get the planner functionality in place \nfirst. Hopefully we can make progress on that during the November \nCommitFest based on Andy's patch.\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 08:10:56 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Hi,\n\nOn Sat, Oct 23, 2021 at 07:30:47PM +0000, Floris Van Nee wrote:\n> \n> From the patch series above, v9-0001/v9-0002 is the UniqueKeys patch series,\n> which focuses on the planner. v2-0001 is Dmitry's patch that extends it to\n> make it possible to use UniqueKeys for the skip scan. Both of these are\n> unfortunately still older patches, but because they are planner machinery\n> they are less relevant to the discussion about the executor here. 
Patch\n> v2-0002 contains most of my work and introduces all the executor logic for\n> the skip scan and hooks up the planner for DISTINCT queries to use the skip\n> scan. Patch v2-0003 is a planner hack that makes the planner pick a skip\n> scan on virtually every possibility. This also enables the skip scan on the\n> queries above that don't have a DISTINCT but where it could be useful.\n\nThe patchset doesn't apply anymore:\nhttp://cfbot.cputube.org/patch_36_1741.log\n=== Applying patches on top of PostgreSQL commit ID a18b6d2dc288dfa6e7905ede1d4462edd6a8af47 ===\n=== applying patch ./v2-0001-Extend-UniqueKeys.patch\n[...]\npatching file src/include/optimizer/paths.h\nHunk #2 FAILED at 299.\n1 out of 2 hunks FAILED -- saving rejects to file src/include/optimizer/paths.h.rej\n\nCould you send a rebased version? In the meantime I will change the status on\nthe cf app to Waiting on Author.\n\n\n", "msg_date": "Thu, 13 Jan 2022 18:36:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> \n> Could you send a rebased version? In the meantime I will change the status\n> on the cf app to Waiting on Author.\n\nAttached a rebased version.", "msg_date": "Thu, 13 Jan 2022 15:27:08 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": false, "msg_subject": "RE: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Thu, Jan 13, 2022 at 03:27:08PM +0000, Floris Van Nee wrote:\n> >\n> > Could you send a rebased version? In the meantime I will change the status\n> > on the cf app to Waiting on Author.\n>\n> Attached a rebased version.\n\nFYI, I've attached this thread to the CF item as an informational one,\nbut as there are some patches posted here, folks may get confused. 
For\nthose who have landed here with no context, I feel obliged to mention\nthat now there are two alternative patch series posted under the same\nCF item:\n\n* the original one lives in [1], waiting for reviews since the last May\n* an alternative one posted here from Floris\n\n[1]: https://www.postgresql.org/message-id/flat/20200609102247.jdlatmfyeecg52fi@localhost\n\n\n", "msg_date": "Fri, 14 Jan 2022 08:55:26 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 14, 2022 at 08:55:26AM +0100, Dmitry Dolgov wrote:\n> \n> FYI, I've attached this thread to the CF item as an informational one,\n> but as there are some patches posted here, folks may get confused. For\n> those who have landed here with no context, I feel obliged to mention\n> that now there are two alternative patch series posted under the same\n> CF item:\n> \n> * the original one lives in [1], waiting for reviews since the last May\n> * an alternative one posted here from Floris\n\nAh, I indeed wasn't sure of which patchset(s) should actually be reviewed.\nIt's nice to have the alternative approach threads linked in the commit fest,\nbut it seems that the cfbot will use the most recent attachments as the only\npatchset, thus leaving the \"original\" one untested.\n\nI'm not sure of what's the best approach in such situation. 
Maybe creating a\ndifferent CF entry for each alternative, and link the other cf entry on the cf\napp using the \"Add annotations\" or \"Links\" feature rather than attaching\nthreads?\n\n\n", "msg_date": "Fri, 14 Jan 2022 16:03:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Fri, Jan 14, 2022 at 04:03:41PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> On Fri, Jan 14, 2022 at 08:55:26AM +0100, Dmitry Dolgov wrote:\n> >\n> > FYI, I've attached this thread to the CF item as an informational one,\n> > but as there are some patches posted here, folks may get confused. For\n> > those who have landed here with no context, I feel obliged to mention\n> > that now there are two alternative patch series posted under the same\n> > CF item:\n> >\n> > * the original one lives in [1], waiting for reviews since the last May\n> > * an alternative one posted here from Floris\n>\n> Ah, I indeed wasn't sure of which patchset(s) should actually be reviewed.\n> It's nice to have the alternative approach threads linked in the commit fest,\n> but it seems that the cfbot will use the most recent attachments as the only\n> patchset, thus leaving the \"original\" one untested.\n>\n> I'm not sure of what's the best approach in such situation. 
Maybe creating a\n> different CF entry for each alternative, and link the other cf entry on the cf\n> app using the \"Add annotations\" or \"Links\" feature rather than attaching\n> threads?\n\nI don't mind having all of the alternatives under the same CF item, only\none being tested seems to be only a small limitation of cfbot.\n\n\n", "msg_date": "Sat, 22 Jan 2022 22:37:19 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Hi,\n\nOn 2022-01-22 22:37:19 +0100, Dmitry Dolgov wrote:\n> > On Fri, Jan 14, 2022 at 04:03:41PM +0800, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > On Fri, Jan 14, 2022 at 08:55:26AM +0100, Dmitry Dolgov wrote:\n> > >\n> > > FYI, I've attached this thread to the CF item as an informational one,\n> > > but as there are some patches posted here, folks may get confused. For\n> > > those who have landed here with no context, I feel obliged to mention\n> > > that now there are two alternative patch series posted under the same\n> > > CF item:\n> > >\n> > > * the original one lives in [1], waiting for reviews since the last May\n> > > * an alternative one posted here from Floris\n> >\n> > Ah, I indeed wasn't sure of which patchset(s) should actually be reviewed.\n> > It's nice to have the alternative approach threads linked in the commit fest,\n> > but it seems that the cfbot will use the most recent attachments as the only\n> > patchset, thus leaving the \"original\" one untested.\n> >\n> > I'm not sure of what's the best approach in such situation. 
Maybe creating a\n> > different CF entry for each alternative, and link the other cf entry on the cf\n> > app using the \"Add annotations\" or \"Links\" feature rather than attaching\n> > threads?\n> \n> I don't mind having all of the alternatives under the same CF item, only\n> one being tested seems to be only a small limitation of cfbot.\n\nIMO it's pretty clear that having \"duelling\" patches below one CF entry is a\nbad idea. I think they should be split, with inactive approaches marked as\nreturned with feeback or whatnot.\n\nEither way, currently this patch fails on cfbot due to a new GUC:\nhttps://api.cirrus-ci.com/v1/artifact/task/5134905372835840/log/src/test/recovery/tmp_check/regression.diffs\nhttps://cirrus-ci.com/task/5134905372835840\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:34:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Mon, Mar 21, 2022 at 06:34:09PM -0700, Andres Freund wrote:\n>\n> > I don't mind having all of the alternatives under the same CF item, only\n> > one being tested seems to be only a small limitation of cfbot.\n>\n> IMO it's pretty clear that having \"duelling\" patches below one CF entry is a\n> bad idea. I think they should be split, with inactive approaches marked as\n> returned with feeback or whatnot.\n\nOn the other hand even for patches with dependencies (i.e. the patch A\ndepends on the patch B) different CF items cause a lot of confusion for\nreviewers. I guess for various flavours of the same patch it would be\neven worse. 
But I don't have a strong opinion here.\n\n> Either way, currently this patch fails on cfbot due to a new GUC:\n> https://api.cirrus-ci.com/v1/artifact/task/5134905372835840/log/src/test/recovery/tmp_check/regression.diffs\n> https://cirrus-ci.com/task/5134905372835840\n\nThis seems to be easy to solve.", "msg_date": "Tue, 22 Mar 2022 21:00:08 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On Tue, Mar 22, 2022 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> IMO it's pretty clear that having \"duelling\" patches below one CF entry is a\n> bad idea. I think they should be split, with inactive approaches marked as\n> returned with feeback or whatnot.\n\nI have the impression that this thread is getting some value from\nhaving a CF entry, as a multi-person collaboration where people are\ntrading ideas and also making progress that no one wants to mark as\nreturned, but it's vexing for people managing the CF because it's not\nreally proposed for 15. Perhaps what we lack is a new status, \"Work\nIn Progress\" or something?\n\n\n", "msg_date": "Wed, 23 Mar 2022 09:33:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Like many difficult patches, the skip scan patch is not so much\n> troubled by problems with the implementation as it is troubled by\n> *ambiguity* about the design. Particularly concerning how skip scan\n> meshes with existing designs, as well as future designs --\n> particularly designs for other MDAM techniques. I've started this\n> thread to have a big picture conversation about how to think about\n> these things.\n\nPeter asked me off-list to spend some time thinking about the overall\ndirection we ought to be pursuing here. 
I have done that, and here\nare a few modest suggestions.\n\n1. Usually I'm in favor of doing this sort of thing in an index AM\nagnostic way, but here I don't see much point. All of the ideas at\nstake rely fundamentally on having a lexicographically-ordered multi\ncolumn index; but we don't have any of those except btree, nor do\nI think we're likely to get any soon. This motivates the general\ntenor of my remarks below, which is \"do it in access/nbtree/ not in\nthe planner\".\n\n2. The MDAM paper Peter cited is really interesting. You can see\nfragments of those ideas in our existing btree code, particularly in\nthe scan setup stuff that detects redundant or contradictory keys and\ndetermines a scan start strategy. The special handling we implemented\nawhile ago for ScalarArrayOp index quals is also a subset of what they\nwere talking about. It seems to me that if we wanted to implement more\nof those ideas, the relevant work should almost all be done in nbtree\nproper. The planner would need only minor adjustments: btcostestimate\nwould have to be fixed to understand the improvements, and there are\nsome rules in indxpath.c that prevent us from passing \"too complicated\"\nsets of indexquals to the AM, which would need to be relaxed or removed\naltogether.\n\n3. \"Loose\" indexscan (i.e., sometimes re-descend from the tree root\nto find the next index entry) is again something that seems like it's\nmainly nbtree's internal problem. Loose scan is interesting if we\nhave index quals for columns that are after the first column that lacks\nan equality qual, otherwise not. I've worried in the past that we'd\nneed planner/statistical support to figure out whether a loose scan\nis likely to be useful compared to just plowing ahead in the index.\nHowever, that seems to be rendered moot by the idea used in the current\npatchsets, ie scan till we find that we'll have to step off the current\npage, and re-descend at that point. 
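To sketch the idea (a toy model, not nbtree code -- and unlike the real heuristic it re-descends immediately rather than first finishing the current page):

```python
# Toy model only: the 'index' is a sorted list of (a, b) tuples, and a
# binary search for the position just past the current value of 'a' stands
# in for re-descending from the tree root.

from bisect import bisect_right

def distinct_leading_values(index_tuples):
    # Visit one tuple per distinct leading-column value, skipping the rest.
    result, pos = [], 0
    while pos < len(index_tuples):
        a = index_tuples[pos][0]
        result.append(a)
        # 'Re-descend': jump past every remaining tuple with this value of a.
        pos = bisect_right(index_tuples, (a, float('inf')), lo=pos)
    return result

idx = [(1, 0), (1, 1), (1, 2), (7, 0), (9, 0), (9, 1)]
```

Whether the jump beats just walking forward depends on how many duplicates each leading value has, which is exactly what deciding page-at-a-time sidesteps.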
(When and if we find that that\nheuristic is inadequate, we could work on passing some statistical data\nforward. But we don't need any in the v1 patch.) Again, we need some\nwork in btcostestimate to understand how the index scan cost will be\naffected, but I still don't see any pressing need for major planner\nchanges or plan tree contents changes.\n\n4. I find each of the above ideas to be far more attractive than\noptimizing SELECT-DISTINCT-that-matches-an-index, so I don't really\nunderstand why the current patchsets seem to be driven largely\nby that single use-case. I wouldn't even bother with that for the\ninitial patch. When we do get around to it, it still doesn't need\nmajor planner support, I think --- again fixing the cost estimation\nis the bulk of the work. Munro's original 2014 patch showed that we\ndon't really need all that much to get the planner to build such a\nplan; the problem is to convince it that that plan will be cheap.\n\nIn short: I would throw out just about all the planner infrastructure\nthat's been proposed so far. It looks bulky, expensive, and\ndrastically undercommented, and I don't think it's buying us anything\nof commensurate value. The part of the planner that actually needs\nserious thought is btcostestimate, which has been woefully neglected in\nboth of the current patchsets.\n\nBTW, I've had a bee in my bonnet for a long time about whether some of\nnbtree's scan setup work could be done once during planning, rather than\nover again during each indexscan start. This issue might become more\npressing if the work becomes significantly more complicated/expensive,\nwhich these ideas might cause. But it's a refinement that could be\nleft for later --- and in any case, the responsibility would still\nfundamentally be nbtree's. I don't think the planner would do more\nthan call some AM routine that could add decoration to an IndexScan\nplan node.\n\nNow ... 
where did I put my flameproof vest?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Mar 2022 16:55:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Hi,\n\nOn 2022-03-22 16:55:49 -0400, Tom Lane wrote:\n> 4. I find each of the above ideas to be far more attractive than\n> optimizing SELECT-DISTINCT-that-matches-an-index, so I don't really\n> understand why the current patchsets seem to be driven largely\n> by that single use-case.\n\nIt's something causing plenty pain in production environments... Obviously\nit'd be even better if the optimization also triggered in cases like\n SELECT some_indexed_col FROM blarg GROUP BY some_indexed_col\nwhich seems to be what ORMs like to generate.\n\n\n> BTW, I've had a bee in my bonnet for a long time about whether some of\n> nbtree's scan setup work could be done once during planning, rather than\n> over again during each indexscan start.\n\nIt does show up in simple-index-lookup heavy workloads. Not as a major thing,\nbut it's there. 
And it's just architecturally displeasing :)\n\nAre you thinking of just moving the setup stuff in nbtree (presumably parts of\n_bt_first() / _bt_preprocess_keys()) or also stuff in\nExecIndexBuildScanKeys()?\n\nThe latter does show up a bit more heavily in profiles than nbtree specific\nsetup, and given that it's generic executor type stuff, seems even more\namenable to being moved to plan time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Mar 2022 16:06:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-22 16:55:49 -0400, Tom Lane wrote:\n>> BTW, I've had a bee in my bonnet for a long time about whether some of\n>> nbtree's scan setup work could be done once during planning, rather than\n>> over again during each indexscan start.\n\n> It does show up in simple-index-lookup heavy workloads. Not as a major thing,\n> but it's there. And it's just architecturally displeasing :)\n> Are you thinking of just moving the setup stuff in nbtree (presumably parts of\n> _bt_first() / _bt_preprocess_keys()) or also stuff in\n> ExecIndexBuildScanKeys()?\n\nDidn't really have specifics in mind. The key stumbling block is\nthat some (not all) of the work depends on knowing the specific\nvalues of the indexqual comparison keys, so while you could do\nthat work in advance for constant keys, you'd still have to be\nprepared to do work at scan start for non-constant keys. 
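For instance (an invented illustration, not PostgreSQL code): proving that a range qual like 'x > lo AND x < hi' is contradictory is trivial when both bounds are constants, but has to wait until a parameter's value is bound:

```python
# Illustrative only. With constant bounds the contradiction check could run
# at plan time; with a run-time parameter it must be deferred to scan start.

class Param:
    # Stand-in for a parameter whose value is unknown until scan start.
    pass

def contradictory_at_plan_time(lo, hi):
    # True/False when provable from constants, None when it must be deferred.
    if isinstance(lo, Param) or isinstance(hi, Param):
        return None     # can't tell yet; redo this check at scan start
    return lo >= hi
```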
I don't\nhave a clear idea about how to factorize that effectively.\n\nA couple of other random ideas in this space:\n\n* I suspect that a lot of this work overlaps with the efforts that\nbtcostestimate makes along the way to getting a cost estimate.\nSo it's interesting to wonder whether we could refactor so that\nbtcostestimate is integrated with this hypothetical plan-time key\npreprocessing and doesn't duplicate work.\n\n* I think that we run through most or all of that preprocessing\nlogic even for internal catalog accesses, where we know darn well\nhow the keys are set up. We ought to think harder about how we\ncould short-circuit pointless work in those code paths.\n\nI don't think any of this is an essential prerequisite to getting\nsomething done for loose index scans, which ISTM ought to be the first\npoint of attack for v16. Loose index scans per se shouldn't add much\nto the key preprocessing costs. But these ideas likely would be\nuseful to look into before anyone starts on the more complicated\npreprocessing that would be needed for the ideas in the MDAM paper.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Mar 2022 19:34:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Tue, Mar 22, 2022 at 04:55:49PM -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > Like many difficult patches, the skip scan patch is not so much\n> > troubled by problems with the implementation as it is troubled by\n> > *ambiguity* about the design. Particularly concerning how skip scan\n> > meshes with existing designs, as well as future designs --\n> > particularly designs for other MDAM techniques. I've started this\n> > thread to have a big picture conversation about how to think about\n> > these things.\n>\n> Peter asked me off-list to spend some time thinking about the overall\n> direction we ought to be pursuing here. 
I have done that, and here\n> are a few modest suggestions.\n\nThanks. To make sure I understand your proposal better, I have a couple\nof questions:\n\n> In short: I would throw out just about all the planner infrastructure\n> that's been proposed so far. It looks bulky, expensive, and\n> drastically undercommented, and I don't think it's buying us anything\n> of commensurate value.\n\nBroadly speaking planner related changes proposed in the patch so far\nare: UniqueKey, taken from the neighbour thread about select distinct;\nlist of uniquekeys to actually pass information about the specified\nloose scan prefix into nbtree; some verification logic to prevent\napplying skipping when it's not supported. I can imagine taking out\nUniqueKeys and passing loose scan prefix in some other form (the other\nparts seems to be essential) -- is that what you mean?\n\n\n", "msg_date": "Wed, 23 Mar 2022 21:47:47 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> On Tue, Mar 22, 2022 at 04:55:49PM -0400, Tom Lane wrote:\n>> In short: I would throw out just about all the planner infrastructure\n>> that's been proposed so far. It looks bulky, expensive, and\n>> drastically undercommented, and I don't think it's buying us anything\n>> of commensurate value.\n\n> Broadly speaking planner related changes proposed in the patch so far\n> are: UniqueKey, taken from the neighbour thread about select distinct;\n> list of uniquekeys to actually pass information about the specified\n> loose scan prefix into nbtree; some verification logic to prevent\n> applying skipping when it's not supported. 
I can imagine taking out\n> UniqueKeys and passing loose scan prefix in some other form (the other\n> parts seems to be essential) -- is that what you mean?\n\nMy point is that for pure loose scans --- that is, just optimizing a scan,\nnot doing AM-based duplicate-row-elimination --- you do not need to pass\nany new data to btree at all. It can infer what to do on the basis of the\nset of index quals it's handed.\n\nThe bigger picture here is that I think the reason this patch series has\nfailed to progress is that it's too scattershot. You need to pick a\nminimum committable feature and get that done, and then you can move on\nto the next part. I think the minimum committable feature is loose scans,\nwhich will require a fair amount of work in access/nbtree/ but very little\nnew planner code, and will be highly useful in their own right even if we\nnever do anything more.\n\nIn general I feel that the UniqueKey code is a solution looking for a\nproblem, and that treating it as the core of the patchset is a mistake.\nWe should be driving this work off of what nbtree needs to make progress,\nand not building more infrastructure elsewhere than we have to. Maybe\nwe'll end up with something that looks like UniqueKeys, but I'm far from\nconvinced of that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 17:32:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "> On Wed, Mar 23, 2022 at 05:32:46PM -0400, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > On Tue, Mar 22, 2022 at 04:55:49PM -0400, Tom Lane wrote:\n> >> In short: I would throw out just about all the planner infrastructure\n> >> that's been proposed so far. 
It looks bulky, expensive, and\n> >> drastically undercommented, and I don't think it's buying us anything\n> >> of commensurate value.\n>\n> > Broadly speaking planner related changes proposed in the patch so far\n> > are: UniqueKey, taken from the neighbour thread about select distinct;\n> > list of uniquekeys to actually pass information about the specified\n> > loose scan prefix into nbtree; some verification logic to prevent\n> > applying skipping when it's not supported. I can imagine taking out\n> > UniqueKeys and passing loose scan prefix in some other form (the other\n> > parts seems to be essential) -- is that what you mean?\n>\n> My point is that for pure loose scans --- that is, just optimizing a scan,\n> not doing AM-based duplicate-row-elimination --- you do not need to pass\n> any new data to btree at all. It can infer what to do on the basis of the\n> set of index quals it's handed.\n>\n> The bigger picture here is that I think the reason this patch series has\n> failed to progress is that it's too scattershot. You need to pick a\n> minimum committable feature and get that done, and then you can move on\n> to the next part. I think the minimum committable feature is loose scans,\n> which will require a fair amount of work in access/nbtree/ but very little\n> new planner code, and will be highly useful in their own right even if we\n> never do anything more.\n>\n> In general I feel that the UniqueKey code is a solution looking for a\n> problem, and that treating it as the core of the patchset is a mistake.\n> We should be driving this work off of what nbtree needs to make progress,\n> and not building more infrastructure elsewhere than we have to. Maybe\n> we'll end up with something that looks like UniqueKeys, but I'm far from\n> convinced of that.\n\nI see. 
I'll need some thinking time about how it may look (will\nprobably return with more questions).\n\nThe CF item could be set to RwF, what would you say, Jesper?\n\n\n", "msg_date": "Wed, 23 Mar 2022 23:22:32 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On 3/23/22 18:22, Dmitry Dolgov wrote:\n> \n> The CF item could be set to RwF, what would you say, Jesper?\n> \n\nWe want to thank the community for the feedback that we have received \nover the years for this feature. Hopefully a future implementation can \nuse Tom's suggestions to get closer to a committable solution.\n\nHere is the last CommitFest entry [1] for the archives.\n\nRwF\n\n[1] https://commitfest.postgresql.org/37/1741/\n\nBest regards,\n Dmitry & Jesper\n\n\n\n", "msg_date": "Thu, 24 Mar 2022 07:32:14 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On Tue, Mar 22, 2022 at 4:06 PM Andres Freund <andres@anarazel.de> wrote:\n> Are you thinking of just moving the setup stuff in nbtree (presumably parts of\n> _bt_first() / _bt_preprocess_keys()) or also stuff in\n> ExecIndexBuildScanKeys()?\n>\n> The latter does show up a bit more heavily in profiles than nbtree specific\n> setup, and given that it's generic executor type stuff, seems even more\n> amenable to being moved to plan time.\n\nWhen I was working on the patch series that became the nbtree Postgres\n12 work, this came up. At one point I discovered that using palloc0()\nfor the insertion scankey in _bt_first() was a big problem with nested\nloop joins -- it became a really noticeable bottleneck with one of my\ntest cases. I independently discovered what Tom must have figured out\nback in 2005, when he committed d961a56899. 
This led to my making the\nnew insertion scan key structure (BTScanInsertData) not use dynamic\nallocation. So _bt_first() is definitely performance critical for\ncertain types of queries.\n\nWe could get rid of dynamic allocations for BTStackData in\n_bt_first(), perhaps. The problem is that there is no simple,\nreasonable proof of the maximum height on a B-tree, even though a\nB-Tree with more than 7 or 8 levels seems extraordinarily unlikely.\nYou could also invent a slow path (maybe do what we do in\n_bt_insert_parent() in the event of a concurrent root page split/NULL\nstack), but that runs into the problem of being awkward to test, and\npretty ugly. It's doable, but I wouldn't do it unless there was a\npretty noticeable payoff.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 17:13:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> We could get rid of dynamic allocations for BTStackData in\n> _bt_first(), perhaps. The problem is that there is no simple,\n> reasonable proof of the maximum height on a B-tree, even though a\n> B-Tree with more than 7 or 8 levels seems extraordinarily unlikely.\n\nStart with a few entries preallocated, and switch to dynamically\nallocated space if there turn out to be more levels than that,\nperhaps? Not sure if it's worth the trouble.\n\nIn any case, what I was on about is _bt_preprocess_keys() and\nadjacent code. I'm surprised that those aren't more expensive\nthan one palloc in _bt_first. 
Maybe that logic falls through very\nquickly in simple cases, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 20:21:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On Tue, Mar 22, 2022 at 1:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter asked me off-list to spend some time thinking about the overall\n> direction we ought to be pursuing here.\n\nThanks for taking a look!\n\n\"5.5 Exploiting Key Prefixes\" and \"5.6 Ordered Retrieval\" from \"Modern\nB-Tree Techniques\" are also good, BTW.\n\nThe terminology in this area is a mess. MySQL calls\nSELECT-DISTINCT-that-matches-an-index \"loose index scans\". I think\nthat you're talking about skip scan when you say \"loose index scan\".\nSkip scan is where there is an omitted prefix of columns in the SQL\nquery -- omitted columns after the first column that lack an equality\nqual. Pretty sure that MySQL/InnoDB can't do that -- it can only\n\"skip\" to the extent required to make\nSELECT-DISTINCT-that-matches-an-index perform well, but that's about\nit.\n\nIt might be useful for somebody to go write a \"taxonomy of MDAM\ntechniques\", or a glossary. The existing \"Loose indexscan\" Postgres\nwiki page doesn't seem like enough. Something very high level and\nexplicit, with examples, just so we don't end up talking at cross\npurposes too much.\n\n> 1. Usually I'm in favor of doing this sort of thing in an index AM\n> agnostic way, but here I don't see much point. All of the ideas at\n> stake rely fundamentally on having a lexicographically-ordered multi\n> column index; but we don't have any of those except btree, nor do\n> I think we're likely to get any soon. 
This motivates the general\n> tenor of my remarks below, which is \"do it in access/nbtree/ not in\n> the planner\".\n\nThat was my intuition all along, but I didn't quite have the courage\nto say so -- sounds too much like something that an optimizer\ndilettante like me would be expected to say. :-)\n\nSeems like one of those things where lots of high level details\nintrinsically need to be figured out on-the-fly, at execution time,\nrather than during planning. Perhaps it'll be easier to correctly\ndetermine that a skip scan plan is the cheapest in practice than to\naccurately cost skip scan plans. If the only alternative is a\nsequential scan, then perhaps a very approximate cost model will work\nwell enough. It's probably way too early to tell right now, though.\n\n> I've worried in the past that we'd\n> need planner/statistical support to figure out whether a loose scan\n> is likely to be useful compared to just plowing ahead in the index.\n\nI don't expect to be able to come up with a structure that leaves no\nunanswered questions about future MDAM work -- it's not realistic to\nexpect everything to just fall into place. But that's okay. Just\nhaving everybody agree on roughly the right conceptual model is the\nreally important thing. That now seems quite close, which I count as\nreal progress.\n\n> 4. I find each of the above ideas to be far more attractive than\n> optimizing SELECT-DISTINCT-that-matches-an-index, so I don't really\n> understand why the current patchsets seem to be driven largely\n> by that single use-case. I wouldn't even bother with that for the\n> initial patch.\n\nI absolutely agree. I wondered about that myself in the past. My best\nguess is that a certain segment of users are familiar with\nSELECT-DISTINCT-that-matches-an-index from MySQL. And so to some\nextent application frameworks evolved in a world where that capability\nexisted. 
IIRC Jesper once said that Hibernate relied on this\ncapability.\n\nIt's probably a lot easier to implement\nSELECT-DISTINCT-that-matches-an-index if you have the MySQL storage\nengine model, with concurrency control that's typically based on\ntwo-phase locking. I think that MySQL does some amount of\ndeduplication in its executor here -- and *not* in what they call the storage\nengine. This is exactly what I'd like to avoid in Postgres; as I said\n\"Maintenance of Index Order\" (as the paper calls it) seems important,\nand not something to be added later on. Optimizer and executor layers\nthat each barely know the difference between a skip scan and a full\nindex scan seems like something we might actually want to aim for,\nrather than avoid. Teaching nbtree to transform quals into ranges\nsounds odd at first, but it seems like the right approach now, on\nbalance -- that's the only *good* way to maintain index order.\n(Maintaining index order is needed to avoid needing or relying on\ndeduplication in the executor proper, which is even inappropriate in\nan implementation of SELECT-DISTINCT-that-matches-an-index IMO.)\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 18:58:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On Mon, Mar 28, 2022 at 5:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case, what I was on about is _bt_preprocess_keys() and\n> adjacent code. I'm surprised that those aren't more expensive\n> than one palloc in _bt_first. Maybe that logic falls through very\n> quickly in simple cases, though.\n\nI assume that it doesn't really appear in very simple cases (also\ncommon cases). But delaying the scan setup work until execution time\ndoes seem ugly. 
That's probably a good enough reason to refactor.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 19:03:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The terminology in this area is a mess. MySQL calls\n> SELECT-DISTINCT-that-matches-an-index \"loose index scans\". I think\n> that you're talking about skip scan when you say \"loose index scan\".\n> Skip scan is where there is an omitted prefix of columns in the SQL\n> query -- omitted columns after the first column that lack an equality\n> qual.\n\nRight, that's the case I had in mind --- apologies if my terminology\nwas faulty. btree can actually handle such a case now, but what it\nfails to do is re-descend from the tree root instead of plowing\nforward in the index to find the next matching entry.\n\n> It might be useful for somebody to go write a \"taxonomy of MDAM\n> techniques\", or a glossary.\n\n+1. We at least need to be sure we all are using these terms\nthe same way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 22:07:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" }, { "msg_contents": "On Mon, Mar 28, 2022 at 7:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right, that's the case I had in mind --- apologies if my terminology\n> was faulty. btree can actually handle such a case now, but what it\n> fails to do is re-descend from the tree root instead of plowing\n> forward in the index to find the next matching entry.\n\nKNNGIST seems vaguely related to what we'd build for nbtree skip scan,\nthough. GiST index scans are \"inherently loose\", though. KNNGIST uses\na pairing heap/priority queue, which seems like the kind of thing\nnbtree skip scan can avoid.\n\n> +1. 
We at least need to be sure we all are using these terms\n> the same way.\n\nYeah, there are *endless* opportunities for confusion here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 20:19:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: MDAM techniques and Index Skip Scan patch" } ]
[ { "msg_contents": "Hello there hackers,\n\nWe at Zalando have faced some issues around long running idle\ntransactions and were thinking about increasing the visibility of\npg_stat_* views to capture them easily. What I found is that currently\nin pg_stat_activity there is a lot of good information about the\ncurrent state of the process, but it is lacking the cumulative\ninformation on how much time the connection spent being idle, idle in\ntransaction or active, we would like to see cumulative values for each\nof these per connection. I believe it would be helpful for us and more\npeople out there if we could have total connection active and idle\ntime displayed in pg_stat_activity.\n\nTo provide this information I was digging into how the statistics\ncollector is working and found out there is already information like\ntotal time that a connection is active as well as idle computed in\npgstat_report_activity[1]. Ideally, this would be the values we would\nlike to see per process in pg_stat_activity.\n\nCurious to know your thoughts on this.\n\n[1]https://github.com/postgres/postgres/blob/cd3f429d9565b2e5caf0980ea7c707e37bc3b317/src/backend/utils/activity/backend_status.c#L593\n\n\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Fri, 22 Oct 2021 10:22:54 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Fri, 22 Oct 2021 at 10:22, Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> Hello there hackers,\n>\n> We at Zalando have faced some issues around long running idle\n> transactions and were thinking about increasing the visibility of\n> pg_stat_* views to capture them easily. 
What I found is that currently\n> in pg_stat_activity there is a lot of good information about the\n> current state of the process, but it is lacking the cumulative\n> information on how much time the connection spent being idle, idle in\n> transaction or active, we would like to see cumulative values for each\n> of these per connection. I believe it would be helpful for us and more\n> people out there if we could have total connection active and idle\n> time displayed in pg_stat_activity.\n>\n> To provide this information I was digging into how the statistics\n> collector is working and found out there is already information like\n> total time that a connection is active as well as idle computed in\n> pgstat_report_activity[1]. Ideally, this would be the values we would\n> like to see per process in pg_stat_activity.\n>\n> Curious to know your thoughts on this.\n>\n> [1]https://github.com/postgres/postgres/blob/cd3f429d9565b2e5caf0980ea7c707e37bc3b317/src/backend/utils/activity/backend_status.c#L593\n>\n>\n>\n> --\n> Regards,\n> Rafia Sabih\n\nPlease find the attached patch for the idea of our intentions.\nIt basically adds three attributes for idle, idle_in_transaction, and\nactive time respectively.\nPlease let me know your views on this and I shall add this to the\nupcoming commitfest for better tracking.\n\n\n--\nRegards,\nRafia Sabih", "msg_date": "Tue, 26 Oct 2021 13:46:48 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Oct 26, 2021 at 5:17 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> >\n> > To provide this information I was digging into how the statistics\n> > collector is working and found out there is already information like\n> > total time that a connection is active as well as idle computed in\n> > pgstat_report_activity[1]. 
Ideally, this would be the values we would\n> > like to see per process in pg_stat_activity.\n> >\n> > Curious to know your thoughts on this.\n\n+1 for the idea\n\n> Please find the attached patch for the idea of our intentions.\n> It basically adds three attributes for idle, idle_in_transaction, and\n> active time respectively.\n> Please let me know your views on this and I shall add this to the\n> upcoming commitfest for better tracking.\n\nAbout the patch, IIUC earlier all the idle time was accumulated in the\n\"pgStatTransactionIdleTime\" counter, now with your patch you have\nintroduced one more counter which specifically tracks the\nSTATE_IDLEINTRANSACTION state. But my concern is that the\nSTATE_IDLEINTRANSACTION_ABORTED is still computed under STATE_IDLE and\nthat looks odd to me. Either STATE_IDLEINTRANSACTION_ABORTED should\nbe accumulated in the \"pgStatTransactionIdleInTxnTime\" counter or\nthere should be a separate counter for that. But after your patch we\ncan not accumulate this in the \"pgStatTransactionIdleTime\" counter.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 13:29:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, 2 Nov 2021 at 09:00, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 5:17 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > >\n> > > To provide this information I was digging into how the statistics\n> > > collector is working and found out there is already information like\n> > > total time that a connection is active as well as idle computed in\n> > > pgstat_report_activity[1]. 
Ideally, this would be the values we would\n> > > like to see per process in pg_stat_activity.\n> > >\n> > > Curious to know your thoughts on this.\n>\n> +1 for the idea\n>\nThanks!\n\n> > Please find the attached patch for the idea of our intentions.\n> > It basically adds three attributes for idle, idle_in_transaction, and\n> > active time respectively.\n> > Please let me know your views on this and I shall add this to the\n> > upcoming commitfest for better tracking.\n>\n> About the patch, IIUC earlier all the idle time was accumulated in the\n> \"pgStatTransactionIdleTime\" counter, now with your patch you have\n> introduced one more counter which specifically tracks the\n> STATE_IDLEINTRANSACTION state. But my concern is that the\n> STATE_IDLEINTRANSACTION_ABORTED is still computed under STATE_IDLE and\n> that looks odd to me. Either STATE_IDLEINTRANSACTION_ABORTED should\n> be accumulated in the \"pgStatTransactionIdleInTxnTime\" counter or\n> there should be a separate counter for that. But after your patch we\n> can not accumulate this in the \"pgStatTransactionIdleTime\" counter.\n>\nAs per your comments I have added it in pgStatTransactionIdleInTxnTime.\nPlease let me know if there are any further comments.\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Tue, 9 Nov 2021 15:58:27 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Nov 9, 2021 at 8:28 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Tue, 2 Nov 2021 at 09:00, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n\n> > About the patch, IIUC earlier all the idle time was accumulated in the\n> > \"pgStatTransactionIdleTime\" counter, now with your patch you have\n> > introduced one more counter which specifically tracks the\n> > STATE_IDLEINTRANSACTION state. 
But my concern is that the\n> > STATE_IDLEINTRANSACTION_ABORTED is still computed under STATE_IDLE and\n> > that looks odd to me. Either STATE_IDLEINTRANSACTION_ABORTED should\n> > be accumulated in the \"pgStatTransactionIdleInTxnTime\" counter or\n> > there should be a separate counter for that. But after your patch we\n> > can not accumulate this in the \"pgStatTransactionIdleTime\" counter.\n> >\n> As per your comments I have added it in pgStatTransactionIdleInTxnTime.\n> Please let me know if there are any further comments.\n\nI have a few comments,\n\n nulls[29] = true;\n+ values[30] = true;\n+ values[31] = true;\n+ values[32] = true;\n\nThis looks wrong, this should be nulls[] = true not values[]=true.\n\nif ((beentry->st_state == STATE_RUNNING ||\n beentry->st_state == STATE_FASTPATH ||\n beentry->st_state == STATE_IDLEINTRANSACTION ||\n beentry->st_state == STATE_IDLEINTRANSACTION_ABORTED) &&\n state != beentry->st_state)\n{\nif (beentry->st_state == STATE_RUNNING ||\nbeentry->st_state == STATE_FASTPATH)\n{\n pgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n beentry->st_active_time = pgStatActiveTime;\n}\nelse if (beentry->st_state == STATE_IDLEINTRANSACTION ||\n beentry->st_state == STATE_IDLEINTRANSACTION_ABORTED)\n{\n pgstat_count_conn_txn_idle_in_txn_time((PgStat_Counter) secs *\n1000000 + usecs);\n beentry->st_idle_in_transaction_time = pgStatTransactionIdleInTxnTime;\n}\nelse\n{\n pgstat_count_conn_txn_idle_time((PgStat_Counter) secs * 1000000 + usecs);\n beentry->st_idle_time = pgStatTransactionIdleTime;\n}\n\nIt seems that in beentry->st_idle_time, you want to compute the\nSTATE_IDLE, but that state is not handled in the outer \"if\", that\nmeans whenever it comes out of the\nSTATE_IDLE, it will not enter inside this if check. 
You can run and\ntest, I am sure that with this patch the \"idle_time\" will always\nremain 0.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Nov 2021 13:35:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Wed, 10 Nov 2021 at 09:05, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 8:28 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > On Tue, 2 Nov 2021 at 09:00, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n>\n> > > About the patch, IIUC earlier all the idle time was accumulated in the\n> > > \"pgStatTransactionIdleTime\" counter, now with your patch you have\n> > > introduced one more counter which specifically tracks the\n> > > STATE_IDLEINTRANSACTION state. But my concern is that the\n> > > STATE_IDLEINTRANSACTION_ABORTED is still computed under STATE_IDLE and\n> > > that looks odd to me. Either STATE_IDLEINTRANSACTION_ABORTED should\n> > > be accumulated in the \"pgStatTransactionIdleInTxnTime\" counter or\n> > > there should be a separate counter for that. 
But after your patch we\n> > > can not accumulate this in the \"pgStatTransactionIdleTime\" counter.\n> > >\n> > As per your comments I have added it in pgStatTransactionIdleInTxnTime.\n> > Please let me know if there are any further comments.\n>\n> I have a few comments,\n>\n> nulls[29] = true;\n> + values[30] = true;\n> + values[31] = true;\n> + values[32] = true;\n>\n> This looks wrong, this should be nulls[] = true not values[]=true.\n>\n> if ((beentry->st_state == STATE_RUNNING ||\n> beentry->st_state == STATE_FASTPATH ||\n> beentry->st_state == STATE_IDLEINTRANSACTION ||\n> beentry->st_state == STATE_IDLEINTRANSACTION_ABORTED) &&\n> state != beentry->st_state)\n> {\n> if (beentry->st_state == STATE_RUNNING ||\n> beentry->st_state == STATE_FASTPATH)\n> {\n> pgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n> beentry->st_active_time = pgStatActiveTime;\n> }\n> else if (beentry->st_state == STATE_IDLEINTRANSACTION ||\n> beentry->st_state == STATE_IDLEINTRANSACTION_ABORTED)\n> {\n> pgstat_count_conn_txn_idle_in_txn_time((PgStat_Counter) secs *\n> 1000000 + usecs);\n> beentry->st_idle_in_transaction_time = pgStatTransactionIdleInTxnTime;\n> }\n> else\n> {\n> pgstat_count_conn_txn_idle_time((PgStat_Counter) secs * 1000000 + usecs);\n> beentry->st_idle_time = pgStatTransactionIdleTime;\n> }\n>\n> It seems that in beentry->st_idle_time, you want to compute the\n> STATE_IDLE, but that state is not handled in the outer \"if\", that\n> means whenever it comes out of the\n> STATE_IDLE, it will not enter inside this if check. 
You can run and\n> test, I am sure that with this patch the \"idle_time\" will always\n> remain 0.\n>\nThank you Dilip for your time on this.\nAnd yes you are right in both your observations.\nPlease find the attached patch for the updated version.\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Wed, 10 Nov 2021 09:17:22 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Wed, Nov 10, 2021 at 1:47 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> > It seems that in beentry->st_idle_time, you want to compute the\n> > STATE_IDLE, but that state is not handled in the outer \"if\", that\n> > means whenever it comes out of the\n> > STATE_IDLE, it will not enter inside this if check. You can run and\n> > test, I am sure that with this patch the \"idle_time\" will always\n> > remain 0.\n> >\n> Thank you Dilip for your time on this.\n> And yes you are right in both your observations.\n> Please find the attached patch for the updated version.\n\nLooks fine now except these variable names,\n\n PgStat_Counter pgStatTransactionIdleTime = 0;\n+PgStat_Counter pgStatTransactionIdleInTxnTime = 0;\n\nNow, pgStatTransactionIdleTime is collecting just the Idle time so\npgStatTransactionIdleTime should be renamed to \"pgStatIdleTime\" and\npgStatTransactionIdleInTxnTime should be renamed to\n\"pgStatTransactionIdleTime\"\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:54:15 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Mon, 15 Nov 2021 at 10:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Nov 10, 2021 at 1:47 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > > It seems that in beentry->st_idle_time, you want to compute the\n> > > 
STATE_IDLE, but that state is not handled in the outer \"if\", that\n> > > means whenever it comes out of the\n> > > STATE_IDLE, it will not enter inside this if check. You can run and\n> > > test, I am sure that with this patch the \"idle_time\" will always\n> > > remain 0.\n> > >\n> > Thank you Dilip for your time on this.\n> > And yes you are right in both your observations.\n> > Please find the attached patch for the updated version.\n>\n> Looks fine now except these variable names,\n>\n> PgStat_Counter pgStatTransactionIdleTime = 0;\n> +PgStat_Counter pgStatTransactionIdleInTxnTime = 0;\n>\n> Now, pgStatTransactionIdleTime is collecting just the Idle time so\n> pgStatTransactionIdleTime should be renamed to \"pgStatIdleTime\" and\n> pgStatTransactionIdleInTxnTime should be renamed to\n> \"pgStatTransactionIdleTime\"\n>\nGood point!\nDone.\n\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Mon, 15 Nov 2021 12:15:46 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Mon, Nov 15, 2021 at 4:46 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Mon, 15 Nov 2021 at 10:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Nov 10, 2021 at 1:47 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > >\n> > > > It seems that in beentry->st_idle_time, you want to compute the\n> > > > STATE_IDLE, but that state is not handled in the outer \"if\", that\n> > > > means whenever it comes out of the\n> > > > STATE_IDLE, it will not enter inside this if check. 
You can run and\n> > > > test, I am sure that with this patch the \"idle_time\" will always\n> > > > remain 0.\n> > > >\n> > > Thank you Dilip for your time on this.\n> > > And yes you are right in both your observations.\n> > > Please find the attached patch for the updated version.\n> >\n> > Looks fine now except these variable names,\n> >\n> > PgStat_Counter pgStatTransactionIdleTime = 0;\n> > +PgStat_Counter pgStatTransactionIdleInTxnTime = 0;\n> >\n> > Now, pgStatTransactionIdleTime is collecting just the Idle time so\n> > pgStatTransactionIdleTime should be renamed to \"pgStatIdleTime\" and\n> > pgStatTransactionIdleInTxnTime should be renamed to\n> > \"pgStatTransactionIdleTime\"\n> >\n> Good point!\n> Done.\n\n@@ -1018,7 +1019,7 @@ pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg,\nTimestampTz now)\n pgLastSessionReportTime = now;\n tsmsg->m_session_time = (PgStat_Counter) secs * 1000000 + usecs;\n tsmsg->m_active_time = pgStatActiveTime;\n- tsmsg->m_idle_in_xact_time = pgStatTransactionIdleTime;\n+ tsmsg->m_idle_in_xact_time = pgStatIdleTime;\n\nI think this change is wrong, basically, \"tsmsg->m_idle_in_xact_time\"\nis used for counting the database level idle in transaction count, you\ncan check \"pg_stat_get_db_idle_in_transaction_time\" function for that.\nSo \"pgStatTransactionIdleTime\" is the variable counting the idle in\ntransaction time, pgStatIdleTime is just counting the idle time\noutside the transaction so if we make this change we are changing the\nmeaning of tsmsg->m_idle_in_xact_time.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Nov 2021 17:10:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Mon, 15 Nov 2021 at 12:40, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 4:46 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > On 
Mon, 15 Nov 2021 at 10:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 10, 2021 at 1:47 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > > >\n> > > > > It seems that in beentry->st_idle_time, you want to compute the\n> > > > > STATE_IDLE, but that state is not handled in the outer \"if\", that\n> > > > > means whenever it comes out of the\n> > > > > STATE_IDLE, it will not enter inside this if check. You can run and\n> > > > > test, I am sure that with this patch the \"idle_time\" will always\n> > > > > remain 0.\n> > > > >\n> > > > Thank you Dilip for your time on this.\n> > > > And yes you are right in both your observations.\n> > > > Please find the attached patch for the updated version.\n> > >\n> > > Looks fine now except these variable names,\n> > >\n> > > PgStat_Counter pgStatTransactionIdleTime = 0;\n> > > +PgStat_Counter pgStatTransactionIdleInTxnTime = 0;\n> > >\n> > > Now, pgStatTransactionIdleTime is collecting just the Idle time so\n> > > pgStatTransactionIdleTime should be renamed to \"pgStatIdleTime\" and\n> > > pgStatTransactionIdleInTxnTime should be renamed to\n> > > \"pgStatTransactionIdleTime\"\n> > >\n> > Good point!\n> > Done.\n>\n> @@ -1018,7 +1019,7 @@ pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg,\n> TimestampTz now)\n> pgLastSessionReportTime = now;\n> tsmsg->m_session_time = (PgStat_Counter) secs * 1000000 + usecs;\n> tsmsg->m_active_time = pgStatActiveTime;\n> - tsmsg->m_idle_in_xact_time = pgStatTransactionIdleTime;\n> + tsmsg->m_idle_in_xact_time = pgStatIdleTime;\n>\n> I think this change is wrong, basically, \"tsmsg->m_idle_in_xact_time\"\n> is used for counting the database level idle in transaction count, you\n> can check \"pg_stat_get_db_idle_in_transaction_time\" function for that.\n> So \"pgStatTransactionIdleTime\" is the variable counting the idle in\n> transaction time, pgStatIdleTime is just counting the idle time\n> outside the transaction so if we make this change we are changing the\n> 
meaning of tsmsg->m_idle_in_xact_time.\n\nGot it.\nUpdated\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Tue, 16 Nov 2021 12:35:52 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Nov 16, 2021 at 5:06 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > I think this change is wrong, basically, \"tsmsg->m_idle_in_xact_time\"\n> > is used for counting the database level idle in transaction count, you\n> > can check \"pg_stat_get_db_idle_in_transaction_time\" function for that.\n> > So \"pgStatTransactionIdleTime\" is the variable counting the idle in\n> > transaction time, pgStatIdleTime is just counting the idle time\n> > outside the transaction so if we make this change we are changing the\n> > meaning of tsmsg->m_idle_in_xact_time.\n>\n> Got it.\n> Updated\n\nOkay, thanks, I will look into it one more time early next week and if\nI see no issues then I will move it to RFC.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Nov 2021 13:57:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Nov 16, 2021 at 5:06 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> Got it.\n> Updated\n\nThanks for the patch. +1 for adding the idle/idle_in_txn_time/active\ntime. I believe these are the total times a backend in its lifetime\naccumulates. For instance, if a backend runs 100 txns, then these new\ncolumns show the total time that the backend spent during these 100\ntxns, right?\n\nFew comments on the patch:\n\n1) Patch is missing a commit message. 
It is good to have a commit\nmessage describing the high-level of the feature.\n2) This patch needs to bump the catalog version, at the end of the\ncommit message, we usually keep a note \"Bump the catalog version\".\n3) It looks like the documentation is missing [1], for the new columns.\n4) When will these backend variables be reset? Is it at the backend\nstartup? Or some other? If these variables are reset only at the\nbackend startup and do they keep growing during the entire life of the\nbackend process? If yes, what happens for a long running backend/user\nsession, don't they get overflowed?\n\n+\n+ int64 st_active_time;\n+ int64 st_transaction_idle_time;\n+ int64 st_idle_time;\n } PgBackendStatus;\n\n5) Is there any way you can get them tested?\n6) What will be entries of st_active_time, st_transaction_idle_time,\nst_idle_time for non-backend processes, like bg writer, checkpointer,\nparallel worker, bg worker, logical replication launcher, stats\ncollector, sys logger etc?\n\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 27 Nov 2021 08:00:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Sat, Nov 27, 2021 at 8:00 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 5:06 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > Got it.\n> > Updated\n>\n> Thanks for the patch. +1 for adding the idle/idle_in_txn_time/active\n> time. I believe these are the total times a backend in its lifetime\n> accumulates. For instance, if a backend runs 100 txns, then these new\n> columns show the total time that the backend spent during these 100\n> txns, right?\n>\n> Few comments on the patch:\n>\n> 1) Patch is missing a commit message. 
It is good to have a commit\n> message describing the high-level of the feature.\n> 2) This patch needs to bump the catalog version, at the end of the\n> commit message, we usually keep a note \"Bump the catalog version\".\n> 3) It looks like the documentation is missing [1], for the new columns.\n> 4) When will these backend variables be reset? Is it at the backend\n> startup? Or some other? If these variables are reset only at the\n> backend startup and do they keep growing during the entire life of the\n> backend process? If yes, what happens for a long running backend/user\n> session, don't they get overflowed?\n\nThis is a 64-bit variable so I am not sure do we really need to worry\nabout overflow? I mean if we are storing microseconds then also this\nwill be able to last for ~300,000 years no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:32:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Fri, Oct 22, 2021 at 1:53 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> To provide this information I was digging into how the statistics\n> collector is working and found out there is already information like\n> total time that a connection is active as well as idle computed in\n> pgstat_report_activity[1]. Ideally, this would be the values we would\n> like to see per process in pg_stat_activity.\nIt's definitely useful to know how much time a backend has spent for\nquery executions. Once you've this info, you can easily calculate the\nidle time using this information: (now() - backend_start) -\nactive_time. But, I'm wondering why you need to distinguish between\nidle and idle in transactions - what's the usage? Either the backend\nis doing some work or it sits idle. Another useful information would\nbe when the last query execution was ended. 
From this information, you\ncan figure out whether a backend is idle for a long time since the\nlast execution and the execution time of the last query (query_end -\nquery_start).\n\nYou also need to update the documentation.\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n", "msg_date": "Mon, 29 Nov 2021 20:34:14 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Mon, Nov 29, 2021 at 11:04 PM Kuntal Ghosh\n<kuntalghosh.2007@gmail.com> wrote:\n>\n> You also need to update the documentation.\n\nYou also need to update rules.sql: https://cirrus-ci.com/task/6145265819189248\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:16:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 12, 2022 at 02:16:35PM +0800, Julien Rouhaud wrote:\n> \n> On Mon, Nov 29, 2021 at 11:04 PM Kuntal Ghosh\n> <kuntalghosh.2007@gmail.com> wrote:\n> >\n> > You also need to update the documentation.\n> \n> You also need to update rules.sql: https://cirrus-ci.com/task/6145265819189248\n\nThere has been multiple comments in the last two months that weren't addressed\nsince, and also the patch doesn't pass the regression tests anymore.\n\nRafia, do you plan to send a new version soon? 
Without update in the next few\ndays this patch will be closed as Returned with Feedback, per the commitfest\nrules.\n\n\n", "msg_date": "Tue, 25 Jan 2022 22:22:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello,\n\n> Without update in the next few\n> days this patch will be closed as Returned with Feedback,\n\nThank you for the reminder, Julien.\n\nPer agreement with Rafia I have reworked the patch in the past days.\nThe new version 6 is now ready for review.\n\nRegards,\nSergey Dudoladov", "msg_date": "Thu, 27 Jan 2022 11:43:26 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 27, 2022 at 11:43:26AM +0100, Sergey Dudoladov wrote:\n> \n> Per agreement with Rafia I have reworked the patch in the past days.\n> The new version 6 is now ready for review.\n\nGreat, thanks a lot Sergey!\n\nThe cfbot is happy with this new version:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3405\n\n\n", "msg_date": "Thu, 27 Jan 2022 20:36:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi.\n\nAt Thu, 27 Jan 2022 20:36:56 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> On Thu, Jan 27, 2022 at 11:43:26AM +0100, Sergey Dudoladov wrote:\n> > \n> > Per agreement with Rafia I have reworked the patch in the past days.\n> > The new version 6 is now ready for review.\n> \n> Great, thanks a lot Sergey!\n> \n> The cfbot is happy with this new version:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3405\n\nI think we can easily add the duration of the current state to the two\nin pg_stat_get_activity and it would offer 
better information.\n\n\n \t\tif (beentry->st_state == STATE_RUNNING ||\n \t\t\tbeentry->st_state == STATE_FASTPATH)\n-\t\t\tpgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n+\t\t{\n+\t\t\tpgstat_count_conn_active_time((PgStat_Counter) usecs_diff);\n+\t\t\tbeentry->st_total_active_time += usecs_diff;\n+\t\t}\n\nThe two lines operates exactly the same way on variables with slightly\ndifferent behavior. pgStatActiveTime is reported at transaction end\nand reset at every tabstat reporting. st_total_active_time is reported\nimmediately and reset at session end. Since we do the latter, the\nfirst can be omitted by remembering the last values for the local\nvariables at every reporting. This needs additional two exporting\nfunction in pgstatfuncs like pgstat_get_my_queryid so others might\nthink differently.\n\nThe write operation to beentry needs to be enclosed by\nPGSTAT_BEGIN/END_WRITE_ACTIVITY(). In that perspective, it would be\nbetter to move that writes to the PGSTAT_WRITE_ACTIVITY section just\nbelow.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:36:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "At Mon, 29 Nov 2021 20:34:14 +0530, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote in \n> active_time. But, I'm wondering why you need to distinguish between\n> idle and idle in transactions - what's the usage? Either the backend\n> is doing some work or it sits idle. Another useful information would\n\nI believe many people suffer from mysterious long idle in\ntransactions, which harm server performance many ways. 
In many cases\ntransactions with unexpectedly long idle time are an omen or a cause of\ntrouble.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:40:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "At Fri, 28 Jan 2022 14:36:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hi.\n> \n> At Thu, 27 Jan 2022 20:36:56 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> > On Thu, Jan 27, 2022 at 11:43:26AM +0100, Sergey Dudoladov wrote:\n> > > \n> > > Per agreement with Rafia I have reworked the patch in the past days.\n> > > The new version 6 is now ready for review.\n> > \n> > Great, thanks a lot Sergey!\n> > \n> > The cfbot is happy with this new version:\n> > https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3405\n> \n> I think we can easily add the duration of the current state to the two\n> in pg_stat_get_activity and it would offer better information.\n> \n> \n> \t\tif (beentry->st_state == STATE_RUNNING ||\n> \t\t\tbeentry->st_state == STATE_FASTPATH)\n> -\t\t\tpgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n> +\t\t{\n> +\t\t\tpgstat_count_conn_active_time((PgStat_Counter) usecs_diff);\n> +\t\t\tbeentry->st_total_active_time += usecs_diff;\n> +\t\t}\n> \n> The two lines operates exactly the same way on variables with slightly\n> different behavior. pgStatActiveTime is reported at transaction end\n> and reset at every tabstat reporting. st_total_active_time is reported\n> immediately and reset at session end. Since we do the latter, the\n\n> first can be omitted by remembering the last values for the local\n> variables at every reporting. This needs additional two exporting\n\nOf course it's typo(?)
of \"values of the shared variables\".\nSorry for the mistake.\n\n> function in pgstatfuncs like pgstat_get_my_queryid so others might\n> think differently.\n> \n> The write operation to beentry needs to be enclosed by\n> PGSTAT_BEGIN/END_WRITE_ACTIVITY(). In that perspective, it would be\n> better to move that writes to the PGSTAT_WRITE_ACTIVITY section just\n> below.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:43:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nThank you for the reviews.\n\n> > The write operation to beentry needs to be enclosed by\n> > PGSTAT_BEGIN/END_WRITE_ACTIVITY(). In that perspective, it would be\n> > better to move that writes to the PGSTAT_WRITE_ACTIVITY section just\n> > below.\n\nI have fixed it in the new version.\n\n> > if (beentry->st_state == STATE_RUNNING ||\n> > beentry->st_state == STATE_FASTPATH)\n> > - pgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n> > + {\n> > + pgstat_count_conn_active_time((PgStat_Counter) usecs_diff);\n> > + beentry->st_total_active_time += usecs_diff;\n> > + }\n> >\n> > The two lines operates exactly the same way on variables with slightly\n> > different behavior. pgStatActiveTime is reported at transaction end\n> > and reset at every tabstat reporting. st_total_active_time is reported\n> > immediately and reset at session end. Since we do the latter, the\n> > first can be omitted by remembering the last values for the local\n> > variables at every reporting. This needs additional two exporting\n>\n> Of course it's typo(?)
of \"values of the shared variables\".\n\nCould you please elaborate on this idea ?\nSo we have pgStatActiveTime and pgStatIdleInTransactionTime ultimately\nused to report respective metrics in pg_stat_database.\nNow beentry's st_total_active_time / st_total_transaction_idle_time\nduplicates this info, so one may get rid of pgStat*Time counters. Is\nthe idea to report instead of them at every tabstat reporting the\ndifference between the last memorized value of st_total_*_time and\nits current value ?\n\n> > This needs additional two exporting\n> > function in pgstatfuncs like pgstat_get_my_queryid so others might\n> > think differently.\n\nWhat would be example functions to look at ?\n\nRegards,\nSergey", "msg_date": "Mon, 31 Jan 2022 15:11:56 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "At Mon, 31 Jan 2022 15:11:56 +0100, Sergey Dudoladov <sergey.dudoladov@gmail.com> wrote in \n> > > if (beentry->st_state == STATE_RUNNING ||\n> > > beentry->st_state == STATE_FASTPATH)\n> > > - pgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 + usecs);\n> > > + {\n> > > + pgstat_count_conn_active_time((PgStat_Counter) usecs_diff);\n> > > + beentry->st_total_active_time += usecs_diff;\n> > > + }\n> > >\n> > > The two lines operates exactly the same way on variables with slightly\n> > > different behavior. pgStatActiveTime is reported at transaction end\n> > > and reset at every tabstat reporting. st_total_active_time is reported\n> > > immediately and reset at session end. Since we do the latter, the\n> > > first can be omitted by remembering the last values for the local\n> > > variables at every reporting. This needs additional two exporting\n> >\n> > Of course it's typo(?) 
of \"values of the shared variables\".\n> \n> Could you please elaborate on this idea ?\n> So we have pgStatActiveTime and pgStatIdleInTransactionTime ultimately\n> used to report respective metrics in pg_stat_database.\n> Now beentry's st_total_active_time / st_total_transaction_idle_time\n> duplicates this info, so one may get rid of pgStat*Time counters. Is\n> the idea to report instead of them at every tabstat reporting the\n> difference between the last memorized value of st_total_*_time and\n> its current value ?\n\nExactly. The attached first diff is the schetch of that.\n\n> > > This needs additional two exporting\n> > > function in pgstatfuncs like pgstat_get_my_queryid so others might\n> > > think differently.\n> \n> What would be example functions to look at ?\n\npgstat_get_my_queryid..\n\n\nAnd, it seems like I forgot to mention this, but as Kuntal suggested\n(in a different context and objective, though) upthraed, I think that\nwe can show realtime values in the two time fields by adding the time\nof the current state. 
See the attached second diff.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex 0646f53098..27419c1851 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -249,8 +249,8 @@ static int\tpgStatXactRollback = 0;\n PgStat_Counter pgStatBlockReadTime = 0;\n PgStat_Counter pgStatBlockWriteTime = 0;\n static PgStat_Counter pgLastSessionReportTime = 0;\n-PgStat_Counter pgStatActiveTime = 0;\n-PgStat_Counter pgStatTransactionIdleTime = 0;\n+PgStat_Counter pgStatLastActiveTime = 0;\n+PgStat_Counter pgStatLastTransactionIdleTime = 0;\n SessionEndType pgStatSessionEndCause = DISCONNECT_NORMAL;\n \n /* Record that's written to 2PC state file when pgstat state is persisted */\n@@ -1026,8 +1026,13 @@ pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now)\n \t\t\tTimestampDifference(pgLastSessionReportTime, now, &secs, &usecs);\n \t\t\tpgLastSessionReportTime = now;\n \t\t\ttsmsg->m_session_time = (PgStat_Counter) secs * 1000000 + usecs;\n-\t\t\ttsmsg->m_active_time = pgStatActiveTime;\n-\t\t\ttsmsg->m_idle_in_xact_time = pgStatTransactionIdleTime;\n+\n+\t\t\t/* send the difference since the last report */\n+\t\t\ttsmsg->m_active_time =\n+\t\t\t\tpgstat_get_my_active_time() - pgStatLastActiveTime;\n+\t\t\ttsmsg->m_idle_in_xact_time =\n+\t\t\t\tpgstat_get_my_transaction_idle_time() -\n+\t\t\t\tpgStatLastTransactionIdleTime;\n \t\t}\n \t\telse\n \t\t{\n@@ -1039,8 +1044,8 @@ pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now)\n \t\tpgStatXactRollback = 0;\n \t\tpgStatBlockReadTime = 0;\n \t\tpgStatBlockWriteTime = 0;\n-\t\tpgStatActiveTime = 0;\n-\t\tpgStatTransactionIdleTime = 0;\n+\t\tpgStatLastActiveTime = pgstat_get_my_active_time();\n+\t\tpgStatLastTransactionIdleTime = pgstat_get_my_transaction_idle_time();\n \t}\n \telse\n \t{\ndiff --git a/src/backend/utils/activity/backend_status.c 
b/src/backend/utils/activity/backend_status.c\nindex 5f15dcdc05..8b6836a662 100644\n--- a/src/backend/utils/activity/backend_status.c\n+++ b/src/backend/utils/activity/backend_status.c\n@@ -613,15 +613,9 @@ pgstat_report_activity(BackendState state, const char *cmd_str)\n \t\t */\n \t\tif (beentry->st_state == STATE_RUNNING ||\n \t\t\tbeentry->st_state == STATE_FASTPATH)\n-\t\t{\n-\t\t\tpgstat_count_conn_active_time((PgStat_Counter) usecs_diff);\n \t\t\tactive_time_diff = usecs_diff;\n-\t\t}\n \t\telse\n-\t\t{\n-\t\t\tpgstat_count_conn_txn_idle_time((PgStat_Counter) usecs_diff);\n \t\t\ttransaction_idle_time_diff = usecs_diff;\n-\t\t}\n \t}\n \n \t/*\n@@ -1078,6 +1072,48 @@ pgstat_get_my_query_id(void)\n }\n \n \n+/* ----------\n+ * pgstat_get_my_active_time() -\n+ *\n+ * Return current backend's accumulated active time.\n+ */\n+uint64\n+pgstat_get_my_active_time(void)\n+{\n+\tif (!MyBEEntry)\n+\t\treturn 0;\n+\n+\t/*\n+\t * There's no need for a lock around pgstat_begin_read_activity /\n+\t * pgstat_end_read_activity here as it's only called from\n+\t * pg_stat_get_activity which is already protected, or from the same\n+\t * backend which means that there won't be concurrent writes.\n+\t */\n+\treturn MyBEEntry->st_total_active_time;\n+}\n+\n+\n+/* ----------\n+ * pgstat_get_my_transaction_idle_time() -\n+ *\n+ * Return current backend's accumulated in-transaction idel time.\n+ */\n+uint64\n+pgstat_get_my_transaction_idle_time(void)\n+{\n+\tif (!MyBEEntry)\n+\t\treturn 0;\n+\n+\t/*\n+\t * There's no need for a lock around pgstat_begin_read_activity /\n+\t * pgstat_end_read_activity here as it's only called from\n+\t * pg_stat_get_activity which is already protected, or from the same\n+\t * backend which means that there won't be concurrent writes.\n+\t */\n+\treturn MyBEEntry->st_total_transaction_idle_time;\n+}\n+\n+\n /* ----------\n * pgstat_fetch_stat_beentry() -\n *\ndiff --git a/src/include/pgstat.h b/src/include/pgstat.h\nindex e10d20222a..382d7202c1 
100644\n--- a/src/include/pgstat.h\n+++ b/src/include/pgstat.h\n@@ -1185,10 +1185,6 @@ extern void pgstat_initstats(Relation rel);\n \t(pgStatBlockReadTime += (n))\n #define pgstat_count_buffer_write_time(n)\t\t\t\t\t\t\t\\\n \t(pgStatBlockWriteTime += (n))\n-#define pgstat_count_conn_active_time(n)\t\t\t\t\t\t\t\\\n-\t(pgStatActiveTime += (n))\n-#define pgstat_count_conn_txn_idle_time(n)\t\t\t\t\t\t\t\\\n-\t(pgStatTransactionIdleTime += (n))\n \n extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);\n extern void pgstat_count_heap_update(Relation rel, bool hot);\ndiff --git a/src/include/utils/backend_status.h b/src/include/utils/backend_status.h\nindex 96d432ce49..1791dd6842 100644\n--- a/src/include/utils/backend_status.h\n+++ b/src/include/utils/backend_status.h\n@@ -309,6 +309,8 @@ extern const char *pgstat_get_backend_current_activity(int pid, bool checkUser);\n extern const char *pgstat_get_crashed_backend_activity(int pid, char *buffer,\n \t\t\t\t\t\t\t\t\t\t\t\t\t int buflen);\n extern uint64 pgstat_get_my_query_id(void);\n+extern uint64 pgstat_get_my_active_time(void);\n+extern uint64 pgstat_get_my_transaction_idle_time(void);\n \n \n /* ----------\n\ndiff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\nindex 8b6836a662..996f4e88d7 100644\n--- a/src/backend/utils/activity/backend_status.c\n+++ b/src/backend/utils/activity/backend_status.c\n@@ -587,10 +587,7 @@ pgstat_report_activity(BackendState state, const char *cmd_str)\n \t * If the state has changed from \"active\" or \"idle in transaction\",\n \t * calculate the duration.\n \t */\n-\tif ((beentry->st_state == STATE_RUNNING ||\n-\t\t beentry->st_state == STATE_FASTPATH ||\n-\t\t beentry->st_state == STATE_IDLEINTRANSACTION ||\n-\t\t beentry->st_state == STATE_IDLEINTRANSACTION_ABORTED) &&\n+\tif ((PGSTAT_IS_ACTIVE(beentry) || PGSTAT_IS_IDLEINTRANSACTION(beentry)) &&\n \t\tstate != beentry->st_state)\n \t{\n \t\tlong\t\tsecs;\n@@ 
-611,8 +608,7 @@ pgstat_report_activity(BackendState state, const char *cmd_str)\n \t\t * 2. The latter values are reset to 0 once the data has been sent\n \t\t * to the statistics collector.\n \t\t */\n-\t\tif (beentry->st_state == STATE_RUNNING ||\n-\t\t\tbeentry->st_state == STATE_FASTPATH)\n+\t\tif (PGSTAT_IS_ACTIVE(beentry))\n \t\t\tactive_time_diff = usecs_diff;\n \t\telse\n \t\t\ttransaction_idle_time_diff = usecs_diff;\ndiff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\nindex 7c2776c14c..48c0ffa33a 100644\n--- a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -675,6 +675,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n \t\t{\n \t\t\tSockAddr\tzero_clientaddr;\n \t\t\tchar\t *clipped_activity;\n+\t\t\tint64\t\ttmp_time;\n \n \t\t\tswitch (beentry->st_state)\n \t\t\t{\n@@ -917,9 +918,25 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n \t\t\telse\n \t\t\t\tvalues[29] = UInt64GetDatum(beentry->st_query_id);\n \n-\t\t\t/* convert to msec for display */\n-\t\t\tvalues[30] = Float8GetDatum(beentry->st_total_active_time / 1000.0) ;\n-\t\t\tvalues[31] = Float8GetDatum(beentry->st_total_transaction_idle_time / 1000.0);\n+\t\t\ttmp_time = beentry->st_total_active_time;\n+\n+\t\t\t/* add the realtime value to the counter if needed */\n+\t\t\tif (PGSTAT_IS_ACTIVE(beentry))\n+\t\t\t\ttmp_time +=\n+\t\t\t\t\tGetCurrentTimestamp() - beentry->st_state_start_timestamp;\n+\n+\t\t\t/* convert it to msec */\n+\t\t\tvalues[30] = Float8GetDatum(tmp_time / 1000.0) ;\n+\n+\t\t\ttmp_time = beentry->st_total_transaction_idle_time;\n+\n+\t\t\t/* add the realtime value to the counter if needed */\n+\t\t\tif (PGSTAT_IS_IDLEINTRANSACTION(beentry))\n+\t\t\t\ttmp_time +=\n+\t\t\t\t\tGetCurrentTimestamp() - beentry->st_state_start_timestamp;\n+\n+\t\t\t/* convert it to msec */\n+\t\t\tvalues[31] = Float8GetDatum(tmp_time);\n \t\t}\n \t\telse\n \t\t{\ndiff --git a/src/include/utils/backend_status.h 
b/src/include/utils/backend_status.h\nindex 1791dd6842..a03225c4f0 100644\n--- a/src/include/utils/backend_status.h\n+++ b/src/include/utils/backend_status.h\n@@ -235,6 +235,12 @@ typedef struct PgBackendStatus\n \t((before_changecount) == (after_changecount) && \\\n \t ((before_changecount) & 1) == 0)\n \n+/* macros to identify the states for time accounting */\n+#define PGSTAT_IS_ACTIVE(s)\t\t\t\t\t\t\t\t\t\t\t\t\\\n+\t((s)->st_state == STATE_RUNNING || (s)->st_state == STATE_FASTPATH)\n+#define PGSTAT_IS_IDLEINTRANSACTION(s)\t\t\t \\\n+\t((s)->st_state == STATE_IDLEINTRANSACTION ||\t\t\\\n+\t (s)->st_state == STATE_IDLEINTRANSACTION_ABORTED)\n \n /* ----------\n * LocalPgBackendStatus", "msg_date": "Tue, 01 Feb 2022 13:55:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\n> > Could you please elaborate on this idea ?\n> > So we have pgStatActiveTime and pgStatIdleInTransactionTime ultimately\n> > used to report respective metrics in pg_stat_database.\n> > Now beentry's st_total_active_time / st_total_transaction_idle_time\n> > duplicates this info, so one may get rid of pgStat*Time counters. Is\n> > the idea to report instead of them at every tabstat reporting the\n> > difference between the last memorized value of st_total_*_time and\n> > its current value ?\n>\n> Exactly. 
The attached first diff is the schetch of that.\n\nThis diff actually adds more code than it removes and somewhat bloats the patch.\nI decided to incorporate it anyway because the diff explicitly shows\nthat time differences since the last report\nare send to the statistics collector,which is not immediately evident\nfrom the existing PgStat*Time counters.\nThat point may be worth further discussion though.\n\n\n> And, it seems like I forgot to mention this, but as Kuntal suggested\n> (in a different context and objective, though) upthraed, I think that\n> we can show realtime values in the two time fields by adding the time\n> of the current state. See the attached second diff.\n\nThat is exactly what we need in our infra, also included into the patch.\n\n\n@Kyotaro Horiguchi\nThank you for the contribution. I included both of your diffs with\nminor changes.\nShould I add you to the authors of the patch given that now half of it\nis basically your code ?\n\nRegards,\nSergey", "msg_date": "Fri, 4 Feb 2022 10:58:24 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-02-04 10:58:24 +0100, Sergey Dudoladov wrote:\n> Thank you for the contribution. I included both of your diffs with\n> minor changes.\n\nThis currently doesn't apply: http://cfbot.cputube.org/patch_37_3405.log\n\nCould you rebase? 
Marking as waiting on author for now.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:13:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello,\n\nI've updated the patch in preparation for the upcoming commitfest.\n\nRegards,\nSergey.", "msg_date": "Mon, 13 Jun 2022 16:51:00 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 6/13/22 4:51 PM, Sergey Dudoladov wrote:\n> Hello,\n>\n> I've updated the patch in preparation for the upcoming commitfest.\n\nI really like the idea of adding additional information like the ones in \nthis patch, so +1 for the patch.\n\nAs far the patch:\n\n@@ -864,7 +864,9 @@ CREATE VIEW pg_stat_activity AS\n              s.backend_xmin,\n              S.query_id,\n              S.query,\n-            S.backend_type\n+            S.backend_type,\n+            S.active_time,\n+            S.idle_in_transaction_time\n\nwhat about using total_active_time and total_idle_in_transaction_time?\n\nI think that would avoid any confusion and \"total_\" is also already used \nin other pg_stat_* views when appropriate.\n\n@@ -468,6 +468,13 @@ pgstat_beshutdown_hook(int code, Datum arg)\n\n         beentry->st_procpid = 0;        /* mark invalid */\n\n+       /*\n+        * Reset per-backend counters so that accumulated values for the \ncurrent\n+        * backend are not used for future backends.\n+        */\n+       beentry->st_total_active_time = 0;\n+       beentry->st_total_transaction_idle_time = 0;\n\nshouldn't that be in pgstat_bestart() instead? 
(and just let \npgstat_beshutdown_hook() set st_procpid to 0)\n\n         /* so that functions can check if backend_status.c is up via \nMyBEEntry */\n@@ -524,6 +531,8 @@ pgstat_report_activity(BackendState state, const \nchar *cmd_str)\n         TimestampTz start_timestamp;\n         TimestampTz current_timestamp;\n         int                     len = 0;\n+       int64           active_time_diff = 0;\n+       int64           transaction_idle_time_diff = 0;\n\nI think here we can use only a single variable say \"state_time_diff\" for \nexample, as later only one of those two is incremented anyway.\n\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -539,7 +539,7 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n  Datum\n  pg_stat_get_activity(PG_FUNCTION_ARGS)\n  {\n-#define PG_STAT_GET_ACTIVITY_COLS      30\n+#define PG_STAT_GET_ACTIVITY_COLS      32\n         int                     num_backends = \npgstat_fetch_stat_numbackends();\n         int                     curr_backend;\n         int                     pid = PG_ARGISNULL(0) ? -1 : \nPG_GETARG_INT32(0);\n@@ -621,6 +621,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n                 {\n                         SockAddr        zero_clientaddr;\n                         char       *clipped_activity;\n+                       int64           time_to_report;\n\nwhat about total_time_to_report instead?\n\nAlso, maybe not for this patch but I think that would be also useful to \nget the total time waited (so that we would get more inside of what the \n\"active\" time was made of).\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services:https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 4 Jul 2022 18:29:13 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello,\n\nthanks for the helpful review. I have incorporated most of the\nsuggestions into the patch. 
I have also rebased and tested the patch\non top of the current master (2cd2569c72b89200).\n\n> + int64 active_time_diff = 0;\n> + int64 transaction_idle_time_diff = 0;\n>\n> I think here we can use only a single variable say \"state_time_diff\" for\n> example, as later only one of those two is incremented anyway.\n\nI have written it this way to avoid cluttering the critical section\nbetween PGSTAT_(BEGIN|END)_WRITE_ACTIVITY.\nWith two variable one can leave only actual increments in the section\nand check conditions / call TimestampDifference outside of it.\n\nRegards,\nSergey", "msg_date": "Mon, 11 Jul 2022 16:43:12 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Rafia, Sergey,\n\nMany thanks for working on this!\n\n> I have incorporated most of the suggestions into the patch. I have also rebased and tested the patch on top of the current master\n\nI noticed that this patch is marked as \"Needs Review\" and decided to\ntake a look.\n\nI believe there is a bug in the implementation. 
Here is what I did:\n\n```\n57033 (master) =# select * from pg_stat_activity where pid = 57033;\n...\ntotal_active_time | 9.128\ntotal_idle_in_transaction_time | 0\n\n57033 (master) =# select * from pg_stat_activity where pid = 57033;\n...\ntotal_active_time | 10.626\ntotal_idle_in_transaction_time | 0\n\n57033 (master) =# BEGIN;\n57033 (master) =# select * from pg_stat_activity where pid = 57033;\n...\ntotal_active_time | 17.443\ntotal_idle_in_transaction_time | 2314.703\n\n57033 (master) =# select * from pg_stat_activity where pid = 57033;\n...\ntotal_active_time | 2514.635\ntotal_idle_in_transaction_time | 2314.703\n\n57033 (master) =# COMMIT;\n57033 (master) =# select * from pg_stat_activity where pid = 57033;\n...\ntotal_active_time | 22.048\ntotal_idle_in_transaction_time | 7300.911\n```\n\nSo it looks like total_active_time tracks seconds when a user executes\nsingle expressions and milliseconds when running a transaction. It\nshould always track milliseconds.\n\nPlease use `git format-patch` for the next patch and provide a commit\nmessage, as it was previously pointed out by Bharath. Please specify\nthe list of the authors and reviewers and add a note about\nincrementing the catalog version.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 13 Jul 2022 11:56:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi again,\n\n> 57033 (master) =# select * from pg_stat_activity where pid = 57033;\n> ...\n> total_active_time | 2514.635\n> total_idle_in_transaction_time | 2314.703\n>\n> 57033 (master) =# COMMIT;\n> 57033 (master) =# select * from pg_stat_activity where pid = 57033;\n> ...\n> total_active_time | 22.048\n> total_idle_in_transaction_time | 7300.911\n> ```\n\nMy previous message was wrong, total_active_time doesn't track\nseconds. I got confused by the name of this column. 
Still I'm pretty\nconfident it shouldn't decrease.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 13 Jul 2022 12:09:13 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Rafia, Sergey,\n\n+1 for adding the total_active_time and total_idle_in_transaction_time \nto pg_stat_activity.\n\nI reviewed the patch and here are some comments.\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>total_active_time</structfield> <type>double \nprecision</type>\n+ </para>\n+ <para>\n+ Time in milliseconds this backend spent in \n<literal>active</literal> and\n+ <literal>fastpath</literal> states.\n\nIs 'fastpath' an abbreviation of 'fastpath function call'?\nIf so, I feel it's clearer '<literal>fastpath function call</literal>' \nthan '<literal>fastpath</literal>'.\n\n\n+extern uint64 pgstat_get_my_active_time(void);\n+extern uint64 pgstat_get_my_transaction_idle_time(void);\n\nAre these functions necessary?\nIt seems they are not called from anywhere, doesn't it?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 14 Jul 2022 12:15:24 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello,\n\nI have addressed the reviews.\n\n@Aleksander Alekseev thanks for reporting the issue. 
I have altered\nthe patch to respect the behavior of pg_stat_activity, specifically\n[1]\n\n> Another important point is that when a server process is asked to display any of these statistics,\n> it first fetches the most recent report emitted by the collector process and then continues to use this snapshot\n> for all statistical views and functions until the end of its current transaction.\n> So the statistics will show static information as long as you continue the current transaction.\n\nFor the patch it means no computing of real-time values of\ntotal_*_time. Here is an example to illustrate the new behavior:\n\n=# begin;\n\n=*# select total_active_time, total_idle_in_transaction_time from\npg_stat_activity where pid = pg_backend_pid();\n total_active_time | total_idle_in_transaction_time\n-------------------+--------------------------------\n 0.124 | 10505.098\n\npostgres=*# select pg_sleep(10);\n\npostgres=*# select total_active_time, total_idle_in_transaction_time\nfrom pg_stat_activity where pid = pg_backend_pid();\n total_active_time | total_idle_in_transaction_time\n-------------------+--------------------------------\n 0.124 | 10505.098\n\npostgres=*# commit;\n\npostgres=# select total_active_time, total_idle_in_transaction_time\nfrom pg_stat_activity where pid = pg_backend_pid();\n total_active_time | total_idle_in_transaction_time\n-------------------+--------------------------------\n 10015.796 | 29322.831\n\n\n[1] https://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-STATS-VIEWS\n\nRegards,\nSergey", "msg_date": "Thu, 21 Jul 2022 18:22:51 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi Sergey,\n\n> @Aleksander Alekseev thanks for reporting the issue. 
I have altered\n> the patch to respect the behavior of pg_stat_activity, specifically\n> [1]\n>\n> > Another important point is that when a server process is asked to\ndisplay any of these statistics,\n> > it first fetches the most recent report emitted by the collector\nprocess and then continues to use this snapshot\n> > for all statistical views and functions until the end of its current\ntransaction.\n> > So the statistics will show static information as long as you continue\nthe current transaction.\n>\n> For the patch it means no computing of real-time values of\n> total_*_time. Here is an example to illustrate the new behavior:\n>\n> =# begin;\n>\n> =*# select total_active_time, total_idle_in_transaction_time from\n> pg_stat_activity where pid = pg_backend_pid();\n> total_active_time | total_idle_in_transaction_time\n> -------------------+--------------------------------\n> 0.124 | 10505.098\n>\n> postgres=*# select pg_sleep(10);\n>\n> postgres=*# select total_active_time, total_idle_in_transaction_time\n> from pg_stat_activity where pid = pg_backend_pid();\n> total_active_time | total_idle_in_transaction_time\n> -------------------+--------------------------------\n> 0.124 | 10505.098\n>\n> postgres=*# commit;\n>\n> postgres=# select total_active_time, total_idle_in_transaction_time\n> from pg_stat_activity where pid = pg_backend_pid();\n> total_active_time | total_idle_in_transaction_time\n> -------------------+--------------------------------\n> 10015.796 | 29322.831\n>\n>\n> [1]\nhttps://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-STATS-VIEWS\n\nThis looks reasonable.\n\nWhat concerns me though is the fact that total_idle_in_transaction_time for\ngiven session doesn't seem to updated from the perspective of another\nsession:\n\n```\nsession1 (78376) =# BEGIN;\nsession1 (78376) =# select * from pg_stat_activity where pid = 78376;\n...\ntotal_active_time | 40.057\ntotal_idle_in_transaction_time | 34322.171\n\nsession1 (78376) =# select * 
from pg_stat_activity where pid = 78376;\n...\ntotal_active_time | 40.057\ntotal_idle_in_transaction_time | 34322.171\n\nsession2 (78382) =# select * from pg_stat_activity where pid = 78376;\n...\ntotal_active_time | 46.908\ntotal_idle_in_transaction_time | 96933.518\n\nsession2 (78382) =# select * from pg_stat_activity where pid = 78376;\n...\ntotal_active_time | 46.908\ntotal_idle_in_transaction_time | 96933.518 <--- doesn't change!\n\nsession1 (78376) =# COMMIT;\nsession1 (78376) =# select * from pg_stat_activity where pid = 78376;\n...\ntotal_active_time | 47.16\ntotal_idle_in_transaction_time | 218422.143\n\nsession2 (78382) =# select * from pg_stat_activity where pid = 78376;\ntotal_active_time | 50.631\ntotal_idle_in_transaction_time | 218422.143\n```\n\nThis is consistent with the current documentation:\n\n> Each individual server process transmits new statistical counts to the\ncollector just before going idle; so a query or transaction still in\nprogress does not affect the displayed totals.\n\nBut it makes me wonder if there will be a lot of use of\ntotal_idle_in_transaction_time and if the patch should actually alter this\nbehavior.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Sergey,> @Aleksander Alekseev thanks for reporting the issue. I have altered> the patch to respect the behavior of pg_stat_activity, specifically> [1]>> > Another important point is that when a server process is asked to display any of these statistics,> > it first fetches the most recent report emitted by the collector process and then continues to use this snapshot> > for all statistical views and functions until the end of its current transaction.> > So the statistics will show static information as long as you continue the current transaction.>> For the patch it means no computing of real-time values of> total_*_time. 
Here is an example to illustrate the new behavior:>> =# begin;>> =*# select total_active_time, total_idle_in_transaction_time from> pg_stat_activity where pid = pg_backend_pid();>  total_active_time | total_idle_in_transaction_time> -------------------+-------------------------------->              0.124 |                      10505.098>> postgres=*# select pg_sleep(10);>> postgres=*# select total_active_time, total_idle_in_transaction_time> from pg_stat_activity where pid = pg_backend_pid();>  total_active_time | total_idle_in_transaction_time> -------------------+-------------------------------->              0.124 |                      10505.098>> postgres=*# commit;>> postgres=# select total_active_time, total_idle_in_transaction_time> from pg_stat_activity where pid = pg_backend_pid();>  total_active_time | total_idle_in_transaction_time> -------------------+-------------------------------->          10015.796 |                      29322.831>>> [1] https://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-STATS-VIEWSThis looks reasonable.What concerns me though is the fact that total_idle_in_transaction_time for given session doesn't seem to updated from the perspective of another session:```session1 (78376) =# BEGIN;session1 (78376) =# select * from pg_stat_activity where pid = 78376;...total_active_time              | 40.057total_idle_in_transaction_time | 34322.171session1 (78376) =# select * from pg_stat_activity where pid = 78376;...total_active_time              | 40.057total_idle_in_transaction_time | 34322.171session2 (78382) =# select * from pg_stat_activity where pid = 78376;...total_active_time              | 46.908total_idle_in_transaction_time | 96933.518session2 (78382) =# select * from pg_stat_activity where pid = 78376;...total_active_time              | 46.908total_idle_in_transaction_time | 96933.518 <--- doesn't change!session1 (78376) =# COMMIT;session1 (78376) =# select * from pg_stat_activity where pid = 
78376;...total_active_time              | 47.16total_idle_in_transaction_time | 218422.143session2 (78382) =# select * from pg_stat_activity where pid = 78376;total_active_time              | 50.631total_idle_in_transaction_time | 218422.143```This is consistent with the current documentation:> Each individual server process transmits new statistical counts to the collector just before going idle; so a query or transaction still in progress does not affect the displayed totals.But it makes me wonder if there will be a lot of use of total_idle_in_transaction_time and if the patch should actually alter this behavior.Thoughts?-- Best regards,Aleksander Alekseev", "msg_date": "Fri, 22 Jul 2022 11:32:09 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi hackers,\n\nAll in all the patch seems to be in good shape.\n\n> This is consistent with the current documentation:\n>\n> > Each individual server process transmits new statistical counts to the collector just before going idle; so a query or transaction still in progress does not affect the displayed totals.\n>\n> But it makes me wonder if there will be a lot of use of total_idle_in_transaction_time and if the patch should actually alter this behavior.\n>\n> Thoughts?\n\nOn second thought, this is arguably out of scope of this particular\npatch and this particular discussion. 
In any case, having some stats\nis better than none.\n\nI'm going to change the status of the patch to \"Ready for Committer\"\nin a short time unless anyone has a second opinion.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Jul 2022 12:42:20 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello hackers,\n\nIs there anything we can do to facilitate merging of this patch ?\nIt has been in the \"ready-for-commiter\" state for 3 commitfests in a row now.\n\nWe would appreciate if the patch makes it to version 16: the need to\nmonitor idle-in-transaction connections is very real for us.\n\nRegards,\nSergey Dudoladov\n\n\n", "msg_date": "Tue, 8 Nov 2022 19:11:43 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-07-21 18:22:51 +0200, Sergey Dudoladov wrote:\n> From b5298301a3f5223bd78c519ddcddbd1bec9cf000 Mon Sep 17 00:00:00 2001\n> From: Sergey Dudoladov <sergey.dudoladov@gmail.com>\n> Date: Wed, 20 Apr 2022 23:47:37 +0200\n> Subject: [PATCH] pg_stat_activity: add 'total_active_time' and\n> 'total_idle_in_transaction_time'\n> \n> catversion bump because of the change in the contents of pg_stat_activity\n> \n> Author: Sergey Dudoladov, based on the initial version by Rafia Sabih\n> \n> Reviewed-by: Aleksander Alekseev, Bertrand Drouvot, and Atsushi Torikoshi\n> \n> Discussion: https://www.postgresql.org/message-id/flat/CA%2BFpmFcJF0vwi-SWW0wYO-c-FbhyawLq4tCpRDCJJ8Bq%3Dja-gA%40mail.gmail.com\n\nIsn't this patch breaking pg_stat_database? 
You removed\npgstat_count_conn_active_time() etc and the declaration for pgStatActiveTime /\npgStatTransactionIdleTime (but left the definition in pgstat_database.c), but\ndidn't replace it with anything afaics.\n\n\nSeparately from that, I'm a bit worried about starting to add accumulative\ncounters to pg_stat_activity. It's already gotten hard to use interactively\ndue to the number of columns - and why stop with the columns you suggest? Why\nnot show e.g. the total number of reads/writes, tuples inserted / deleted,\netc. as well?\n\nI wonder if we shouldn't add a pg_stat_session or such for per-connection\ncounters that show not the current state, but accumulated per-session state.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Nov 2022 17:56:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Nov 8, 2022 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Separately from that, I'm a bit worried about starting to add accumulative\n> counters to pg_stat_activity. It's already gotten hard to use interactively\n> due to the number of columns - and why stop with the columns you suggest?\n> Why\n> not show e.g. the total number of reads/writes, tuples inserted / deleted,\n> etc. as well?\n>\n> I wonder if we shouldn't add a pg_stat_session or such for per-connection\n> counters that show not the current state, but accumulated per-session\n> state.\n>\n>\nI would much rather go down this route than make the existing table wider.\n\npg_stat_activity_state_duration (this patch) [the table - for a given\nbackend - would be empty if track_activities is off]\npg_stat_activity_bandwidth_usage (if someone feels like implementing the\nother items you mention)\n\n\nI'm not really buying into the idea of having multiple states sum their\ntimes together. I would expect one column per state. 
Actually two,\nbecause I also suggest that not only is the duration recorded, but a\ncounter be incremented each time a given state becomes the currently active\nstate. Seems like having access to a divisor of some form may be useful.\n\nSo 10 columns of data plus pid to join back to pg_stat_activity proper.\n\nDavid J.", "msg_date": "Tue, 8 Nov 2022 19:25:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On 2022-11-08 19:25:27 -0700, David G. 
Johnston wrote:\n> Actually two, because I also suggest that not only is the duration recorded,\n> but a counter be incremented each time a given state becomes the currently\n> active state. Seems like having access to a divisor of some form may be\n> useful.\n\nWhat for?\n\n\n", "msg_date": "Tue, 8 Nov 2022 18:37:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Tue, Nov 8, 2022 at 7:37 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-11-08 19:25:27 -0700, David G. Johnston wrote:\n> > Actually two, because I also suggest that not only is the duration\n> recorded,\n> > but a counter be incremented each time a given state becomes the\n> currently\n> > active state. Seems like having access to a divisor of some form may be\n> > useful.\n>\n> What for?\n>\n\nBecause 5 hours of idle-in-transaction time in a single block means\nsomething different than the same 5 hours accumulated across 300 mini-idles.\n\nDavid J.", "msg_date": "Tue, 8 Nov 2022 19:43:54 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello hackers,\n\nI've sketched the first version of a patch to add pg_stat_session.\nPlease review this early version.\n\nRegards,\nSergey.", "msg_date": "Wed, 1 Feb 2023 21:45:52 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Wed, Feb 1, 2023 at 12:46 PM Sergey Dudoladov\n<sergey.dudoladov@gmail.com> wrote:\n>\n> I've sketched the first version of a patch to add pg_stat_session.\n> Please review this early version.\n\nHi Sergey!\n\nI've taken a look into the patch and got some notes.\n1. It is hard to understand what fastpath backend state is. What do\nfastpath metrics mean for a user?\n2. Anyway, the path \"if (PGSTAT_IS_FASTPATH(beentry))\" seems\nunreachable to me. I'm a bit surprised that compilers do not produce\nwarnings about it. Maybe I'm just wrong.\n3. Tests do not check any incrementation logic. I think we can have\nsome test that verifies delta for select some_counter from\npg_stat_session where pid = pg_backend_pid();\n4. Macroses like PGSTAT_IS_RUNNING do not look like net win in code\nreadability and PGSTAT prefix have no semantic load.\n\n\nThat's all I've found so far. Thank you!\n\nBest regards, Andrey Borodin.\n\nPS. 
We were doing an on-air review session [0], I hope Nik will chime in\nwith the \"usability part of a review\".\n\n[0] https://youtu.be/vTV8XhWf3mo?t=2404\n\n\n", "msg_date": "Thu, 16 Feb 2023 13:37:41 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hello hackers,\n\nAndrey and Nik, thank you for selecting this patch for review in\nPostgres Hacking 101: I've modified the patch based both on your email\nand the video.\n\n1. Session statistics are now collected only for client backends. PG\ninternal processes like wal sender seem to stop sending statistics\nafter they have entered their respective main loops.\n2. Fastpath state now counts towards the running state. I think this\nspecial case does not justify tracking two extra numbers for every\nclient backend.\n3. I've added a small test for pg_stat_session similar to other tests\nin src/test/regress/sql/sysviews.sql\n4. Here are the pgbench results requested in the video review:\n\nConditions: no assertions, number of transactions = 1000\nThe query: SELECT generate_series(1, 10000000) OFFSET 10000000;\nWith pg_stat_session:\n latency average = 324.480 ms\n tps = 3.081857 (without initial connection time)\n\nWithout pg_stat_session:\n latency average = 327.370 ms\n tps = 3.054651 (without initial connection time)\n\n\nRegards,\nSergey", "msg_date": "Wed, 14 Jun 2023 07:50:35 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi Sergey,\n\nI've done a review of this patch. I found the patch idea very useful,\nthank you for the patch. I've noted a few things while reviewing this patch:\n1. Patch can't be applied on the current master. My review is based on\n application of this patch over ac68323a878\n2. Being applied over ac68323a878 the patch works as expected.\n3. 
Field names seem quite long to me (and they should be uniformly\n named with the same statistics in other views. For example the\n \"running\" term is called \"active\" in pg_stat_database)\n4. Meaningless spaces at the end of line:\n - backend_status.c:586\n - monitoring.sgml:5857\n5. Patch adds\n\n usecs_diff = secs * 1000000 + usecs;\n\n at backend_status.c:pgstat_report_activity() to optimize\n calculations. But\n\n pgstat_count_conn_active_time((PgStat_Counter) secs * 1000000 +\nusecs);\n and \n pgstat_count_conn_txn_idle_time((PgStat_Counter) secs * 1000000 +\nusecs);\n\n are left in place after that.\n6. I'm not sure that I understand the comment\n /* Keep statistics for pg_stat_database intact */\n at backend_status.c:600 correctly. Can you please explain it a\n little?\n7. Tests seem incomplete. It looks like we can check increments in\n all fields playing with transactions in tests.\n\nAlso, I have a thought about other possible improvements fitting to\nthis patch.\n\nThe view pg_stat_session is really needed in Postgres but I think it\nshould have much more statistics. I mean all resource statistics\nrelated to sessions. Every backend has instrumentation that tracks\nresource consumption. Data of this instrumentation goes to the\ncumulative statistics system and is used in monitoring extensions\n(like pg_stat_statements). I think pg_stat_session view is able to add\none more dimension of monitoring - a dimension of sessions. In my\nopinion this view should provide resource consumption statistics of\ncurrent sessions in two cumulative sets of statistics - since backend\nstart and since transaction start. 
Such view will be really useful in\nmonitoring of long running sessions and transactions providing\nresource consumption information besides timing statistics.\n\nregards, Andrei Zubkov\nPostgres Professional\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:12:42 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\n> I've done a review of this patch. I found the patch idea very useful,\n> thank you for the patch. I've noted something observing this patch:\n> 1. Patch can't be applied on the current master. My review is based on\n> application of this patch over ac68323a878\n\nOn top of that not sure if I see the patch on the November commitfest\n[1]. Please make sure it's there so that cfbot will check the patch.\n\n[1]: https://commitfest.postgresql.org/45/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:17:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi Aleksander,\n\nOn Wed, 2023-10-25 at 16:17 +0300, Aleksander Alekseev wrote:\n> On top of that not sure if I see the patch on the November commitfest\n> [1]. Please make sure it's there so that cfbot will check the patch.\n\nYes, this patch is listed on the November commitfest. cfbot says rebase\nneeded since 2023-08-21.\n\nregards, Andrei Zubkov\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:36:32 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi,\n\n> On Wed, 2023-10-25 at 16:17 +0300, Aleksander Alekseev wrote:\n> > On top of that not sure if I see the patch on the November commitfest\n> > [1]. 
Please make sure it's there so that cfbot will check the patch.\n>\n> Yes, this patch is listed on the November commitfest. cfbot says rebase\n> needed since 2023-08-21.\n\nYou are right, I missed the corresponding entry [1].\n\n[1]: https://commitfest.postgresql.org/45/3405/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:38:19 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "On Wed, 25 Oct 2023 at 19:06, Andrei Zubkov <zubkov@moonset.ru> wrote:\n>\n> Hi Aleksander,\n>\n> On Wed, 2023-10-25 at 16:17 +0300, Aleksander Alekseev wrote:\n> > On top of that not sure if I see the patch on the November commitfest\n> > [1]. Please make sure it's there so that cfbot will check the patch.\n>\n> Yes, this patch is listed on the November commitfest. cfbot says rebase\n> needed since 2023-08-21.\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as Andrei Zubkov's comments have not yet been resolved.\nPlease feel free to post an updated version of the patch and update\nthe commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 14 Jan 2024 16:34:44 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi all,\n\n@Andrei Zubkov\nI've modify the patch to address most of your comments.\n\n> I have a thought about other possible improvements fitting to this patch.\n> I think pg_stat_session view is able to add one more dimension of\nmonitoring - a dimension of sessions\n\nI would like to remind here about the initial scope of this patch. The main\ngoal of it was to ease tracking \"idle in transactions\" connections, a\nfeature that would really help in my work. 
The \"pg_stat_session\" came into\nplay only because the \"pg_stat_activity\" was seen as an unsuitable place\nfor the relevant counters. With that I still would like to maintain the\nfocus on committing the \"idle in transactions\" part of pg_stat_session\nfirst.\n\nRegards,\nSergey", "msg_date": "Thu, 1 Feb 2024 18:41:55 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi again,\n\n> It looks like we can check increments in all fields playing with\ntransactions in tests.\n\nI've added such tests.\n\nRegards,\nSergey", "msg_date": "Thu, 1 Feb 2024 20:50:15 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" }, { "msg_contents": "Hi Sergei,\n\n> I still would like to maintain the focus on committing the \"idle in\ntransactions\" part of pg_stat_session first.\n\nAgreed.\n\nI've done a review of version 0004. This version applied successfully\nover ce571434ae7, installcheck passed. The behavior of the pg_stat_session\nview and corresponding function looks correct. I didn't find any\nissues in the code.\n\nNotes about the current state of the patch:\n\nNaming\nthe view and function names 'pg_stat_session' seem correct for this\nparticular scope of the patch. However, possible future resource\nconsumption statistics are valid for all backends (vacuum for example).\nRight now it is not clear to me if we can get resource statistics from\nthose backends while those are listed in the pg_stat_activity view, but\nrenaming to something like 'pg_stat_backend' seems reasonable to me.\n\nDocs\n1. 
session states referenced in monitoring.sgml is not uniform with\nthose of the pg_stat_activity view.\nmonitoring.sgml:4635\nmonitoring.sgml:4644\n+ Time in milliseconds this backend spent in the\n<literal>running</literal> or <literal>fastpath</literal> state.\nI think those states should be referenced uniformly with\npg_stat_activity.\n\n2. Description of the 'pg_stat_get_session()' function might be as\nfollows:\n\n Returns a row, showing statistics about the client backend with the\n specified process ID, or one row per client backend if\n <literal>NULL</literal> is specified. The fields returned are the\n same as those of <structname>pg_stat_session</structname> view.\n\nThe main thought here is to get rid of 'each active backend' because\n'active backend' looks like backend in the 'active' state.\n\nTests\nCall to a non-existing function is depend on non-existence of a\nfunction, which can't be guaranteed absolutely. How about to do some\nkind of obvious error here? Couple examples follows:\n\nSELECT 0/0;\n\n- or -\n\nDO $$\nBEGIN\nRAISE 'test error';\nEND;\n$$ LANGUAGE plpgsql;\n\nMy personal choice would be the last one.\n\n-- \nregards, Andrei Zubkov\nPostgres Professional\n\n\n\n", "msg_date": "Mon, 12 Feb 2024 15:30:58 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": false, "msg_subject": "Re: Add connection active, idle time to pg_stat_activity" } ]
[ { "msg_contents": "Hi hackers,\n\nDuring the discussion [1] it was discovered that we have two\nprocedures in execTuples.c that do the same thing:\n\n* MakeSingleTupleTableSlot()\n* MakeTupleTableSlot()\n\nIn fact, MakeSingleTupleTableSlot() is simply a wrapper for\nMakeTupleTableSlot().\n\nI propose keeping only one of these procedures to simplify navigating\nthrough the code and debugging, and maybe saving a CPU cycle or two. A\nsearch for MakeTupleTableSlot produced 8 matches across 2 files, while\nMakeSingleTupleTableSlot is used 41 times across 26 files. Thus the\nproposed patch removes MakeTupleTableSlot and keeps\nMakeSingleTupleTableSlot to keep the patch less invasive and simplify\nbackporting of the other patches. Hopefully, this will not complicate\nthe life of the extension developers too much.\n\nThe patch was tested on MacOS against `master` branch b1ce6c28.\n\n[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TP0AowkUgNL6zcAK-s5HYsVHVBRWfu69FRubPpfwZGM9A%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 22 Oct 2021 16:39:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Refactoring: join MakeSingleTupleTableSlot() and MakeTupleTableSlot()" }, { "msg_contents": "On Fri, Oct 22, 2021 at 04:39:37PM +0300, Aleksander Alekseev wrote:\n> I propose keeping only one of these procedures to simplify navigating\n> through the code and debugging, and maybe saving a CPU cycle or two. A\n> search for MakeTupleTableSlot produced 8 matches across 2 files, while\n> MakeSingleTupleTableSlot is used 41 times across 26 files. Thus the\n> proposed patch removes MakeTupleTableSlot and keeps\n> MakeSingleTupleTableSlot to keep the patch less invasive and simplify\n> backporting of the other patches. 
Hopefully, this will not complicate\n> the life of the extension developers too much.\n\nTo make the life of extension developers easier, we could as well have\na compatibility macro so as anybody using MakeTupleTableSlot() won't\nbe annoyed by this change. However, looking around, this does not\nlook like a popular API so I'd be fine with your change as proposed.\n\nOther opinions?\n--\nMichael", "msg_date": "Sun, 24 Oct 2021 08:33:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" }, { "msg_contents": "On 2021-Oct-22, Aleksander Alekseev wrote:\n\n> Hi hackers,\n> \n> During the discussion [1] it was discovered that we have two\n> procedures in execTuples.c that do the same thing:\n> \n> * MakeSingleTupleTableSlot()\n> * MakeTupleTableSlot()\n> \n> In fact, MakeSingleTupleTableSlot() is simply a wrapper for\n> MakeTupleTableSlot().\n\nDid you see the arguments at [1]?\n\n[1] https://www.postgresql.org/message-id/1632520.1613195514%40sss.pgh.pa.us\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 25 Oct 2021 18:47:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" }, { "msg_contents": "Hi Alvaro,\n\n> Did you see the arguments at [1]?\n>\n> [1] https://www.postgresql.org/message-id/1632520.1613195514%40sss.pgh.pa.us\n\nNo, I missed it. 
Thanks for sharing.\n\n> If you dig in the git history (see f92e8a4b5 in particular) you'll note\n> that the current version of MakeTupleTableSlot originated as code shared\n> between ExecAllocTableSlot and MakeSingleTupleTableSlot.\n> [...]\n> In short: I'm not okay with doing\n> s/MakeTupleTableSlot/MakeSingleTupleTableSlot/g in a patch that doesn't\n> also introduce matching ExecDropSingleTupleTableSlot calls (unless those\n> exist somewhere already; but where?). If we did clean that up, maybe\n> MakeTupleTableSlot could become \"static\". But I'd still be inclined to\n> keep it physically separate, leaving it to the compiler to decide whether\n> to inline it into the callers.\n> [...]\n\nOK, I will need some time to figure out the actual difference between\nthese two functions and to submit an updated version of the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:48:47 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" }, { "msg_contents": "On 2021-Oct-26, Aleksander Alekseev wrote:\n\n> > In short: I'm not okay with doing\n> > s/MakeTupleTableSlot/MakeSingleTupleTableSlot/g in a patch that doesn't\n> > also introduce matching ExecDropSingleTupleTableSlot calls (unless those\n> > exist somewhere already; but where?). If we did clean that up, maybe\n> > MakeTupleTableSlot could become \"static\". 
But I'd still be inclined to\n> > keep it physically separate, leaving it to the compiler to decide whether\n> > to inline it into the callers.\n\nAnother point that could be made is that perhaps\nMakeSingleTupleTableSlot should always construct a slot using virtual\ntuples rather than passing TTSOps as a parameter?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n", "msg_date": "Tue, 26 Oct 2021 08:54:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" }, { "msg_contents": "On Tue, Oct 26, 2021 at 7:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Another point that could be made is that perhaps\n> MakeSingleTupleTableSlot should always construct a slot using virtual\n> tuples rather than passing TTSOps as a parameter?\n\nI haven't really looked at this issue deeply but that seems like it\nmight be a bit confusing. 
Then \"single\" would end up being an alias\nfor \"virtual\" which I don't suppose is what anyone is expecting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:03:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" }, { "msg_contents": "On 2021-Oct-26, Robert Haas wrote:\n\n> On Tue, Oct 26, 2021 at 7:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Another point that could be made is that perhaps\n> > MakeSingleTupleTableSlot should always construct a slot using virtual\n> > tuples rather than passing TTSOps as a parameter?\n> \n> I haven't really looked at this issue deeply but that seems like it\n> might be a bit confusing. Then \"single\" would end up being an alias\n> for \"virtual\" which I don't suppose is what anyone is expecting.\n\nYeah -- another point against that idea is that most of the callers are\nindeed not using virtual tuples, so it doesn't really work. I was just\nthinking that if something wants to process transient tuples they may\njust be virtual and not be forced to make them heap tuples, but on\nlooking again, that's not how the abstraction works.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:36:03 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring: join MakeSingleTupleTableSlot() and\n MakeTupleTableSlot()" } ]
[ { "msg_contents": "Hackers,\n\nI noticed recently that permissions checking is done differently for the \nserver certificate key than the client key. Specifically, on the server \nthe key can have 640 perms if it is owned by root.\n\nOn the server side this change was made in 9a83564c and I think the same \nrationale applies equally well to the client key. At the time managed \nkeys on the client may not have been common but they are now.\n\nAttached is a patch to make this change.\n\nI was able to test this manually by hacking 001_ssltests.pl like so:\n\n-\tchmod 0640, \"ssl/${key}_tmp.key\"\n+\tchmod 0600, \"ssl/${key}_tmp.key\"\n \t or die \"failed to change permissions on ssl/${key}_tmp.key: $!\";\n-\tsystem_or_bail(\"sudo chown root ssl/${key}_tmp.key\");\n\nBut this is clearly not going to work for general purpose testing. The \nserver keys are also not tested for root ownership so perhaps we do not need \nthat here either.\n\nI looked at trying to make this code common between the server and \nclient but due to the differences in error reporting it seemed like more \ntrouble than it was worth.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Fri, 22 Oct 2021 11:41:21 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Allow root ownership of client certificate key" }, { "msg_contents": "On 10/22/21 11:41 AM, David Steele wrote:\n> \n> I noticed recently that permissions checking is done differently for the \n> server certificate key than the client key. Specifically, on the server \n> the key can have 640 perms if it is owned by root.\n> \n> On the server side this change was made in 9a83564c and I think the same \n> rationale applies equally well to the client key. 
At the time managed \n> keys on the client may not have been common but they are now.\n> \n> Attached is a patch to make this change.\n> \n> I was able to test this manually by hacking 001_ssltests.pl like so:\n> \n> -    chmod 0640, \"ssl/${key}_tmp.key\"\n> +    chmod 0600, \"ssl/${key}_tmp.key\"\n>        or die \"failed to change permissions on ssl/${key}_tmp.key: $!\";\n> -    system_or_bail(\"sudo chown root ssl/${key}_tmp.key\");\n> \n> But this is clearly not going to work for general purpose testing. The \n> server keys are also not tested for root ownership so perhaps we do not need \n> that here either.\n> \n> I looked at trying to make this code common between the server and \n> client but due to the differences in error reporting it seemed like more \n> trouble than it was worth.\n\nAdded to next CF: https://commitfest.postgresql.org/35/3379\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 28 Oct 2021 09:08:38 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> I noticed recently that permissions checking is done differently for the\n> server certificate key than the client key. Specifically, on the server the\n> key can have 640 perms if it is owned by root.\n\nYeah, that strikes me as odd too, particularly given that many many\ncases of client-side certificates are application servers and not actual\nend users. If we can justify having a looser check on the PG server\nside then it surely makes sense that an app server could also be\njustified in having such a permission setup (and it definitely happens\noften in Kubernetes/OpenShift and such places where secrets are mounted\nfrom somewhere else).\n\n> On the server side this change was made in 9a83564c and I think the same\n> rationale applies equally well to the client key. 
At the time managed keys on\n> the client may not have been common but they are now.\n\nAgreed.\n\n> Attached is a patch to make this change.\n> \n> I was able to this this manually by hacking 001_ssltests.pl like so:\n> \n> -\tchmod 0640, \"ssl/${key}_tmp.key\"\n> +\tchmod 0600, \"ssl/${key}_tmp.key\"\n> \t or die \"failed to change permissions on ssl/${key}_tmp.key: $!\";\n> -\tsystem_or_bail(\"sudo chown root ssl/${key}_tmp.key\");\n> \n> But this is clearly not going to work for general purpose testing. The\n> server keys also not tested for root ownership so perhaps we do not need\n> that here either.\n\nMakes sense to me.\n\n> I looked at trying to make this code common between the server and client\n> but due to the differences in error reporting it seemed like more trouble\n> than it was worth.\n\nMaybe we should at least have the comments refer to each other though,\nto hopefully encourage future hackers in this area to maintain\nconsistency between the two and avoid what happened before..?\n\n> diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c\n> index 3a7cc8f774..285e772170 100644\n> --- a/src/interfaces/libpq/fe-secure-openssl.c\n> +++ b/src/interfaces/libpq/fe-secure-openssl.c\n> @@ -1234,11 +1234,38 @@ initialize_SSL(PGconn *conn)\n> \t\t\t\t\t\t\t fnbuf);\n> \t\t\treturn -1;\n> \t\t}\n> +\n> +\t\t/*\n> +\t\t* Refuse to load key files owned by users other than us or root.\n> +\t\t*\n> +\t\t* XXX surely we can check this on Windows somehow, too.\n> +\t\t*/\n\nNot really sure if there's actually much point in having this marked in\nthis way as it's not apparently something we're going to actually fix in\nthe near term. 
Maybe instead something like \"Would be nice to find a\nway to do this on Windows somehow, too, but it isn't clear today how\nto.\"\n\n> +#ifndef WIN32\n> +\t\tif (buf.st_uid != geteuid() && buf.st_uid != 0)\n> +\t\t{\n> +\t\t\tappendPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"private key file \\\"%s\\\" must be owned by the current user or root\\n\"),\n> +\t\t\t\t\t\t\t fnbuf);\n> +\t\t\treturn -1;\n> +\t\t}\n> +#endif\n\nBasically the same check as what is done on the server side, so this\nlooks good to me.\n\n> +\t\t/*\n> +\t\t* Require no public access to key file. If the file is owned by us,\n> +\t\t* require mode 0600 or less. If owned by root, require 0640 or less to\n> +\t\t* allow read access through our gid, or a supplementary gid that allows\n> +\t\t* to read system-wide certificates.\n> +\t\t*\n> +\t\t* XXX temporarily suppress check when on Windows, because there may not\n> +\t\t* be proper support for Unix-y file permissions. Need to think of a\n> +\t\t* reasonable check to apply on Windows.\n> +\t\t*/\n\nOn the server-side, we also include a reference to postmaster.c. Not\nsure if we need to do that or not but just figured I'd mention it.\n\n> #ifndef WIN32\n> -\t\tif (!S_ISREG(buf.st_mode) || buf.st_mode & (S_IRWXG | S_IRWXO))\n> +\t\tif ((buf.st_uid == geteuid() && buf.st_mode & (S_IRWXG | S_IRWXO)) ||\n> +\t\t\t(buf.st_uid == 0 && buf.st_mode & (S_IWGRP | S_IXGRP | S_IRWXO)))\n> \t\t{\n> \t\t\tappendPQExpBuffer(&conn->errorMessage,\n> -\t\t\t\t\t\t\t libpq_gettext(\"private key file \\\"%s\\\" has group or world access; permissions should be u=rw (0600) or less\\n\"),\n> +\t\t\t\t\t\t\t libpq_gettext(\"private key file \\\"%s\\\" has group or world access; file must have permissions u=rw (0600) or less if owned by the current user, or permissions u=rw,g=r (0640) or less if owned by root.\\n\"),\n> \t\t\t\t\t\t\t fnbuf);\n> \t\t\treturn -1;\n> \t\t}\n\nDo we really want to remove the S_ISREG() check? 
We have that check\n(although a bit earlier) on the server side and we've had it for a very\nlong time, so I don't think that we want to drop it, certainly not\nwithout some additional discussion as to why we should (and why it would\nmake sense to have that be different between the client side and the\nserver side).\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 14:04:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "On 11/8/21 2:04 PM, Stephen Frost wrote:\n> * David Steele (david@pgmasters.net) wrote:\n> \n>> I looked at trying to make this code common between the server and client\n>> but due to the differences in error reporting it seemed like more trouble\n>> than it was worth.\n> \n> Maybe we should at least have the comments refer to each other though,\n> to hopefully encourage future hackers in this area to maintain\n> consistency between the two and avoid what happened before..?\n\nDone.\n\n>> +\n>> +\t\t/*\n>> +\t\t* Refuse to load key files owned by users other than us or root.\n>> +\t\t*\n>> +\t\t* XXX surely we can check this on Windows somehow, too.\n>> +\t\t*/\n> \n> Not really sure if there's actually much point in having this marked in\n> this way as it's not apparently something we're going to actually fix in\n> the near term. Maybe instead something like \"Would be nice to find a\n> way to do this on Windows somehow, too, but it isn't clear today how\n> to.\"\n\nDone.\n\n>> +\t\t/*\n>> +\t\t* Require no public access to key file. If the file is owned by us,\n>> +\t\t* require mode 0600 or less. If owned by root, require 0640 or less to\n>> +\t\t* allow read access through our gid, or a supplementary gid that allows\n>> +\t\t* to read system-wide certificates.\n>> +\t\t*\n>> +\t\t* XXX temporarily suppress check when on Windows, because there may not\n>> +\t\t* be proper support for Unix-y file permissions. 
Need to think of a\n>> +\t\t* reasonable check to apply on Windows.\n>> +\t\t*/\n> \n> On the server-side, we also include a reference to postmaster.c. Not\n> sure if we need to do that or not but just figured I'd mention it.\n\nLooks like this moved to miscinit.c so probably this comment deserves an \nupdate. That might be better as a separate commit.\n\nIn the patch I referenced the function name instead since that will come \nup in searches when the original function gets renamed/moved.\n\n>> #ifndef WIN32\n>> -\t\tif (!S_ISREG(buf.st_mode) || buf.st_mode & (S_IRWXG | S_IRWXO))\n>> +\t\tif ((buf.st_uid == geteuid() && buf.st_mode & (S_IRWXG | S_IRWXO)) ||\n>> +\t\t\t(buf.st_uid == 0 && buf.st_mode & (S_IWGRP | S_IXGRP | S_IRWXO)))\n>> \t\t{\n>> \t\t\tappendPQExpBuffer(&conn->errorMessage,\n>> -\t\t\t\t\t\t\t libpq_gettext(\"private key file \\\"%s\\\" has group or world access; permissions should be u=rw (0600) or less\\n\"),\n>> +\t\t\t\t\t\t\t libpq_gettext(\"private key file \\\"%s\\\" has group or world access; file must have permissions u=rw (0600) or less if owned by the current user, or permissions u=rw,g=r (0640) or less if owned by root.\\n\"),\n>> \t\t\t\t\t\t\t fnbuf);\n>> \t\t\treturn -1;\n>> \t\t}\n> \n> Do we really want to remove the S_ISREG() check? We have that check\n> (although a bit earlier) on the server side and we've had it for a very\n> long time, so I don't think that we want to drop it, certainly not\n> without some additional discussion as to why we should (and why it would\n> make sense to have that be different between the client side and the\n> server side).\n\nOof. 
Definitely a copy-paste error.\n\nA new version is attached with these changes, plus I consolidated the \nchecks under one comment block to reduce comment and #ifdef duplication.\n\nWe may want to do the same on the server side to make the code blocks \nlook more similar.\n\nAlso, on the server side the S_ISREG() check gets its own error and that \nmight be a good idea on the client side as well. As it is, the error \nmessage on the client is going to be pretty confusing in this case.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Mon, 8 Nov 2021 17:36:39 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> [ client-key-perm-002.patch ]\n\nI took a quick look at this and agree with the proposed behavior\nchange, but also with your self-criticisms:\n\n> We may want to do the same on the server side to make the code blocks \n> look more similar.\n>\n> Also, on the server side the S_ISREG() check gets its own error and that \n> might be a good idea on the client side as well. 
As it is, the error \n> message on the client is going to be pretty confusing in this case.\n\nParticularly, I think the S_ISREG check should happen before any\nownership/permissions checks; it just seems saner that way.\n\nThe only other nitpick I have is that I'd make the cross-references be\nto the two file names, ie like \"Note that similar checks are performed\nin fe-secure-openssl.c ...\" References to the specific functions seem\nlikely to bit-rot in the face of future code rearrangements.\nI suppose filename references could become obsolete too, but it\nseems less likely.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jan 2022 15:41:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "On 1/18/22 15:41, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n> \n> I took a quick look at this and agree with the proposed behavior\n> change, but also with your self-criticisms:\n> \n>> We may want to do the same on the server side to make the code blocks\n>> look more similar.\n>>\n>> Also, on the server side the S_ISREG() check gets its own error and that\n>> might be a good idea on the client side as well. As it is, the error\n>> message on the client is going to be pretty confusing in this case.\n> \n> Particularly, I think the S_ISREG check should happen before any\n> ownership/permissions checks; it just seems saner that way.\n\nI was worried about doing too much refactoring in this commit since I \nhave hopes and dreams of it being back-patched. 
But I'll go ahead and do \nthat and if any part of this can be back-patched we'll consider that \nseparately.\n\n> The only other nitpick I have is that I'd make the cross-references be\n> to the two file names, ie like \"Note that similar checks are performed\n> in fe-secure-openssl.c ...\" References to the specific functions seem\n> likely to bit-rot in the face of future code rearrangements.\n> I suppose filename references could become obsolete too, but it\n> seems less likely.\n\nIt's true that functions are more likely to be renamed, but when I \nrename a function I then search for all the places where it is used so I \ncan update them. If the function name appears in a comment that gets \nupdated as well.\n\nIf you would still prefer filenames I have no strong argument against \nthat, just wanted to explain my logic.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 18 Jan 2022 16:44:29 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 1/18/22 15:41, Tom Lane wrote:\n>> The only other nitpick I have is that I'd make the cross-references be\n>> to the two file names, ie like \"Note that similar checks are performed\n>> in fe-secure-openssl.c ...\" References to the specific functions seem\n>> likely to bit-rot in the face of future code rearrangements.\n>> I suppose filename references could become obsolete too, but it\n>> seems less likely.\n\n> It's true that functions are more likely to be renamed, but when I \n> rename a function I then search for all the places where it is used so I \n> can update them. If the function name appears in a comment that gets \n> updated as well.\n\nHarsh experience says that a lot of Postgres contributors have zero\ninterest in updating comments two lines away from what they're editing,\nlet alone in some distant branch of the source tree. 
But I'm not dead\nset on it either way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jan 2022 16:51:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "Hi Tom,\n\nOn 1/18/22 14:41, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> [ client-key-perm-002.patch ]\n> \n> I took a quick look at this and agree with the proposed behavior\n> change, but also with your self-criticisms:\n> \n>> We may want to do the same on the server side to make the code blocks\n>> look more similar.\n>>\n>> Also, on the server side the S_ISREG() check gets its own error and that\n>> might be a good idea on the client side as well. As it is, the error\n>> message on the client is going to be pretty confusing in this case.\n> \n> Particularly, I think the S_ISREG check should happen before any\n> ownership/permissions checks; it just seems saner that way.\n\nThe two blocks of code now look pretty much identical except for error \nhandling and the reference to the other file. Also, the indentation for \nthe comment on the server side is less but I kept the comment formatting \nthe same to make it easier to copy the comment back and forth.\n\n> The only other nitpick I have is that I'd make the cross-references be\n> to the two file names, ie like \"Note that similar checks are performed\n> in fe-secure-openssl.c ...\" References to the specific functions seem\n> likely to bit-rot in the face of future code rearrangements.\n> I suppose filename references could become obsolete too, but it\n> seems less likely.\n\nUpdated these to reference the file instead of the function.\n\nI still don't think we can commit the test for root ownership, but \ntesting it manually got a whole lot easier after the refactor in \nc3b34a0f. 
Before that you had to hack up the source tree, which is a \npain depending on how it is mounted (I'm testing in a container).\n\nSo, to test the new functionality, just add this snippet on line 57 of \n001_ssltests.pl:\n\nchmod 0640, \"$cert_tempdir/client.key\"\n\tor die \"failed to change permissions on $cert_tempdir/client.key: $!\";\nsystem_or_bail(\"sudo chown root $cert_tempdir/client.key\");\n\nIf you can think of a way to add this to the tests I'm all ears. Perhaps \nwe could add these lines commented out and explain what they are for?\n\nRegards,\n-David", "msg_date": "Wed, 16 Feb 2022 12:57:15 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> [ client-key-perm-003.patch ]\n\nPushed with a bit of copy-editing of the comments.\n\n> So, to test the new functionality, just add this snippet on line 57 of \n> 001_ssltests.pl:\n> chmod 0640, \"$cert_tempdir/client.key\"\n> \tor die \"failed to change permissions on $cert_tempdir/client.key: $!\";\n> system_or_bail(\"sudo chown root $cert_tempdir/client.key\");\n> If you can think of a way to add this to the tests I'm all ears. Perhaps \n> we could add these lines commented out and explain what they are for?\n\nI believe we have some precedents for invoking this sort of test\noptionally if an appropriate environment variable is set. However,\nI'm having a pretty hard time seeing that there's any real use-case\nfor a test set up like this. The TAP tests are meant for automatic\ntesting, and nobody is going to run automatic tests in an environment\nwhere they'd be allowed to sudo. 
(Or at least I sure hope nobody\nworking on this project is that naive.)\n\nIf somebody wants to put this in despite that, I'd merely suggest\nthat the server-side logic ought to get exercised too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Feb 2022 14:20:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "On 2/28/22 13:20, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> [ client-key-perm-003.patch ]\n> \n> Pushed with a bit of copy-editing of the comments.\n\nThank you!\n\nAny thoughts on back-patching at least the client portion of this? \nProbably hard to argue that it's a bug, but it is certainly painful.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 28 Feb 2022 18:07:28 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> Any thoughts on back-patching at least the client portion of this? \n> Probably hard to argue that it's a bug, but it is certainly painful.\n\nI'd be more eager to do that if we had some field complaints\nabout it. Since we don't, my inclination is not to, but I'm\nonly -0.1 or so; anybody else want to vote?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Feb 2022 19:12:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> David Steele <david@pgmasters.net> writes:\n> > Any thoughts on back-patching at least the client portion of this? \n> > Probably hard to argue that it's a bug, but it is certainly painful.\n> \n> I'd be more eager to do that if we had some field complaints\n> about it. 
Since we don't, my inclination is not to, but I'm\n> only -0.1 or so; anybody else want to vote?\n\nThis patch was specifically developed in response to field complaints\nabout it working differently, so there's that. Currently it's being\nworked around in the container environments by copying the key from the\nsecret that's provided to a temporary space where we can modify the\nprivileges, but that's pretty terrible. Would be great to be able to\nget rid of that in favor of being able to use it directly.\n\nThanks,\n\nStephen", "msg_date": "Mon, 28 Feb 2022 19:31:53 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I'd be more eager to do that if we had some field complaints\n>> about it. Since we don't, my inclination is not to, but I'm\n>> only -0.1 or so; anybody else want to vote?\n\n> This patch was specifically developed in response to field complaints\n> about it working differently, so there's that.\n\nHmm ... I didn't recall seeing any on the lists, but a bit of archive\nsearching found\n\nhttps://www.postgresql.org/message-id/flat/20170213184323.6099.18278%40wrigleys.postgresql.org\n\nwherein we'd considered the idea and rejected it, or at least decided\nthat we wanted finer-grained control than the server side needs.\nSo that's *a* field complaint. But are we still worried about the\nconcerns that were raised there?\n\nRe-reading, it looks like the submitter then wanted us to just drop the\nprohibition of group-readability without tying it to root ownership,\nwhich I feel would indeed be pretty dangerous given how many systems have\ngroups like \"users\". 
But I don't think root-owned-group-readable is such\na problem: if you can create such a file then you can make one owned by\nthe calling user, too.\n\nAnyway, I'd be happier about back-patching if we could document\nactual requests to make it work like the server side does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Feb 2022 22:15:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "On 3/1/22 3:15 AM, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>> I'd be more eager to do that if we had some field complaints\n>>> about it. Since we don't, my inclination is not to, but I'm\n>>> only -0.1 or so; anybody else want to vote?\n> \n>> This patch was specifically developed in response to field complaints\n>> about it working differently, so there's that.\n> \n> Anyway, I'd be happier about back-patching if we could document\n> actual requests to make it work like the server side does.\n> \n\nThis patch is tidy and addresses an incompatibility with Kubernetes, so\n+1 from me for a back-patch.\n\n\nPGO runs PostgreSQL 10 through 14 in Kubernetes, and we have to work\naround this issue when using certificates for system accounts.\n\nFor example, we use certificates to encrypt and authenticate streaming\nreplication connections. We store certificates in the Kubernetes API as\nSecrets.[1] Kubernetes then hands those certificates/secrets to a\nrunning container by mounting them as files on the filesystem.\n\nThose files and their directories are managed by Kubernetes (as root)\nfrom outside the container, and processes inside the container (as\nnot-root) cannot change them. 
They are mounted with these permissions:\n\n   drwxrwsrwt root postgres /pgconf/tls\n   -rw-r----- root postgres /pgconf/tls/ca.crt\n   -rw-r----- root postgres /pgconf/tls/tls.crt\n   -rw-r----- root postgres /pgconf/tls/tls.key\n\n   drwxr-sr-x root postgres /pgconf/tls/replication\n   -rw-r----- root postgres /pgconf/tls/replication/ca.crt\n   -rw-r----- root postgres /pgconf/tls/replication/tls.crt\n   -rw-r----- root postgres /pgconf/tls/replication/tls.key\n\nKubernetes treats the server certificate (top) with the same ownership\nand permissions as the client certificate for the replication user\n(bottom). The server is happy but anything libpq, including walreceiver,\nrejects the latter files for not being \"u=rw (0600) or less\".\n\n\nThere is an open request in the Kubernetes project to provide more\ncontrol over ownership and permissions of mounted secrets.[2] PostgreSQL\nis mentioned repeatedly as motivation for the feature.\n\n\n[1]: https://kubernetes.io/docs/concepts/configuration/secret/\n[2]: https://issue.kubernetes.io/81089\n\n-- Chris\n\n\n", "msg_date": "Tue, 1 Mar 2022 23:30:25 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "Chris Bandy <bandy.chris@gmail.com> writes:\n> On 3/1/22 3:15 AM, Tom Lane wrote:\n>> Anyway, I'd be happier about back-patching if we could document\n>> actual requests to make it work like the server side does.\n\n> PGO runs PostgreSQL 10 through 14 in Kubernetes, and we have to work\n> around this issue when using certificates for system accounts.\n\nSold then, I'll make it so in a bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Mar 2022 09:40:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow root ownership of client certificate key" }, { "msg_contents": "On 3/2/22 08:40, Tom Lane wrote:\n> Chris Bandy <bandy.chris@gmail.com> writes:\n>> On 3/1/22 
3:15 AM, Tom Lane wrote:\n>>> Anyway, I'd be happier about back-patching if we could document\n>>> actual requests to make it work like the server side does.\n> \n>> PGO runs PostgreSQL 10 through 14 in Kubernetes, and we have to work\n>> around this issue when using certificates for system accounts.\n> \n> Sold then, I'll make it so in a bit.\n\nThank you! I think the containers community is really going to \nappreciate this.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 2 Mar 2022 11:17:34 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Allow root ownership of client certificate key" } ]
[ { "msg_contents": "Hello Tom!\n\nI noticed you are improving pg_dump just now.\n\nSome time ago I experimented with a customer database dump in parallel directory mode -F directory -j (2-4)\n\nI noticed it took quite long to complete.\n\nFurther investigation showed that in this mode with multiple jobs the tables are processed in decreasing size order, which makes sense to avoid a long tail of a big table in one of the jobs prolonging overall dump time.\n\nExactly one table took very long, but seemed to be of moderate size.\n\nBut the size-determination fails to consider the size of toast tables and this table had a big associated toast-table of bytea column(s).\nEven with an analyze at loading time there where no size information of the toast-table in the catalog tables.\n\nI thought of the following alternatives to ameliorate:\n\n1. Using pg_table_size() function in the catalog query\nPos: This reflects the correct size of every relation\nNeg: This goes out to disk and may take a huge impact on databases with very many tables\n\n2. Teaching vacuum to set the toast-table size like it sets it on normal tables\n\n3. 
Have a command/function for occasionly setting the (approximate) size of toast tables \n\nI think with further work under the way (not yet ready), pg_dump can really profit from parallel/not compressing mode, especially considering the huge amount of bytea/blob/string data in many big customer scenarios.\n\nThoughts?\n\nHans Buschmann\n\n\n", "msg_date": "Fri, 22 Oct 2021 16:36:27 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Hans Buschmann <buschmann@nidsa.net> writes:\n> Some time ago I experimented with a customer database dump in parallel directory mode -F directory -j (2-4)\n> I noticed it took quite long to complete.\n> Further investigation showed that in this mode with multiple jobs the tables are processed in decreasing size order, which makes sense to avoid a long tail of a big table in one of the jobs prolonging overall dump time.\n> Exactly one table took very long, but seemed to be of moderate size.\n> But the size-determination fails to consider the size of toast tables and this table had a big associated toast-table of bytea column(s).\n\nHmm, yeah, we just use pg_class.relpages for scheduling parallel dumps.\nI'd supposed that would be fine, but maybe it's worth being smarter.\nI think it should be sufficient to add on the toast table's relpages\nvalue; that's maintained by autovacuum on the same terms as relpages\nfor regular tables. See 0005 below.\n\nHere's an update of this patch series:\n\n0001 is the same as before, except I changed collectComments and\ncollectSecLabels to strdup the strings they want and then release\ntheir PGresults. The previous behavior confused valgrind's leak\ntracking, which is only a minor annoyance, but I think we can\njustify changing it now that these functions don't save all of\nthe collected comments or seclabels. 
In particular, we've got\nno need for the several thousand comments on built-in objects,\nso that that PGresult is at least 100KB bigger than what we're\ngoing to keep.\n\n0002 is updated to account for commit 2acc84c6f.\n\n0003 is the same except I added a missing free().\n\n0004 is a new patch based on an idea from Andres Freund [1]:\nin the functions that repetitively issue the same query against\ndifferent tables, issue just one query and use a WHERE clause\nto restrict the output to the tables we care about. I was\nskeptical about this to start with, but it turns out to be\nquite a spectacular win. On my machine, the time to pg_dump\nthe regression database (with \"-s\") drops from 0.91 seconds\nto 0.39 seconds. 
OTOH, how likely is it that anyone\nis wrangling tables exceeding 16TB on a machine with 32-bit off_t?\nOr that poor parallel dump scheduling would be a real problem in\nsuch a case?\n\nLastly, 0006 implements the other idea we'd discussed in the other\nthread: for queries that are issued repetitively but not within a\nsingle pg_dump function invocation, use PREPARE/EXECUTE to cut down\nthe overhead. This gets only diminishing returns after 0004, but\nit still brings \"pg_dump -s regression\" down from 0.39s to 0.33s,\nso maybe it's worth doing. I stopped after caching the plans for\nfunctions/aggregates/operators/types, though. The remaining sorts\nof objects aren't likely to appear in typical databases enough times\nto be worth worrying over. (This patch will be a net loss if there\nare more than zero but less than perhaps 10 instances of an object type,\nso there's definitely reasons beyond laziness for not doing more.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/2923349.1634942313%40sss.pgh.pa.us", "msg_date": "Sun, 24 Oct 2021 17:10:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "On Sun, Oct 24, 2021 at 05:10:55PM -0400, Tom Lane wrote:\n> 0003 is the same except I added a missing free().\n> \n> 0004 is a new patch based on an idea from Andres Freund [1]:\n> in the functions that repetitively issue the same query against\n> different tables, issue just one query and use a WHERE clause\n> to restrict the output to the tables we care about. I was\n> skeptical about this to start with, but it turns out to be\n> quite a spectacular win. On my machine, the time to pg_dump\n> the regression database (with \"-s\") drops from 0.91 seconds\n> to 0.39 seconds. 
For a database with 10000 toy tables, the\n> time drops from 18.1 seconds to 2.3 seconds.\n\n+ if (tbloids->len > 1) \n+ appendPQExpBufferChar(tbloids, ','); \n+ appendPQExpBuffer(tbloids, \"%u\", tbinfo->dobj.catId.oid); \n\nI think this should say \n\n+ if (tbloids->len > 0) \n\nThat doesn't matter much since catalogs aren't dumped as such, and we tend to\ncount in base 10 and not base 10000.\n\nBTW, the ACL patch makes the overhead 6x lower (6.9sec vs 1.2sec) for pg_dump -t\nof a single, small table. Thanks for that.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 24 Oct 2021 17:03:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Oct 24, 2021 at 05:10:55PM -0400, Tom Lane wrote:\n>> + if (tbloids->len > 1)\n\n> I think this should say \n> + if (tbloids->len > 0)\n\nNo, >1 is the correct test, because it's checking the string length\nand we started by stuffing a '{' into the string. Maybe needs a\ncomment.\n\n> BTW, the ACL patch makes the overhead 6x lower (6.9sec vs 1.2sec) for pg_dump -t\n> of a single, small table. Thanks for that.\n\nYeah --- I haven't done any formal measurements of the case where you're\nselecting a small number of tables, but I did note that it decreased a\ngood deal compared to HEAD.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 24 Oct 2021 18:58:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "\n________________________________________\n1. Von: Tom Lane <tgl@sss.pgh.pa.us>\n>Maybe we should back-patch 0005. 
OTOH, how likely is it that anyone\n>is wrangling tables exceeding 16TB on a machine with 32-bit off_t?\n>Or that poor parallel dump scheduling would be a real problem in\n>such a case?\n\nI tested your patch on Windows x64, pg15_devel_25.10.2021, against the customer database\n(2 core/4 thread NUC, 32 GB RAM, 1 NVMe SSD, 4 jobs)\n\npg_dump manually patched with your changes;\ndatabase pg14.0, 20 GB shared buffers.\n\nThe dump of the database tables took 3min7sec for a 16 GB database, resulting in a directory of 31.1 GB with 1628 files!\n\nThe dump worked like a rush: full CPU usage, finish.\n\nI don't have the old performance data available, but it is a real improvement, so backpatching may really be worth the effort.\n\nThe former slowing-down table has a ratio of 5169 relpages to 673024 toast pages.\n\nDespite the great disk usage (about doubling the size of the db), directory mode seems to be by far the fastest mode, especially for databases in the 1TB+ range.\n\nFor archiving purposes an external-to-Postgres tool often fits better for compression and can be applied to the dumped data without holding up the dump process.\n\nI am still working on another big speedup in this scenario (coming soon).\n\n-----------------------------------------------------------\n\n2. Another suggestion concerning pg_dump\n\nWith some customer databases I follow a yearly practice of pg_dump/pg_restore to the new major version.\nThis eliminates all bloat and does a full reindex, so the disk data layout is already quite clean.\n\nIt would be very favorable to dump the pages according to the CLUSTER index when defined for a table. 
This would only concern the SELECT used to retrieve the rows and would not affect pg_dump's logic.\n\nThis would give perfectly reorganized tables in a pg_dump/pg_restore round.\n\nIf a cluster index is defined by the customer, this expresses the wish to have the table laid out in this way, and nothing is introduced arbitrarily.\n\nI would suggest having a flag (--cluster) for pg_dump to activate this new behavior.\n\nI think this is not immediately part of the current patch set, but it should be taken into account for pg_dump improvements in PG15.\n\nAt the moment I don't yet have enough knowledge to propose a patch of this kind (logic similar to the CLUSTER command itself). Perhaps someone could jump in ...\n\nThanks for the patch, and awaiting your thoughts\n\nHans Buschmann\n\n", "msg_date": "Mon, 25 Oct 2021 12:23:59 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "AW: Assorted improvements in pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-24 17:10:55 -0400, Tom Lane wrote:\n> 0004 is not committable as-is, because it assumes that the source\n> server has single-array unnest(), which is not true before 8.4.\n> We could fix that by using \"oid = ANY(array-constant)\" conditions\n> instead, but I'm unsure about the performance properties of that\n> for large OID arrays on those old server versions.\n\nIt doesn't seem bad at all. 
8.3 assert:\n\nCREATE TABLE foo(oid oid primary key);\nINSERT INTO foo SELECT generate_series(1, 1000000);\npostgres[1164129][1]=# explain ANALYZE SELECT count(*) FROM foo WHERE oid = ANY(ARRAY(SELECT generate_series(1100000, 1, -1)));\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Aggregate (cost=81.54..81.55 rows=1 width=0) (actual time=2433.656..2433.656 rows=1 loops=1) │\n│ InitPlan │\n│ -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.004..149.425 rows=1100000 loops=1) │\n│ -> Bitmap Heap Scan on foo (cost=42.70..81.50 rows=10 width=0) (actual time=2275.137..2369.478 rows=1000000 loops=1) │\n│ Recheck Cond: (oid = ANY (($0)::oid[])) │\n│ -> Bitmap Index Scan on foo_pkey (cost=0.00..42.69 rows=10 width=0) (actual time=2274.077..2274.077 rows=1000000 loops=1) │\n│ Index Cond: (oid = ANY (($0)::oid[])) │\n│ Total runtime: 2436.201 ms │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(8 rows)\n\nTime: 2437.568 ms (00:02.438)\n\n\n> Lastly, 0006 implements the other idea we'd discussed in the other\n> thread: for queries that are issued repetitively but not within a\n> single pg_dump function invocation, use PREPARE/EXECUTE to cut down\n> the overhead. This gets only diminishing returns after 0004, but\n> it still brings \"pg_dump -s regression\" down from 0.39s to 0.33s,\n> so maybe it's worth doing.\n\nI think it's worth doing. 
There's things that the batch approach won't help\nwith and even if it doesn't help a lot with the regression test database, I'd\nexpect it to help plenty with other cases.\n\nA test database I had around with lots of functions got drastically faster to\ndump (7.4s to 2.5s), even though the number of queries didn't change\nsignificantly. According to pg_stat_statements plan_time for the dumpFunc\nquery went from 2352ms to 0.4ms - interestingly execution time nearly halves\nas well.\n\n\n> I stopped after caching the plans for\n> functions/aggregates/operators/types, though. The remaining sorts\n> of objects aren't likely to appear in typical databases enough times\n> to be worth worrying over. (This patch will be a net loss if there\n> are more than zero but less than perhaps 10 instances of an object type,\n> so there's definitely reasons beyond laziness for not doing more.)\n\nSeems reasonable.\n\n\n> @@ -7340,25 +7340,37 @@ getDomainConstraints(Archive *fout, TypeInfo *tyinfo)\n> \t\t\t\ti_consrc;\n> \tint\t\t\tntups;\n> \n> -\tquery = createPQExpBuffer();\n> +\tstatic bool query_prepared = false;\n> \n> -\tif (fout->remoteVersion >= 90100)\n> -\t\tappendPQExpBuffer(query, \"SELECT tableoid, oid, conname, \"\n> -\t\t\t\t\t\t \"pg_catalog.pg_get_constraintdef(oid) AS consrc, \"\n> -\t\t\t\t\t\t \"convalidated \"\n> -\t\t\t\t\t\t \"FROM pg_catalog.pg_constraint \"\n> -\t\t\t\t\t\t \"WHERE contypid = '%u'::pg_catalog.oid \"\n> -\t\t\t\t\t\t \"ORDER BY conname\",\n> -\t\t\t\t\t\t tyinfo->dobj.catId.oid);\n> +\tif (!query_prepared)\n> +\t{\n\nI wonder if it'd be better to store this in Archive or such. The approach with\nstatic variables might run into problems with parallel pg_dump at some\npoint. 
These objects aren't dumped in parallel yet, but still...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:30:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-24 17:10:55 -0400, Tom Lane wrote:\n>> +\tstatic bool query_prepared = false;\n\n> I wonder if it'd be better to store this in Archive or such. The approach with\n> static variables might run into problems with parallel pg_dump at some\n> point. These objects aren't dumped in parallel yet, but still...\n\nYeah, I wasn't too happy with the static bools either. However, each\nfunction would need its own field in the struct, which seems like a\nmaintenance annoyance, plus a big hazard for future copy-and-paste\nchanges (ie, copy and paste the wrong flag name -> trouble). Also\nthe Archive struct is shared between dump and restore cases, so\nadding a dozen fields that are irrelevant for restore didn't feel\nright. So I'd like a better idea, but I'm not sure that that one\nis better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:02:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "On 2021-Oct-25, Tom Lane wrote:\n\n> Yeah, I wasn't too happy with the static bools either. However, each\n> function would need its own field in the struct, which seems like a\n> maintenance annoyance, plus a big hazard for future copy-and-paste\n> changes (ie, copy and paste the wrong flag name -> trouble). Also\n> the Archive struct is shared between dump and restore cases, so\n> adding a dozen fields that are irrelevant for restore didn't feel\n> right. 
So I'd like a better idea, but I'm not sure that that one\n> is better.\n\nWhat about a separate struct passed from pg_dump's main() to the\nfunctions that execute queries, containing a bunch of bools? This'd\nstill have the problem that mindless copy and paste would cause a bug,\nbut I wonder if that isn't overstated: if you use the wrong flag,\npg_dump would fail as soon as you try to invoke your query when it\nhasn't been prepared yet.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php\n\n\n", "msg_date": "Mon, 25 Oct 2021 17:42:23 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 16:02:34 -0400, Tom Lane wrote:\n> So I'd like a better idea, but I'm not sure that that one is better.\n\nI guess we could move the prepared-statement handling into a query execution\nhelper. That could then use a hashtable or something similar to check if a\ncertain prepared statement already exists. That'd then centrally be extensible\nto deal with multiple connects etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 14:35:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I guess we could move the prepared-statement handling into a query execution\n> helper. That could then use a hashtable or something similar to check if a\n> certain prepared statement already exists. 
That'd then centrally be extensible\n> to deal with multiple connects etc.\n\nThat seems like more mechanism than is warranted. I tried it with a\nsimple array of booleans, and that seems like not too much of a mess;\nsee revised 0006 attached.\n\n(0001-0005 are the same as before; including them just to satisfy\nthe cfbot.)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 26 Oct 2021 18:31:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Here's an updated version of this patch set. The only non-line-number\nchanges are\n\n(1) in 0004, I dealt with the issue of not having unnest() in old branches\nby bumping the minimum remote server version to 8.4. Seeing that we seem\nto have consensus in the other thread to push the minimum up to somewhere\naround 9.2, I see no point in making this patch put in conditional code\nthat we'd shortly rip out again.\n\n(2) I also added some comments to 0004 to hopefully address Justin's\nconfusion about string lengths.\n\nI feel quite fortunate that a month's worth of commitfest hacking\ndidn't break any of these patches. Unless someone intends to\nreview these more thoroughly than they already have, I'd like to\ngo ahead and push them.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 03 Dec 2021 16:33:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assorted improvements in pg_dump" }, { "msg_contents": "Hello Tom,\n\nfrom your mail from 25.10.2021:\n\n>0005 implements your suggestion of accounting for TOAST data while\n>scheduling parallel dumps. I realized while looking at that that\n>there's a pre-existing bug, which this'd exacerbate: on machines\n>with 32-bit off_t, dataLength can overflow. Admittedly such machines\n>are just about extinct in the wild, but we do still trouble to support\n>the case. 
So 0005 also includes code to check for overflow and clamp\n>the result to INT_MAX blocks.\n\n>Maybe we should back-patch 0005. OTOH, how likely is it that anyone\n>is wrangling tables exceeding 16TB on a machine with 32-bit off_t?\n>Or that poor parallel dump scheduling would be a real problem in\n>such a case?\n\nI noticed that you patched master with all the improvements in pg_dump.\n\nDid you change your mind about backpatching patch 0005 to fix the toast size matter?\n\nIt would be rather helpful for handling existing user data in active branches.\n\n\nOn the matter of 32-bit versions, I think they are only used in much smaller installations.\n\nBTW the 32-bit build of postgres on Windows does not work any more with more modern tool sets (tested with VS2019 and VS2022), albeit not excluded explicitly in the docs. But no one has complained yet (for a long time now...).\n\nThanks\n\nHans Buschmann\n\n", "msg_date": "Tue, 7 Dec 2021 08:05:32 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "AW: Assorted improvements in pg_dump" }, { "msg_contents": "Hans Buschmann <buschmann@nidsa.net> writes:\n> I noticed that you patched master with all the improvements in pg_dump.\n> Did you change your mind about backpatching patch 0005 to fix the toast size matter?\n\nI looked briefly at that and found that the patch would have to be\nlargely rewritten, because getTables() looks quite different in the\nolder branches.  I'm not really sufficiently excited about the point\nto do that rewriting and re-testing.  I think that cases where the\nold logic gets the scheduling badly wrong are probably rare.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 11:18:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Assorted improvements in pg_dump" } ]
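The batching technique at the center of the thread above — replacing per-object catalog queries with a single query restricted by an array of OIDs — can be sketched roughly as follows. This is an illustrative example only, not the actual pg_dump query text: the OID values and column list are hypothetical, and the real queries built around the patch's "tbloids" buffer are considerably more involved.

```sql
-- Hypothetical sketch of the batched-query idea: instead of issuing
--   SELECT ... FROM pg_catalog.pg_attribute WHERE attrelid = <one oid>;
-- once per table, build a '{oid,oid,...}' array literal (as the patch's
-- "tbloids" buffer does) and fetch everything in one round trip:
SELECT a.attrelid, a.attname, a.attnum
FROM unnest('{12345,12346,12347}'::pg_catalog.oid[]) AS src(tbloid)
JOIN pg_catalog.pg_attribute a ON a.attrelid = src.tbloid
WHERE a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attrelid, a.attnum;

-- Single-array unnest() only exists in 8.4 and later; on older servers the
-- same restriction can be spelled
--   WHERE a.attrelid = ANY ('{12345,12346,12347}'::pg_catalog.oid[])
-- which is the alternative whose performance was tested on 8.3 above.
```

The client then splits the single result set back into per-table groups by attrelid, which is why the batched queries order by it.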
[ { "msg_contents": "While doing some desultory testing, I realized that the commit\nI just pushed (92316a458) broke pg_dump against 8.0 servers:\n\n$ pg_dump -p5480 -s regression\npg_dump: error: schema with OID 11 does not exist\n\nThe reason turns out to be something I'd long forgotten about: except\nfor the few \"bootstrap\" catalogs, our system catalogs didn't use to\nhave fixed OIDs. That changed at 7c13781ee, but 8.0 predates that.\nSo when pg_dump reads a catalog on 8.0, it gets some weird number for\n\"tableoid\", and the logic I just put into common.c's findNamespaceByOid\net al fails to find the resulting DumpableObjects.\n\nSo my first thought was just to revert 92316a458 and give up on it as\na bad idea. However ... does anyone actually still care about being\nable to dump from such ancient servers? In addition to this issue,\nI'm thinking of the discussion at [1] about wanting to use unnest()\nin pg_dump, and of what we would need to do instead in pre-8.4 servers\nthat lack that. Maybe it'd be better to move up pg_dump's minimum\nsupported server version to 8.4 or 9.0, and along the way whack a\nfew more lines of its backward-compatibility hacks. If there is\nanyone out there still using an 8.x server, they could use its\nown pg_dump whenever they get around to migration.\n\nAnother idea would be to ignore \"tableoid\" and instead use the OIDs\nwe're expecting, but that's way too ugly for my taste, especially\ngiven the rather thin argument for committing 92316a458 at all.\n\nAnyway, I think the default answer is \"revert 92316a458 and keep the\ncompatibility goalposts where they are\". 
But I wanted to open up a\ndiscussion to see if anyone likes the other approach better.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de\n\n\n", "msg_date": "Fri, 22 Oct 2021 18:38:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Anyway, I think the default answer is \"revert 92316a458 and keep the\n> compatibility goalposts where they are\". But I wanted to open up a\n> discussion to see if anyone likes the other approach better.\n>\n> [1]\n> https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de\n>\n>\nI'd rather drop legacy support than revert. Even if the benefit of\n92316a458 is limited to refactoring, the fact it was committed is enough\nfor me to feel it is a worthwhile improvement. It's still yet another five\nyears before there won't be a supported release that can dump/restore this\n- so 20 years for someone to have upgraded without having to go to the (not\nthat big a) hassle of installing an out-of-support version as a stop-over.\n\nIn short, IMO, the bar for this kind of situation should be 10 releases at\nmost - 5 of which would be in support at the time the patch goes in. We\ndon't have to actively drop support of older stuff but anything older\nshouldn't be preventing new commits.\n\nDavid J.\n", "msg_date": "Fri, 22 Oct 2021 16:00:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So my first thought was just to revert 92316a458 and give up on it as\n> a bad idea. However ... does anyone actually still care about being\n> able to dump from such ancient servers?\n\nI think I recently heard about an 8.4 server still out there in the\nwild, but AFAICR it's been a long time since I've heard about anything\nolder.\n\nIt seems to me that if you're upgrading by a dozen server versions in\none shot, it's not a totally crazy idea that you might want to do it\nin steps, or use the pg_dump for the version you have and then hack\nthe dump. I kind of wonder if there's really any hope of a pain-free\nupgrade across that many versions anyway. There are things that can\nbite you despite all the work we've put into pg_dump, like having\nobjects that depend on system objects whose definition has changed\nover the years, plus implicit casting differences, operator precedence\nchanges, => getting deprecated, lots of GUC changes, etc. You are\ngoing to be able to upgrade in the end, but it's probably going to\ntake some work. 
So I'm not really sure that giving up pg_dump\ncompatibility for versions that old is losing as much as it may seem.\n\nAnother thing to think about in that regard: how likely is that\nPostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same\noperating system? I suspect the answer is \"not very.\" I seem to recall\nGreg Stark trying to compile really old versions of PostgreSQL for a\nconference talk some years ago, and he got back to a point where it\njust became impossible to make work on modern toolchains even with a\ndecent amount of hackery. One tends to think of C as about as static a\nthing as can be, but that's kind of missing the point. On my laptop\nfor example, my usual configure invocation fails on 7.4 with:\n\nchecking for SSL_library_init in -lssl... no\nconfigure: error: library 'ssl' is required for OpenSSL\n\nIn fact, I get that same failure on every branch older than 9.2. I\nexpect I could work around that by disabling SSL or finding an older\nversion of OpenSSL that works the way those branches expect, but that\nmight not be the only problem, either. Now I understand you could\nhave PostgreSQL 15 on a new box and PostgreSQL 7.x on an ancient one\nand connect via the network, and it would in all fairness be cool if\nthat Just Worked. But I suspect that even if that did happen in the\nlab, reality wouldn't often be so kind.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:26:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, I think the default answer is \"revert 92316a458 and keep the\n>> compatibility goalposts where they are\". But I wanted to open up a\n>> discussion to see if anyone likes the other approach better.\n\n> ... 
IMO, the bar for this kind of situation should be 10 releases at\n> most - 5 of which would be in support at the time the patch goes in. We\n> don't have to actively drop support of older stuff but anything older\n> shouldn't be preventing new commits.\n\nYeah. I checked into when it was that we dropped pre-8.0 support\nfrom pg_dump, and the answer is just about five years ago (64f3524e2).\nSo moving the bar forward by five releases isn't at all out of line.\n8.4 would be eight years past EOL by the time v15 comes out.\n\nOne of the arguments for the previous change was that it was getting\nvery hard to build old releases on modern platforms, thus making it\nhard to do any compatibility testing. I believe the same is starting\nto become true of the 8.x releases, though I've not tried personally\nto build any of them in some time. (The executables I'm using for\nthem date from 2014 or earlier, and have not been recompiled in\nsubsequent platform upgrades ...) Anyway it's definitely not free\nto continue to support old source server versions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:30:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Another thing to think about in that regard: how likely is that\n> PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same\n> operating system? I suspect the answer is \"not very.\" I seem to recall\n> Greg Stark trying to compile really old versions of PostgreSQL for a\n> conference talk some years ago, and he got back to a point where it\n> just became impossible to make work on modern toolchains even with a\n> decent amount of hackery.\n\nRight. The toolchains keep moving, even if the official language\ndefinition doesn't. For grins, I just checked out REL8_4_STABLE\non my M1 Mac, and found that it only gets this far:\n\nchecking test program... 
ok\nchecking whether long int is 64 bits... no\nchecking whether long long int is 64 bits... no\nconfigure: error: Cannot find a working 64-bit integer type.\n\nwhich turns out to be down to a configure-script issue we fixed\nsome years ago, ie using exit() without a prototype:\n\nconftest.c:158:3: error: implicitly declaring library function 'exit' with type\\\n 'void (int) __attribute__((noreturn))' [-Werror,-Wimplicit-function-declaratio\\\nn]\n exit(! does_int64_work());\n ^\n\nI notice that the configure script is also selecting some warning\nswitches that this compiler doesn't much like, plus it doesn't\nbelieve 2.6.x flex is usable. So that's *at least* three things\nthat'd have to be hacked even to get to a successful configure run.\n\nIndividually such issues are (usually) not very painful, but when\nyou have to recreate all of them at once it's a daunting project.\n\nSo if I had to rebuild 8.4 from scratch right now, I would not be\na happy camper. That seems like a good argument for not deeming\nit to be something we still have to support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:48:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 2021-Oct-22, Robert Haas wrote:\n\n> In fact, I get that same failure on every branch older than 9.2. I\n> expect I could work around that by disabling SSL or finding an older\n> version of OpenSSL that works the way those branches expect, but that\n> might not be the only problem, either.\n\nI just tried to build 9.1. 
My config line there doesn't have ssl, but I\ndo get this in the compile stage:\n\ngram.c:69:25: error: conflicting types for ‘base_yylex’\n 69 | #define yylex base_yylex\n | ^~~~~~~~~~\nscan.c:15241:12: note: in expansion of macro ‘yylex’\n15241 | extern int yylex \\\n | ^~~~~\nIn file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:\n/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here\n 66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,\n | ^~~~~~~~~~\ngram.c:69:25: error: conflicting types for ‘base_yylex’\n 69 | #define yylex base_yylex\n | ^~~~~~~~~~\nscan.c:15244:21: note: in expansion of macro ‘yylex’\n15244 | #define YY_DECL int yylex \\\n | ^~~~~\nscan.c:15265:1: note: in expansion of macro ‘YY_DECL’\n15265 | YY_DECL\n | ^~~~~~~\nIn file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:\n/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here\n 66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,\n | ^~~~~~~~~~\nmake[3]: *** [../../../src/Makefile.global:655: gram.o] Error 1\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n", "msg_date": "Fri, 22 Oct 2021 20:51:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, Oct 22, 2021 at 7:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I just tried to build 9.1. My config line there doesn't have ssl, but I\n> do get this in the compile stage:\n\nHmm.\n\nYou know, one thing we could think about doing is patching some of the\nolder branches to make them compile on modern machines. 
That would not\nonly be potentially useful for people who are upgrading from ancient\nversions, but also for hackers trying to do research on the origin of\nbugs or performance problems, and also for people who are trying to\nmaintain some kind of backward compatibility or other and want to test\nagainst old versions.\n\nI don't know whether that's really worth the effort and I expect Tom\nwill say that it's not. If he does say that, he may be right. But I\nthink if I were trying to extract my data from an old 7.4 database, I\nthink I'd find it a lot more useful if I could make 9.0 or 9.2 or\nsomething compile and talk to it than if I had to use v15 and hope\nthat held together somehow. It doesn't really make sense to try to\nkeep compatibility of any sort with versions we can no longer test\nagainst.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 24 Oct 2021 16:45:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> You know, one thing we could think about doing is patching some of the\n> older branches to make them compile on modern machines. That would not\n> only be potentially useful for people who are upgrading from ancient\n> versions, but also for hackers trying to do research on the origin of\n> bugs or performance problems, and also for people who are trying to\n> maintain some kind of backward compatibility or other and want to test\n> against old versions.\n\nYeah. 
We have done that in the past; I thought more than once,\nbut right now the only case I can find is d13f41d21/105f3ef49.\nThere are some other post-EOL commits in git, but I think the\nothers were mistakes from over-enthusiastic back-patching, while\nthat one was definitely an intentional portability fix for EOL'd\nversions.\n\n> I don't know whether that's really worth the effort and I expect Tom\n> will say that it's not. If he does say that, he may be right.\n\nHmm ... I guess the question is how much work we feel like putting\ninto that, and how we'd track whether old branches still work,\nand on what platforms.  It could easily turn into a time sink\nthat's not justified by the value.  I do see your point that there's\nsome value in it; I'm just not sure about the cost/benefit ratio.\n\nOne thing we could do that would help circumscribe the costs is to say\n\"we are not going to consider issues involving new compiler warnings\nor bugs caused by more-aggressive optimization\".  We could mechanize\nthat pretty effectively by changing configure shortly after a branch's\nEOL to select -O0 and no extra warning flags, so that anyone building\nfrom branch tip would get those switch choices.\n\n(I have no idea what this might look like on the Windows side, but\nI'm concerned by the fact that we seem to need fixes every time a\nnew Visual Studio major version comes out.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 24 Oct 2021 17:46:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 2021-Oct-24, Robert Haas wrote:\n\n> You know, one thing we could think about doing is patching some of the\n> older branches to make them compile on modern machines. 
That would not\n> only be potentially useful for people who are upgrading from ancient\n> versions, but also for hackers trying to do research on the origin of\n> bugs or performance problems, and also for people who are trying to\n> maintain some kind of backward compatibility or other and want to test\n> against old versions.\n\nI think it is worth *some* effort, at least as far back as we want to\nclaim that we maintain pg_dump and/or psql compatibility, assuming it is\nnot too onerous. For instance, I wouldn't want to clutter buildfarm or\nCI dashboards with testing these branches, unless it is well isolated\nfrom regular ones; we shouldn't commit anything that's too invasive; and\nwe shouldn't make any claims about supportability of these abandoned\nbranches.\n\nAs an example, I did backpatch one such fix to 8.3 (just over a year)\nand 8.2 (four years) after they had closed -- see d13f41d21538 and\n105f3ef492ab.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Sun, 24 Oct 2021 18:52:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, 2021-10-22 at 19:26 -0400, Robert Haas wrote:\n> On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > So my first thought was just to revert 92316a458 and give up on it as\n> > a bad idea.  However ... does anyone actually still care about being\n> > able to dump from such ancient servers?\n> \n> I think I recently heard about an 8.4 server still out there in the\n> wild, but AFAICR it's been a long time since I've heard about anything\n> older.\n\nI had a customer with 8.3 in the not too distant past, but that need not\nstop the show. 
If necessary, they can dump with 8.3 and restore that.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:29:19 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/22/21 19:30, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Anyway, I think the default answer is \"revert 92316a458 and keep the\n>>> compatibility goalposts where they are\". But I wanted to open up a\n>>> discussion to see if anyone likes the other approach better.\n>> ... IMO, the bar for this kind of situation should be 10 releases at\n>> most - 5 of which would be in support at the time the patch goes in. We\n>> don't have to actively drop support of older stuff but anything older\n>> shouldn't be preventing new commits.\n> Yeah. I checked into when it was that we dropped pre-8.0 support\n> from pg_dump, and the answer is just about five years ago (64f3524e2).\n> So moving the bar forward by five releases isn't at all out of line.\n> 8.4 would be eight years past EOL by the time v15 comes out.\n>\n> One of the arguments for the previous change was that it was getting\n> very hard to build old releases on modern platforms, thus making it\n> hard to do any compatibility testing. I believe the same is starting\n> to become true of the 8.x releases, though I've not tried personally\n> to build any of them in some time. (The executables I'm using for\n> them date from 2014 or earlier, and have not been recompiled in\n> subsequent platform upgrades ...) Anyway it's definitely not free\n> to continue to support old source server versions.\n\n\nBut we don't need to build them on modern platforms, just run them on\nmodern platforms, ISTM.\n\nSome months ago I built binaries all the way back to 7.2 that with a\nlittle help run on modern Fedora and Ubuntu systems. 
I just upgraded my\nFedora system from 31 to 34 and they still run. See\n<https://gitlab.com/adunstan/pg-old-bin> One of the intended use cases\nwas to test pg_dump against old versions.\n\nI'm not opposed to us cutting off support for very old versions,\nalthough I think we should only do that very occasionally (no more than\nonce every five years, say) unless there's a very good reason. I'm also\nnot opposed to us making small adjustments to allow us to build old\nversions on modern platforms, but if we do that then we should probably\nhave some buildfarm support for it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 08:29:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Sun, Oct 24, 2021 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... I guess the question is how much work we feel like putting\n> into that, and how we'd track whether old branches still work,\n> and on what platforms. It could easily turn into a time sink\n> that's not justified by the value. I do see your point that there's\n> some value in it; I'm just not sure about the cost/benefit ratio.\n\nRight. Well, we could leave it up to people who care to decide how\nmuch work they want to do, perhaps. But I do find it annoying that\npg_dump is supposed to maintain compatibility with server releases\nthat I can't easily build. Fortunately I don't patch pg_dump very\noften, but if I did, it'd be very difficult for me to verify that\nthings work against really old versions. I know that you (Tom) do a\nlot of work of this type though. In my opinion, if you find yourself\nworking on a project of this type and as part of that you do some\nfixes to an older branch to make it compile, maybe you ought to commit\nthose so that the next person doesn't have the same problem. 
And maybe\nwhen we add support for newer versions of OpenSSL or Windows, we ought\nto consider back-patching those even to unsupported releases if\nsomeone's willing to do the work. If they're not, they're not, but I\nthink we tend to strongly discourage commits to EOL branches, and I\nthink maybe we should stop doing that. Not that people should\nroutinely back-patch bug fixes, but stuff that makes it easier to\nbuild seems fair game.\n\nI don't think we need to worry too much about users getting the wrong\nimpression. People who want to know what is supported are going to\nlook at our web site for that information, and they are going to look\nfor releases. I can't rule out the possibility that someone is going\nto build an updated version of 7.4 or 8.2 with whatever patches we\nmight choose to commit there, but they're unlikely to think that means\nthose are fully supported branches. And if they somehow do think that\ndespite all evidence to the contrary, we can just tell them that they\nare mistaken.\n\n> One thing we could do that would help circumscribe the costs is to say\n> \"we are not going to consider issues involving new compiler warnings\n> or bugs caused by more-aggressive optimization\". We could mechanize\n> that pretty effectively by changing configure shortly after a branch's\n> EOL to select -O0 and no extra warning flags, so that anyone building\n> from branch tip would get those switch choices.\n\nI don't much like the idea of including -O0 because it seems like it\ncould be confusing. People might not realize that that the build\nsettings have been changed. I don't think that's really the problem\nanyway: anybody who hits compiler warnings in older branches could\ndecide to fix them (and as long as it's a committer who will be\nresponsible for their own work, I think that's totally fine) or enable\n-O0 locally. 
I routinely do that when I hit problems on older\nbranches, and it helps a lot, but the way I see it, that's such an\neasy change that there's little reason to make it in the source code.\nWhat's a lot more annoying is if the compile fails altogether, or you\ncan't even get past the configure step.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 08:55:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Oct 25, 2021 at 8:29 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> But we don't need to build them on modern platforms, just run them on\n> modern platforms, ISTM.\n\nI don't really agree with this.\n\n> Some months ago I built binaries all the way back to 7.2 that with a\n> little help run on modern Fedora and Ubuntu systems. I just upgraded my\n> Fedora system from 31 to 34 and they still run. See\n> <https://gitlab.com/adunstan/pg-old-bin> One of the intended use cases\n> was to test pg_dump against old versions.\n\nThat's cool, but I don't have a Fedora or Ubuntu VM handy, and it does\nseem like if people are working on testing against old versions, they\nmight even want to be able to recompile with debugging statements\nadded or something. So I think actually compiling is a lot better than\nbeing able to get working binaries from someplace, even though the\nlatter is better than nothing.\n\n> I'm not opposed to us cutting off support for very old versions,\n> although I think we should only do that very occasionally (no more than\n> once every five years, say) unless there's a very good reason. I'm also\n> not opposed to us making small adjustments to allow us to build old\n> versions on modern platforms, but if we do that then we should probably\n> have some buildfarm support for it.\n\nYeah, I think having a small number of buildfarm animals testing very\nold versions would be nice. 
Perhaps we can call them tyrannosaurus,\nbrontosaurus, triceratops, etc. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 08:59:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Right. Well, we could leave it up to people who care to decide how\n> much work they want to do, perhaps. But I do find it annoying that\n> pg_dump is supposed to maintain compatibility with server releases\n> that I can't easily build. Fortunately I don't patch pg_dump very\n> often, but if I did, it'd be very difficult for me to verify that\n> things work against really old versions. I know that you (Tom) do a\n> lot of work of this type though. In my opinion, if you find yourself\n> working on a project of this type and as part of that you do some\n> fixes to an older branch to make it compile, maybe you ought to commit\n> those so that the next person doesn't have the same problem.\n\nWell, the answer to that so far is that I've never done such fixes.\nI have the last released versions of old branches laying around,\nand that's what I test against. It's been sufficient so far, although\nif I suddenly needed to do (say) SSL-enabled testing, that would be\na problem because I don't think I built with SSL for any of those\nbranches.\n\nBecause of that angle, I concur with your position that it'd really\nbe desirable to be able to build old versions on modern platforms.\nEven if you've got an old executable, it might be misconfigured for\nthe purpose you have in mind.\n\n> And maybe\n> when we add support for newer versions of OpenSSL or Windows, we ought\n> to consider back-patching those even to unsupported releases if\n> someone's willing to do the work. 
If they're not, they're not, but I\n> think we tend to strongly discourage commits to EOL branches, and I\n> think maybe we should stop doing that. Not that people should\n> routinely back-patch bug fixes, but stuff that makes it easier to\n> build seems fair game.\n\nWhat concerns me here is that we not get into a position where we're\neffectively still maintaining EOL'd versions. Looking at the git\nhistory yesterday reminded me that we had such a situation back in\nthe early 7.x days. I can see that I still occasionally made commits\ninto 7.1 and 7.2 years after the last releases of those branches,\nwhich ended up being a complete waste of effort. There was no policy\nguiding what to back-patch into what branches, partly because we\ndidn't have a defined EOL policy then. So I want to have a policy\n(and a pretty tight one) before I'll go back to doing that.\n\nRoughly speaking, I think the policy should be \"no feature bug fixes,\nnot even security fixes, for EOL'd branches; only fixes that are\nminimally necessary to make it build on newer platforms\". And\nI want to have a sunset provision even for that. Fixing every branch\nforevermore doesn't scale.\n\nThere's also the question of how we get to a working state in the\nfirst place -- as we found upthread, there's a fair-sized amount\nof work to do just to restore buildability right now, for anything\nthat was EOL'd more than a year or two back. I'm not volunteering\nfor that, but somebody would have to to get things off the ground.\n\nAlso, I concur with Andrew's point that we'd really have to have\nbuildfarm support. However, this might not be as bad as it seems.\nIn principle we might just need to add resurrected branches back to\nthe branches_to_build list. Given my view of what the back-patching\npolicy ought to be, a new build in an old branch might only be\nrequired a couple of times a year, which would not be an undue\ninvestment of buildfarm resources. (Hmmm ... 
but disk space could\nbecome a problem, particularly on older machines with not so much\ndisk. Do we really need to maintain a separate checkout for each\nbranch? It seems like a fresh checkout from the repo would be\nlittle more expensive than the current copy-a-checkout process.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:23:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What concerns me here is that we not get into a position where we're\n> effectively still maintaining EOL'd versions. Looking at the git\n> history yesterday reminded me that we had such a situation back in\n> the early 7.x days. I can see that I still occasionally made commits\n> into 7.1 and 7.2 years after the last releases of those branches,\n> which ended up being a complete waste of effort. There was no policy\n> guiding what to back-patch into what branches, partly because we\n> didn't have a defined EOL policy then. So I want to have a policy\n> (and a pretty tight one) before I'll go back to doing that.\n>\n> Roughly speaking, I think the policy should be \"no feature bug fixes,\n> not even security fixes, for EOL'd branches; only fixes that are\n> minimally necessary to make it build on newer platforms\". And\n> I want to have a sunset provision even for that. Fixing every branch\n> forevermore doesn't scale.\n\nSure, but you can ameliorate that a lot by just saying it's something\npeople have the *option* to do, not something anybody is *expected* to\ndo. I agree it's best if we continue to discourage back-patching bug\nfixes into supported branches, but I also think we don't need to be\ntoo stringent about this. 
What I think we don't want is, for example,\nsomebody working at company X deciding to back-patch all the bug fixes\nthat customers of company X care about into our back-branches, but\nnot the other ones. But on the other hand if somebody is trying to\nbenchmark or test compatibility with an old branch and it keeps crashing\nbecause of some bug, telling them that they're not allowed to fix that\nbug because it's not a sufficiently-minimal change to a dead branch is\nkind of ridiculous. In other words, if you try to police every change\nanyone wants to make, e.g. \"well I know that would help YOU build on a\nnewer platform but it doesn't seem like it meets the criteria of the\nminimum necessary change to make it build on a newer platform,\" then\nyou might as well just give up now. Nobody cares about the older\nbranches enough to put work into fixing whatever's wrong and then\nhaving to argue about whether that work ought to be thrown away\nanyway.\n\n> There's also the question of how we get to a working state in the\n> first place -- as we found upthread, there's a fair-sized amount\n> of work to do just to restore buildability right now, for anything\n> that was EOL'd more than a year or two back. I'm not volunteering\n> for that, but somebody would have to to get things off the ground.\n\nRight.\n\n> Also, I concur with Andrew's point that we'd really have to have\n> buildfarm support. However, this might not be as bad as it seems.\n> In principle we might just need to add resurrected branches back to\n> the branches_to_build list. Given my view of what the back-patching\n> policy ought to be, a new build in an old branch might only be\n> required a couple of times a year, which would not be an undue\n> investment of buildfarm resources. (Hmmm ... but disk space could\n> become a problem, particularly on older machines with not so much\n> disk. Do we really need to maintain a separate checkout for each\n> branch? 
It seems like a fresh checkout from the repo would be\n> little more expensive than the current copy-a-checkout process.)\n\nI suppose it would be useful if we had the ability to do new runs only\nwhen the source code has changed...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:40:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Roughly speaking, I think the policy should be \"no feature bug fixes,\n>> not even security fixes, for EOL'd branches; only fixes that are\n>> minimally necessary to make it build on newer platforms\". And\n>> I want to have a sunset provision even for that. Fixing every branch\n>> forevermore doesn't scale.\n\n> Sure, but you can ameliorate that a lot by just saying it's something\n> people have the *option* to do, not something anybody is *expected* to\n> do. I agree it's best if we continue to discourage back-patching bug\n> fixes into supported branches, but I also think we don't need to be\n> too stringent about this.\n\nActually, I think we do. If I want to test against 7.4, ISTM I want\nto test against the last released 7.4 version, not something with\narbitrary later changes. Otherwise, what exactly is the point?\n\n>> In principle we might just need to add resurrected branches back to\n>> the branches_to_build list. Given my view of what the back-patching\n>> policy ought to be, a new build in an old branch might only be\n>> required a couple of times a year, which would not be an undue\n>> investment of buildfarm resources.\n\n> I suppose it would be useful if we had the ability to do new runs only\n> when the source code has changed...\n\nUh, don't we have that already? 
I know you can configure a buildfarm\nanimal to force a run at least every-so-often, but it's not required,\nand I don't think it's even the default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:00:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 2021-Oct-25, Tom Lane wrote:\n\n> Roughly speaking, I think the policy should be \"no feature bug fixes,\n> not even security fixes, for EOL'd branches; only fixes that are\n> minimally necessary to make it build on newer platforms\". And\n> I want to have a sunset provision even for that. Fixing every branch\n> forevermore doesn't scale.\n\nAgreed. I think dropping such support at the same time we drop\npsql/pg_dump support is a decent answer to that. That meets the stated\npurpose of being able to test such support, and also it moves forward\naccording to subjective choice per development needs.\n\n> Also, I concur with Andrew's point that we'd really have to have\n> buildfarm support. However, this might not be as bad as it seems.\n> In principle we might just need to add resurrected branches back to\n> the branches_to_build list.\n\nWell, we would add them to *some* list, but not to the one used by stock\nBF members -- not only because of the diskspace issue but also because\nof the time to build. I suggest that we should have a separate\nlist-of-branches file that would only be used by BF members especially\nconfigured to do so; and hopefully we won't allow more than a handful\nanimals to do that but rather a well-chosen subset, and also maybe allow\nonly GCC rather than try to support other compilers. 
(There's no need\nto ensure compilability on any Windows platform, for example.)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:05:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Actually, I think we do. If I want to test against 7.4, ISTM I want\n> to test against the last released 7.4 version, not something with\n> arbitrary later changes. Otherwise, what exactly is the point?\n\n1. You're free to check out any commit you like.\n\n2. Nothing I said can reasonably be confused with \"let's allow\narbitrary later changes.\"\n\n> Uh, don't we have that already? I know you can configure a buildfarm\n> animal to force a run at least every-so-often, but it's not required,\n> and I don't think it's even the default.\n\nOh, OK. I wonder how that plays with the buildfarm status page's\ndesire to drop old results that are more than 30 days old. I guess\nyou'd just need to force a run at least every 28 days or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:09:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 11:09, Robert Haas wrote:\n> On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually, I think we do. If I want to test against 7.4, ISTM I want\n>> to test against the last released 7.4 version, not something with\n>> arbitrary later changes. Otherwise, what exactly is the point?\n> 1. You're free to check out any commit you like.\n>\n> 2. 
Nothing I said can reasonably be confused with \"let's allow\n> arbitrary later changes.\"\n>\n>> Uh, don't we have that already? I know you can configure a buildfarm\n>> animal to force a run at least every-so-often, but it's not required,\n>> and I don't think it's even the default.\n\n\nYes, in fact it's rather discouraged. The default is just to build when\nthere's a code change detected.\n\n\n> Oh, OK. I wonder how that plays with the buildfarm status page's\n> desire to drop old results that are more than 30 days old. I guess\n> you'd just need to force a run at least every 28 days or something.\n>\n\nWell, we could do that, or we could modify the way the server does the\nstatus. The table it's based on has the last 500 records for each branch\nfor each animal, so the data is there.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:25:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually, I think we do. If I want to test against 7.4, ISTM I want\n>> to test against the last released 7.4 version, not something with\n>> arbitrary later changes. Otherwise, what exactly is the point?\n\n> 1. You're free to check out any commit you like.\n\nYeah, and get something that won't build. If there's any point\nto this work at all, it has to be that we'll maintain the closest\npossible buildable approximation to the last released version.\n\n> Oh, OK. I wonder how that plays with the buildfarm status page's\n> desire to drop old results that are more than 30 days old. I guess\n> you'd just need to force a run at least every 28 days or something.\n\nI don't think it's a problem. 
If we haven't committed anything to\nbranch X in a month, it's likely not interesting. It might be worth\nhaving a way to get the website to show results further back than\na month, but that doesn't need to be in the default view.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:26:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 10:23, Tom Lane wrote:\n>\n> Also, I concur with Andrew's point that we'd really have to have\n> buildfarm support. However, this might not be as bad as it seems.\n> In principle we might just need to add resurrected branches back to\n> the branches_to_build list. Given my view of what the back-patching\n> policy ought to be, a new build in an old branch might only be\n> required a couple of times a year, which would not be an undue\n> investment of buildfarm resources. (Hmmm ... but disk space could\n> become a problem, particularly on older machines with not so much\n> disk. Do we really need to maintain a separate checkout for each\n> branch? 
It seems like a fresh checkout from the repo would be\n> little more expensive than the current copy-a-checkout process.)\n\n\nIf you set it up with these settings then the disk space used is minimal:\n\n     git_use_workdirs => 1,\n     rm_worktrees => 1,\n\nSo I have this on crake:\n\n andrew@emma:root $ du -sh REL*/pgsql\n 5.5M    REL_10_STABLE/pgsql\n 5.6M    REL_11_STABLE/pgsql\n 5.6M    REL_12_STABLE/pgsql\n 5.6M    REL_13_STABLE/pgsql\n 2.0M    REL_14_STABLE/pgsql\n 2.6M    REL9_5_STABLE/pgsql\n 5.5M    REL9_6_STABLE/pgsql\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:28:04 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 11:05, Alvaro Herrera wrote:\n>\n>> Also, I concur with Andrew's point that we'd really have to have\n>> buildfarm support. However, this might not be as bad as it seems.\n>> In principle we might just need to add resurrected branches back to\n>> the branches_to_build list.\n> Well, we would add them to *some* list, but not to the one used by stock\n> BF members -- not only because of the diskspace issue but also because\n> of the time to build. I suggest that we should have a separate\n> list-of-branches file that would only be used by BF members especially\n> configured to do so; and hopefully we won't allow more than a handful\n> animals to do that but rather a well-chosen subset, and also maybe allow\n> only GCC rather than try to support other compilers. 
(There's no need\n> to ensure compilability on any Windows platform, for example.)\n\n\nWell, we do build with gcc on Windows :-) But yes, maybe we should make\nthis a more opt-in process.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:30:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-25, Tom Lane wrote:\n>> Also, I concur with Andrew's point that we'd really have to have\n>> buildfarm support. However, this might not be as bad as it seems.\n>> In principle we might just need to add resurrected branches back to\n>> the branches_to_build list.\n\n> Well, we would add them to *some* list, but not to the one used by stock\n> BF members -- not only because of the diskspace issue but also because\n> of the time to build. I suggest that we should have a separate\n> list-of-branches file that would only be used by BF members especially\n> configured to do so; and hopefully we won't allow more than a handful\n> animals to do that but rather a well-chosen subset, and also maybe allow\n> only GCC rather than try to support other compilers. (There's no need\n> to ensure compilability on any Windows platform, for example.)\n\nMeh. I don't think that's a great approach, because then we're only\nensuring buildability on a rather static set of platforms. The whole\npoint here is that when release N+1 of $your_favorite_platform arrives,\nwe want to know whether the old branches still build on it. 
If the\ndefault behavior for new buildfarm animals is to ignore the old branches,\nwe're much less likely to find that out.\n\nIt's also unclear to me why we'd leave Windows out of this discussion.\nWe keep saying we want to encourage Windows-based hackers to contribute,\nso doesn't that require testing it on the same basis as other platforms?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:33:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 10/25/21 10:23, Tom Lane wrote:\n>> (Hmmm ... but disk space could\n>> become a problem, particularly on older machines with not so much\n>> disk. Do we really need to maintain a separate checkout for each\n>> branch? It seems like a fresh checkout from the repo would be\n>> little more expensive than the current copy-a-checkout process.)\n\n> If you set it up with these settings then the disk space used is minimal:\n>      git_use_workdirs => 1,\n>      rm_worktrees => 1,\n\nMaybe we should make those the defaults? AFAICS the current\ndefault setup uses circa 200MB per back branch, even between runs.\nI'm not sure what that is buying us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:38:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 2021-Oct-25, Tom Lane wrote:\n\n> It's also unclear to me why we'd leave Windows out of this discussion.\n> We keep saying we want to encourage Windows-based hackers to contribute,\n> so doesn't that require testing it on the same basis as other platforms?\n\nTesting of in-support branches, sure -- I don't propose to break that.\nBut this is all about providing *some* server against which to test\nclient-side changes with, right? Not to test the old servers\nthemselves. 
Looking at Amit K's \"Postgres person of the week\" interview[1]\nand remembering conversations with David Rowley, Windows hackers seem\nperfectly familiar with getting Linux builds going, so we wouldn't need\nto force MSVC fixes in order for them to have old servers available.\n\nBut anyway, I was thinking that the fixes required for MSVC buildability\nwere quite invasive, but on looking again they don't seem all that\nbad[2], so I withdraw that comment.\n\nI do think you have moved the goalposts: to reiterate what I said above,\nI thought what we wanted was to have *some* server in order to test\nclient-side changes with; not to be able to get a server running on\nevery possible platform. I'm not really on board with the idea that old\nbranches have to be buildable everywhere all the time.\n\n[1] https://postgresql.life/post/amit_kapila/\n[2] e.g., commit 2b1394fc2b52a2573d08aa626e7b49568f27464e\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:25:04 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I do think you have moved the goalposts: to reiterate what I said above,\n> I thought what we wanted was to have *some* server in order to test\n> client-side changes with; not to be able to get a server running on\n> every possible platform. I'm not really on board with the idea that old\n> branches have to be buildable everywhere all the time.\n\nAgreed, that might be too much work compared to the value. 
But if we're\nto be selective about support for this, I'm unclear on how we decide\nwhich platforms are supported --- and, more importantly, how we keep\nthat list up to date over time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:43:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Hi,\n\nOn 2021-10-22 19:30:25 -0400, Tom Lane wrote:\n> Yeah. I checked into when it was that we dropped pre-8.0 support\n> from pg_dump, and the answer is just about five years ago (64f3524e2).\n> So moving the bar forward by five releases isn't at all out of line.\n> 8.4 would be eight years past EOL by the time v15 comes out.\n\nI'd really like us to adopt a \"default\" policy on this. I think it's a waste\nto spend time every few years arguing what exact versions to drop. I'd much\nrather say that, unless there are concrete reasons to deviate from that, we\nprovide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql\nfor 5 releases or something like that.\n\nIt's fine to not actually spend the time to excise support for old versions\nevery release if not useful, but we should be able to \"just do it\" whenever\nversion compat is a meaningful hindrance.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 09:56:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 10:23:40 -0400, Tom Lane wrote:\n> Also, I concur with Andrew's point that we'd really have to have\n> buildfarm support. However, this might not be as bad as it seems.\n> In principle we might just need to add resurrected branches back to\n> the branches_to_build list. 
Given my view of what the back-patching
> policy ought to be, a new build in an old branch might only be
> required a couple of times a year, which would not be an undue
> investment of buildfarm resources.

FWIW, if helpful I could easily specify a few additional branches to some of
my buildfarm animals. Perhaps serinus/flaviventris (snapshot gcc wo/w
optimizations) so we'd see problems coming early? I could also add a
recent-clang one.

I think doing this to a few designated animals is a better idea than wasting
cycles and space on a lot of animals.


> It seems like a fresh checkout from the repo would be little more expensive
> than the current copy-a-checkout process.)

I haven't looked in detail, but from what I've seen in the logs the
is-there-anything-new check is already not cheap, and does a checkout / update
of the git directory.

Greetings,

Andres Freund


", "msg_date": "Mon, 25 Oct 2021 10:06:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:
> On 2021-10-22 19:30:25 -0400, Tom Lane wrote:
>> Yeah. I checked into when it was that we dropped pre-8.0 support
>> from pg_dump, and the answer is just about five years ago (64f3524e2).
>> So moving the bar forward by five releases isn't at all out of line.
>> 8.4 would be eight years past EOL by the time v15 comes out.

> I'd really like us to adopt a \"default\" policy on this. I think it's a waste
> to spend time every few years arguing what exact versions to drop. I'd much
> rather say that, unless there are concrete reasons to deviate from that, we
> provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql
> for 5 releases or something like that.

I agree with considering something like that to be the minimum support
policy, but the actual changes need a bit more care. 
For example, when\nwe last did this, the technical need was just to drop pre-7.4 versions,\nbut we chose to make the cutoff 8.0 on the grounds that that was more\nunderstandable to users [1]. In the same way, I'm thinking of moving the\ncutoff to 9.0 now, although 8.4 would be sufficient from a technical\nstandpoint.\n\nOTOH, in the new world of one-part major versions, it's less clear that\nthere will be obvious division points for future cutoff changes. Maybe\nversions-divisible-by-five would work? Or versions divisible by ten,\nbut experience so far suggests that we'll want to move the cutoff more\noften than once every ten years.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/2661.1475849167%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:09:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-25 10:23:40 -0400, Tom Lane wrote:\n>> It seems like a fresh checkout from the repo would be little more expensive\n>> than the current copy-a-checkout process.)\n\n> I haven't looked in detail, but from what I've seen in the logs the\n> is-there-anything-new check is already not cheap, and does a checkout / update\n> of the git directory.\n\nYeah, you probably need a checkout to apply the rule about don't rebuild\nafter documentation-only changes. But it seems like the case where the\nbranch tip hasn't moved at all could be optimized fairly easily. 
I'm not\nsure it's worth the trouble to add code for that given our current usage\nof the buildfarm; but if we were to start tracking branches that only\nchange a couple of times a year, it would be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:14:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 12:43:15 -0400, Tom Lane wrote:\n> Agreed, that might be too much work compared to the value. But if we're\n> to be selective about support for this, I'm unclear on how we decide\n> which platforms are supported --- and, more importantly, how we keep\n> that list up to date over time.\n\nI honestly think that if we just test on linux with a single distribution,\nwe're already covering most of the benefit. From memory there have been two\nrough classes of doesn't-build-anymore:\n\n1) New optimizations / warnings. At least between gcc and clang, within a year\n or two, most of the issues end up being visible with the other compiler\n too. These aren't particularly distribution / OS specific.\n\n2) Library dependencies cause problems, like the ssl detection mentioned\n elsewhere in this thread. This is also not that OS dependent. It's also not\n that clear that we can do something about the issues with a reasonable\n amount of effort in all cases. 
It's easy enough if it's just a minor\n configure fix, but we'd not want to backpatch larger SSL changes or such.\n\n\nMaybe there's also a case for building older releases with msvc, but that\nseems like a pain due to the msvc project generation needing to support a\nspecific version of msvc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:17:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 13:09:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'd really like us to adopt a \"default\" policy on this. I think it's a waste\n> > to spend time every few years arguing what exact versions to drop. I'd much\n> > rather say that, unless there are concrete reasons to deviate from that, we\n> > provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql\n> > for 5 releases or something like that.\n> \n> I agree with considering something like that to be the minimum support\n> policy, but the actual changes need a bit more care. For example, when\n> we last did this, the technical need was just to drop pre-7.4 versions,\n> but we chose to make the cutoff 8.0 on the grounds that that was more\n> understandable to users [1]. In the same way, I'm thinking of moving the\n> cutoff to 9.0 now, although 8.4 would be sufficient from a technical\n> standpoint.\n\nI think that'd be less of a concern if we had a documented policy\nsomewhere. It'd not be hard to include a version table in that policy to make\nit easier to understand. We could even add it to the table in\nhttps://www.postgresql.org/support/versioning/ or something similar.\n\n\n> OTOH, in the new world of one-part major versions, it's less clear that\n> there will be obvious division points for future cutoff changes. 
Maybe\n> versions-divisible-by-five would work?\n\nI think that's more confusing than helpful, because the support timeframes\nthen differ between releases. It's easier to just subtract a number of major\nreleases for from a specific major version. Especially if there's a table\nsomewhere.\n\n\n> Or versions divisible by ten, but experience so far suggests that we'll want\n> to move the cutoff more often than once every ten years.\n\nYes, I think that'd be quite a bit too restrictive.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:24:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 13:06, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-25 10:23:40 -0400, Tom Lane wrote:\n>> Also, I concur with Andrew's point that we'd really have to have\n>> buildfarm support. However, this might not be as bad as it seems.\n>> In principle we might just need to add resurrected branches back to\n>> the branches_to_build list. Given my view of what the back-patching\n>> policy ought to be, a new build in an old branch might only be\n>> required a couple of times a year, which would not be an undue\n>> investment of buildfarm resources.\n> FWIW, if helpful I could easily specify a few additional branches to some of\n> my buildfarm animals. Perhaps serinus/flaviventris (snapshot gcc wo/w\n> optimizations) so we'd see problems coming early? I could also add\n> recent-clang one.\n>\n> I think doing this to a few designated animals is a better idea than wasting\n> cycles and space on a lot of animals.\n\n\nRight now the server will only accept results for something in\nbranches_of_interest.txt. 
So we would need to modify that.\n\n\nI tend to agree that we don't need a whole lot of cross platform testing\nhere.\n\n\n>\n>\n>> It seems like a fresh checkout from the repo would be little more expensive\n>> than the current copy-a-checkout process.)\n> I haven't looked in detail, but from what I've seen in the logs the\n> is-there-anything-new check is already not cheap, and does a checkout / update\n> of the git directory.\n>\n>\n\nIf you have removed the work tree (with the \"rm_worktrees => 1\" setting)\nthen it restores it by doing a checkout. It then does a \"git fetch\", and\nthen as you say looks to see if there is anything new. If you know of a\nbetter way to manage it then please let me know. On crake (which is\nactually checking out four different repos) the checkout step typically\ntakes one or two seconds.\n\n\nCopying the work tree can take a few seconds - to avoid that on\nUnix/msys use vpath builds.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:29:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Anyway, to get back to the original point ...\n\nNo one has spoken against moving up the cutoff for pg_dump support,\nso I did a very quick pass to see how much code could be removed.\nThe answer is right about 1000 lines, counting both pg_dump and\npg_upgrade, so it seems like it's worth doing independently of the\nunnest() issue.\n\nThe attached is just draft-quality, because I don't really want\nto pursue the point until after committing the pg_dump changes\nbeing discussed in the other thread. If I push this first it'll\nbreak a lot of those patches. 
(Admittedly, pushing those first
will break this one, but this one is a lot easier to re-do.)

BTW, while looking at pg_upgrade I chanced to notice
check_for_isn_and_int8_passing_mismatch(), which seems like it's
not well thought out at all. It's right that contrib/isn will
not upgrade nicely if the target cluster has a different
float8_pass_by_value setting from the source. What's wrong is
the assumption that no other extension has the same issue.
We invented and publicized the \"LIKE type\" option for CREATE TYPE
precisely so that people could build types that act just like isn,
so it seems pretty foolish to imagine that no one has done so.

I think we should nuke check_for_isn_and_int8_passing_mismatch()
and just refuse to upgrade if float8_pass_by_value differs, full stop.
I can see little practical need to allow that case.

\t\t\tregards, tom lane", "msg_date": "Mon, 25 Oct 2021 18:52:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:38:51AM -0400, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
> > On 10/25/21 10:23, Tom Lane wrote:
> >> (Hmmm ... but disk space could
> >> become a problem, particularly on older machines with not so much
> >> disk. Do we really need to maintain a separate checkout for each
> >> branch? It seems like a fresh checkout from the repo would be
> >> little more expensive than the current copy-a-checkout process.)
> 
> > If you set it up with these settings then the disk space used is minimal:
> >      git_use_workdirs => 1,
> >      rm_worktrees => 1,
> 
> Maybe we should make those the defaults? 
AFAICS the current\n> default setup uses circa 200MB per back branch, even between runs.\n> I'm not sure what that is buying us.\n\nMaybe git's shared/\"alternates\" would be helpful to minimize the size of\n.git/objects?\n\nI'm not sure - it looks like the BF client does its own stuff with symlinks.\nIs that for compatibility with old git ?\nhttps://github.com/PGBuildFarm/client-code/blob/main/PGBuild/SCM.pm\n\nIf you \"clone\" a local location, it uses hard links by default.\nIf you use --shared or --reference, then it uses references to the configured\n\"alternates\", if any.\n\nIn both cases, .git/objects requires no additional space (but the \"checked out\"\ncopy still takes up however much space).\n\n$ mkdir tmp\n$ git clone --quiet ./postgresql tmp/pg\n$ du -sh tmp/pg\n492M tmp/pg\n\n$ rm -fr tmp/pg\n$ git clone --quiet --shared ./postgresql tmp/pg\n$ du -sh tmp/pg\n124M tmp/pg\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 25 Oct 2021 18:12:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 19:12, Justin Pryzby wrote:\n> On Mon, Oct 25, 2021 at 11:38:51AM -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> On 10/25/21 10:23, Tom Lane wrote:\n>>>> (Hmmm ... but disk space could\n>>>> become a problem, particularly on older machines with not so much\n>>>> disk. Do we really need to maintain a separate checkout for each\n>>>> branch? It seems like a fresh checkout from the repo would be\n>>>> little more expensive than the current copy-a-checkout process.)\n>>> If you set it up with these settings then the disk space used is minimal:\n>>>      git_use_workdirs => 1,\n>>>      rm_worktrees => 1,\n>> Maybe we should make those the defaults? 
AFAICS the current\n>> default setup uses circa 200MB per back branch, even between runs.\n>> I'm not sure what that is buying us.\n> Maybe git's shared/\"alternates\" would be helpful to minimize the size of\n> .git/objects?\n>\n> I'm not sure - it looks like the BF client does its own stuff with symlinks.\n> Is that for compatibility with old git ?\n> https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/SCM.pm\n\n\nIt's actually based on the git contrib script git-new-workdir. And using\nit is the default (except on Windows, where it doesn't work due to\nissues with symlinking plain files :-( )\n\nSince what we have is not broken I'm not inclined to fix it.\n\nThe issue Tom was complaining about is different, namely the storage for\neach branch's working tree. As I mentioned upthread, you can alleviate\nthat by setting \"rm_worktrees => 1\" in your config. That works\neverywhere, including Windows, and will be the default in the next\nbuildfarm release.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:41:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 10/25/21 13:09, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2021-10-22 19:30:25 -0400, Tom Lane wrote:\n>>> Yeah. I checked into when it was that we dropped pre-8.0 support\n>>> from pg_dump, and the answer is just about five years ago (64f3524e2).\n>>> So moving the bar forward by five releases isn't at all out of line.\n>>> 8.4 would be eight years past EOL by the time v15 comes out.\n>> I'd really like us to adopt a \"default\" policy on this. I think it's a waste\n>> to spend time every few years arguing what exact versions to drop. 
I'd much\n>> rather say that, unless there are concrete reasons to deviate from that, we\n>> provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql\n>> for 5 releases or something like that.\n> I agree with considering something like that to be the minimum support\n> policy, but the actual changes need a bit more care. For example, when\n> we last did this, the technical need was just to drop pre-7.4 versions,\n> but we chose to make the cutoff 8.0 on the grounds that that was more\n> understandable to users [1]. In the same way, I'm thinking of moving the\n> cutoff to 9.0 now, although 8.4 would be sufficient from a technical\n> standpoint.\n>\n> OTOH, in the new world of one-part major versions, it's less clear that\n> there will be obvious division points for future cutoff changes. Maybe\n> versions-divisible-by-five would work? Or versions divisible by ten,\n> but experience so far suggests that we'll want to move the cutoff more\n> often than once every ten years.\n>\n> \t\n\n\npg_upgrade claims to be able to operate on 8.4, which might be all the\nbetter for some regular testing (which this could enable), so that seems\nto me more like where the cutoff should be at least for this round.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:59:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "I was thinking a bit about formulating a policy for pg_dump backward\ncompatibility, based on the discussions in this thread.\n\nPremises and preparatory thoughts:\n\n- Users (and developers) want pg_dump to support server versions that\n are much older than non-EOL versions.\n\n- Less critically, much-longer backward compatibility has also\n historically been provided for psql, so keeping those two the same\n would make sense.\n\n- The policy for other client-side 
tools (list at [0]) is less clear\n and arguably less important. I suggest we focus on pg_dump and psql\n first, and then we can decide for the rest whether they want to\n match a longer window, a shorter window, or a different policy\n altogether (e.g., ecpg).\n\n- If we are going to maintain compatibility with very old server\n versions, we need to make sure the older server versions can at\n least still be built while an allegedly-compatible client tool is\n under support.\n\n[0]: https://www.postgresql.org/docs/devel/reference-client.html\n\nProposal:\n\n* pg_dump and psql will maintain compatibility with servers at least\n ten major releases back.\n\nThis assumes a yearly major release cadence.\n\nI use the count of major releases here instead of some number of\nyears, as was previously discussed, for two reasons. First, it makes\ncomputing the cutoff easier, because you are not bothered by whether\nsome old release was released a few weeks before or after the\nequivalent date in the current year for the new release. Second,\nthere is no ambiguity about what happens during the lifetime of a\nmajor release: If major release $NEW supports major release $OLD at\nthe time of $NEW's release, then that stays the same for the whole\nlife of $NEW; we don't start dropping support for $OLD in $NEW.5\nbecause a year has passed.\n\nI say \"at least\" because I wouldn't go around aggressively removing\nsupport for old releases. If $NEW is supposed to support 9.5 but\nthere is code that says `if (version > 9.4)`, I would not s/9.4/9.5/\nthat unless that code is touched for other reasons.\n\nThen ...\n\n* We keep old major release branches buildable as long as a new major\n release that has support for that old release is under support.\n\nBuildable for this purpose means just enough that you can use it to\ntest pg_dump and psql. 
This probably includes being able to run make\ninstallcheck and use pg_dump and psql against the regression database.\nIt does not require support for any additional build-time options that\nare not required for this purpose (e.g., new OpenSSL releases).\nConversely, it should be buildable with default compiler options. For\nexample, if it fails to build and test cleanly unless you use -O0,\nthat should be fixed. Fixes in very-old branches should normally be\nbackpatches that have stabilized in under-support branches. Changes\nthat silence compiler warnings in newer compilers are by themselves\nnot considered a backpatch-worthy fix.\n\n(In some cases, the support window of typical compilers should be\nconsidered. If adding support for a very new compiler with new\naggressive optimizations turns out to be too invasive, then that\ncompiler might simply be declared not supported for that release. But\nwe should strive to support at least one compiler that still has some\nupstream support.)\n\nThis keep-buildable effort is on an as-needed basis. There is no\nrequirement to keep the buildability current at all times, and there\nis no requirement to keep all platforms working at all times.\nObviously, any changes made to improve buildability should not\nknowingly adversely affect other platforms.\n\n(The above could be reconsidered if buildfarm support is available,\nbut I don't consider that necessary and wouldn't want to wait for it.)\n\nThere is no obligation on anyone backpatching fixes to supported\nbranches to also backpatch them to keep-buildable branches. It is up\nto those working on pg_dump/psql and requiring testing against old\nversions to pick available fixes and apply them to keep-buildable\nbranches as needed.\n\nFinally, none of this is meant to imply that there will be any\nreleases, packages, security support, production support, or community\nsupport for keep-buildable branches. 
This is a Git-repo-only,
developer-focused effort.

Example under this proposal:

PG 15 supports PG 9.2
PG 14 supports PG 9.1
PG 13 supports PG 9.0
PG 12 supports PG 8.4
PG 11 supports PG 8.3
PG 10 supports PG 8.2

In practice, the effort can focus on keeping the most recent cutoff
release buildable. So in the above example, we really only need to
keep PG >=9.2 buildable to support ongoing development. The chances
that someone needs to touch code pertaining to older versions in
backbranches is lower, so those really would need to be dealt with
very rarely.


The parent message has proposed to remove support for PG <9.0 from
master. But I think that was chosen mainly because it was a round
number. I suggest we pick a cutoff based on years, as I had
described, and then proceed with that patch.


", "msg_date": "Thu, 2 Dec 2021 11:01:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Thu, Dec 2, 2021 at 5:01 AM Peter Eisentraut
<peter.eisentraut@enterprisedb.com> wrote:
> * pg_dump and psql will maintain compatibility with servers at least
> ten major releases back.
>
> * We keep old major release branches buildable as long as a new major
> release that has support for that old release is under support.
>
> Buildable for this purpose means just enough that you can use it to
> test pg_dump and psql. This probably includes being able to run make
> installcheck and use pg_dump and psql against the regression database.
> It does not require support for any additional build-time options that
> are not required for this purpose (e.g., new OpenSSL releases).
> Conversely, it should be buildable with default compiler options. For
> example, if it fails to build and test cleanly unless you use -O0,
> that should be fixed. 
Fixes in very-old branches should normally be\n> backpatches that have stabilized in under-support branches. Changes\n> that silence compiler warnings in newer compilers are by themselves\n> not considered a backpatch-worthy fix.\n\nSounds reasonable. It doesn't really make sense to insist that the\ntools have to be compatible with releases that most developers can't\nactually build.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Dec 2021 06:54:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Proposal:\n\n> * pg_dump and psql will maintain compatibility with servers at least\n> ten major releases back.\n> * We keep old major release branches buildable as long as a new major\n> release that has support for that old release is under support.\n\n> This assumes a yearly major release cadence.\n\nIf the point is to not have to count dates carefully, why does the cadence\nmatter?\n\n> I say \"at least\" because I wouldn't go around aggressively removing\n> support for old releases. If $NEW is supposed to support 9.5 but\n> there is code that says `if (version > 9.4)`, I would not s/9.4/9.5/\n> that unless that code is touched for other reasons.\n\nI can get behind something roughly like this, but I wonder if it wouldn't\nbe better to formulate the policy in a reactive way, i.e. when X happens\nwe'll do Y. If we don't plan to proactively remove some code every year,\nthen it seems like the policy really is more like \"when something breaks,\nthen we'll make an attempt to keep it working if the release is less than\nten majors back; otherwise we'll declare that release no longer\nbuildable.\"\n\nHowever, this'd imply continuing to test against releases that are out of\nthe ten-year window but have not yet been found to be broken. 
Not sure\nif that's a useful expenditure of test resources or not.\n\n> Buildable for this purpose means just enough that you can use it to\n> test pg_dump and psql. This probably includes being able to run make\n> installcheck and use pg_dump and psql against the regression database.\n> It does not require support for any additional build-time options that\n> are not required for this purpose (e.g., new OpenSSL releases).\n\nI agree with the idea of being conservative about what outside\ndependencies we will worry about for \"buildable\" old versions.\n(Your nearby message about Python breakage is a good example of\nwhy we must limit that.) But I wonder about, say, libxml or libicu,\nor even if we can afford to drop all the non-plpgsql PLs. An\nexample of why that seems worrisome is that it's not clear we'd\nhave any meaningful coverage of transforms in pg_dump with no PLs.\nI don't have any immediate proposal here, but it seems like an area\nthat needs some thought and specific policy.\n\n> Example under this proposal:\n\n> PG 15 supports PG 9.2\n> PG 14 supports PG 9.1\n> PG 13 supports PG 9.0\n> PG 12 supports PG 8.4\n> PG 11 supports PG 8.3\n> PG 10 supports PG 8.2\n\nI was going to express concern about having to resurrect branches\nback to 8.2, but:\n\n> In practice, the effort can focus on keeping the most recent cutoff\n> release buildable. So in the above example, we really only need to\n> keep PG >=9.2 buildable to support ongoing development. The chances\n> that some needs to touch code pertaining to older versions in\n> backbranches is lower, so those really would need to be dealt with\n> very rarely.\n\nOK. 
Also, when you do need to check that, there are often other ways\nthan rebuilding the old branch on modern platforms --- people may\nwell have still-executable builds laying about, even if rebuilding\nfrom source would be problematic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Dec 2021 12:30:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 12/2/21 12:30, Tom Lane wrote:\n>\n>> In practice, the effort can focus on keeping the most recent cutoff\n>> release buildable. So in the above example, we really only need to\n>> keep PG >=9.2 buildable to support ongoing development. The chances\n>> that some needs to touch code pertaining to older versions in\n>> backbranches is lower, so those really would need to be dealt with\n>> very rarely.\n> OK. Also, when you do need to check that, there are often other ways\n> than rebuilding the old branch on modern platforms --- people may\n> well have still-executable builds laying about, even if rebuilding\n> from source would be problematic.\n>\n> \t\t\t\n\n\n\nI have a very old fedora instance where I can build every release back\nto 7.2 :-) And with only slight massaging for the very old releases,\nthese builds run on my Fedora 34 development system. Certainly 8.2 and\nup wouldn't be a problem. Currently I have only tested building without\nany extra libraries/PLs, but I can look at other combinations. So, long\nstory short this is fairly doable at least in some environments. This\nprovides a good use case for the work I have been doing on backwards\ncompatibility of the TAP framework. 
I need to get back to that now that\nthe great module namespace adjustment has settled down.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 15:46:09 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Hi,\n\nOn 2021-12-02 11:01:47 +0100, Peter Eisentraut wrote:\n> - The policy for other client-side tools (list at [0]) is less clear\n> and arguably less important. I suggest we focus on pg_dump and psql\n> first, and then we can decide for the rest whether they want to\n> match a longer window, a shorter window, or a different policy\n> altogether (e.g., ecpg).\n\nI think we should at least include pg_upgrade in this as well, it's pretty\nclosely tied to at least pg_dump.\n\n\n> * pg_dump and psql will maintain compatibility with servers at least\n> ten major releases back.\n\nPersonally I think that's too long... It boils down keeping branches buildable\nfor ~15 years after they've been released. That strikes me as pretty far into\ndiminishing-returns, and steeply increasing costs, territory.\n\nI realize it's more complicated for users, but a policy based on supporting a\ncertain number of out-of-support branches calculated from the newest major\nversion is more realistic. I'd personally go for something like newest-major -\n7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to\nsupport a few more. I think there's a considerable advantage of having one\ncutoff date across all branches.\n\nThat's not to say we'd remove support for older versions from back\nbranches. Just that we don't ever consider them supported (or test them) once\nbelow the cutoff.\n\n\n> I use the count of major releases here instead of some number of\n> years, as was previously discussed, for two reasons. 
First, it makes\n> computing the cutoff easier, because you are not bothered by whether\n> some old release was released a few weeks before or after the\n> equivalent date in the current year for the new release. Second,\n> there is no ambiguity about what happens during the lifetime of a\n> major release: If major release $NEW supports major release $OLD at\n> the time of $NEW's release, then that stays the same for the whole\n> life of $NEW; we don't start dropping support for $OLD in $NEW.5\n> because a year has passed.\n\nMakes sense.\n\n\n> * We keep old major release branches buildable as long as a new major\n> release that has support for that old release is under support.\n\n> Buildable for this purpose means just enough that you can use it to\n> test pg_dump and psql. This probably includes being able to run make\n> installcheck and use pg_dump and psql against the regression database.\n\nI think we should explicitly limit the number of platforms we care about for\nthis purpose. I don't think we should even try to keep 8.2 compile on AIX or\nwhatnot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Dec 2021 14:16:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 02.12.21 18:30, Tom Lane wrote:\n>> This assumes a yearly major release cadence.\n> \n> If the point is to not have to count dates carefully, why does the cadence\n> matter?\n\nIf we were to change the release cadence, then it would be appropriate \nto review this policy.\n\n> I can get behind something roughly like this, but I wonder if it wouldn't\n> be better to formulate the policy in a reactive way, i.e. when X happens\n> we'll do Y. 
If we don't plan to proactively remove some code every year,\n> then it seems like the policy really is more like \"when something breaks,\n> then we'll make an attempt to keep it working if the release is less than\n> ten majors back; otherwise we'll declare that release no longer\n> buildable.\"\n\nThis sounds like it would give license to accidentally break support for \nold releases in the code and only fix them if someone complains. That's \nnot really what I would be aiming for.\n\n> I agree with the idea of being conservative about what outside\n> dependencies we will worry about for \"buildable\" old versions.\n> (Your nearby message about Python breakage is a good example of\n> why we must limit that.) But I wonder about, say, libxml or libicu,\n> or even if we can afford to drop all the non-plpgsql PLs. An\n> example of why that seems worrisome is that it's not clear we'd\n> have any meaningful coverage of transforms in pg_dump with no PLs.\n> I don't have any immediate proposal here, but it seems like an area\n> that needs some thought and specific policy.\n\nYeah, I think questions like this will currently quickly lead to dead \nends. We are talking 5 years this, 10 years that here. Everybody else \n(apart from RHEL) is talking at best in the range 3-5 years. We will \nhave to figure this out as we go.\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:19:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 02.12.21 23:16, Andres Freund wrote:\n> I think we should at least include pg_upgrade in this as well, it's pretty\n> closely tied to at least pg_dump.\n\nright\n\n>> * pg_dump and psql will maintain compatibility with servers at least\n>> ten major releases back.\n> \n> Personally I think that's too long... It boils down to keeping branches buildable\n> for ~15 years after they've been released. 
That strikes me as pretty far into\n> diminishing-returns, and steeply increasing costs, territory.\n\nWell, it is a lot, but it's on the order of what we have historically \nprovided.\n\n> I realize it's more complicated for users, but a policy based on supporting a\n> certain number of out-of-support branches calculated from the newest major\n> version is more realistic. I'd personally go for something like newest-major -\n> 7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to\n> support a few more. I think there's a considerable advantage of having one\n> cutoff date across all branches.\n\nI'm not sure it will be clear what this would actually mean. Assume \nPG11 supports back to 9.4 (14-7) now, but when PG15 comes out, we drop \n9.4 support. But the PG11 code hasn't changed, and PG9.4 hasn't changed, \nso it will most likely still work. Then we have messaging that is out \nof sync with reality. I can see the advantage of this approach, but the \ncommunication around it might have to be refined.\n\n> I think we should explicitly limit the number of platforms we care about for\n> this purpose. I don't think we should even try to keep 8.2 compile on AIX or\n> whatnot.\n\nIt's meant to be developer-facing, so only for platforms that developers \nuse. I think that can police itself, if we define it that way.\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:29:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> [ policy requiring that 9.2 and up be kept buildable, as of today ]\n\nI experimented to see what this would entail exactly. Using\ncurrent macOS (Apple clang version 13.0.0) on M1 hardware,\nI built with minimal configure options (--enable-debug --enable-cassert)\nand ran the core regression tests. 
I found that commit\n1c0cf52b3 (Use return instead of exit() in configure) is needed\nin 9.4 and before, else we don't get through configure.\nThat's the only fix needed to get a clean build in 9.4 and 9.3.\n9.2 shows several compiler warnings, the scarier ones of which could\nbe cleaned up by back-patching c74d586d2 (Fix function return type\nconfusion). The remainder are variable-may-be-used-uninitialized\nwarnings, which I think people are accustomed to ignoring in\ndubious cases. In any case, I failed to get rid of them without\nback-patching 71450d7fd (Teach compiler that ereport(>=ERROR) does\nnot return), which seems like a bridge too far.\n\nI also tried 9.1, but it has multiple compile-time problems:\n* fails to select a spinlock implementation\n* \"conflicting types for 'base_yylex'\"\n* strange type-conflict warnings in zlib calls\n\nSo at least on this platform, there are solid technical reasons\nto select 9.2 not 9.1 as the cutoff.\n\nObviously, we might find some other things to fix if we checked\nwith other compilers, or tested more than the core tests.\nBut this much seems quite doable, and it's probably prerequisite\nfor any further testing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 12:10:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 12/3/21 12:10, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> [ policy requiring that 9.2 and up be kept buildable, as of today ]\n> I experimented to see what this would entail exactly. Using\n> current macOS (Apple clang version 13.0.0) on M1 hardware,\n> I built with minimal configure options (--enable-debug --enable-cassert)\n> and ran the core regression tests. \n\n\nI've mentioned my efforts on fedora previously. But like you I used a\nminimal configuration. So what would be reasonable to test? 
I know you\nmentioned building with perl and python upthread so we could possibly\ntest transforms. Anything else? I don't think we need to worry about all\nthe authentication-supporting options. XML/XSLT maybe.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 12:28:11 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/3/21 12:10, Tom Lane wrote:\n>> I experimented to see what this would entail exactly. Using\n>> current macOS (Apple clang version 13.0.0) on M1 hardware,\n>> I built with minimal configure options (--enable-debug --enable-cassert)\n>> and ran the core regression tests. \n\n> I've mentioned my efforts on fedora previously. But like you I used a\n> minimal configuration. So what would be reasonable to test? I know you\n> mentioned building with perl and python upthread so we could possibly\n> test transforms. Anything else? I don't think we need to worry about all\n> the authentication-supporting options. XML/XSLT maybe.\n\nNot sure. I think we should evaluate based on\n1. how integral is the option, ie how much PG code can't we test if\n we don't enable it.\n2. how stable is the referenced code.\n\nPoint 2 makes me want to exclude both Python and OpenSSL, as they've\nboth proven to be moving API targets. If we want to have tests for\ntransform modules, plperl would be sufficient for that, and perl seems\nto be a lot more stable than python. libxml is pretty morib^H^H^Hstable,\nbut on the other hand it seems quite noncritical for the sorts of tests\nwe want to run against old servers, so I'd be inclined to exclude it.\n\nLooking through the other configure options, the only one that I find\nto be a hard call is --enable-nls. In theory this shouldn't be\ncritical for testing pg_dump or psql ... 
but you never know, and it\nhasn't been a stability problem. Every other one I think we could\nignore for these purposes. At some point --with-icu might become\ninteresting, but it isn't yet relevant to any out-of-support\nbranches, so we can leave that call for another day.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 13:09:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.12.21 23:16, Andres Freund wrote:\n>> I realize it's more complicated for users, but a policy based on supporting a\n>> certain number of out-of-support branches calculated from the newest major\n>> version is more realistic. I'd personally go for something like newest-major -\n>> 7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to\n>> support a few more. I think there's a considerable advantage of having one\n>> cutoff date across all branches.\n\n> I'm not sure it will be clear what this would actually mean. Assume \n> PG11 supports back to 9.4 (14-7) now, but when PG15 comes out, we drop \n> 9.4 support. But the PG11 code hasn't changed, and PG9.4 hasn't changed, \n> so it will most likely still work. Then we have messaging that is out \n> of sync with reality. I can see the advantage of this approach, but the \n> communication around it might have to be refined.\n\nI don't find this suggestion to be an improvement over Peter's original\nformulation, for two reasons:\n\n* I'm not convinced that it saves us any actual work; as you say, the\ncode doesn't stop working just because we declare it out-of-support.\n\n* There's a real-world use-case underneath here. If somewhere you've\ndiscovered a decades-old server that you need to upgrade, and current\npg_dump won't dump from it, you would like it to be well-defined\nwhich intermediate pg_dump versions you can use. 
So if 10.19 can\ndump from that hoary server, it would not be nice if 10.20 can't;\nnor if the documentation lies to you about that based on which minor\nversion you happen to consult.\n\n>> I think we should explicitly limit the number of platforms we care about for\n>> this purpose. I don't think we should even try to keep 8.2 compile on AIX or\n>> whatnot.\n\n> It's meant to be developer-facing, so only for platforms that developers \n> use. I think that can police itself, if we define it that way.\n\nI agree that if you care about doing this sort of test on platform X,\nit's up to you to patch for that. I think Andres' concern is about\nthe amount of committer bandwidth that might be needed to handle\nsuch patches submitted by non-committers. However, based on the\nexperiment I just ran, I think it's not really likely to be a big deal:\nthere are not that many problems, and most of them just amount to\nback-patching something that originally wasn't back-patched.\n\nWhat's most likely to happen IMO is that committers will just start\nback-patching essential portability fixes into out-of-support-but-\nstill-in-the-buildability-window branches, contemporaneously with\nthe original fix. Yeah, that does mean more committer effort,\nbut only for a very small number of patches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 13:30:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, Dec 3, 2021 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What's most likely to happen IMO is that committers will just start\n> back-patching essential portability fixes into out-of-support-but-\n> still-in-the-buildability-window branches, contemporaneously with\n> the original fix. Yeah, that does mean more committer effort,\n> but only for a very small number of patches.\n\nI agree. 
I think that's exactly what we want to have happen, and if a\ngiven policy won't have exactly this result then the policy needs\nadjusting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Dec 2021 15:46:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "I ran a new set of experiments concerning building back branches\non modern platforms, this time trying Fedora 35 (gcc 11.2.1)\non x86_64. I widened the scope of the testing a bit by adding\n\"--enable-nls --with-perl\" and running check-world not just the\ncore tests. Salient results:\n\n* Everything back to 9.2 passes the test, although with more\nand more compile warnings the further back you go.\n\n* 9.1 fails with \"conflicting types for 'base_yylex'\", much as\nI saw on macOS except it's a hard error on this compiler.\n\n* Parallel check-world is pretty unreliable before v10 (I knew\nthis already, actually). But without parallelism, it's fine.\n\nBased on these results, I think maybe we should raise our ambitions\na bit compared to Peter's original proposal. Specifically,\nI wonder if it wouldn't be wise to try to silence compile warnings\nin these branches. The argument for this is basically that if we\ndon't, then every time someone builds one of these branches, they\nhave to tediously go through the warnings and verify that\nthey're not important. It won't take long for the accumulated\ntime-wastage from that to exceed the cost of back-patching whatever\nwe did to silence the warning in later branches.\n\nNow, I'm still not interested in trying to silence\nmaybe-uninitialized warnings pre-9.3, mainly because of the\nereport-ERROR-doesnt-return issue. (I saw far fewer of those\nunder gcc than clang, but not zero.) 
We could ignore those\nfiguring that 9.2 will be out of scope in a year anyway, or else\nteach 9.2's configure to select -Wno-maybe-uninitialized where\npossible.\n\nLikewise, getting check-world to parallelize successfully pre-v10\nseems like a bridge too far. But I would, for example, be in favor\nof back-patching eb9812f27 (Make pg_upgrade's test.sh less chatty).\nIt's just annoying to run check-world and get that output now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Dec 2021 19:41:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Sun, Dec 5, 2021 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Based on these results, I think maybe we should raise our ambitions\n> a bit compared to Peter's original proposal. Specifically,\n> I wonder if it wouldn't be wise to try to silence compile warnings\n> in these branches. The argument for this is basically that if we\n> don't, then every time someone builds one of these branches, they\n> have to tediously go through the warnings and verify that\n> they're not important. It won't take long for the accumulated\n> time-wastage from that to exceed the cost of back-patching whatever\n> we did to silence the warning in later branches.\n\nYep. I have long been of the view, and have said before, that there is\nvery little harm in doing some maintenance of EOL branches. 
Making it\neasy to test against them is a great way to improve our chances of\nactually having the amount of backward-compatibility that we say we\nwant to have.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:38:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Dec 5, 2021 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Based on these results, I think maybe we should raise our ambitions\n>> a bit compared to Peter's original proposal. Specifically,\n>> I wonder if it wouldn't be wise to try to silence compile warnings\n>> in these branches.\n\n> Yep. I have long been of the view, and have said before, that there is\n> very little harm in doing some maintenance of EOL branches. Making it\n> easy to test against them is a great way to improve our chances of\n> actually having the amount of backward-compatibility that we say we\n> want to have.\n\nRight. The question that's on the table is how much is the right\namount of maintenance. I think that back-patching user-visible bug\nfixes, for example, is taking things too far. What we want is to\nbe able to replicate the behavior of the branch's last released\nversion, using whatever build tools we are currently using. So\nback-patching something like that is counterproductive, because\nnow the behavior is not what was released.\n\nA minimal amount of maintenance would be \"only back-patch fixes\nfor issues that cause failure-to-build\". The next step up is \"fix\nissues that cause failure-to-pass-regression-tests\", and then above\nthat is \"fix developer-facing annoyances, such as compiler warnings\nor unwanted test output, as long as you aren't changing user-facing\nbehavior\". 
I now think that it'd be reasonable to include this\nlast group, although I'm pretty sure Peter didn't have that in mind\nin his policy sketch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Dec 2021 16:19:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Dec 6, 2021 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right. The question that's on the table is how much is the right\n> amount of maintenance. I think that back-patching user-visible bug\n> fixes, for example, is taking things too far. What we want is to\n> be able to replicate the behavior of the branch's last released\n> version, using whatever build tools we are currently using. So\n> back-patching something like that is counterproductive, because\n> now the behavior is not what was released.\n>\n> A minimal amount of maintenance would be \"only back-patch fixes\n> for issues that cause failure-to-build\". The next step up is \"fix\n> issues that cause failure-to-pass-regression-tests\", and then above\n> that is \"fix developer-facing annoyances, such as compiler warnings\n> or unwanted test output, as long as you aren't changing user-facing\n> behavior\". I now think that it'd be reasonable to include this\n> last group, although I'm pretty sure Peter didn't have that in mind\n> in his policy sketch.\n\nYep, that seems reasonable to me.\n\nI guess the point about user-visible bug fixes is that, as soon as we\nstart doing that, we don't really want it to be hit-or-miss. We could\nmake a decision to back-patch all bug fixes or those of a certain\nseverity or whatever we like back to older branches, and then those\nbranches would be supported or semi-supported depending on what rule\nwe adopted, and we could even continue to do releases for them if we\nso chose. 
However, it wouldn't be a great idea to back-patch a\ncompletely arbitrary subset of our fixes into those branches, because\nthen it sort of gets confusing to understand what the status of that\nbranch is. I don't know that I'm terribly bothered by the idea that\nthe behavior of the branch might deviate from the last official\nrelease, because most bug fixes are pretty minor and wouldn't really\naffect testing much, but it would be a little annoying to explain to\nusers that those branches contain an arbitrary subset of newer fixes,\nand a little hard for us to understand what is and is not there.\n\nThat being said, suppose that a new compiler version comes out and on\nthat new compiler version, 'make check' crashes on the older branch\ndue to a missing WhateverGetDatum() call that we rectified in a later,\nback-patched commit. I would consider it reasonable to back-patch that\nparticular bug fix into an unsupported branch to make it testable,\njust like we would do for a failure-to-build issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 11:33:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I guess the point about user-visible bug fixes is that, as soon as we\n> start doing that, we don't really want it to be hit-or-miss. We could\n> make a decision to back-patch all bug fixes or those of a certain\n> severity or whatever we like back to older branches, and then those\n> branches would be supported or semi-supported depending on what rule\n> we adopted, and we could even continue to do releases for them if we\n> so chose. 
However, it wouldn't be a great idea to back-patch a\n> completely arbitrary subset of our fixes into those branches, because\n> then it sort of gets confusing to understand what the status of that\n> branch is.\n\nYup, and also confusing to understand whether a given new fix should\nbe back-patched into the out-of-support-but-keep-buildable branches.\nI want to settle on a reasonably well-defined policy for that.\n\nI'm basically suggesting that the policy should be \"back-patch the\nminimal fix needed so that you can still get a clean build and clean\ncheck-world run, using thus-and-such configure options\". (The point\nof the configure options limitation being to exclude moving-target\nexternal dependencies, such as Python.) I think that Peter's\noriginal suggestion could be read the same way except for the\nadjective \"clean\". He also said that only core regression needs\nto pass not check-world; but if we're trying to test things like\npg_dump compatibility, I think we want the wider scope of what to\nkeep working.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 12:46:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\n\n> On Dec 7, 2021, at 8:33 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> However, it wouldn't be a great idea to back-patch a\n> completely arbitrary subset of our fixes into those branches, because\n> then it sort of gets confusing to understand what the status of that\n> branch is. 
I don't know that I'm terribly bothered by the idea that\n> the behavior of the branch might deviate from the last official\n> release, because most bug fixes are pretty minor and wouldn't really\n> affect testing much, but it would be a little annoying to explain to\n> users that those branches contain an arbitrary subset of newer fixes,\n> and a little hard for us to understand what is and is not there.\n\nWouldn't you be able to see what changed by comparing the last released tag for version X.Y against the RELX_Y_STABLE branch? Something like `git diff REL8_4_22 origin/REL8_4_STABLE > buildability.patch`?\n\nHaving such a patch should make reproducing old corruption bugs easier, as you could apply the buildability.patch to the last branch that contained the bug. If anybody did that work, would we want it committed somewhere? REL8_4_19_BUILDABLE or such? For patches that apply trivially, that might not be worth keeping, but if the merge is difficult, maybe sharing with the community would make sense.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:26:13 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Wouldn't you be able to see what changed by comparing the last released tag for version X.Y against the RELX_Y_STABLE branch? Something like `git diff REL8_4_22 origin/REL8_4_STABLE > buildability.patch`?\n\n> Having such a patch should make reproducing old corruption bugs easier, as you could apply the buildability.patch to the last branch that contained the bug. If anybody did that work, would we want it committed somewhere? REL8_4_19_BUILDABLE or such? 
For patches that apply trivially, that might not be worth keeping, but if the merge is difficult, maybe sharing with the community would make sense.\n\nI'm not entirely following ... are you suggesting that each released minor\nversion needs to be kept buildable separately? That seems like a huge\namount of extra committer effort with not much added value. If someone\ncomes to me and wants to investigate a bug in a branch that's already\nout-of-support, and they then say they're not running the last minor\nrelease, I'm going to tell them to come back after updating.\n\nIt is (I suspect) true that diffing the last release against branch\ntip would often yield a patch that could be used to make an older\nminor release buildable again. But when that patch doesn't work\ntrivially, I for one am not interested in making it work. And\nespecially not interested in doing so \"on spec\", with no certainty\nthat anyone would ever need it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 13:52:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\n\n> On Dec 7, 2021, at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I'm not entirely following ... are you suggesting that each released minor\n> version needs to be kept buildable separately?\n\nNo. I'm just wondering if we want to share the product of such efforts if anybody (me, for instance) volunteers to do it for some subset of minor releases. 
For my heap corruption checking work, I might want to be able to build a small number of old minor releases that I know had corruption bugs.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:59:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 12/7/21 13:59, Mark Dilger wrote:\n>> On Dec 7, 2021, at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> I'm not entirely following ... are you suggesting that each released minor\n>> version needs to be kept buildable separately?\n> No. I'm just wondering if we want to share the product of such efforts if anybody (me, for instance) volunteers to do it for some subset of minor releases. For my heap corruption checking work, I might want to be able to build a small number of old minor releases that I know had corruption bugs.\n>\n\nI doubt there's going to be a whole lot of changes. You should just be\nable to cherry-pick them in most cases I suspect.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 14:19:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On 06.12.21 22:19, Tom Lane wrote:\n> A minimal amount of maintenance would be \"only back-patch fixes\n> for issues that cause failure-to-build\". The next step up is \"fix\n> issues that cause failure-to-pass-regression-tests\", and then above\n> that is \"fix developer-facing annoyances, such as compiler warnings\n> or unwanted test output, as long as you aren't changing user-facing\n> behavior\". 
I now think that it'd be reasonable to include this\n> last group, although I'm pretty sure Peter didn't have that in mind\n> in his policy sketch.\n\nI would be okay with that.\n\n\n", "msg_date": "Thu, 9 Dec 2021 15:01:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "[ mostly for the archives' sake ]\n\nI wrote:\n> I ran a new set of experiments concerning building back branches\n> on modern platforms, this time trying Fedora 35 (gcc 11.2.1)\n> on x86_64. I widened the scope of the testing a bit by adding\n> \"--enable-nls --with-perl\" and running check-world not just the\n> core tests. Salient results:\n\n> * 9.1 fails with \"conflicting types for 'base_yylex'\", much as\n> I saw on macOS except it's a hard error on this compiler.\n\nI poked a little harder at what might be needed to get 9.1 compiled\non modern platforms. It looks like the base_yylex issue is down\nto newer versions of flex doing things differently. We fixed\nthat in the v10 era via 72b1e3a21 (Build backend/parser/scan.l and\ninterfaces/ecpg/preproc/pgc.l standalone) and 92fb64983 (Use \"%option\nprefix\" to set API names in ecpg's lexer), which were later back-patched\nas far down as 9.2. It might not be out of the question to back-patch\nthose further, but the 9.2 patches don't apply cleanly to 9.1, so some\neffort would be needed.\n\nWorrisomely, I also noted warnings like\n\nparse_coerce.c:791:67: warning: array subscript 1 is above array bounds of 'Oid[1]' {aka 'unsigned int[1]'} [-Warray-bounds]\n 791 | Assert(nargs < 2 || procstruct->proargtypes.values[1] == INT4OID);\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~\n\nwhich remind me that 9.1 lacks 8137f2c32 (Hide most variable-length fields\nfrom Form_pg_* structs). 
We did stick -fno-aggressive-loop-optimizations\ninto 9.1 and older branches back in 2015, but I don't have a lot of\nconfidence that that'd be sufficient to prevent misoptimizations in\ncurrent-vintage compilers. Back-patching 8137f2c32 and all the follow-on\nwork is very clearly not something to consider, so dialing down the -O\nlevel might be necessary if you were interested in making this go.\n\nIn short then, there is a really large gap between 9.1 and 9.2 in terms\nof how hard they are to build on current toolchains. It's kind of\nfortunate that Peter proposed 9.2 rather than some earlier cutoff.\nIn any case, I've completely lost interest in trying to move the\nkeep-it-buildable cutoff to any earlier than 9.2; it doesn't look\nlike the effort-to-benefit ratio would be attractive at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Dec 2021 12:50:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "\nOn 12/9/21 12:50, Tom Lane wrote:\n>\n> In short then, there is a really large gap between 9.1 and 9.2 in terms\n> of how hard they are to build on current toolchains. It's kind of\n> fortunate that Peter proposed 9.2 rather than some earlier cutoff.\n> In any case, I've completely lost interest in trying to move the\n> keep-it-buildable cutoff to any earlier than 9.2; it doesn't look\n> like the effort-to-benefit ratio would be attractive at all.\n>\n> \t\t\t\n\n\n9.2 is how far back crake goes in testing pg_upgrade from old versions,\nso that could well be a convenient stopping point. For older versions\nthere is still the possibility of building on older toolchains and\nrunning on modern ones. Yes it's more cumbersome, but it does mean we\ncan test an awful long way back. 
I don't remember the last time I saw a\nreally old version in the wild, but I'm sure there are some out there\nsitting in a cupboard humming along.\n\nThis might also be a good time to revive work on making the TAP test\nframework backwards compatible via subclassing.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 11 Dec 2021 10:17:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> 9.2 is how far back crake goes in testing pg_ugrade from old versions,\n> so that could well be a convenient stopping point. For older versions\n> there is still the possibility of building on older toolchains and\n> running on modern ones. Yes it's more cumbersome, but it does mean we\n> can test an awful long way back.\n\nRight. I think the point of the current discussion is to ensure that,\nif we expect new patches for pg_dump or psql to work against version-N\nservers, that it's not too unpleasant for patch submitters to build\nand test against version N. There's a different discussion to be had\nabout what we do if we receive a bug report about compatibility with\nsome more-ancient-than-that version. But that is, I hope, a far less\ncommon scenario; so it's okay if it requires extra effort, and/or use\nof setups that not everyone has handy.\n\nAnyway, it seems like there's some consensus that 9.2 is a good\nstopping place for today. 
I'll push forward with\n(1) back-patching as necessary to make 9.2 and up build cleanly\non the platforms I have handy;\n(2) ripping out pg_dump's support for pre-9.2 servers;\n(3) ripping out psql's support for pre-9.2 servers.\n\nIn a preliminary look, it did not seem that (3) would save very\nmuch code, but it seems like we ought to do it if we're being\nconsistent.\n\nA point we've not discussed is whether to drop any bits of libpq\nthat are only needed for such old servers. I feel a bit more\nuncomfortable about that, mainly because I'm pretty sure that\nonly a few lines of code would be involved, and it seems to have\nmore of an air of burning-the-bridges finality about it than (say)\ndropping psql/describe.c support. On the other hand, the point\nabout what's required to test future patches still applies.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Dec 2021 01:24:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "I wrote:\n> Anyway, it seems like there's some consensus that 9.2 is a good\n> stopping place for today. I'll push forward with\n> (1) back-patching as necessary to make 9.2 and up build cleanly\n> on the platforms I have handy;\n\nI've done as much as I plan to do in that direction. As of the\nrespective branch tips, I see clean builds and check-world\nresults with minimal configure options in all branches back to 9.2\non Fedora 35 (gcc 11.2.1) and macOS Monterey (Apple clang 13.0.0).\n\nA few notes for the archives' sake:\n\n* As discussed, parallel check-world is unreliable before v10;\nperhaps this is worth improving, but I doubt it. 
I did find\nthat aggressive parallelism in the build process is fine.\n\n* On some compilers, pre-v10 branches produce this warning:\n\nscan.c: In function 'yy_try_NUL_trans':\nscan.c:10189:23: warning: unused variable 'yyg' [-Wunused-variable]\n struct yyguts_t * yyg = (struct yyguts_t*)yyscanner; /* This var may be unused depending upon options. */\n\nIn principle we could back-patch 65d508fd4 to silence that,\nbut I think that fix is more invasive than what we want to\ndo in these branches. We lived with that warning for years\nbefore figuring out how to get rid of it, so I think we can\ncontinue to accept it in these branches.\n\n* 9.2's plperl fails to compile on macOS:\n\n./plperl.h:53:10: fatal error: 'EXTERN.h' file not found\n#include \"EXTERN.h\"\n\nThis is evidently because 9.2 predates the \"sysroot\" hacking\nwe did later (5e2217131 and many many subsequent tweaks).\nI judge this not worth the trouble to fix, because the argument\nfor supporting --with-perl in these branches is basically that\nwe need a PL with transforms to test pg_dump ... but transforms\ndidn't come in until 9.5. (Reviewing the commit log, I suppose\nthat 9.3 and 9.4 would also fail to build in some macOS\nconfigurations, but by the same argument I see no need to work\nfurther on those branches either. 9.5 does have all the\nsysroot changes I can find in the log.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Dec 2021 12:23:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Mon, Dec 13, 2021 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've done as much as I plan to do in that direction. As of the\n> respective branch tips, I see clean builds and check-world\n> results with minimal configure options in all branches back to 9.2\n> on Fedora 35 (gcc 11.2.1) and macOS Monterey (Apple clang 13.0.0).\n\nI think this is great. 
Thanks for being willing to work on it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 12:33:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "I wrote:\n> Anyway, it seems like there's some consensus that 9.2 is a good\n> stopping place for today. I'll push forward with\n> (1) back-patching as necessary to make 9.2 and up build cleanly\n> on the platforms I have handy;\n> (2) ripping out pg_dump's support for pre-9.2 servers;\n> (3) ripping out psql's support for pre-9.2 servers.\n\nI've completed the pg_dump/pg_dumpall part of that, but while\nupdating the docs I started to wonder whether we shouldn't nuke\npg_dump's --no-synchronized-snapshots option. As far as I can\nmake out, the remaining use case for that is to let you perform an\nunsafe parallel dump from a standby server of an out-of-support\nmajor version. I'm not very clear why we allowed that at all,\never, rather than saying you can't parallelize in such cases.\nBut for sure that remaining use case is paper thin, and leaving\nthe option available seems way more likely to let people shoot\nthemselves in the foot than to let them do anything helpful.\n\nBarring objections, I'll remove that option in a day or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 17:18:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Tue, Dec 14, 2021 at 05:18:44PM -0500, Tom Lane wrote:\n> I wrote:\n> > Anyway, it seems like there's some consensus that 9.2 is a good\n> > stopping place for today. 
I'll push forward with\n> > (1) back-patching as necessary to make 9.2 and up build cleanly\n> > on the platforms I have handy;\n> > (2) ripping out pg_dump's support for pre-9.2 servers;\n> > (3) ripping out psql's support for pre-9.2 servers.\n> \n> I've completed the pg_dump/pg_dumpall part of that, but while\n\nIs it possible to clean up pg_upgrade, too ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Dec 2021 22:08:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Dec 14, 2021 at 05:18:44PM -0500, Tom Lane wrote:\n>> I've completed the pg_dump/pg_dumpall part of that, but while\n\n> Is it possible to clean up pg_upgrade, too ?\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e469f0aaf3c586c8390bd65923f97d4b1683cd9f\n\nI'm still working on psql.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Dec 2021 23:55:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Wed, Dec 15, 2021 at 10:08:07PM -0600, Justin Pryzby wrote:\n> Is it possible to clean up pg_upgrade, too ?\n\nNevermind - I found yesterday's e469f0aaf3 after git-fetch.\n\nI think you missed a few parts though ?\n\nsrc/bin/pg_upgrade/function.c\n if (GET_MAJOR_VERSION(old_cluster.major_version) <= 900)\n...\n if (GET_MAJOR_VERSION(old_cluster.major_version) <= 900 &&\n strcmp(lib, \"$libdir/plpython\") == 0)\n\nsrc/bin/pg_upgrade/option.c\n * Someday, the port number option could be removed and passed\n * using -o/-O, but that requires postmaster -C to be\n * supported on all old/new versions (added in PG 9.2).\n...\n if (GET_MAJOR_VERSION(cluster->major_version) >= 901)\n\n\n", "msg_date": "Wed, 15 Dec 2021 22:58:04 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, 
"msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I think you missed a few parts though ?\n\nUm. I think those are leftover from when I was intending the\ncutoff to be 9.0 not 9.2. I'll take a fresh look tomorrow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 00:02:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus ancient server versions" }, { "msg_contents": "On Fri, 22 Oct 2021 at 19:27, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Another thing to think about in that regard: how likely is it that\n> PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same\n> operating system? I suspect the answer is \"not very.\" I seem to recall\n> Greg Stark trying to compile really old versions of PostgreSQL for a\n> conference talk some years ago, and he got back to a point where it\n> just became impossible to make work on modern toolchains even with a\n> decent amount of hackery.\n\nThat was when I compared sorting performance over time. I was able to\nget Postgres to build back to the point where 64-bit architecture\nsupport was added. From Andrew Dunstan's comment later in this thread\nI'm guessing that was 7.2.\n\nIt looks like the earliest date on the graphs in the talk is\n2002-11-27 which matches the 7.3 release date. I think building\nearlier versions would have been doable if I had built them in 32-bit\nmode.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 17 Dec 2021 03:06:50 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus ancient server versions" } ]
[ { "msg_contents": "Simple patch to implement $SUBJECT attached.\n\npg_signal_backend seems like the appropriate predefined role, because\npg_log_backend_memory_contexts() is implemented by sending a signal.\n\nRegards,\n\tJeff Davis", "msg_date": "Sat, 23 Oct 2021 12:57:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On 10/23/21, 12:57 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> pg_signal_backend seems like the appropriate predefined role, because\r\n> pg_log_backend_memory_contexts() is implemented by sending a signal.\r\n\r\nThis seems reasonable to me. The stated reason in the original commit\r\nmessage for keeping it restricted to superusers is because of the\r\ndenial-of-service risk, but if you've got pg_signal_backend, you can\r\nalready terminate sessions. The predefined roles documentation notes\r\nthat members of pg_signal_backend cannot signal superuser-owned\r\nbackends, but AFAICT pg_log_backend_memory_contexts() has no such\r\nrestriction at the moment. Should we add this?\r\n\r\nOtherwise, presumably we will need to update func.sgml and the comment\r\nabove pg_log_backend_memory_contexts() in mcxtfuncs.c.\r\n\r\nThis is unrelated to this patch, but should we also consider opening\r\nup pg_reload_conf() and pg_rotate_logfile() to members of\r\npg_signal_backend? Those are the other \"server signaling functions\" I\r\nsee in the docs.\r\n\r\nNathan\r\n\r\n", "msg_date": "Sat, 23 Oct 2021 20:42:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()."
}, { "msg_contents": "On Sat, Oct 23, 2021 at 08:42:29PM +0000, Bossart, Nathan wrote:\n> Otherwise, presumably we will need to update func.sgml and the comment\n> above pg_log_backend_memory_contexts() in mcxtfuncs.c.\n\nYes, the documentation of any SQL function whose hardcoded superuser()\ncheck is removed needs a refresh to outline that its execution can be\nGRANT-ed post-initialization, and it should also document which system\nroles are able to use it. See for instance pg_database_size(), that\nmentions roles need to be a member of pg_read_all_stats.\n\n> This is unrelated to this patch, but should we also consider opening\n> up pg_reload_conf() and pg_rotate_logfile() to members of\n> pg_signal_backend? Those are the other \"server signaling functions\" I\n> see in the docs.\n\nYes, there is that as well.\n\n+CREATE ROLE testrole1 IN ROLE pg_signal_backend;\n+CREATE ROLE testrole2;\nAny role created in the regression test needs to be prefixed with\n\"regress_\", or builds with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\nwill complain (I just add that by default to not fall into this trap\nagain).\n--\nMichael", "msg_date": "Sun, 24 Oct 2021 08:41:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Sun, Oct 24, 2021 at 1:27 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n>\n> Simple patch to implement $SUBJECT attached.\n>\n> pg_signal_backend seems like the appropriate predefined role, because\n> pg_log_backend_memory_contexts() is implemented by a sending signal.\n\n+1.\n\nIt looks like we are better off with removing explicit superuser()\nchecks from the functions and using normal GRANT based system, see\nothers agreeing on this at [1]. As we have lots of functions that are\ndoing explicit superuser() checks, I'm sure someday they all will have\nto be moved to GRANT system. 
The current code is a mix - some\nfunctions do explicit checks (I've seen many of them with the comment\nat [2]) and others do it via GRANT system. I'm not saying that we\nshould be dealing with those here in this thread, all I'm looking for\nis that we have a note of it in the postgres todo list in the wiki so\nthat someone interested can pick that work up. Thoughts?\n\nComments on the patch:\n1) testrole1 and testrole2 look generic, how about\nregress_mcxt_role1/2? There's no problem as they are\nmisc_functions.sql local, but still role names can be more readable.\n+CREATE ROLE testrole1 IN ROLE pg_signal_backend;\n+CREATE ROLE testrole2;\n2) It seems like the privileges.sql is the right place to place the\ntest cases, but I'm fine with keeping all the test cases of the\nfunction together.\n3) It might be enough to do has_function_privilege, just a thought -\nisn't it better if we execute the function with the test roles set in.\nThis will help us catch the permission denied error message in the\ntest output files.\n4) Isn't the +#define CATALOG_VERSION_NO 202110230 going to be set to\nthe date on which the patch gets committed?\n5) The following change is being handled in the patch at [3], I know\nit is appropriate to have it in this patch, but please mention it in\nthe commit message on why we do this change. 
I will remove this change\nfrom my patch at [3].\n-SELECT * FROM pg_log_backend_memory_contexts(pg_backend_pid());\n+SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n\n[1] - https://www.postgresql.org/message-id/CAOuzzgpp0dmOFjWC4JDvk57ZQGm8umCrFdR1at4b80xuF0XChw%40mail.gmail.com\n[2] -\n * Permission checking for this function is managed through the normal\n * GRANT system.\n */\n[3] - https://www.postgresql.org/message-id/CALj2ACVXk1roswqFpiCOMHrsB%2BxxW7HG536krGAzF%3DmWXh3eWQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 24 Oct 2021 19:58:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Sat, 2021-10-23 at 20:42 +0000, Bossart, Nathan wrote:\n> The predefined roles documentation notes\n> that members of pg_signal_backend cannot signal superuser-owned\n> backends, but AFAICT pg_log_backend_memory_contexts() has no such\n> restriction at the moment. Should we add this?\n\nAdded, good catch.\n\n> This is unrelated to this patch, but should we also consider opening\n> up pg_reload_conf() and pg_rotate_logfile() to members of\n> pg_signal_backend? Those are the other \"server signaling functions\"\n> I\n> see in the docs.\n\nThose are actually signalling the postmaster, not an ordinary backend.\nAlso, those functions are already GRANTable, so I think we should leave\nthem as-is.\n\nRegards,\n\tJeff Davis", "msg_date": "Sun, 24 Oct 2021 09:50:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." 
}, { "msg_contents": "On Sun, 2021-10-24 at 19:58 +0530, Bharath Rupireddy wrote:\n> It looks like we are better off with removing explicit superuser()\n> checks from the functions and using normal GRANT based system, see\n> others agreeing on this at [1]. As we have lots of functions that are\n> doing explicit superuser() checks, I'm sure someday they all will\n> have\n> to be moved to GRANT system.\n\nNote that some functions have additional checks that can't be expressed\nwith GRANT -- see pg_cancel_backend(), for example. But I agree in\ngeneral that GRANT is the way to go most of the time.\n\n> The current code is a mix - some\n> functions do explicit checks (I've seen many of them with the comment\n> at [2]) and others do it via GRANT system. I'm not saying that we\n> should be dealing with those here in this thread, all I'm looking for\n> is that we have a note of it in the postgres todo list in the wiki so\n> that someone interested can pick that work up. Thoughts?\n\nIt seems like there's agreement on the direction, but I don't know that\nthere's a good place to write it down. Probably better to just fix as\nmany of the functions as we can, and then when people add new ones,\nthey'll copy the GRANT pattern rather than the explicit superuser\ncheck.\n\n> Comments on the patch:\n> 1) testrole1 and testrole2 look generic, how about\n\nMichael had a similar comment. Renamed, thank you.\n\n> 2) It seems like the privileges.sql is the right place to place the\n> test cases, but I'm fine with keeping all the test cases of the\n> function together.\n\nIf we add all the function privilege checks there, I think it will\noverwhelm the other interesting tests happening in that file.\n\n> 3) It might be enough to do has_function_privilege, just a thought -\n> isn't it better if we execute the function with the test roles set\n> in.\n> This will help us catch the permission denied error message in the\n> test output files.\n\nMissed this comment. 
I'll tweak this before commit.\n\n> 4) Isn't the +#define CATALOG_VERSION_NO 202110230 going to be set to\n> the date on which the patch gets committed?\n\nI just put it in there so that I wouldn't forget, but I'll update it at\ncommit time.\n\n> 5) The following change is being handled in the patch at [3], I know\n> it is appropriate to have it in this patch, but please mention it in\n> the commit message on why we do this change. I will remove this\n> change\n> from my patch at [3].\n> -SELECT * FROM pg_log_backend_memory_contexts(pg_backend_pid());\n> +SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n\nWhat would you like me to mention?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 24 Oct 2021 10:04:48 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On 10/24/21, 9:51 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Sat, 2021-10-23 at 20:42 +0000, Bossart, Nathan wrote:\r\n>> The predefined roles documentation notes\r\n>> that members of pg_signal_backend cannot signal superuser-owned\r\n>> backends, but AFAICT pg_log_backend_memory_contexts() has no such\r\n>> restriction at the moment. Should we add this?\r\n>\r\n> Added, good catch.\r\n\r\nThe new patch looks good to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Sun, 24 Oct 2021 20:59:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "At Sun, 24 Oct 2021 09:50:58 -0700, Jeff Davis <pgsql@j-davis.com> wrote in \n> On Sat, 2021-10-23 at 20:42 +0000, Bossart, Nathan wrote:\n> > The predefined roles documentation notes\n> > that members of pg_signal_backend cannot signal superuser-owned\n> > backends, but AFAICT pg_log_backend_memory_contexts() has no such\n> > restriction at the moment. 
Should we add this?\n> \n> Added, good catch.\n> \n> > This is unrelated to this patch, but should we also consider opening\n> > up pg_reload_conf() and pg_rotate_logfile() to members of\n> > pg_signal_backend? Those are the other \"server signaling functions\"\n> > I\n> > see in the docs.\n> \n> Those are actually signalling the postmaster, not an ordinary backend.\n> Also, those functions are already GRANTable, so I think we should leave\n> them as-is.\n\nI'm afraid that it might be wrong that all backend-signalling features\nare allowed by that privilege. pg_signal_backend is described in\nthe doc as:\n\nhttps://www.postgresql.org/docs/devel/predefined-roles.html\n\n> Signal another backend to cancel a query or terminate its session.\n\nHere, the term \"signal\" seems to mean interrupting something on\nthat session or the session itself. In addition to that, I don't think\n\"terminate a session or the query on a session\", \"log something on\nanother session\", and \"rotate log file\" fall into the same\ncategory in terms of severity.\n\nIn other words, I don't think pg_signal_backend is meant to\ncontrol \"log something on another session\" or \"rotate log file\". It's\na danger that if we allow someone to rotate log files, that also means\nallowing the same user to terminate another session.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:53:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Mon, 2021-10-25 at 11:53 +0900, Kyotaro Horiguchi wrote:\n> In other words, I don't think pg_signal_backend is meant to\n> control \"log something on another session\" or \"rotate log file\". 
\n> It's\n> a danger that if we allow someone to rotate log files, that also means\n> allowing the same user to terminate another session.\n\nThe current patch doesn't allow members of pg_signal_backend to rotate\nthe log file.\n\nDo you think pg_signal_backend is the wrong group to allow usage of\npg_log_backend_memory_contexts()? Alternatively, it could simply not\nGRANT anything, and leave that up to the administrator to choose who\ncan use it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 24 Oct 2021 20:31:37 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "At Sun, 24 Oct 2021 20:31:37 -0700, Jeff Davis <pgsql@j-davis.com> wrote in \n> On Mon, 2021-10-25 at 11:53 +0900, Kyotaro Horiguchi wrote:\n> > In other words, I don't think pg_signal_backend is meant to\n> > control \"log something on another session\" or \"rotate log file\". \n> > It's\n> > a danger that if we allow someone to rotate log files, that also means\n> > allowing the same user to terminate another session.\n> \n> The current patch doesn't allow members of pg_signal_backend to rotate\n> the log file.\n\nAh, sorry, I might have confused this with some other discussion.\n\n> Do you think pg_signal_backend is the wrong group to allow usage of\n> pg_log_backend_memory_contexts()? Alternatively, it could simply not\n\nYes. I think it would be a danger that whoever is allowed to dump memory\ncontexts into log files by being granted pg_signal_backend can also\nterminate other backends.\n\n> pg_log_backend_memory_contexts()? Alternatively, it could simply not\n> GRANT anything, and leave that up to the administrator to choose who\n> can use it.\n\n*I* prefer that. 
I'm not sure I'm the only one to think so, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:13:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Mon, Oct 25, 2021 at 9:43 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Do you think pg_signal_backend is the wrong group to allow usage of\n> > pg_log_backend_memory_contexts()? Alternatively, it could simply not\n>\n> Yes. I think it would be danger that who is allowed to dump memory\n> context into log files by granting pg_signal_backend also can\n> terminate other backends.\n>\n> > pg_log_backend_memory_contexts()? Alternatively, it could simply not\n> > GRANT anything, and leave that up to the administrator to choose who\n> > can use it.\n>\n> *I* prefer that. I'm not sure I'm the only one to think so, though..\n\nHow about we have a separate predefined role for the functions that\ndeal with server logs? I'm not sure if Mark Dilger's patch on new\npredefined roles has one, if not, how about something like\npg_write_server_log/pg_manage_server_log/some other name?\n\nIf not with a new predefined role, how about expanding the scope of\nexisting pg_write_server_files role?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 09:57:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Sun, Oct 24, 2021 at 08:31:37PM -0700, Jeff Davis wrote:\n> The current patch doesn't allow members of pg_signal_backend to rotate\n> the log file.\n> \n> Do you think pg_signal_backend is the wrong group to allow usage of\n> pg_log_backend_memory_contexts()? 
Alternatively, it could simply not\n> GRANT anything, and leave that up to the administrator to choose who\n> can use it.\n\nHmm. Why don't you split the patch into two parts that can be\ndiscussed separately then? There would be one to remove all the\nsuperuser() checks you can think of, and a potential second to grant \nthose function's execution to some system role.\n\nFWIW, if the barrier between a role and a function is thin, perhaps\nwe'd better just limit ourselves to the removal of the superuser()\nchecks for now rather than trying to plug more groups into the\nfunctions. When I have dealt with such issues in the past, I tend to\njust do the superuser()/REVOKE part without more GRANTs or even more\nsystem roles, as this is enough to give room to users to do what they\nwant with their clusters. And this is a no-brainer.\n--\nMichael", "msg_date": "Mon, 25 Oct 2021 16:10:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Sun, Oct 24, 2021 at 10:34 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > 5) The following change is being handled in the patch at [3], I know\n> > it is appropriate to have it in this patch, but please mention it in\n> > the commit message on why we do this change. 
I will remove this\n> > change\n> > from my patch at [3].\n> > -SELECT * FROM pg_log_backend_memory_contexts(pg_backend_pid());\n> > +SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n>\n> What would you like me to mention?\n\nSomething like below in the commit message would be good:\n\"While on this, change the way the tests use pg_log_backend_memory_contexts()\nUsually for functions, we don't use \"SELECT-FROM-<<function>>\",\nwe just use \"SELECT-<<function>>\".\"\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 14:44:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Mon, Oct 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Oct 24, 2021 at 08:31:37PM -0700, Jeff Davis wrote:\n> > The current patch doesn't allow members of pg_signal_backend to rotate\n> > the log file.\n> >\n> > Do you think pg_signal_backend is the wrong group to allow usage of\n> > pg_log_backend_memory_contexts()? Alternatively, it could simply not\n> > GRANT anything, and leave that up to the administrator to choose who\n> > can use it.\n>\n> Hmm. Why don't you split the patch into two parts that can be\n> discussed separately then? There would be one to remove all the\n> superuser() checks you can think of, and a potential second to grant\n> those function's execution to some system role.\n\nIMO, in this thread we can focus on remvong the\npg_log_backend_memory_contexts()'s superuser() check and +1 to start a\nseparate thread to remove superuser() checks for the other functions\nand REVOKE the permissions in appropriate places, for system functins\nsystem_functions.sql files, for extension functions, the extension\ninstallation .sql files. 
See [1] and [2].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACUhCFSUQmZhiQ%2Bw1kZdJGmhNP2cd1LZS4GVGowyjiqftQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAOuzzgpp0dmOFjWC4JDvk57ZQGm8umCrFdR1at4b80xuF0XChw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 25 Oct 2021 14:49:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On 10/25/21, 2:21 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Mon, Oct 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\r\n>> Hmm. Why don't you split the patch into two parts that can be\r\n>> discussed separately then? There would be one to remove all the\r\n>> superuser() checks you can think of, and a potential second to grant\r\n>> those function's execution to some system role.\r\n>\r\n> IMO, in this thread we can focus on remvong the\r\n> pg_log_backend_memory_contexts()'s superuser() check and +1 to start a\r\n> separate thread to remove superuser() checks for the other functions\r\n> and REVOKE the permissions in appropriate places, for system functins\r\n> system_functions.sql files, for extension functions, the extension\r\n> installation .sql files. See [1] and [2].\r\n\r\nI like the general idea of removing hard-coded superuser checks first\r\nand granting execution to predefined roles second. I don't have any\r\nstrong opinion about what should be done in this thread and what\r\nshould be done elsewhere.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 16:15:20 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." 
}, { "msg_contents": "Hi,\n\nOn 2021-10-23 12:57:02 -0700, Jeff Davis wrote:\n> Simple patch to implement $SUBJECT attached.\n> \n> pg_signal_backend seems like the appropriate predefined role, because\n> pg_log_backend_memory_contexts() is implemented by a sending signal.\n\nI like the idea of making pg_log_backend_memory_contexts() more widely\navailable. But I think tying it to pg_signal_backend isn't great. It's\nunnecessarily baking in an implementation detail.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:29:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Mon, 2021-10-25 at 16:10 +0900, Michael Paquier wrote:\n> Hmm. Why don't you split the patch into two parts that can be\n> discussed separately then? There would be one to remove all the\n> superuser() checks you can think of, and a potential second to grant \n> those function's execution to some system role.\n\nGood idea. Attached a patch to remove the superuser check on\npg_log_backend_memory_contexts(), except in the case when trying to log\nmemory contexts of a superuser backend.\n\nUsing pg_signal_backend does not seem to be universally acceptable, so\nI'll just drop the idea of granting to that predefined role.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 25 Oct 2021 13:42:07 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On 10/25/21, 1:43 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Mon, 2021-10-25 at 16:10 +0900, Michael Paquier wrote:\r\n>> Hmm. Why don't you split the patch into two parts that can be\r\n>> discussed separately then? 
There would be one to remove all the\r\n>> superuser() checks you can think of, and a potential second to grant\r\n>> those function's execution to some system role.\r\n>\r\n> Good idea. Attached a patch to remove the superuser check on\r\n> pg_log_backend_memory_contexts(), except in the case when trying to log\r\n> memory contexts of a superuser backend.\r\n\r\nLGTM.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 21:26:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "Hi,\n\nOn 2021-10-25 13:42:07 -0700, Jeff Davis wrote:\n> Good idea. Attached a patch to remove the superuser check on\n> pg_log_backend_memory_contexts(), except in the case when trying to log\n> memory contexts of a superuser backend.\n\nI don't get the reasoning behind the \"except ...\" logic. What does this\nactually protect against? A reasonable use case for this feature is to\nmonitor memory usage of all backends, and this restriction practically requires\none to still use a security definer function.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 14:30:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Mon, 2021-10-25 at 14:30 -0700, Andres Freund wrote:\n> I don't get the reasoning behind the \"except ...\" logic. What does\n> this\n> actually protect against? A reasonable use case for this feature\n> is to\n> monitor memory usage of all backends, and this restriction practically\n> requires one to still use a security definer function.\n\nNathan brought it up -- more as a question than a request, so perhaps\nit's not necessary. 
I don't have a strong opinion about it, but I\nincluded it to be conservative (easier to relax a privilege than to\ntighten one).\n\nI can cut out the in-function check entirely if there's no objection.\n\nRegards,\n\tJeff Davis\n\n[1] https://postgr.es/m/33F34F0C-BB16-48DE-B125-95D340A54AE8@amazon.com\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:28:13 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On 10/25/21, 4:29 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Mon, 2021-10-25 at 14:30 -0700, Andres Freund wrote:\r\n>> I don't get the reasoning behind the \"except ...\" logic. What does\r\n>> this\r\n>> actually protect against? A reasonable use case for this feature is\r\n>> to monitor memory usage of all backends, and this restriction practically\r\n>> requires one to still use a security definer function.\r\n>\r\n> Nathan brought it up -- more as a question than a request, so perhaps\r\n> it's not necessary. I don't have a strong opinion about it, but I\r\n> included it to be conservative (easier to relax a privilege than to\r\n> tighten one).\r\n\r\nI asked about it since we were going to grant execution to\r\npg_signal_backend, which (per the docs) shouldn't be able to signal a\r\nsuperuser-owned backend. I don't mind removing this now that the\r\npg_signal_backend part is removed.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 23:58:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "\n\nOn 2021/10/26 5:42, Jeff Davis wrote:\n> On Mon, 2021-10-25 at 16:10 +0900, Michael Paquier wrote:\n>> Hmm. Why don't you split the patch into two parts that can be\n>> discussed separately then? 
There would be one to remove all the\n>> superuser() checks you can think of, and a potential second to grant\n>> those function's execution to some system role.\n> \n> Good idea. Attached a patch to remove the superuser check on\n> pg_log_backend_memory_contexts(), except in the case when trying to log\n> memory contexts of a superuser backend.\n\n- Only superusers can request to log the memory contexts.\n+ Only superusers can request to log the memory contexts of superuser\n+ backends.\n\nThe description \"This function is restricted to superusers by default,\nbut other users can be granted EXECUTE to run the function.\"\nshould be added into the docs?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Oct 2021 23:32:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." }, { "msg_contents": "On Tue, 2021-10-26 at 23:32 +0900, Fujii Masao wrote:\n> The description \"This function is restricted to superusers by\n> default,\n> but other users can be granted EXECUTE to run the function.\"\n> should be added into the docs?\n\nA similar statement already exists right above the table of functions:\n\n\"\nUse of these functions is restricted to superusers by default but\naccess may be granted to others using GRANT, with noted exceptions.\"\n\n\nCommitted the version that merely removes the superuser check, and\nrevokes from public. Privilege can be granted to non-superusers if\ndesired.\n\nThanks everyone for looking.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:04:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_signal_backend members to use\n pg_log_backend_memory_stats()." } ]
[ { "msg_contents": "Add new predefined role pg_maintenance, which can issue VACUUM,\nANALYZE, CHECKPOINT.\n\nPatch attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Sat, 23 Oct 2021 14:45:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Sun, Oct 24, 2021 at 3:15 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Add new predefined role pg_maintenance, which can issue VACUUM,\n> ANALYZE, CHECKPOINT.\n>\n> Patch attached.\n\nAt this point, the idea of having a new role for maintenance work\nlooks good. With this patch and Mark Dilger's patch introducing a\nbunch of new predefined roles, one concern is that we might reach a\nstate where we will have patches being proposed for new predefined\nroles for every database activity and the superuser eventually will\nhave nothing to do in the database, it just becomes a dummy?\n\nI'm not sure if Mark Dilger's patch on new predefined roles has a\nsuitable/same role that we can use here.\n\nAre there any other database activities that fall under the\n\"maintenance\" category? How about CLUSTER, REINDEX? I didn't check the\ncode for their permissions.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 24 Oct 2021 20:19:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Sun, Oct 24, 2021 at 7:49 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Sun, Oct 24, 2021 at 3:15 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > Add new predefined role pg_maintenance, which can issue VACUUM,\n> > ANALYZE, CHECKPOINT.\n>\n>\n> Are there any other database activities that fall under the\n> \"maintenance\" category? How about CLUSTER, REINDEX? 
I didn't check the\n> code for their permissions.\n>\n>\nI would not lump the I/O intensive cluster and reindexing commands, and\nvacuum full, into the same permission bucket as vacuum and analyze.\nCheckpoint fits in the middle of that continuum. However, given that both\nvacuum and analyze are run to ensure good planner statistics during normal\nusage of the database, while the others, including checkpoint, either are\nnon-normal usage or don't influence the planner, I would shift checkpoint\nto the same permission that covers cluster and reindex.\n\nDavid J.", "msg_date": "Sun, 24 Oct 2021 09:47:37 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Sun, 2021-10-24 at 20:19 +0530, Bharath Rupireddy wrote:\n> At this point, the idea of having a new role for maintenance work\n> looks good. 
With this patch and Mark Dilger's patch introducing a\n> bunch of new predefined roles, one concern is that we might reach\n> a\n> state where we will have patches being proposed for new predefined\n> roles for every database activity and the superuser eventually will\n> have nothing to do in the database, it just becomes a dummy?\n\nThe idea is that, in different environments, the notion of an\n\"administrator\" should have different capabilities and different risks.\nBy making the privileges more fine-grained, we enable those different\nuse cases.\n\nI don't see it as necessarily a problem if superuser doesn't have much\nleft to do.\n\n> I'm not sure if Mark Dilger's patch on new predefined roles has a\n> suitable/same role that we can use here.\n\nI didn't see one. I think one of the most common reasons to do manual\ncheckpoints and vacuums is for performance testing, so another\npotential name might be \"pg_performance\". But \"pg_maintenance\" seemed a\nslightly better fit.\n\n> Are there any other database activities that fall under the\n> \"maintenance\" category? How about CLUSTER, REINDEX? I didn't check\n> the\n> code for their permissions.\n\nI looked around and didn't see much else to fit into this category.\nCLUSTER and REINDEX are a little too specific for a generic maintenance\noperation -- it's unlikely that you'd want to perform those expensive\noperations just to tidy up. But if you think something else should fit,\nlet me know.\n\nThank you,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 24 Oct 2021 10:19:51 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 10/24/21, 10:20 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Sun, 2021-10-24 at 20:19 +0530, Bharath Rupireddy wrote:\r\n>> Are there any other database activities that fall under the\r\n>> \"maintenance\" category? How about CLUSTER, REINDEX? 
I didn't check\r\n>> the\r\n>> code for their permissions.\r\n>\r\n> I looked around and didn't see much else to fit into this category.\r\n> CLUSTER and REINDEX are a little too specific for a generic maintenance\r\n> operation -- it's unlikely that you'd want to perform those expensive\r\n> operations just to tidy up. But if you think something else should fit,\r\n> let me know.\r\n\r\nMy initial reaction was that members of pg_maintenance should be able\r\nto do all of these things (VACUUM, ANALYZE, CLUSTER, REINDEX, and\r\nCHECKPOINT). It's true that some of these are more expensive or\r\ndisruptive than others, but how else could we divvy it up? Maybe one\r\noption is to have two separate roles, one for commands that require\r\nlower lock levels (i.e., ANALYZE and VACUUM without TRUNCATE and\r\nFULL), and another for all of the maintenance commands.\r\n\r\nNathan\r\n\r\n", "msg_date": "Sun, 24 Oct 2021 21:32:27 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Sun, 2021-10-24 at 21:32 +0000, Bossart, Nathan wrote:\n> My initial reaction was that members of pg_maintenance should be able\n> to do all of these things (VACUUM, ANALYZE, CLUSTER, REINDEX, and\n> CHECKPOINT).\n\nWhat about REFRESH MATERIALIZED VIEW? 
That seems more specific to a\nworkload, but it's hard to draw a clear line between that and CLUSTER.\n\n> Maybe one\n> option is to have two separate roles, one for commands that require\n> lower lock levels (i.e., ANALYZE and VACUUM without TRUNCATE and\n> FULL), and another for all of the maintenance commands.\n\nMy main motivation is CHECKPOINT and database-wide VACUUM and ANALYZE.\nI'm fine extending it if others think it would be worthwhile, but it\ngoes beyond my use case.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 24 Oct 2021 23:12:30 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 10/24/21, 11:13 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Sun, 2021-10-24 at 21:32 +0000, Bossart, Nathan wrote:\r\n>> My initial reaction was that members of pg_maintenance should be able\r\n>> to do all of these things (VACUUM, ANALYZE, CLUSTER, REINDEX, and\r\n>> CHECKPOINT).\r\n>\r\n> What about REFRESH MATERIALIZED VIEW? That seems more specific to a\r\n> workload, but it's hard to draw a clear line between that and CLUSTER.\r\n\r\nHm. CLUSTER reorders the content of a table but does not change it.\r\nREFRESH MATERIALIZED VIEW, on the other hand, does replace the\r\ncontent. I think that's the sort of line I'd draw between REFRESH\r\nMATERIALIZED VIEW and the other commands as well, so I'd leave it out\r\nof pg_maintenance.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 16:10:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Sun, Oct 24, 2021 at 3:15 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Add new predefined role pg_maintenance, which can issue VACUUM,\n> > ANALYZE, CHECKPOINT.\n> >\n> > Patch attached.\n> \n> At this point, the idea of having a new role for maintenance work\n> looks good. With this patch and Mark Dilger's patch introducing a\n> bunch of new predefined roles, one concern is that we might reach a\n> state where we will have patches being proposed for new predefined\n> roles for every database activity and the superuser eventually will\n> have nothing to do in the database, it just becomes a dummy?\n\nIndependent of other things, getting to the point where everything can\nbe done in the database without the need for superuser is absolutely a\ngood goal to be striving for, not something to be avoiding.\n\nI don't think that makes superuser become 'dummy', but perhaps the\nonly explicit superuser check we end up needing is \"superuser is a\nmember of all roles\". That would be a very cool end state.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 13:43:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Independent of other things, getting to the point where everything can\n> be done in the database without the need for superuser is absolutely a\n> good goal to be striving for, not something to be avoiding.\n> I don't think that makes superuser become 'dummy', but perhaps the\n> only explicit superuser check we end up needing is \"superuser is a\n> member of all roles\". That would be a very cool end state.\n\nI'm not entirely following how that's going to work. 
It implies that\nthere is some allegedly-not-superuser role that has the ability to\nbecome superuser -- either within SQL or by breaking out to the OS --\nbecause certainly a superuser can do those things.\n\nI don't think we're serving any good purpose by giving people the\nimpression that roles with such permissions are somehow not\nsuperuser-equivalent. Certainly, the providers who don't want to\ngive users superuser are just going to need a longer list of roles\nthey won't give access to (and they probably won't be pleased about\nhaving to vet every predefined role carefully).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:51:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Sun, 2021-10-24 at 21:32 +0000, Bossart, Nathan wrote:\n> > My initial reaction was that members of pg_maintenance should be able\n> > to do all of these things (VACUUM, ANALYZE, CLUSTER, REINDEX, and\n> > CHECKPOINT).\n> \n> What about REFRESH MATERIALIZED VIEW? That seems more specific to a\n> workload, but it's hard to draw a clear line between that and CLUSTER.\n\nLet's not forget that there are already existing non-superusers who can\nrun things like REFRESH MATERIALIZED VIEW- the owner.\n\n> > Maybe one\n> > option is to have two separate roles, one for commands that require\n> > lower lock levels (i.e., ANALYZE and VACUUM without TRUNCATE and\n> > FULL), and another for all of the maintenance commands.\n> \n> My main motivation is CHECKPOINT and database-wide VACUUM and ANALYZE.\n> I'm fine extending it if others think it would be worthwhile, but it\n> goes beyond my use case.\n\nI've been wondering what the actual use-case here is. 
DB-wide VACUUM\nand ANALYZE are already able to be run by the database owner, but\nprobably more relevant is that DB-wide VACUUMs and ANALYZEs shouldn't\nreally be necessary given autovacuum, so why are we adding predefined\nroles which will encourage users to do that?\n\nI was also contemplating a different angle on this- allowing users to\nrequest autovacuum to run vacuum/analyze on a particular table. This\nwould have the advantage that you get the vacuum/analyze behavior that\nautovacuum has (giving up an attempted truncate lock if another process\nwants a lock on the table, going at a slower pace rather than going all\nout and sucking up lots of I/O, etc).\n\nI'm not completely against this approach but just would like a bit\nbetter understanding of why it makes sense and what things we'll be able\nto say about what this role can/cannot do.\n\nLastly though- I dislike the name, it seems far too general. I get that\nnaming things is hard but maybe we could find something better than\n'pg_maintenance' for this.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 13:54:43 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Independent of other things, getting to the point where everything can\n> > be done in the database without the need for superuser is absolutely a\n> > good goal to be striving for, not something to be avoiding.\n> > I don't think that makes superuser become 'dummy', but perhaps the\n> > only explicit superuser check we end up needing is \"superuser is a\n> > member of all roles\". That would be a very cool end state.\n> \n> I'm not entirely following how that's going to work. 
It implies that\n> there is some allegedly-not-superuser role that has the ability to\n> become superuser -- either within SQL or by breaking out to the OS --\n> because certainly a superuser can do those things.\n\nI don't think it implies that at all. Either that, or I'm not following\nwhat you're saying here. We have predefined roles today which aren't\nsuperusers and yet they're able to break out to the OS. Of course they\ncan become superusers if they put the effort in. None of that gets away\nfrom the more general idea that we could get to a point where all of a\nsuperuser's capabilities come from roles which the superuser is\nautomatically a member of such that we need have only one explicit\nsuperuser() check.\n\n> I don't think we're serving any good purpose by giving people the\n> impression that roles with such permissions are somehow not\n> superuser-equivalent. Certainly, the providers who don't want to\n> give users superuser are just going to need a longer list of roles\n> they won't give access to (and they probably won't be pleased about\n> having to vet every predefined role carefully).\n\nI agree that we need to be clear through the documentation about which\npredefined roles are able to \"break outside the box\" and become\nsuperuser, but that's independent from the question of if we will get to\na point where every capability the superuser has can also be given\nthrough membership in some predefined role or not.\n\nThat providers have to figure out what makes sense for them in terms of\nwhat they'll allow their users to do or not do doesn't seem entirely\nrelevant here- different providers are by definition different and some\nmight be fine with given out certain rights that others don't want to\ngive out. 
That's actually kind of the point of breaking out all of\nthese capabilities into more fine-grained ways of granting capabilities\nout.\n\nWe already have roles today which are ones that can break outside the\nbox and get to the OS and are able to do things that a superuser can do,\nor become a superuser themselves, and that's been generally a positive\nthing. We also have roles which are able to do things that only\nsuperusers used to be able to do but which are not able to become\nsuperusers themselves and aren't able to break out of the box. I don't\nsee any reason why we can't continue with this and eventually eliminate\nthe explicit superuser() checks except from the one where a superuser is\na member of all roles. Sure, some of those roles give capabilities\nwhich can be used to become superuser, while others don't, but I hardly\nsee that as an argument against the general idea that each is able to be\nindependently GRANT'd.\n\nIf nothing else, if we could eventually get to the point where there's\nonly one such explicit check then maybe we'd stop creating *new* places\nwhere capabilities are locked behind a superuser check. I did an audit\nonce upon a time of all superuser checks and rather than that number\ngoing down, as I had hoped at the time, it's been going up instead\nacross new releases. I view that as unfortunate.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 14:10:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On Mon, 2021-10-25 at 13:54 -0400, Stephen Frost wrote:\n> Let's not forget that there are already existing non-superusers who\n> can\n> run things like REFRESH MATERIALIZED VIEW- the owner.\n\nRight, that's one reason why I don't see a particular use case there.\n\nBut CHECKPOINT right now has an explicit superuser check, and it would\nbe nice to be able to avoid that.\n\nIt's pretty normal to issue a CHECKPOINT right after a data load and\nbefore running a performance test, right? Shouldn't there be some way\nto do that without superuser?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:50:28 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 2021-Oct-25, Jeff Davis wrote:\n\n> But CHECKPOINT right now has an explicit superuser check, and it would\n> be nice to be able to avoid that.\n> \n> It's pretty normal to issue a CHECKPOINT right after a data load and\n> before running a performance test, right? Shouldn't there be some way\n> to do that without superuser?\n\nMaybe you just need pg_checkpointer.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 25 Oct 2021 17:55:36 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "\n\n> On Oct 24, 2021, at 7:49 AM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> At this point, the idea of having a new role for maintenance work\n> looks good. 
With this patch and Mark Dilger's patch introducing a\n> bunch of new predefined roles, one concern is that we might reach a\n> state where we will have patches being proposed for new predefined\n> roles for every database activity and the superuser eventually will\n> have nothing to do in the database, it just becomes a dummy?\n> \n> I'm not sure if Mark Dilger's patch on new predefined roles has a\n> suitable/same role that we can use here.\n\nIf you refer to the ALTER SYSTEM SET patches, which I agree introduce a number of new predefined roles, it may interest you that Andrew has requested that I rework that patch set. In particular, he would like me to implement a new system of grants whereby the authority to ALTER SYSTEM SET can be granted per GUC rather than having predefined roles with hardcoded privileges.\n\nI have not withdrawn the ALTER SYSTEM SET patches yet, as I don't know if Andrew's proposal can be made to work, but I wouldn't recommend tying this pg_maintenance idea to that set.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 14:12:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Mon, 2021-10-25 at 17:55 -0300, Alvaro Herrera wrote:\n> Maybe you just need pg_checkpointer.\n\nFair enough. Attached simpler patch that only covers checkpoint, and\ncalls the role pg_checkpointer.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 25 Oct 2021 16:39:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 10/25/21, 4:40 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Mon, 2021-10-25 at 17:55 -0300, Alvaro Herrera wrote:\r\n>> Maybe you just need pg_checkpointer.\r\n>\r\n> Fair enough. 
Attached simpler patch that only covers checkpoint, and\r\n> calls the role pg_checkpointer.\r\n\r\nIt feels a bit excessive to introduce a new predefined role just for\r\nthis. Perhaps this could be accomplished with a new function that\r\ncould be granted.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 26 Oct 2021 00:07:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Tue, 2021-10-26 at 00:07 +0000, Bossart, Nathan wrote:\n> It feels a bit excessive to introduce a new predefined role just for\n> this. Perhaps this could be accomplished with a new function that\n> could be granted.\n\nIt would be nice if the syntax could be used, since it's pretty\nwidespread. I guess it does feel excessive to have its own predefined\nrole, but at the same time it's hard to group a command like CHECKPOINT\ninto a category. Maybe if we named it something like pg_performance or\nsomething we could make a larger group?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 18:47:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 10/25/21, 6:48 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Tue, 2021-10-26 at 00:07 +0000, Bossart, Nathan wrote:\r\n>> It feels a bit excessive to introduce a new predefined role just for\r\n>> this. Perhaps this could be accomplished with a new function that\r\n>> could be granted.\r\n>\r\n> It would be nice if the syntax could be used, since it's pretty\r\n> widespread. I guess it does feel excessive to have its own predefined\r\n> role, but at the same time it's hard to group a command like CHECKPOINT\r\n> into a category. 
Maybe if we named it something like pg_performance or\r\n> something we could make a larger group?\r\n\r\nI do think a larger group is desirable, but as is evidenced by this\r\nthread, it may be some time until we can figure out exactly how that\r\nwould look. I feel like there's general support for being able to\r\nallow non-superusers to CHECKPOINT and do other\r\nmaintenance/performance tasks, though.\r\n\r\nI think my main concern with pg_checkpointer is that it could set a\r\nprecedent for new predefined roles, and we'd end up with dozens or\r\nmore. But as long as new predefined roles aren't terribly expensive,\r\nmaybe that's not all that bad. The advantage of having a\r\npg_checkpointer role is that it enables users to grant just CHECKPOINT\r\nand nothing else. If we wanted a larger \"pg_performance\" group in the\r\nfuture, we could introduce that role and make it a member of\r\npg_checkpointer and others (similar to how pg_monitor works).\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 26 Oct 2021 16:28:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Tue, 2021-10-26 at 00:07 +0000, Bossart, Nathan wrote:\n> > It feels a bit excessive to introduce a new predefined role just for\n> > this. Perhaps this could be accomplished with a new function that\n> > could be granted.\n> \n> It would be nice if the syntax could be used, since it's pretty\n> widespread. I guess it does feel excessive to have its own predefined\n> role, but at the same time it's hard to group a command like CHECKPOINT\n> into a category. 
Maybe if we named it something like pg_performance or\n> something we could make a larger group?\n\nFor the use-case presented, I don't really buy off on this argument.\nWe're talking about benchmarking tools, surely they can be and likely\nalready are updated with some regularity for new major versions of PG.\nI wonder also if there aren't other things besides this that would need\nto be changed for them to work as a non-superuser anyway too, meaning\nthis would be just one change among others that they'd need to make.\n\nIn this specific case, I'd be more inclined to provide a function rather\nthan an explicit predefined role for this one thing.\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 Oct 2021 16:02:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Tue, 2021-10-26 at 16:02 -0400, Stephen Frost wrote:\n> We're talking about benchmarking tools\n\nWhat I had in mind was something much less formal, like a self-\ncontained repro case of a performance problem.\n\n ... simple schema\n ... data load\n ... maybe build some indexes\n ... maybe set hints\n VACUUM ANALYZE;\n CHECKPOINT;\n\nI'm not saying it's a very strong use case, but at least for me, it's\nkind of a habit to throw in a CHECKPOINT after a quick data load for a\ntest, even if it might not matter for whatever I'm testing.\n\nI guess I can change my habit to use a function instead, but then\nwhat's the point of the syntax?\n\nShould we just add a builtin function pg_checkpoint(), and deprecate\nthe syntax?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:03:24 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On 10/26/21, 2:04 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> Should we just add a builtin function pg_checkpoint(), and deprecate\r\n> the syntax?\r\n\r\nThat seems reasonable to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 26 Oct 2021 21:48:32 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Wed, Oct 27, 2021 at 3:18 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/26/21, 2:04 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\n> > Should we just add a builtin function pg_checkpoint(), and deprecate\n> > the syntax?\n>\n> That seems reasonable to me.\n\nIMHO, moving away from SQL command \"CHECKPOINT\" to function\n\"pg_checkpoint()\" isn't nice as the SQL command has been there for a\nlong time and all the applications or services that were/are being\nbuilt around the postgres ecosystem would have to adapt someday to the\nnew function (if at all we deprecate the command and onboard the\nfunction). This isn't good at all given the CHECKPOINT is one of the\nmostly used commands in the apps or services layer. Moreover, if we go\nwith the function pg_checkpoint(), we might see patches coming in for\npg_vacuum(), pg_reindex(), pg_cluster() and so on.\n\nI suggest having a predefined role (pg_maintenance or\npg_checkpoint(although I'm not sure convinced to have a separate role\njust for checkpoint) or some other) and let superuser and the users\nwith this new predefined role do checkpoint.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 30 Oct 2021 13:24:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On Sat, 2021-10-30 at 13:24 +0530, Bharath Rupireddy wrote:\n> IMHO, moving away from SQL command \"CHECKPOINT\" to function\n> \"pg_checkpoint()\" isn't nice as the SQL command has been there for a\n> long time and all the applications or services that were/are being\n> built around the postgres ecosystem would have to adapt someday to\n> the\n> new function (if at all we deprecate the command and onboard the\n> function). This isn't good at all given the CHECKPOINT is one of the\n> mostly used commands in the apps or services layer. Moreover, if we\n> go\n> with the function pg_checkpoint(), we might see patches coming in for\n> pg_vacuum(), pg_reindex(), pg_cluster() and so on.\n\nI tend to agree with all of this. The CHECKPOINT command is already\nthere and people already use it. If we are already chipping away at the\nneed for superuser elsewhere, we should offer a way to use CHECKPOINT\nwithout being superuser.\n\nIf the purpose[0] of predefined roles is that they allow you to do\nthings that can't be expressed by GRANT, a predefined role\npg_checkpointer seems to fit the bill.\n\nThe main argument against[1] having a pg_checkpointer predefined role\nis that it creates a clutter of predefined roles. But it seems like\njust another part of the clutter of having a special SQL command merely\nfor requesting a checkpoint.\n\nThe alternative of creating a new function doesn't seem to alleviate\nthe clutter. 
Some people will use the function and some will use the\ncommand, creating inconsistency in examples and scripts, and people\nwill wonder which one is the \"right\" one.\n\nRegards,\n\tJeff Davis\n\n[0] \nhttps://postgr.es/m/CA+TgmobQoWZn62qWRX+OOFjBPhdubxYTBeO-GSNPcLvBHh4ZvA@mail.gmail.com\n\n[1] https://postgr.es/m/8C661979-AF85-4AE1-9E2B-2A091DA3DB22@amazon.com\n\n\n\n", "msg_date": "Sat, 30 Oct 2021 11:14:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 2021-Oct-30, Jeff Davis wrote:\n\n> I tend to agree with all of this. The CHECKPOINT command is already\n> there and people already use it. If we are already chipping away at the\n> need for superuser elsewhere, we should offer a way to use CHECKPOINT\n> without being superuser.\n\n+1\n\n> If the purpose[0] of predefined roles is that they allow you to do\n> things that can't be expressed by GRANT, a predefined role\n> pg_checkpointer seems to fit the bill.\n\n+1\n\n> The main argument against[1] having a pg_checkpointer predefined role\n> is that it creates a clutter of predefined roles. But it seems like\n> just another part of the clutter of having a special SQL command merely\n> for requesting a checkpoint.\n\n+1\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nAnd a voice from the chaos spoke to me and said\n\"Smile and be happy, it could be worse\".\nAnd I smiled. And I was happy.\nAnd it got worse.\n\n\n", "msg_date": "Sat, 30 Oct 2021 21:26:26 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On 10/30/21, 11:14 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Sat, 2021-10-30 at 13:24 +0530, Bharath Rupireddy wrote:\r\n>> IMHO, moving away from SQL command \"CHECKPOINT\" to function\r\n>> \"pg_checkpoint()\" isn't nice as the SQL command has been there for a\r\n>> long time and all the applications or services that were/are being\r\n>> built around the postgres ecosystem would have to adapt someday to\r\n>> the\r\n>> new function (if at all we deprecate the command and onboard the\r\n>> function). This isn't good at all given the CHECKPOINT is one of the\r\n>> mostly used commands in the apps or services layer. Moreover, if we\r\n>> go\r\n>> with the function pg_checkpoint(), we might see patches coming in for\r\n>> pg_vacuum(), pg_reindex(), pg_cluster() and so on.\r\n>\r\n> I tend to agree with all of this. The CHECKPOINT command is already\r\n> there and people already use it. If we are already chipping away at the\r\n> need for superuser elsewhere, we should offer a way to use CHECKPOINT\r\n> without being superuser.\r\n\r\nI think Bharath brings up some good points. The simple fact is that\r\nCHECKPOINT has been around for a while, and creating functions for\r\nmaintenance tasks would add just as much or more clutter than adding a\r\npredefined role for each one. I do wonder what we would've done if\r\nCHECKPOINT didn't already exist. Based on the goal of this thread, I\r\nget the feeling that we might've seriously considered introducing it\r\nas a function so that you can just GRANT EXECUTE as needed. \r\n\r\nNathan\r\n\r\n", "msg_date": "Sun, 31 Oct 2021 01:05:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 10/30/21, 11:14 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\n> > On Sat, 2021-10-30 at 13:24 +0530, Bharath Rupireddy wrote:\n> >> IMHO, moving away from SQL command \"CHECKPOINT\" to function\n> >> \"pg_checkpoint()\" isn't nice as the SQL command has been there for a\n> >> long time and all the applications or services that were/are being\n> >> built around the postgres ecosystem would have to adapt someday to\n> >> the\n> >> new function (if at all we deprecate the command and onboard the\n> >> function). This isn't good at all given the CHECKPOINT is one of the\n> >> mostly used commands in the apps or services layer. Moreover, if we\n> >> go\n> >> with the function pg_checkpoint(), we might see patches coming in for\n> >> pg_vacuum(), pg_reindex(), pg_cluster() and so on.\n> >\n> > I tend to agree with all of this. The CHECKPOINT command is already\n> > there and people already use it. If we are already chipping away at the\n> > need for superuser elsewhere, we should offer a way to use CHECKPOINT\n> > without being superuser.\n> \n> I think Bharath brings up some good points. The simple fact is that\n> CHECKPOINT has been around for a while, and creating functions for\n> maintenance tasks would add just as much or more clutter than adding a\n> predefined role for each one. I do wonder what we would've done if\n> CHECKPOINT didn't already exist. Based on the goal of this thread, I\n> get the feeling that we might've seriously considered introducing it\n> as a function so that you can just GRANT EXECUTE as needed. 
\n\nI don't really buy off on the \"because it's been around a long time\" as\na reason to invent a predefined role for an individual command that\ndoesn't take any options and could certainly just be a function.\nApplications developed to run as a superuser aren't likely to magically\nstart working because they were GRANT'd this one additional predefined\nrole either but likely would need other changes anyway.\n\nAll that said, I wonder if we can have our cake and eat it too. I\nhaven't looked into this at all yet and perhaps it's foolish on its\nface, but, could we make CHECKPOINT; basically turn around and just run\nselect pg_checkpoint(); with the regular privilege checking happening?\nThen we'd keep the existing syntax working, but if the user is allowed\nto run the command would depend on if they've been GRANT'd EXECUTE\nrights on the function or not.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 12:50:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 11/1/21, 9:51 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> I don't really buy off on the \"because it's been around a long time\" as\r\n> a reason to invent a predefined role for an individual command that\r\n> doesn't take any options and could certainly just be a function.\r\n> Applications developed to run as a superuser aren't likely to magically\r\n> start working because they were GRANT'd this one additional predefined\r\n> role either but likely would need other changes anyway.\r\n\r\nI suspect the CHECKPOINT command wouldn't be removed anytime soon,\r\neither. I definitely understand the desire to avoid changing\r\nsomething that's been around a long time, but I think a function fits\r\nbetter in this case.\r\n\r\n> All that said, I wonder if we can have our cake and eat it too. 
I\r\n> haven't looked into this at all yet and perhaps it's foolish on its\r\n> face, but, could we make CHECKPOINT; basically turn around and just run\r\n> select pg_checkpoint(); with the regular privilege checking happening?\r\n> Then we'd keep the existing syntax working, but if the user is allowed\r\n> to run the command would depend on if they've been GRANT'd EXECUTE\r\n> rights on the function or not.\r\n\r\nI'd be worried about the behavior of CHECKPOINT changing because\r\nsomeone messed with the function.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 1 Nov 2021 17:23:35 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/1/21, 9:51 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > All that said, I wonder if we can have our cake and eat it too. I\n> > haven't looked into this at all yet and perhaps it's foolish on its\n> > face, but, could we make CHECKPOINT; basically turn around and just run\n> > select pg_checkpoint(); with the regular privilege checking happening?\n> > Then we'd keep the existing syntax working, but if the user is allowed\n> > to run the command would depend on if they've been GRANT'd EXECUTE\n> > rights on the function or not.\n> \n> I'd be worried about the behavior of CHECKPOINT changing because\n> someone messed with the function.\n\nFolks playing around in the catalog can break lots of things, I don't\nreally see this as an argument against the idea.\n\nI do wonder if we should put a bit more effort into preventing people\nfrom messing with functions and such in pg_catalog. Being able to do\nsomething like:\n\ncreate or replace function xpath ( text, xml ) returns xml[]\nas $$ begin return 'xml'; end; $$ language plpgsql;\n\n(or with much worse functions..) 
strikes me as just a bit too easy to\nmistakenly cause problems as a superuser. Still, that's really an\nindependent issue from this discussion. It's not like someone breaking\nCHECKPOINT; would actually impact normal checkpoints anyway.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 13:42:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 11/1/21, 10:43 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Folks playing around in the catalog can break lots of things, I don't\r\n> really see this as an argument against the idea.\r\n>\r\n> I do wonder if we should put a bit more effort into preventing people\r\n> from messing with functions and such in pg_catalog. Being able to do\r\n> something like:\r\n>\r\n> create or replace function xpath ( text, xml ) returns xml[]\r\n> as $$ begin return 'xml'; end; $$ language plpgsql;\r\n>\r\n> (or with much worse functions..) strikes me as just a bit too easy to\r\n> mistakenly cause problems as a superuser. Still, that's really an\r\n> independent issue from this discussion. It's not like someone breaking\r\n> CHECKPOINT; would actually impact normal checkpoints anyway.\r\n\r\nYeah, I see your point.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 1 Nov 2021 18:38:14 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Sat, Oct 23, 2021 at 5:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Add new predefined role pg_maintenance, which can issue VACUUM,\n> ANALYZE, CHECKPOINT.\n\nJust as a sort of general comment on this endeavor, I suspect that any\nattempt to lump things together that seem closely related is doomed to\nbackfire. 
There's bound to be somebody who wants to grant some of\nthese permissions and not others, or who wants to grant the ability to\nrun those commands on some tables but not others. That's kind of\nunfortunate because it makes it more complicated to implement stuff\nlike this ... but I've more or less given up hope on getting away with\nanything else.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 11:06:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Mon, 2021-11-01 at 12:50 -0400, Stephen Frost wrote:\n> All that said, I wonder if we can have our cake and eat it too. I\n> haven't looked into this at all yet and perhaps it's foolish on its\n> face, but, could we make CHECKPOINT; basically turn around and just\n> run\n> select pg_checkpoint(); with the regular privilege checking\n> happening?\n> Then we'd keep the existing syntax working, but if the user is\n> allowed\n> to run the command would depend on if they've been GRANT'd EXECUTE\n> rights on the function or not.\n\nGreat idea! Patch attached.\n\nThis feels like a good pattern that we might want to use elsewhere, if\nthe need arises.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 02 Nov 2021 10:28:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On Tue, 2021-11-02 at 11:06 -0400, Robert Haas wrote:\n> Just as a sort of general comment on this endeavor, I suspect that\n> any\n> attempt to lump things together that seem closely related is doomed\n> to\n> backfire.\n\nAgreed, I think that is apparent from the different opinions in this\nthread.\n\nRobert had a good idea over here though:\n\nhttps://postgr.es/m/20211101165025.GS20998@tamriel.snowman.net\n\nwhich gives fine-grained control without the \"clutter\" of extra\npredefined roles.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 02 Nov 2021 10:35:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 11/2/21, 10:29 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> Great idea! Patch attached.\r\n>\r\n> This feels like a good pattern that we might want to use elsewhere, if\r\n> the need arises.\r\n\r\nThe approach in the patch looks alright to me, but another one could\r\nbe to build a SelectStmt when parsing CHECKPOINT. I think that'd\r\nsimplify the standard_ProcessUtility() changes.\r\n\r\nOtherwise, I see a couple of warnings when compiling:\r\n xlogfuncs.c:54: warning: implicit declaration of function ‘RequestCheckpoint’\r\n xlogfuncs.c:56: warning: control reaches end of non-void function\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 2 Nov 2021 17:45:49 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Tue, 2021-11-02 at 11:06 -0400, Robert Haas wrote:\n> > Just as a sort of general comment on this endeavor, I suspect that\n> > any\n> > attempt to lump things together that seem closely related is doomed\n> > to\n> > backfire.\n> \n> Agreed, I think that is apparent from the different opinions in this\n> thread.\n> \n> Robert had a good idea over here though:\n\nThink you meant 'Stephen' there. ;)\n\n> https://postgr.es/m/20211101165025.GS20998@tamriel.snowman.net\n> \n> which gives fine-grained control without the \"clutter\" of extra\n> predefined roles.\n\nRight.\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/2/21, 10:29 AM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\n> > Great idea! Patch attached.\n> >\n> > This feels like a good pattern that we might want to use elsewhere, if\n> > the need arises.\n> \n> The approach in the patch looks alright to me, but another one could\n> be to build a SelectStmt when parsing CHECKPOINT. I think that'd\n> simplify the standard_ProcessUtility() changes.\n\nFor my 2c, at least, I'm not really partial to either approach, though\nI'd want to see what error messages end up looking like. Seems like we\nmight want to exercise a bit more control than we'd be able to if we\ntransformed it directly into a SelectStmt (that is, we might add a HINT:\nroles with execute rights on pg_checkpoint() can run this command, or\nsomething; maybe not too tho).\n\n> Otherwise, I see a couple of warnings when compiling:\n> xlogfuncs.c:54: warning: implicit declaration of function ‘RequestCheckpoint’\n> xlogfuncs.c:56: warning: control reaches end of non-void function\n\nYeah, such things would need to be cleaned up, of course.\n\nThanks!\n\nStephen", "msg_date": "Tue, 2 Nov 2021 14:26:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On 11/2/21, 11:27 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> * Bossart, Nathan (bossartn@amazon.com) wrote:\r\n>> The approach in the patch looks alright to me, but another one could\r\n>> be to build a SelectStmt when parsing CHECKPOINT. I think that'd\r\n>> simplify the standard_ProcessUtility() changes.\r\n>\r\n> For my 2c, at least, I'm not really partial to either approach, though\r\n> I'd want to see what error messages end up looking like. Seems like we\r\n> might want to exercise a bit more control than we'd be able to if we\r\n> transformed it directly into a SelectStmt (that is, we might add a HINT:\r\n> roles with execute rights on pg_checkpoint() can run this command, or\r\n> something; maybe not too tho).\r\n\r\nI don't feel strongly one way or the other as well, but you have a\r\ngood point about extra control over the error messages. The latest\r\npatch just does a standard aclcheck_error(), so you'd probably see\r\n\"permission denied for function\" if you didn't have privileges for\r\nCHECKPOINT. That could be confusing.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 2 Nov 2021 21:06:58 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 11/2/21 4:06 PM, Robert Haas wrote:\n> There's bound to be somebody who wants to grant some of\n> these permissions and not others, or who wants to grant the ability to\n> run those commands on some tables but not others.\nIs there anything stopping us from adding syntax like this?\n\n GRANT VACUUM, ANALYZE ON TABLE foo TO bar;\n\nThat doesn't fix the CHECKPOINT issue, but surely vacuum and analyze can\nbe done that way. 
I would much prefer that over new predefined roles.\n\nThis would be nice, but there is nothing to hang our hat on:\n\n    GRANT CHECKPOINT TO username;\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 2 Nov 2021 23:14:20 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Tue, Nov 2, 2021 at 3:14 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 11/2/21 4:06 PM, Robert Haas wrote:\n> > There's bound to be somebody who wants to grant some of\n> > these permissions and not others, or who wants to grant the ability to\n> > run those commands on some tables but not others.\n> Is there anything stopping us from adding syntax like this?\n>\n>     GRANT VACUUM, ANALYZE ON TABLE foo TO bar;\n>\n> That doesn't fix the CHECKPOINT issue, but surely vacuum and analyze can\n> be done that way. I would much prefer that over new predefined roles.\n>\n> This would be nice, but there is nothing to hang our hat on:\n>\n>     GRANT CHECKPOINT TO username;\n>\n>\nHere is the thread when I last brought up this idea five years ago:\n\nhttps://www.postgresql.org/message-id/CAKFQuwaAhVt6audf92Q1VrELfJ%2BPz%3DuDfNb8%3D1_bqAmyDpnDmA%40mail.gmail.com\n\nI do not believe we've actually consumed any of the then available\npermission bits in the meanwhile.\n\nDavid J.\n\n", "msg_date": "Tue, 2 Nov 2021 15:30:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Tue, 2 Nov 2021 at 18:14, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 11/2/21 4:06 PM, Robert Haas wrote:\n> > There's bound to be somebody who wants to grant some of\n> > these permissions and not others, or who wants to grant the ability to\n> > run those commands on some tables but not others.\n> Is there anything stopping us from adding syntax like this?\n>\n>     GRANT VACUUM, ANALYZE ON TABLE foo TO bar;\n>\n\nThere is a limited number of bits available in the way privileges are\nstored. I investigated this in 2018 in connection with an idea I had to\nallow granting the ability to refresh a materialized view; after\nconsideration and discussion I came to the idea of having a \"MAINTAIN\"\npermission which would allow refreshing materialized views and would also\ncover clustering, reindexing, vacuuming, and analyzing on objects to which\nthose actions are applicable.\n\nThis message from me summarizes the history of usage of the available\nprivilege bits:\n\nhttps://www.postgresql.org/message-id/CAMsGm5c4DycKBYZCypfV02s-SC8GwF%2BKeTt%3D%3DvbWrFn%2Bdz%3DKeg%40mail.gmail.com\n\nIf you dig into the replies you will find the revised proposal.\n\nThat doesn't fix the CHECKPOINT issue, but surely vacuum and analyze can\n> be done that way. 
I would much prefer that over new predefined roles.\n>\n> This would be nice, but there is nothing to hang our hat on:\n>\n>     GRANT CHECKPOINT TO username;\n>\n\n", "msg_date": "Tue, 2 Nov 2021 18:43:52 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 11/2/21 11:14 PM, Vik Fearing wrote:\n\n> This would be nice, but there is nothing to hang our hat on:\n> \n>     GRANT CHECKPOINT TO username;\n\nThinking about this more, why don't we just add CHECKPOINT and\nNOCHECKPOINT attributes to roles?\n\n    ALTER ROLE username WITH CHECKPOINT;\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 3 Nov 2021 00:00:12 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Tue, 2 Nov 2021 at 19:00, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 11/2/21 11:14 PM, Vik Fearing wrote:\n>\n> > This would be nice, but there is nothing to hang our hat on:\n> >\n> >     GRANT CHECKPOINT TO username;\n>\n> Thinking about this more, why don't we just add CHECKPOINT and\n> NOCHECKPOINT attributes to roles?\n>\n>     ALTER ROLE username WITH CHECKPOINT;\n>\n\nAt present, this would require adding a field to pg_authid. 
This isn't very scalable; but we're already creating new pg_* roles which give access to various actions so I don't know why a role attribute would be a better approach. If anything, I think it would be more likely to move in the other direction: replace role attributes that in effect grant privileges with predefined roles. I think this has already been discussed here in the context of CREATEROLE.", "msg_date": "Tue, 2 Nov 2021 20:08:27 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "> On 2 Nov 2021, at 19:26, Stephen Frost <sfrost@snowman.net> wrote:\n\n>> Otherwise, I see a couple of warnings when compiling:\n>> xlogfuncs.c:54: warning: implicit declaration of function ‘RequestCheckpoint’\n>> xlogfuncs.c:56: warning: control reaches end of non-void function\n> \n> Yeah, such things would need to be cleaned up, of course.\n\nThe Commitfest CI has -Werror,-Wimplicit-function-declaration on some platforms\nin which this patch breaks, so I think we should apply the below (or something\nsimilar) to ensure this is tested everywhere:\n\ndiff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c\nindex 7ecaca4788..c9e1df39c1 100644\n--- a/src/backend/access/transam/xlogfuncs.c\n+++ b/src/backend/access/transam/xlogfuncs.c\n@@ -26,6 +26,7 @@\n #include \"funcapi.h\"\n #include \"miscadmin.h\"\n #include \"pgstat.h\"\n+#include \"postmaster/bgwriter.h\"\n #include \"replication/walreceiver.h\"\n #include \"storage/fd.h\"\n #include \"storage/ipc.h\"\n@@ -53,6 +54,7 @@ pg_checkpoint(PG_FUNCTION_ARGS)\n {\n RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT |\n (RecoveryInProgress() ? 
0 : CHECKPOINT_FORCE));\n+ PG_RETURN_VOID();\n }\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 15:02:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/2/21, 11:27 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > * Bossart, Nathan (bossartn@amazon.com) wrote:\n> >> The approach in the patch looks alright to me, but another one could\n> >> be to build a SelectStmt when parsing CHECKPOINT. I think that'd\n> >> simplify the standard_ProcessUtility() changes.\n> >\n> > For my 2c, at least, I'm not really partial to either approach, though\n> > I'd want to see what error messages end up looking like. Seems like we\n> > might want to exercise a bit more control than we'd be able to if we\n> > transformed it directly into a SelectStmt (that is, we might add a HINT:\n> > roles with execute rights on pg_checkpoint() can run this command, or\n> > something; maybe not too tho).\n> \n> I don't feel strongly one way or the other as well, but you have a\n> good point about extra control over the error messages. The latest\n> patch just does a standard aclcheck_error(), so you'd probably see\n> \"permission denied for function\" if you didn't have privileges for\n> CHECKPOINT. That could be confusing.\n\nYeah, that's exactly the thing I was thinking about that might seem odd.\nI don't think it's a huge deal but I do think it'd be good for us to at\nleast think about if we're ok with that or if we want to try and do\nsomething a bit better.\n\nThanks,\n\nStephen", "msg_date": "Wed, 3 Nov 2021 14:46:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On Tue, 2021-11-02 at 14:26 -0400, Stephen Frost wrote:\n> Think you meant 'Stephen' there. ;)\n\nYes ;-)\n\n> > The approach in the patch looks alright to me, but another one\n> > could\n> > be to build a SelectStmt when parsing CHECKPOINT. I think that'd\n> > simplify the standard_ProcessUtility() changes.\n\nNathan, if I understand correctly, that would mean no CheckPointStmt at\nall. So it would either lack the right command tag, or we would have to\nhack it in somewhere. The utility changes in the existing patch are\nfairly minor, so I'll stick with that approach unless I'm missing\nsomething.\n\n> For my 2c, at least, I'm not really partial to either approach,\n> though\n> I'd want to see what error messages end up looking like. Seems like\n> we\n> might want to exercise a bit more control than we'd be able to if we\n> transformed it directly into a SelectStmt (that is, we might add a\n> HINT:\n> roles with execute rights on pg_checkpoint() can run this command, or\n> something; maybe not too tho).\n\nI changed the error message to:\n\n ERROR: permission denied for command CHECKPOINT\n HINT: The CHECKPOINT command requires the EXECUTE privilege\n on the function pg_checkpoint().\n\nNew version attached.\n\nAndres suggested that I also consider a new form of the GRANT clause\nthat works on the CHECKPOINT command directly. I looked into that\nbriefly, but in every other case it seems that GRANT works on an object\n(like a function). It doesn't feel like grating on a command is the\nright solution.\n\nThe approach of using a function's ACL to represent the ACL of a\nhigher-level command (as in this patch) does feel right to me. 
It feels\nlike something we might extend to similar situations in the future; and\neven if we don't, it seems like a clean solution in isolation.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 04 Nov 2021 09:03:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, Nov 4, 2021 at 12:03 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The approach of using a function's ACL to represent the ACL of a\n> higher-level command (as in this patch) does feel right to me. It feels\n> like something we might extend to similar situations in the future; and\n> even if we don't, it seems like a clean solution in isolation.\n\nIt feels wrong to me. I realize that it's convenient to be able to\nre-use the existing GRANT and REVOKE commands that we have for\nfunctions, but actually DDL interfaces are better than SQL functions,\nbecause the syntax can be richer and you can avoid things like needing\nto take a snapshot. This particular patch dodges that problem, which\nis both a good thing and also clever, but it doesn't really make me\nfeel any better about the concept in general.\n\nI think that the ongoing pressure to reduce as many things as possible\nto function permissions checks is ultimately going to turn out to be\nan unpleasant dead end. But by the time we reach that dead end we'll\nhave put so much effort into making it work that it will be hard to\nchange course, for backward-compatibility reasons among others.\n\nI don't have anything specific to propose, which I realize is kind of\nunhelpful ... but I don't like this, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Nov 2021 12:37:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "On Thu, 2021-11-04 at 12:37 -0400, Robert Haas wrote:\n> I don't have anything specific to propose, which I realize is kind of\n> unhelpful ... but I don't like this, either.\n\nWe can go back to having a pg_checkpoint predefined role that is only\nused for the CHECKPOINT command.\n\nThe only real argument against that was a general sense of clutter, but\nI wasn't entirely convinced of that. If we have a specialized command,\nwe have all kinds of clutter associated with that; a predefined role\ndoesn't add much additional clutter.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 04 Nov 2021 14:25:54 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Hi,\n\nOn 2021-11-02 10:28:39 -0700, Jeff Davis wrote:\n> On Mon, 2021-11-01 at 12:50 -0400, Stephen Frost wrote:\n> > All that said, I wonder if we can have our cake and eat it too. I\n> > haven't looked into this at all yet and perhaps it's foolish on its\n> > face, but, could we make CHECKPOINT; basically turn around and just\n> > run\n> > select pg_checkpoint(); with the regular privilege checking\n> > happening?\n> > Then we'd keep the existing syntax working, but if the user is\n> > allowed\n> > to run the command would depend on if they've been GRANT'd EXECUTE\n> > rights on the function or not.\n> \n> Great idea! Patch attached.\n> \n> This feels like a good pattern that we might want to use elsewhere, if\n> the need arises.\n> \t\tcase T_CheckPointStmt:\n> -\t\t\tif (!superuser())\n> -\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t\t errmsg(\"must be superuser to do CHECKPOINT\")));\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * Invoke pg_checkpoint(). 
Implementing the CHECKPOINT command\n> +\t\t\t\t * with a function allows administrators to grant privileges\n> +\t\t\t\t * on the CHECKPOINT command by granting privileges on the\n> +\t\t\t\t * pg_checkpoint() function. It also calls the function\n> +\t\t\t\t * execute hook, if present.\n> +\t\t\t\t */\n> +\t\t\t\tAclResult\taclresult;\n> +\t\t\t\tFmgrInfo\tflinfo;\n> +\n> +\t\t\t\taclresult = pg_proc_aclcheck(F_PG_CHECKPOINT, GetUserId(),\n> +\t\t\t\t\t\t\t\t\t\t\t ACL_EXECUTE);\n> +\t\t\t\tif (aclresult != ACLCHECK_OK)\n> +\t\t\t\t\taclcheck_error(aclresult, OBJECT_FUNCTION,\n> +\t\t\t\t\t\t\t\t get_func_name(F_PG_CHECKPOINT));\n> \n> -\t\t\tRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT |\n> -\t\t\t\t\t\t\t (RecoveryInProgress() ? 0 : CHECKPOINT_FORCE));\n> +\t\t\t\tInvokeFunctionExecuteHook(F_PG_CHECKPOINT);\n> +\n> +\t\t\t\tfmgr_info(F_PG_CHECKPOINT, &flinfo);\n> +\n> +\t\t\t\t(void) FunctionCall0Coll(&flinfo, InvalidOid);\n> +\t\t\t}\n> \t\t\tbreak;\n\nI don't like this. This turns the checkpoint command which previously didn't\nrely on the catalog in the happy path etc into something that requires most of\nthe backend to be happily up to work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Nov 2021 15:42:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Hi,\n\nOn 2021-11-04 14:25:54 -0700, Jeff Davis wrote:\n> On Thu, 2021-11-04 at 12:37 -0400, Robert Haas wrote:\n> > I don't have anything specific to propose, which I realize is kind of\n> > unhelpful ... but I don't like this, either.\n> \n> We can go back to having a pg_checkpoint predefined role that is only\n> used for the CHECKPOINT command.\n\nWhat about extending GRANT to allow to grant rights on commands? 
Yes, it'd be\na bit of work to make that work in the catalogs, but it doesn't seem too hard\nto tackle.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Nov 2021 15:46:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, 2021-11-04 at 15:46 -0700, Andres Freund wrote:\n> What about extending GRANT to allow to grant rights on commands? Yes,\n> it'd be\n> a bit of work to make that work in the catalogs, but it doesn't seem\n> too hard\n> to tackle.\n\nYou mean for the CHECKPOINT command specifically, or for many commands?\n\nIf it only applies to CHECKPOINT, it seems like more net clutter than a\nnew predefined role.\n\nBut I don't see it generalizing to a lot of commands, either. I looked\nat the list, and it's taking some creativity to think of more than a\ncouple other commands where it makes sense. Maybe LISTEN/NOTIFY? But\neven then, there are three related commands: LISTEN, UNLISTEN, and\nNOTIFY. Are those one privilege representing them all, two\n(LISTEN/UNLISTEN, and NOTIFY), or three separate privileges?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 04 Nov 2021 16:35:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, 2021-11-04 at 15:42 -0700, Andres Freund wrote:\n> I don't like this. 
This turns the checkpoint command which previously\n> didn't\n> rely on the catalog in the happy path etc into something that\n> requires most of\n> the backend to be happily up to work.\n\nIt seems like this specific approach has been mostly shot down already.\n But out of curiosity, are you intending to run CHECKPOINT during\nbootstrap or something?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 04 Nov 2021 16:38:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, Nov 4, 2021 at 5:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2021-11-04 at 12:37 -0400, Robert Haas wrote:\n> > I don't have anything specific to propose, which I realize is kind of\n> > unhelpful ... but I don't like this, either.\n>\n> We can go back to having a pg_checkpoint predefined role that is only\n> used for the CHECKPOINT command.\n\nI would prefer that approach. Other people may prefer other things,\nbut if I got to pick, I'd say do it that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 08:40:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, Nov 4, 2021 at 7:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It seems like this specific approach has been mostly shot down already.\n> But out of curiosity, are you intending to run CHECKPOINT during\n> bootstrap or something?\n\nImagine a system with corruption in pg_proc. Right now, that won't\nprevent you from successfully executing a checkpoint. 
With this\napproach, it might.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 08:42:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Thu, Nov 4, 2021 at 6:46 PM Andres Freund <andres@anarazel.de> wrote:\n> What about extending GRANT to allow to grant rights on commands? Yes, it'd be\n> a bit of work to make that work in the catalogs, but it doesn't seem too hard\n> to tackle.\n\nI think that there aren't too many commands where the question is just\nwhether you can execute the command or not. CHECKPOINT is one that\ndoes work that way, but if it's VACUUM or ANALYZE the question will be\nwhether you can run it on a particular table; if it's ALTER SYSTEM it\nwill be whether you can run it for that GUC; and so on. CHECKPOINT is\none of the few commands that has no target.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 08:54:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 2021-Nov-04, Jeff Davis wrote:\n\n> But I don't see it generalizing to a lot of commands, either. I looked\n> at the list, and it's taking some creativity to think of more than a\n> couple other commands where it makes sense. Maybe LISTEN/NOTIFY? But\n> even then, there are three related commands: LISTEN, UNLISTEN, and\n> NOTIFY. Are those one privilege representing them all, two\n> (LISTEN/UNLISTEN, and NOTIFY), or three separate privileges?\n\nWhat about things like CREATE SUBSCRIPTION/PUBLICATION? 
Sounds like it\nwould be useful to allow non-superusers to do those, too.\n\nThat said, if the list is short, then additional predefined roles seem\npreferable to having a ton of infrastructure code that might be much\nmore clutter than what seems a short list of additional predefined roles.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"We're here to devour each other alive\" (Hobbes)\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:13:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 2021-11-05 08:42:58 -0400, Robert Haas wrote:\n> On Thu, Nov 4, 2021 at 7:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > It seems like this specific approach has been mostly shot down already.\n> > But out of curiosity, are you intending to run CHECKPOINT during\n> > bootstrap or something?\n> \n> Imagine a system with corruption in pg_proc. Right now, that won't\n> prevent you from successfully executing a checkpoint. With this\n> approach, it might.\n\nExactly. It wouldn't matter if checkpoints weren't something needed to\npotentially bring the system back into a sane state, but ...\n\n\n", "msg_date": "Sun, 7 Nov 2021 10:46:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Hi,\n\nOn 2021-11-05 08:54:37 -0400, Robert Haas wrote:\n> On Thu, Nov 4, 2021 at 6:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > What about extending GRANT to allow to grant rights on commands? Yes, it'd be\n> > a bit of work to make that work in the catalogs, but it doesn't seem too hard\n> > to tackle.\n> \n> I think that there aren't too many commands where the question is just\n> whether you can execute the command or not. 
CHECKPOINT is one that\n> does work that way, but if it's VACUUM or ANALYZE the question will be\n> whether you can run it on a particular table; if it's ALTER SYSTEM it\n> will be whether you can run it for that GUC; and so on. CHECKPOINT is\n> one of the few commands that has no target.\n\nI don't know if that's really such a big deal. It's useful to be able to grant\nthe right to do a system wide ANALYZE etc to a role that can't otherwise do\nanything with the table. Even for ALTER SYSTEM etc it seems like it'd be\nhelpful, because it allows to constrain an admin tool to \"legitimate\" admin\npaths, without allowing, say, UPDATE pg_proc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 7 Nov 2021 10:50:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-11-05 08:42:58 -0400, Robert Haas wrote:\n> > On Thu, Nov 4, 2021 at 7:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > > It seems like this specific approach has been mostly shot down already.\n> > > But out of curiosity, are you intending to run CHECKPOINT during\n> > > bootstrap or something?\n> > \n> > Imagine a system with corruption in pg_proc. Right now, that won't\n> > prevent you from successfully executing a checkpoint. With this\n> > approach, it might.\n> \n> Exactly. It wouldn't matter if checkpoints weren't something needed to\n> potentially bring the system back into a sane state, but ...\n\nThis really isn't that hard to address- do a superuser check, if it\npasses then just call the checkpoint function like CHECKPOINT; does\ntoday. 
Otherwise, check the perms on the function or just call the\nfunction in a manner which would check privileges, or maybe have another\npredefined role, though I continue to feel like the function based\napproach is better.\n\nIf we're actually worried about catalog corruption (and, frankly, I've\ngot some serious doubts that jumping in and running CHECKPOINT; by hand\nis a great idea if there's such active corruption) then we must use such\nan approach no matter how we allow non-superusers to run the command\nbecause any approach to that necessarily involves some amount of catalog\naccess.\n\nAny concern leveled against pg_proc applies equally to pg_auth_members\nafter all, so having it be something role-based vs. function privilege\nis really just moving deck chairs around on the titanic at that point.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 12:23:18 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-11-05 08:54:37 -0400, Robert Haas wrote:\n> > On Thu, Nov 4, 2021 at 6:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > > What about extending GRANT to allow to grant rights on commands? Yes, it'd be\n> > > a bit of work to make that work in the catalogs, but it doesn't seem too hard\n> > > to tackle.\n> > \n> > I think that there aren't too many commands where the question is just\n> > whether you can execute the command or not. CHECKPOINT is one that\n> > does work that way, but if it's VACUUM or ANALYZE the question will be\n> > whether you can run it on a particular table; if it's ALTER SYSTEM it\n> > will be whether you can run it for that GUC; and so on. CHECKPOINT is\n> > one of the few commands that has no target.\n> \n> I don't know if that's really such a big deal. 
It's useful to be able to grant\n> the right to do a system wide ANALYZE etc to a role that can't otherwise do\n> anything with the table. Even for ALTER SYSTEM etc it seems like it'd be\n> helpful, because it allows to constrain an admin tool to \"legitimate\" admin\n> paths, without allowing, say, UPDATE pg_proc.\n\nNote that it's already possible to have a non-superuser who can run\nVACUUM and ANALYZE on all non-shared tables in a database but who\notherwise isn't able to mess with the tables owned by other users- that\nis something the database owner can do.\n\nPerhaps it's useful to break that out into a separately grantable\npermission but the argument should really be why you'd want to GRANT\nsomeone the ability to VACUUM/ANALYZE an entire database while *not*\nhaving them be the database owner. That's a much more narrow use-case\nvs. having them not be a superuser or be able to do things like UPDATE\npg_proc.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 12:32:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Hi,\n\nOn 2021-11-08 12:23:18 -0500, Stephen Frost wrote:\n> If we're actually worried about catalog corruption (and, frankly, I've\n> got some serious doubts that jumping in and running CHECKPOINT; by hand\n> is a great idea if there's such active corruption)\n\nI've been there when recovering from corruption.\n\n\n> though I continue to feel like the function based approach is better.\n\nI think it's a somewhat ugly hack.\n\n\n> then we must use such an approach no matter how we allow non-superusers to\n> run the command because any approach to that necessarily involves some\n> amount of catalog access.\n\nAs long as there's no additional catalog access when the user is known to be a\nsuperuser, then I think it's fine. 
There's a difference between doing one\npg_authid read for superuser - with a fallback to automatically assuming a\nuser if one couldn't be found - and doing a full pg_proc read with several\nsubsidiary pg_type reads etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Nov 2021 09:33:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2021-Nov-04, Jeff Davis wrote:\n> > But I don't see it generalizing to a lot of commands, either. I looked\n> > at the list, and it's taking some creativity to think of more than a\n> > couple other commands where it makes sense. Maybe LISTEN/NOTIFY? But\n> > even then, there are three related commands: LISTEN, UNLISTEN, and\n> > NOTIFY. Are those one privilege representing them all, two\n> > (LISTEN/UNLISTEN, and NOTIFY), or three separate privileges?\n> \n> What about things like CREATE SUBSCRIPTION/PUBLICATION? Sounds like it\n> would be useful to allow non-superusers do those, too.\n\nAgreed. Having these be limited to superusers is unfortunate, though at\nthe time probably made sense as otherwise it would have made it that\nmuch more difficult to get logical replication in. Now is a great time\nto try and improve on that situation though. This is a bit tricky\nthough since creating a subscription means that you'll be able to cause\nsome code to be executed with higher privileges today, as I recall, and\nwe'd need to make sure to address that. If we can make sure that a\nsubscription isn't able to be used to execute code as effectively a\nsuperuser then I would think the other permission needed to create one,\nfor tables which you own, would be just a \"network access\" kind of\ncapability. 
In other words, I'm not 100% sure we need to have 'create\nsubscription' require different privileges from 'create a foreign\nserver'. Then again, having additional predefined rules isn't a huge\ncost and perhaps it would be better to avoid the confusion that\nintroducing a separate 'capabilities' kind of system would involve where\nthose capabilities cross multiple commands.\n\n> That said, if the list is short, then additional predefined roles seem\n> preferrable to having a ton of infrastructure code that might be much\n> more clutter than what seems a short list of additional predefined roles.\n\nNone of this strikes me as a 'ton of infrastructure code' and so I'm not\nquite sure I'm following the argument being made here.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 12:39:33 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On 2021-Nov-08, Stephen Frost wrote:\n\n> * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n\n> > That said, if the list is short, then additional predefined roles seem\n> > preferrable to having a ton of infrastructure code that might be much\n> > more clutter than what seems a short list of additional predefined roles.\n> \n> None of this strikes me as a 'ton of infrastructure code' and so I'm not\n> quite sure I'm following the argument being made here.\n\nI was referring specifically to Andres' idea of having additional DDL\ncommands handled as special GRANTable privileges, \nhttps://postgr.es/m/20211104224636.5qg6cfyjkw52rh4d@alap3.anarazel.de\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. 
Evans)\n\n\n", "msg_date": "Mon, 8 Nov 2021 14:45:20 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-11-08 12:23:18 -0500, Stephen Frost wrote:\n> > though I continue to feel like the function based approach is better.\n> \n> I think it's a somewhat ugly hack.\n\nI suppose we'll just have to disagree on that. :)\n\nI don't feel as strongly as others apparently do on this point though,\nand I'd rather have non-superusers able to run CHECKPOINT *somehow*\nthan not, so if the others feel like a predefined role is a better\napproach then I'm alright with that. Seems a bit overkill to me but\nit's also not like it's actually all that much code or work to do.\n\n> > then we must use such an approach no matter how we allow non-superusers to\n> > run the command because any approach to that necessarily involves some\n> > amount of catalog access.\n> \n> As long as there's no additional catalog access when the user is known to be a\n> superuser, then I think it's fine. There's a difference between doing one\n> pg_authid read for superuser - with a fallback to automatically assuming a\n> user if one couldn't be found - and doing a full pg_proc read with several\n> subsidiary pg_type reads etc.\n\nYes, the approach I'm suggesting would make the superuser-run CHECKPOINT\nbe exactly the same as it is today, while a non-superuser trying to run\na CHECKPOINT would end up doing additional catalog accesses.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 12:47:23 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2021-Nov-08, Stephen Frost wrote:\n> \n> > * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> \n> > > That said, if the list is short, then additional predefined roles seem\n> > > preferrable to having a ton of infrastructure code that might be much\n> > > more clutter than what seems a short list of additional predefined roles.\n> > \n> > None of this strikes me as a 'ton of infrastructure code' and so I'm not\n> > quite sure I'm following the argument being made here.\n> \n> I was referring specifically to Andres' idea of having additional DDL\n> commands handled as special GRANTable privileges, \n> https://postgr.es/m/20211104224636.5qg6cfyjkw52rh4d@alap3.anarazel.de\n\nAh, thanks, I had seen that but didn't quite associate it to this\ncomment.\n\nPerhaps not a surprise, but I tend to favor predefined roles for these\nkinds of things. If we do want to revamp how GRANT works, I'd argue for\nfirst splitting up the way we handle privileges to be on a\nper-object-type basis and once we did that then we could extend that to\nallow GRANT on commands more easily (and with more variety as to what\nprivileges a GRANT on a command could be). It's kind of cute to have\none bitmap covering all objects but it puts us into a place where\nextending what can be GRANT'd on one kind of object necessarily impacts\nour ability to GRANT on other kinds (eg: we have a bit reserved for\nTRUNCATE in the same bitmask for a schema as we do for a table, but we\ndon't allow TRUNCATE on schemas and probably never will).\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 12:53:44 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." 
}, { "msg_contents": "Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> On Tue, 2 Nov 2021 at 19:00, Vik Fearing <vik@postgresfriends.org> wrote:\n> > On 11/2/21 11:14 PM, Vik Fearing wrote:\n> >\n> > > This would be nice, but there is nothing to hang our hat on:\n> > >\n> > > GRANT CHECKPOINT TO username;\n> >\n> > Thinking about this more, why don't we just add CHECKPOINT and\n> > NOCHECKPOINT attributes to roles?\n> >\n> > ALTER ROLE username WITH CHECKPOINT;\n> \n> At present, this would require adding a field to pg_authid. This isn't very\n> scalable; but we're already creating new pg_* roles which give access to\n> various actions so I don't know why a role attribute would be a better\n> approach. If anything, I think it would be more likely to move in the other\n> direction: replace role attributes that in effect grant privileges with\n> predefined roles. I think this has already been discussed here in the\n> context of CREATEROLE.\n\nYes, much better to create predefined roles for this kind of thing and,\nideally, move explicitly away from role attributes.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 14:11:57 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "On Mon, 2021-11-08 at 12:47 -0500, Stephen Frost wrote:\n> \n> I don't feel as strongly as others apparently do on this point\n> though,\n> and I'd rather have non-superusers able to run CHECKPOINT *somehow*\n> than not, so if the others feel like a predefined role is a better\n> approach then I'm alright with that. Seems a bit overkill to me but\n> it's also not like it's actually all that much code or work to do.\n\n+1. 
It seems like the pg_checkpointer predefined role is the most\nacceptable to everyone (even if not universally liked).\n\nAttached a rebased version of that patch.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 08 Nov 2021 16:13:17 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Mon, 2021-11-08 at 12:47 -0500, Stephen Frost wrote:\n> > \n> > I don't feel as strongly as others apparently do on this point\n> > though,\n> > and I'd rather have non-superusers able to run CHECKPOINT *somehow*\n> > than not, so if the others feel like a predefined role is a better\n> > approach then I'm alright with that. Seems a bit overkill to me but\n> > it's also not like it's actually all that much code or work to do.\n> \n> +1. It seems like the pg_checkpointer predefined role is the most\n> acceptable to everyone (even if not universally liked).\n> \n> Attached a rebased version of that patch.\n\nThanks. 
Quick review-\n\n> diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n> index bf085aa93b2..0ff832a62c2 100644\n> --- a/src/backend/tcop/utility.c\n> +++ b/src/backend/tcop/utility.c\n> @@ -24,6 +24,7 @@\n> #include \"catalog/catalog.h\"\n> #include \"catalog/index.h\"\n> #include \"catalog/namespace.h\"\n> +#include \"catalog/pg_authid.h\"\n> #include \"catalog/pg_inherits.h\"\n> #include \"catalog/toasting.h\"\n> #include \"commands/alter.h\"\n> @@ -939,10 +940,10 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\tbreak;\n> \n> \t\tcase T_CheckPointStmt:\n> -\t\t\tif (!superuser())\n> +\t\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_CHECKPOINTER))\n> \t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t\t errmsg(\"must be superuser to do CHECKPOINT\")));\n> +\t\t\t\t\t\t errmsg(\"must be member of pg_checkpointer to do CHECKPOINT\")));\n\nMost such error messages say 'superuser or '... Also, note the recent\nthread about trying to ensure that places are using has_privs_of_role()\nas you're doing here but also say that in the error message\nconsistently, rather than 'member of' it should really be 'has\nprivileges of' as the two are not necessarily always the same. You can\nbe a member of a role but not actively have the privileges of that role.\n\nOtherwise, looks pretty good to me. I'll note that has_privs_of_role()\nwill first call superuser_arg(member), just the same as the prior\nsuperuser() check did, so this doesn't change the catalog accesses in\nthat case from today.\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 Nov 2021 11:29:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Predefined role pg_maintenance for VACUUM, ANALYZE, CHECKPOINT." } ]
[ { "msg_contents": "Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n\nCIC and REINDEX CONCURRENTLY assume backends see their catalog changes\nno later than each backend's next transaction start. That failed to\nhold when a backend absorbed a relevant invalidation in the middle of\nrunning RelationBuildDesc() on the CIC index. Queries that use the\nresulting index can silently fail to find rows. Fix this for future\nindex builds by making RelationBuildDesc() loop until it finishes\nwithout accepting a relevant invalidation. It may be necessary to\nreindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\nBack-patch to 9.6 (all supported versions).\n\nNoah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\nFreund.\n\nDiscussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/fdd965d074d46765c295223b119ca437dbcac973\n\nModified Files\n--------------\ncontrib/amcheck/t/002_cic.pl | 78 ++++++++++++++++\nsrc/backend/utils/cache/inval.c | 12 ++-\nsrc/backend/utils/cache/relcache.c | 115 +++++++++++++++++++++--\nsrc/bin/pgbench/t/001_pgbench_with_server.pl | 118 +++++++----------------\nsrc/include/utils/inval.h | 1 +\nsrc/include/utils/relcache.h | 2 +-\nsrc/test/perl/PostgresNode.pm | 134 +++++++++++++++++++++++++++\nsrc/tools/pgindent/typedefs.list | 1 +\n8 files changed, 368 insertions(+), 93 deletions(-)", "msg_date": "Sun, 24 Oct 2021 01:40:12 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On 10/24/21 03:40, Noah Misch wrote:\n> Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n> \n> CIC and REINDEX CONCURRENTLY assume backends see their catalog changes\n> no later than each backend's next transaction start. 
That failed to\n> hold when a backend absorbed a relevant invalidation in the middle of\n> running RelationBuildDesc() on the CIC index. Queries that use the\n> resulting index can silently fail to find rows. Fix this for future\n> index builds by making RelationBuildDesc() loop until it finishes\n> without accepting a relevant invalidation. It may be necessary to\n> reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\n> Back-patch to 9.6 (all supported versions).\n> \n> Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\n> Freund.\n> \n> Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n> \n\nUnfortunately, this seems to have broken CLOBBER_CACHE_ALWAYS builds. \nSince this commit, initdb never completes due to infinite retrying over \nand over (on the first RelationBuildDesc call).\n\nWe have a CLOBBER_CACHE_ALWAYS buildfarm machine \"avocet\", and that \ncurrently looks like this (top):\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ \nCOMMAND\n 2626 buildfa+ 20 0 202888 21416 20084 R 98.34 0.531 151507:16 \n/home/buildfarm/avocet/buildroot/REL9_6_STABLE/pgsql.build/tmp_install/home/buildfarm/avocet/buildroot/REL9_6_STABLE/inst/bin/postgres \n--boot -x1 -F\n\nYep, that's 151507 minutes, i.e. 
104 days in initdb :-/\n\n\nI haven't looked at this very closely yet, but it seems the whole \nproblem is we do this at the very beginning:\n\n in_progress_list[in_progress_offset].invalidated = false;\n\n /*\n * find the tuple in pg_class corresponding to the given relation id\n */\n pg_class_tuple = ScanPgRelation(targetRelId, true, false);\n\nwhich seems entirely self-defeating, because ScanPgRelation acquires a \nlock (on pg_class), which accepts invalidations, which invalidates \nsystem caches (in clobber_cache_always), which promptly sets\n\n in_progress_list[in_progress_offset].invalidated = true;\n\nguaranteeing an infinite loop.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 8 Feb 2022 22:13:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "Hi,\n\nOn 2022-02-08 22:13:01 +0100, Tomas Vondra wrote:\n> On 10/24/21 03:40, Noah Misch wrote:\n> > Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n> > \n> > CIC and REINDEX CONCURRENTLY assume backends see their catalog changes\n> > no later than each backend's next transaction start. That failed to\n> > hold when a backend absorbed a relevant invalidation in the middle of\n> > running RelationBuildDesc() on the CIC index. Queries that use the\n> > resulting index can silently fail to find rows. Fix this for future\n> > index builds by making RelationBuildDesc() loop until it finishes\n> > without accepting a relevant invalidation. 
It may be necessary to\n> > reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\n> > Back-patch to 9.6 (all supported versions).\n> > \n> > Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\n> > Freund.\n> > \n> > Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n> > \n> \n> Unfortunately, this seems to have broken CLOBBER_CACHE_ALWAYS builds. Since\n> this commit, initdb never completes due to infinite retrying over and over\n> (on the first RelationBuildDesc call).\n\nUgh. Do we need to do something about WRT the next set of minor releases? Is\nthere a chance of this occurring in \"real\" workloads?\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 8 Feb 2022 16:43:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On Tue, Feb 08, 2022 at 04:43:47PM -0800, Andres Freund wrote:\n> Ugh. Do we need to do something about WRT the next set of minor\n> releases?\n\nThe set of minor releases of this week has already been stamped, so\nthat's too late :/\n\n> Is there a chance of this occurring in \"real\" workloads?\n\nUgh++. The problem is that we would not really detect that\nautomatically, isn't it?\n--\nMichael", "msg_date": "Wed, 9 Feb 2022 10:23:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "\n\nOn 2/9/22 01:43, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-08 22:13:01 +0100, Tomas Vondra wrote:\n>> On 10/24/21 03:40, Noah Misch wrote:\n>>> Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n>>>\n>>> CIC and REINDEX CONCURRENTLY assume backends see their catalog changes\n>>> no later than each backend's next transaction start. 
That failed to\n>>> hold when a backend absorbed a relevant invalidation in the middle of\n>>> running RelationBuildDesc() on the CIC index. Queries that use the\n>>> resulting index can silently fail to find rows. Fix this for future\n>>> index builds by making RelationBuildDesc() loop until it finishes\n>>> without accepting a relevant invalidation. It may be necessary to\n>>> reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\n>>> Back-patch to 9.6 (all supported versions).\n>>>\n>>> Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\n>>> Freund.\n>>>\n>>> Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n>>>\n>>\n>> Unfortunately, this seems to have broken CLOBBER_CACHE_ALWAYS builds. Since\n>> this commit, initdb never completes due to infinite retrying over and over\n>> (on the first RelationBuildDesc call).\n> \n> Ugh. Do we need to do something about WRT the next set of minor releases? Is\n> there a chance of this occurring in \"real\" workloads?\n> \n\nAFAICS this only affects builds with CLOBBER_CACHE_ALWAYS, and anyone \nrunning such build in production clearly likes painful things anyway.\n\nBut really, for the infinite loop to happen, building a relation \ndescriptor has to invalidate a cache. And I haven't found a way to do \nthat without the CLOBBER_CACHE_ALWAYS thing.\n\nAlso, all the November minor releases include this commit, and there \nwere no reports about this (pretty obvious) issue. 
Buildfarm did not \ncomplain either (but an animal may be stuck for months and we would not \nknow about it).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Feb 2022 02:25:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "Hi,\n\nOn 2022-02-09 10:23:06 +0900, Michael Paquier wrote:\n> On Tue, Feb 08, 2022 at 04:43:47PM -0800, Andres Freund wrote:\n> > Ugh. Do we need to do something about WRT the next set of minor\n> > releases?\n> \n> The set of minor releases of this week has already been stamped, so\n> that's too late :/\n\nIt's stamped, not tagged, so we could send out new tarballs. Or we could skip\na release number. IIRC we had to do something along those lines before.\n\n\n> Ugh++. The problem is that we would not really detect that\n> automatically, isn't it?\n\nWhat do you mean with detect here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Feb 2022 17:43:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "Hi,\n\nOn 2022-02-09 02:25:09 +0100, Tomas Vondra wrote:\n> AFAICS this only affects builds with CLOBBER_CACHE_ALWAYS, and anyone\n> running such build in production clearly likes painful things anyway.\n\nYea, realistically nobody does that.\n\n\n> But really, for the infinite loop to happen, building a relation descriptor\n> has to invalidate a cache. And I haven't found a way to do that without the\n> CLOBBER_CACHE_ALWAYS thing.\n\nPhew.\n\n\n> Also, all the November minor releases include this commit, and there were no\n> reports about this (pretty obvious) issue. 
Buildfarm did not complain either\n> (but an animal may be stuck for months and we would not know about it).\n\nAh, somehow I thought that wasn't yet in the last set of releases. Phew #2.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Feb 2022 17:53:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On Tue, Feb 08, 2022 at 04:43:47PM -0800, Andres Freund wrote:\n> On 2022-02-08 22:13:01 +0100, Tomas Vondra wrote:\n> > On 10/24/21 03:40, Noah Misch wrote:\n> > > Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n> > > \n> > > CIC and REINDEX CONCURRENTLY assume backends see their catalog changes\n> > > no later than each backend's next transaction start. That failed to\n> > > hold when a backend absorbed a relevant invalidation in the middle of\n> > > running RelationBuildDesc() on the CIC index. Queries that use the\n> > > resulting index can silently fail to find rows. Fix this for future\n> > > index builds by making RelationBuildDesc() loop until it finishes\n> > > without accepting a relevant invalidation. It may be necessary to\n> > > reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\n> > > Back-patch to 9.6 (all supported versions).\n> > > \n> > > Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\n> > > Freund.\n> > > \n> > > Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n> > > \n> > \n> > Unfortunately, this seems to have broken CLOBBER_CACHE_ALWAYS builds. Since\n> > this commit, initdb never completes due to infinite retrying over and over\n> > (on the first RelationBuildDesc call).\n\nThanks for the report. I had added the debug_discard arguments of\nInvalidateSystemCachesExtended() and RelationCacheInvalidate() to make the new\ncode survive a CREATE TABLE at debug_discard_caches=5. 
Apparently that's not\nenough for initdb. I'll queue a task to look at it.\n\nIt's a good reminder to set wait_timeout on buildfarm animals. (I should take\nthat advice, too.)\n\n> Ugh. Do we need to do something about WRT the next set of minor releases?\n\nNo, given that this code already debuted in the November releases.\n\n\n", "msg_date": "Tue, 8 Feb 2022 18:04:03 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On Tue, Feb 08, 2022 at 05:43:34PM -0800, Andres Freund wrote:\n> It's stamped, not tagged, so we could send out new tarballs. Or we could skip\n> a release number. IIRC we had to do something along those lines before.\n\nIt does not matter now, but the release is stamped and tagged.\n\n> What do you mean with detect here?\n\nWell, we would not be able to see that something is stuck by default,\nbut Noah has just answered to my question by mentioning wait_timeout\nin the buildfarm configuration.\n--\nMichael", "msg_date": "Wed, 9 Feb 2022 11:24:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On Tue, Feb 08, 2022 at 06:04:03PM -0800, Noah Misch wrote:\n> On Tue, Feb 08, 2022 at 04:43:47PM -0800, Andres Freund wrote:\n> > On 2022-02-08 22:13:01 +0100, Tomas Vondra wrote:\n> > > On 10/24/21 03:40, Noah Misch wrote:\n> > > > Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.\n> > > > \n> > > > CIC and REINDEX CONCURRENTLY assume backends see their catalog changes\n> > > > no later than each backend's next transaction start. That failed to\n> > > > hold when a backend absorbed a relevant invalidation in the middle of\n> > > > running RelationBuildDesc() on the CIC index. 
Queries that use the\n> > > > resulting index can silently fail to find rows. Fix this for future\n> > > > index builds by making RelationBuildDesc() loop until it finishes\n> > > > without accepting a relevant invalidation. It may be necessary to\n> > > > reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.\n> > > > Back-patch to 9.6 (all supported versions).\n> > > > \n> > > > Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres\n> > > > Freund.\n> > > > \n> > > > Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com\n> > > > \n> > > \n> > > Unfortunately, this seems to have broken CLOBBER_CACHE_ALWAYS builds. Since\n> > > this commit, initdb never completes due to infinite retrying over and over\n> > > (on the first RelationBuildDesc call).\n> \n> Thanks for the report. I had added the debug_discard arguments of\n> InvalidateSystemCachesExtended() and RelationCacheInvalidate() to make the new\n> code survive a CREATE TABLE at debug_discard_caches=5. Apparently that's not\n> enough for initdb. I'll queue a task to look at it.\n\nThe explanation was more boring than that. v13 and earlier have an additional\nInvalidateSystemCaches() call site, which I neglected to update. Here's the\nfix I intend to push.", "msg_date": "Tue, 8 Feb 2022 21:41:41 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Feb 08, 2022 at 05:43:34PM -0800, Andres Freund wrote:\n>> It's stamped, not tagged, so we could send out new tarballs. Or we could skip\n>> a release number. 
IIRC we had to do something along those lines before.\n\n> It does not matter now, but the release is stamped and tagged.\n\nYeah, I see no need to do anything about this on an emergency\nbasis.\n\n>> What do you mean with detect here?\n\n> Well, we would not be able to see that something is stuck by default,\n> but Noah has just answered to my question by mentioning wait_timeout\n> in the buildfarm configuration.\n\nThe buildfarm's wait_timeout option isn't that helpful here, because\nwhen it triggers, the client just goes belly-up *with no report*.\nSo even if the CCA animals had it on, you'd not notice unless you\nstarted to wonder why they'd not reported lately.\n\nI think that's a bug that ought to be fixed. I do agree that\nwait_timeout ought to be finite by default, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 00:44:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" }, { "msg_contents": "On 2/9/22 06:41, Noah Misch wrote:\n>\n> The explanation was more boring than that. v13 and earlier have an additional\n> InvalidateSystemCaches() call site, which I neglected to update. Here's the\n> fix I intend to push.\n\nI tried this patch on 10 and 13, and it seems to fix the issue. So +1.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Feb 2022 16:27:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid race in RelationBuildDesc() affecting CREATE INDEX\n CONCURR" } ]
[ { "msg_contents": "Hi everyone. I want to do some feature request regarding indexes, as far as\nI know this kind of functionality doesn't exists in Postgres. Here is my\nproblem :\nI need to create following indexes:\n        Create index job_nlp_year_scan on ingest_scans_stageing\n(`job`,`nlp`,`year`,`scan_id`);\n        Create index job_nlp_year_issue_flag on ingest_scans_stageing\n(`job`,`nlp`,`year`,`issue_flag`);\n        Create index job_nlp_year_sequence on ingest_scans_stageing\n(`job`,`nlp`,`year`,`sequence`);\nAs you can see the first 3 columns are the same (job, nlp, year). so if I\ncreate 3 different indexes db should manage same job_nlp_year structure 3\ntimes.\nThe Data Structure that I think which can be efficient in this kind of\nscenarios is to have 'Adaptive Index'  which will be something like\nCreate index job_nlp_year on ingest_scans_stageing\n(`job`,`nlp`,`year`,(`issue_flag`,`scan_id`, `sequence`));\nAnd depend on query it will use or job_nlp_year_scan  or\njob_nlp_year_issue_flag , or job_nlp_year_sequence ( job, nlp, year and one\nof ( `issue_flag` , `scan_id` ,  `sequence` )\nFor more description please feel free to refer me", "msg_date": "Mon, 25 Oct 2021 18:07:18 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Feature request for adoptive indexes" },
{ "msg_contents": "Hi,\n\nOn 10/25/21 16:07, Hayk Manukyan wrote:\n> Hi everyone. I want to do some feature request regarding indexes, as far as\n> I know this kind of functionality doesn't exists in Postgres. Here is my\n> problem :\n> I need to create following indexes:\n>         Create index job_nlp_year_scan on ingest_scans_stageing\n> (`job`,`nlp`,`year`,`scan_id`);\n>         Create index job_nlp_year_issue_flag on ingest_scans_stageing\n> (`job`,`nlp`,`year`,`issue_flag`);\n>         Create index job_nlp_year_sequence on ingest_scans_stageing\n> (`job`,`nlp`,`year`,`sequence`);\n> As you can see the first 3 columns are the same (job, nlp, year). 
so if I\n> create 3 different indexes db should manage same job_nlp_year structure 3\n> times.\n> The Data Structure that I think which can be efficient in this kind of\n> scenarios is to have 'Adaptive Index'  which will be something like\n> Create index job_nlp_year on ingest_scans_stageing\n> (`job`,`nlp`,`year`,(`issue_flag`,`scan_id`, `sequence`));\n> And depend on query it will use or job_nlp_year_scan  or\n> job_nlp_year_issue_flag , or job_nlp_year_sequence ( job, nlp, year and one\n> of ( `issue_flag` , `scan_id` ,  `sequence` )\n> For more description please feel free to refer me\n\nIt's not very clear what exactly would the \"adaptive index\" do, except \nthat it'd have all three columns. Clearly, the three columns can't be \nconsidered for ordering etc. but need to be in the index somehow. So why \nwouldn't it be enough to either to create an index with all six columns?\n\nCREATE INDEX ON job_nlp_year_scan (job, nlp, year, scan_id, issue_flag, \nsequence);\n\nor possibly with the columns just \"included\" in the index:\n\nCREATE INDEX ON job_nlp_year_scan (job, nlp, year) INCLUDE (scan_id, \nissue_flag, sequence);\n\nIf this does not work, you either need to explain more clearly what \nexactly the adaptive indexes does, or show queries that can't benefit \nfrom these existing features.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 25 Oct 2021 17:33:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "ok. 
here is the deal if I have the following index with 6 column\n\nCREATE INDEX ON job_nlp_year_scan (job, nlp, year, scan_id, issue_flag,\nsequence);\n\nI need to specify all 6 columns in where clause in order to fully use this\nindex.\nIt will not be efficient in cases when I have 4 condition in where clause\nalso I should follow the order of columns.\nIn case of INCLUDE the 3 columns just will be in index but will not be\nstructured as index so it will have affect only if In select I will have\nthat 6 columns nothing more.\n\nIn my case I have table with ~15 columns\nIn my application I have to do a lot of queries with following where\nclauses\n\n1. where job = <something> and nlp = <something> and year = <something>\nand SCAN_ID = <something>\n2. where job = <something> and nlp = <something> and year = <something>\nand ISSUE_FLAG = <something>\n3. where job = <something> and nlp = <something> and year = <something>\nand SEQUENCE = <something>\n\nI don't want to index just on job, nlp, year because for each job, nlp,\nyear I have approximately 5000-7000 rows ,\noverall table have ~50m rows so it is partitioned by job as well. So if I\nbuild 3 separate indexes it will be huge resource.\nSo I am thinking of having one index which will be job, nlp, year and the\n4-th layer will be other columns not just included but also in B-tree\nstructure.\nTo visualize it will be something like this:\n[image: image.png]\nThe red part is ordinary index with nested b-trees ant the yellow part is\nadaptive part so depends on\nwhere clause optimizer can decide which direction (leaf, b-tree whatever)\nto chose.\nIn this case I will have one index and will manage red part only once for\nall three cases.\nThose it make sense ?\nIf you need more discussion we can have short call I will try to explain\nyou in more detailed way.\n\nbest regards\n\nпн, 25 окт. 2021 г. 
в 19:33, Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\n> Hi,\n>\n> On 10/25/21 16:07, Hayk Manukyan wrote:\n> > Hi everyone. I want to do some feature request regarding indexes, as far\n> as\n> > I know this kind of functionality doesn't exists in Postgres. Here is my\n> > problem :\n> > I need to create following indexes:\n> > Create index job_nlp_year_scan on ingest_scans_stageing\n> > (`job`,`nlp`,`year`,`scan_id`);\n> > Create index job_nlp_year_issue_flag on ingest_scans_stageing\n> > (`job`,`nlp`,`year`,`issue_flag`);\n> > Create index job_nlp_year_sequence on ingest_scans_stageing\n> > (`job`,`nlp`,`year`,`sequence`);\n> > As you can see the first 3 columns are the same (job, nlp, year). so if I\n> > create 3 different indexes db should manage same job_nlp_year structure 3\n> > times.\n> > The Data Structure that I think which can be efficient in this kind of\n> > scenarios is to have 'Adaptive Index' which will be something like\n> > Create index job_nlp_year on ingest_scans_stageing\n> > (`job`,`nlp`,`year`,(`issue_flag`,`scan_id`, `sequence`));\n> > And depend on query it will use or job_nlp_year_scan or\n> > job_nlp_year_issue_flag , or job_nlp_year_sequence ( job, nlp, year and\n> one\n> > of ( `issue_flag` , `scan_id` , `sequence` )\n> > For more description please feel free to refer me\n>\n> It's not very clear what exactly would the \"adaptive index\" do, except\n> that it'd have all three columns. Clearly, the three columns can't be\n> considered for ordering etc. but need to be in the index somehow. 
So why\n> wouldn't it be enough to either to create an index with all six columns?\n>\n> CREATE INDEX ON job_nlp_year_scan (job, nlp, year, scan_id, issue_flag,\n> sequence);\n>\n> or possibly with the columns just \"included\" in the index:\n>\n> CREATE INDEX ON job_nlp_year_scan (job, nlp, year) INCLUDE (scan_id,\n> issue_flag, sequence);\n>\n> If this does not work, you either need to explain more clearly what\n> exactly the adaptive indexes does, or show queries that can't benefit\n> from these existing features.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Tue, 26 Oct 2021 10:49:31 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\nOn 10/26/21 8:49 AM, Hayk Manukyan wrote:\n> ok. here is the deal if I have the following index with 6 column\n> \n> CREATE INDEX ON job_nlp_year_scan (job, nlp, year, scan_id, issue_flag,\n> sequence);\n> \n> I need to specify all 6 columns in where clause in order to fully use\n> this index.\n\nWhat do you mean by \"fully use this index\"? Yes, the query may use just\nsome of the columns and there will be a bit of overhead, but I doubt\nit'll be measurable.\n\n> It will not be efficient in cases when I have 4 condition in where\n> clause also I should follow the order of columns.\n\nSo, do some experiments and show us what the difference is. Create an\nindex on the 4 and 6 columns, and measure timings for a query with just\nthe 4 columns.\n\n> In case of INCLUDE the 3 columns just will be in index but will not be\n> structured as index so it will have affect only if In select I will have\n> that 6 columns nothing more.\n> \n> In my case I have table with ~15 columns\n> In my application  I have to do a lot of queries with following where\n> clauses \n> \n> 1. 
where  job = <something> and nlp = <something> and year = <something>\n> and SCAN_ID = <something>\n> 2. where  job = <something> and nlp = <something> and year = <something>\n> and ISSUE_FLAG = <something>\n> 3. where  job = <something> and nlp = <something> and year = <something>\n> and SEQUENCE = <something>\n> \n> I don't want to index just on  job, nlp, year because for each  job,\n> nlp, year I have approximately 5000-7000 rows ,\n> overall table have ~50m rows so it is partitioned by job as well.  So if\n> I build 3 separate indexes it will be huge resource.\n> So I am thinking of having one index which will be job, nlp, year and\n> the 4-th layer will be other columns not just included but also in\n> B-tree structure. \n> To visualize it will be something like this:\n> image.png\n> The red part is ordinary index with nested b-trees ant the yellow part\n> is adaptive part so depends on\n> where clause optimizer can decide which direction (leaf, b-tree\n> whatever) to chose.\n> In this case I will have one index and will manage red part only once\n> for all three cases.\n> Those it make sense ? \n\nIf I get what you propose, you want to have a \"top\" tree for (job, nlp,\nyear), which \"splits\" the data set into subsets of ~5000-7000 rows. And\nthen for each subset you want a separate \"small\" trees on each of the\nother columns, so in this case three trees.\n\nWell, the problem with this is pretty obvious - each of the small trees\nrequires separate copies of the leaf pages. And remember, in a btree the\ninternal pages are usually less than 1% of the index, so this pretty\nmuch triples the size of the index. 
And if you insert a row into the\nindex, it has to insert the item pointer into each of the small trees,\nlikely requiring a separate I/O for each.\n\nSo I'd bet this is not any different from just having three separate\nindexes - it doesn't save space, doesn't save I/O, nothing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 26 Oct 2021 17:08:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "I've already answered OP but it seems in the wrong thread, so I copy it\nhere:\n\nI think now in many cases you can effectively use covering index to have\nfast index-only scans without index duplication. It will help if you don't\nhave great selectivity on the last column (most probably you don't). E.g.:\n\nCREATE INDEX ON table_name (`job`,`nlp`,`year`) INCLUDE (`scan_id`,\n`issue_flag`, `sequence`)\n\nBut I consider the feature can be useful when there is a very little\nselectivity in the first index columns. I.e. if (job`,`nlp`,`year') has\nmany repeats and the most selection is done in the last column. I am not\nsure how often this can arise but in general, I see it as a useful b-tree\ngeneralization.\n\nI'm not sure how it should be done. In my view, we need to add an ordered\nposting tree as a leaf element if b-tree and now we have index storage only\nfor tuples. The change of on-disk format was previously not easy in nbtree\nand if we consider the change, we need an extra bit to mark posting trees\namong index tuples. Maybe it could be done in a way similar to deduplicated\ntuples if some bits in the tuple header are still could be freed.\n\nThoughts?\n\nIf I get what you propose, you want to have a \"top\" tree for (job, nlp,\n> year), which \"splits\" the data set into subsets of ~5000-7000 rows. 
And\n> then for each subset you want a separate \"small\" trees on each of the\n> other columns, so in this case three trees.\n>\n> Well, the problem with this is pretty obvious - each of the small trees\n> requires separate copies of the leaf pages. And remember, in a btree the\n> internal pages are usually less than 1% of the index, so this pretty\n> much triples the size of the index. And if you insert a row into the\n> index, it has to insert the item pointer into each of the small trees,\n> likely requiring a separate I/O for each.\n>\n> So I'd bet this is not any different from just having three separate\n> indexes - it doesn't save space, doesn't save I/O, nothing.\n>\n\nTomas, I really think we should not try realizing this feature using\nexisting index pages that contain only tuples. You are right, it will cause\nlarge overhead. If instead we decide and succeed in creating \"posting\ntrees\" as a new on-disk page entry type we can have an index with space\ncomparable to the abovementioned covering index but with sorting of values\nin these trees (i.e. all values are sorted, and \"key\" ones).\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nI've already answered OP but it seems in the wrong thread, so I copy it here:I think now in many cases you can effectively use covering index to have fast index-only scans without index duplication. It will help if you don't have great selectivity on the last column (most probably you don't). E.g.:CREATE INDEX ON table_name (`job`,`nlp`,`year`) INCLUDE (`scan_id`, `issue_flag`, `sequence`)But I consider the feature can be useful when there is a very little selectivity in the first index columns. I.e. if (job`,`nlp`,`year') has many repeats and the most selection is done in the last column. I am not sure how often this can arise but in general, I see it as a useful b-tree generalization.I'm not sure how it should be done. 
In my view, we need to add an ordered posting tree as a leaf element if b-tree and now we have index storage only for tuples. The change of on-disk format was previously not easy in nbtree and if we consider the change, we need an extra bit to mark posting trees among index tuples. Maybe it could be done in a way similar to deduplicated tuples if some bits in the tuple header are still could be freed.Thoughts? If I get what you propose, you want to have a \"top\" tree for (job, nlp,\nyear), which \"splits\" the data set into subsets of ~5000-7000 rows. And\nthen for each subset you want a separate \"small\" trees on each of the\nother columns, so in this case three trees.\n\nWell, the problem with this is pretty obvious - each of the small trees\nrequires separate copies of the leaf pages. And remember, in a btree the\ninternal pages are usually less than 1% of the index, so this pretty\nmuch triples the size of the index. And if you insert a row into the\nindex, it has to insert the item pointer into each of the small trees,\nlikely requiring a separate I/O for each.\n\nSo I'd bet this is not any different from just having three separate\nindexes - it doesn't save space, doesn't save I/O, nothing. Tomas, I really think we should not try realizing this feature using existing index pages that contain only tuples. You are right, it will cause large overhead. If instead we decide and succeed in creating \"posting trees\" as a new on-disk page entry type we can have an index with space comparable to the abovementioned covering index but with sorting of values in these trees (i.e. 
all values are sorted, and \"key\" ones).-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 26 Oct 2021 23:39:20 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\nOn 10/26/21 21:39, Pavel Borisov wrote:\n> I've already answered OP but it seems in the wrong thread, so I copy it \n> here:\n> \n> I think now in many cases you can effectively use covering index to have \n> fast index-only scans without index duplication. It will help if you \n> don't have great selectivity on the last column (most probably you \n> don't). E.g.:\n> \n> CREATE INDEX ON table_name (`job`,`nlp`,`year`) INCLUDE (`scan_id`, \n> `issue_flag`, `sequence`)\n> \n> But I consider the feature can be useful when there is a very little \n> selectivity in the first index columns. I.e. if (job`,`nlp`,`year') has \n> many repeats and the most selection is done in the last column. I am not \n> sure how often this can arise but in general, I see it as a useful \n> b-tree generalization.\n> \n> I'm not sure how it should be done. In my view, we need to add an \n> ordered posting tree as a leaf element if b-tree and now we have index \n> storage only for tuples. The change of on-disk format was previously not \n> easy in nbtree and if we consider the change, we need an extra bit to \n> mark posting trees among index tuples. Maybe it could be done in a way \n> similar to deduplicated tuples if some bits in the tuple header are \n> still could be freed.\n> \n> Thoughts?\n> \n> If I get what you propose, you want to have a \"top\" tree for (job, nlp,\n> year), which \"splits\" the data set into subsets of ~5000-7000 rows. 
And\n> then for each subset you want a separate \"small\" trees on each of the\n> other columns, so in this case three trees.\n> \n> Well, the problem with this is pretty obvious - each of the small trees\n> requires separate copies of the leaf pages. And remember, in a btree the\n> internal pages are usually less than 1% of the index, so this pretty\n> much triples the size of the index. And if you insert a row into the\n> index, it has to insert the item pointer into each of the small trees,\n> likely requiring a separate I/O for each.\n> \n> So I'd bet this is not any different from just having three separate\n> indexes - it doesn't save space, doesn't save I/O, nothing.\n> \n> Tomas, I really think we should not try realizing this feature using \n> existing index pages that contain only tuples. You are right, it will \n> cause large overhead. If instead we decide and succeed in creating \n> \"posting trees\" as a new on-disk page entry type we can have an index \n> with space comparable to the abovementioned covering index but with \n> sorting of values in these trees (i.e. all values are sorted, and \"key\" \n> ones).\n> \n\nWell, there was no explanation about how it could/should be implemented, \nand maybe there is some elaborate way to handle the \"posting trees\" that \nI can't quite think of (at least not in the btree context).\n\nI'm still rather skeptical about it - for such feature to be useful the \nprefix columns must not be very selective, i.e. the posting trees are \nexpected to be fairly large (e.g. 5-7k rows). It pretty much has to to \nrequire multiple (many) index pages, in order for the \"larger\" btree \nindex to be slower. 
And at that point I'd expect the extra overhead to \nbe worse than simply defining multiple simple indexes.\n\nA simple experiment would be to measure timing for queries with a \ncondition on \"sequence\" using two indexes:\n\n1) (job, nlp, year, sequence)\n2) (job, nlp, year, scan_id, issue_flag, sequence)\n\nThe (1) index is \"optimal\" i.e. there's unlikely to be a better index \nfor this query, at least no tree-like. (2) is something like the \"worst\" \ncase index that we can use for this query.\n\nFor the new feature to be useful, two things would need to be true:\n\n* query with (2) is much slower than (1)\n* the new index would need to be close to (1)\n\nObviously, if the new index is slower than (2), it's mostly useless \nright off the bat. And it probably can't be faster than (1) in practice, \nas it still is basically a btree index (at least the top half).\n\nSo I'd expect the performance to be somewhere between (1) and (2), but \nif (2) is very close to (1) - which I'd bet it is - then the potential \nbenefit is also pretty small.\n\nPerhaps I'm entirely wrong and there's a new type of index, better \nsuited for cases similar to this. The \"posting tree\" reference actually \nmade me thinking that maybe btree_gin might be applicable here?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 26 Oct 2021 22:43:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\n> On Oct 26, 2021, at 1:43 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I'm still rather skeptical about it - for such feature to be useful the prefix columns must not be very selective, i.e. the posting trees are expected to be fairly large (e.g. 5-7k rows). It pretty much has to to require multiple (many) index pages, in order for the \"larger\" btree index to be slower. 
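The best-/worst-case experiment described above can be sketched as follows (the table definition and the column types are assumptions for illustration, not the thread's actual schema):

```sql
-- Hypothetical schema standing in for the OP's table; types are assumptions.
CREATE TABLE scans (
    job        int,
    nlp        int,
    year       int,
    scan_id    int,
    issue_flag int,
    sequence   int
);

-- (1) "best case": the filtered column is the last key column.
CREATE INDEX scans_best ON scans (job, nlp, year, sequence);

-- (2) "worst case": all six columns as btree key columns.
CREATE INDEX scans_worst ON scans (job, nlp, year, scan_id, issue_flag, sequence);

-- Query to time against each index (drop the other index between runs,
-- so the planner cannot silently pick it):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM scans
WHERE job = 1 AND nlp = 2 AND year = 2021 AND sequence = 12345;
```

If (2) lands close to (1), the gap a new index type could close is correspondingly small.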
And at that point I'd expect the extra overhead to be worse than simply defining multiple simple indexes.\n\nFor three separate indexes, an update or delete of a single row in the indexed table would surely require changing at least three pages in the indexes. For some as-yet-ill-defined combined index type, perhaps the three entries in the index would fall on the same index page often enough to reduce the I/O cost of the action? This is all hard to contemplate without a more precise description of the index algorithm.\n\nPerhaps the OP might want to cite a paper describing a particular index algorithm for us to review?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 15:08:37 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> For three separate indexes, an update or delete of a single row in the indexed table would surely require changing at least three pages in the indexes. For some as-yet-ill-defined combined index type, perhaps the three entries in the index would fall on the same index page often enough to reduce the I/O cost of the action?\n\nOf course, we have that today from the solution of one index with the\nextra columns \"included\". I think the OP has completely failed to make\nany case why that's not a good enough approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 18:45:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "On Tue, Oct 26, 2021 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Of course, we have that today from the solution of one index with the\n> extra columns \"included\". 
I think the OP has completely failed to make\n> any case why that's not a good enough approach.\n\nI think that the design that the OP is talking about (adaptive\nindexes, AKA merged indexes with master detail clustering) have been\nthe subject of certain research papers. As far as I know nothing like\nthat has ever been implemented in a real DB system.\n\nIt seems like a daunting project, primarily because of the concurrency\ncontrol considerations. It's no coincidence that GIN indexes (which\nhave some of the same issues) only support lossy index scans. Lossy\nscans don't seem to be compatible with adaptive indexes, since the\nwhole point is to have multiple distinct \"logical indexes\" with a\ncommon prefix, but only one physical index, with clustering. I think\nyou'd need something like ARIES KVL for concurrency control, just for\nstarters. Even that is something that won't work with anything like\ncurrent Postgres.\n\nIt's roughly the same story that we see with generalizing TIDs at the\ntableam level. People tend to imagine that it's basically just a\nmatter of coming up with the right index AM data structure, but that's\nactually just the easy part.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 26 Oct 2021 16:30:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": ">\n> It's no coincidence that GIN indexes (which\n> have some of the same issues) only support lossy index scans.\n>\nAFAIK Gin is lossy for phrase queries as we don't store word position in\nthe posting list. For purely logical queries, where position doesn't\nmatter, it's not lossy.\n\nOne more consideration against the proposal is that if we want to select\nwith more than one \"suffix\" columns in the WHERE clause, effectively we\nwill have a join of two separate index scans. 
And as we consider suffix\ncolumns to be highly selective, and prefix columns are weakly selective,\nthen it could be very slow.\n\nJust some ideas on the topic which may not be connected to OP proposal (Not\nsure whether should we implement them as a part of nbtree) :\n\n1. If prefix columns have low selectivity it may be good if we have some\nattribute-level deduplication only for prefix columns.\n2. If we have several suffix columns, it might be a good idea is to treat\nthem as an n-dimensional space and define some R-tree or Quad-tree on top\nof them (using GiST, SpGIST).\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 27 Oct 2021 12:02:23 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "On Wed, Oct 27, 2021 at 1:02 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> AFAIK Gin is lossy for phrase queries as we don't store word position in the posting list. For purely logical queries, where position doesn't matter, it's not lossy.\n\nGIN is always lossy, in the sense that it provides only a\ngingetbitmap() routine -- there is no gingettuple() routine. I believe\nthat this is fundamental to the overall design of GIN. It would be\nvery difficult to add useful gingettuple() functionality now, since\nGIN already relies on lossiness to avoid race conditions.\n\nHere's an example of the problems that \"adding gingettuple()\" would\nrun into: Today, an index's pending list entries can be merged\nconcurrently with the entry tree, without worrying about returning the\nsame tuples twice. This is only safe/correct because GIN only supports\nbitmap index scans. 
Without that, you need some other mechanism to\nmake it safe -- ISTM you must \"logically lock\" the index structure,\nusing ARIES/KVL style key value locks, or something along those lines.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Oct 2021 09:10:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "Hi all\nFirst of all thank you all for fast and rich responses, that is really nice.\nI don't have that deep knowledge of how postgres works under the hood so I\nwill try to explain more user side.\nI want to refer for some points mentioned above.\n - First INCLUDE statement mostly eliminates the necessity to refer to a\nclustered index or table to get columns that do not exist in the index. So\nfiltering upon columns in INCLUDE statement will not be performant. It can\ngive some very little performance if we include additional columns but it\nis not in level to compare with indexed one. I believe this not for this\ncase\n- Tomas Vondra's Assumption that adaptive should be something between this\ntwo\n1) (job, nlp, year, sequence)\n2) (job, nlp, year, scan_id, issue_flag, sequence)\nis completely valid. I have made fairly small demo with this index\ncomparison and as I can see the difference is noticeable. Here is git repo\nand results\n<https://github.com/HaykManukyanAvetiky/index_comparition/blob/main/results.md>\n,\nI had no much time to do significant one sorry for that ))\n - regarding data structure side of things by Pavel Borisov.\nI also think that different data structure will be needed. Not sure exactly\nat this point which kind of data structure but I will try to explain it\nhere.\n<https://github.com/HaykManukyanAvetiky/index_comparition/blob/main/data_structure.md>\n\nbest regards\n\n\nср, 27 окт. 2021 г. 
в 20:10, Peter Geoghegan <pg@bowt.ie>:\n\n> On Wed, Oct 27, 2021 at 1:02 AM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> > AFAIK Gin is lossy for phrase queries as we don't store word position in\n> the posting list. For purely logical queries, where position doesn't\n> matter, it's not lossy.\n>\n> GIN is always lossy, in the sense that it provides only a\n> gingetbitmap() routine -- there is no gingettuple() routine. I believe\n> that this is fundamental to the overall design of GIN. It would be\n> very difficult to add useful gingettuple() functionality now, since\n> GIN already relies on lossiness to avoid race conditions.\n>\n> Here's an example of the problems that \"adding gingettuple()\" would\n> run into: Today, an index's pending list entries can be merged\n> concurrently with the entry tree, without worrying about returning the\n> same tuples twice. This is only safe/correct because GIN only supports\n> bitmap index scans. Without that, you need some other mechanism to\n> make it safe -- ISTM you must \"logically lock\" the index structure,\n> using ARIES/KVL style key value locks, or something along those lines.\n>\n> --\n> Peter Geoghegan\n>", "msg_date": "Fri, 29 Oct 2021 17:32:37 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "On 10/29/21 15:32, Hayk Manukyan wrote:\n> Hi all\n> First of all thank you all for fast and rich responses, that is really nice.\n> I don't have that deep knowledge of how postgres  works under the hood \n> so I will try to explain more user side.\n> I want to refer for some points mentioned above.\n>  - First INCLUDE statement mostly eliminates the necessity to refer to \n> a clustered index or table to get columns that do not exist in the \n> index. So filtering upon columns in INCLUDE statement will not be \n> performant. It can give some very little performance if we include \n> additional columns but it is not in level to compare with indexed one. I \n> believe this not for this case\n> - Tomas Vondra's Assumption that adaptive should be something between \n> this two\n> 1) (job, nlp, year, sequence)\n> 2) (job, nlp, year, scan_id, issue_flag, sequence)\n> is completely valid. I have made fairly small demo with this index \n> comparison and as I can see the difference is noticeable. 
Here is git \n> repo and results \n> <https://github.com/HaykManukyanAvetiky/index_comparition/blob/main/results.md> , \n> I had no much time to do significant one sorry for that ))\n\nI find those results entirely unconvincing, or maybe even suspicious.\n\nI used the script to create the objects, and the index sizes are:\n\n Name | Size\n ------------------------------------------+---------\n job_nlp_year_scan_id_issue_flag_sequence | 1985 MB\n job_nlp_year_sequence | 1985 MB\n\nSo there's no actual difference, most likely due to alignment making up \nfor the two smalling columns.\n\nAnd if I randomize the queries instead of running them with the same \nparameters over and over (see the attached scripts), then an average of \n10 runs, each 60s long, the results are (after a proper warmup)\n\n pgbench -n -f q4.sql -T 60\n\n 4 columns: 106 ms\n 6 columns: 109 ms\n\nSo there's like 3% difference between the two cases, and even that might \nbe just noise. This is consistent with the two indexes being about the \nsame size.\n\nThis is on machine with i5-2500k CPU and 8GB of RAM, which is just \nenough to keep everything in RAM. It seems somewhat strange that your \nmachine does this in 10ms, i.e. 10x faster. Seems strange.\n\n\nI'm not sure what is the point of the second query, considering it's not \neven using an index but parallel seqscan.\n\n\nAnyway, this still fails to demonstrate any material difference between \nthe two indexes, and consequently any potential benefit of the proposed \nnew index type.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 30 Oct 2021 01:44:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": ">\n> 4 columns: 106 ms\n> 6 columns: 109 ms\n>\n> So there's like 3% difference between the two cases, and even that might\n> be just noise. 
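For reference, a randomized pgbench script of the kind used for the timings above might look like this (the value ranges are guesses about the generated data set, and the actual attached scripts are not reproduced here):

```sql
-- q4.sql: pgbench draws fresh random parameters for every execution,
-- so successive runs don't keep probing the same cached index path.
\set job random(1, 1000)
\set nlp random(1, 50)
\set year random(2000, 2021)
\set seq random(1, 100000)
SELECT * FROM table_name
WHERE job = :job AND nlp = :nlp AND year = :year AND sequence = :seq;
```

Run with `pgbench -n -f q4.sql -T 60`, as in the measurements quoted above.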
This is consistent with the two indexes being about the\n> same size.\n>\nI also don't think we can get great speedup in the mentioned case, so it is\nnot urgently needed of course. My point is that it is just nice to have a\nmulticolumn index constructed on stacked trees constructed on separate\ncolumns, not on the index tuples as a whole thing. At least there is a\nbenefit of sparing shared memory if we don't need to cache index tuples of\nseveral similar indexes, instead caching one \"compound index\". So if\nsomeone wants to propose this thing I'd support it provided problems with\nconcurrency, which were mentioned by Peter are solved.\n\nThese problems could be appear easy though, as we have index tuples\nconstructed in a similar way as heap tuples. Maybe it could be easier if we\nhad another heap am, which stored separate attributes (if so it could be\nuseful as a better JSON storage method than we have today).\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n  4 columns: 106 ms\n  6 columns: 109 ms\n\nSo there's like 3% difference between the two cases, and even that might \nbe just noise. This is consistent with the two indexes being about the \nsame size.I also don't think we can get great speedup in the mentioned case, so it is not urgently needed of course. My point is that it is just nice to have a multicolumn index constructed on stacked trees constructed on separate columns, not on the index tuples as a whole thing. At least there is a benefit of sparing shared memory if we don't need to cache index tuples of several similar indexes, instead caching one \"compound index\". So if someone wants to propose this thing I'd support it provided problems with concurrency, which were mentioned by Peter are solved. These problems could be appear easy though, as we have index tuples constructed in a similar way as heap tuples. 
Maybe it could be easier if we had another heap am, which stored separate attributes (if so it could be useful as a better JSON storage method than we have today).--Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Sun, 31 Oct 2021 19:48:34 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\nOn 10/31/21 16:48, Pavel Borisov wrote:\n>   4 columns: 106 ms\n>   6 columns: 109 ms\n> \n> So there's like 3% difference between the two cases, and even that\n> might\n> be just noise. This is consistent with the two indexes being about the\n> same size.\n> \n> I also don't think we can get great speedup in the mentioned case, so it \n> is not urgently needed of course. My point is that it is just nice to \n> have a multicolumn index constructed on stacked trees constructed on \n> separate columns, not on the index tuples as a whole thing.\n\nWell, I'd say \"nice to have\" features are pointless unless they actually \ngive tangible benefits (like speedup) to users. I'd bet no one is going \nto implement and maintain something unless it has such benefit, because \nthey have to weight it against other beneficial features.\n\nMaybe there are use cases where this would be beneficial, but so far we \nhaven't seen one. Usually it's the OP who presents such a case, and a \nplausible way to improve it - but it seems this thread presents a \nsolution and now we're looking for an issue it might solve.\n\n> At least there is a benefit of sparing shared memory if we don't need\n> to cache index tuples of several similar indexes, instead caching one\n> \"compound index\". So if someone wants to propose this thing I'd\n> support it provided problems with concurrency, which were mentioned\n> by Peter are solved.\n> \n\nThe problem with this it assumes the new index would use (significantly) \nless space than three separate indexes. 
I find that rather unlikely, but \nmaybe there is a smart way to achieve that (certainly not in detail).\n\nI don't want to sound overly pessimistic and if you have an idea how to \ndo this, I'd like to hear it. But it seems pretty tricky, particularly \nif we assume the suffix columns are more variable (which limits the \n\"compression\" ratio etc.).\n\n> These problems could be appear easy though, as we have index tuples \n> constructed in a similar way as heap tuples. Maybe it could be easier if \n> we had another heap am, which stored separate attributes (if so it could \n> be useful as a better JSON storage method than we have today).\n> \n\nIMO this just moved the goalposts somewhere outside the solar system.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 31 Oct 2021 18:33:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "I agree with the above mentioned.\nThe only concern I have is that we compare little wrong things.\nFor read we should compare\n (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp,\nyear, issue_flag ) VS (job, nlp, year, sequence, Scan_ID, issue_flag)\nOR (job, nlp, year INCLUDE(sequence, Scan_ID, issue_flag) )\nBecause our proposed index for reading should be closer to a combination of\nthose 3 and we have current solutions like index on all or with Include\nstatement.\nWe should try to find a gap between these three cases.\nFor DML queries\n (job, nlp, year, sequence, Scan_ID, issue_flag) OR (job, nlp, year\nINCLUDE(sequence, Scan_ID, issue_flag) ) VS (job, nlp, year, sequence) AND\n(job, nlp, year, Scan_ID) and (job, nlp, year, issue_flag )\nBecause again the proposed index should be just one and cover all 3\nseparate ones.\n\nIf you agree with these cases I will try to find a bigger time frame to\ncompare 
these two cases deeper.\nThe issue is not high prio but I strongly believe it can help and can be\nnice feature for even more complicated cases.\n\nBest regards.\n\n\n\n\nвс, 31 окт. 2021 г. в 21:33, Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\n>\n>\n> On 10/31/21 16:48, Pavel Borisov wrote:\n> > 4 columns: 106 ms\n> > 6 columns: 109 ms\n> >\n> > So there's like 3% difference between the two cases, and even that\n> > might\n> > be just noise. This is consistent with the two indexes being about\n> the\n> > same size.\n> >\n> > I also don't think we can get great speedup in the mentioned case, so it\n> > is not urgently needed of course. My point is that it is just nice to\n> > have a multicolumn index constructed on stacked trees constructed on\n> > separate columns, not on the index tuples as a whole thing.\n>\n> Well, I'd say \"nice to have\" features are pointless unless they actually\n> give tangible benefits (like speedup) to users. I'd bet no one is going\n> to implement and maintain something unless it has such benefit, because\n> they have to weight it against other beneficial features.\n>\n> Maybe there are use cases where this would be beneficial, but so far we\n> haven't seen one. Usually it's the OP who presents such a case, and a\n> plausible way to improve it - but it seems this thread presents a\n> solution and now we're looking for an issue it might solve.\n>\n> > At least there is a benefit of sparing shared memory if we don't need\n> > to cache index tuples of several similar indexes, instead caching one\n> > \"compound index\". So if someone wants to propose this thing I'd\n> > support it provided problems with concurrency, which were mentioned\n> > by Peter are solved.\n> >\n>\n> The problem with this it assumes the new index would use (significantly)\n> less space than three separate indexes. 
I find that rather unlikely, but\n> maybe there is a smart way to achieve that (certainly not in detail).\n>\n> I don't want to sound overly pessimistic and if you have an idea how to\n> do this, I'd like to hear it. But it seems pretty tricky, particularly\n> if we assume the suffix columns are more variable (which limits the\n> \"compression\" ratio etc.).\n>\n> > These problems could be appear easy though, as we have index tuples\n> > constructed in a similar way as heap tuples. Maybe it could be easier if\n> > we had another heap am, which stored separate attributes (if so it could\n> > be useful as a better JSON storage method than we have today).\n> >\n>\n> IMO this just moved the goalposts somewhere outside the solar system.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nI agree with the above mentioned.  The only concern I have is that we compare little wrong things.For read we should compare   (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, year, \n\nissue_flag  ) VS  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year INCLUDE(sequence, Scan_ID, issue_flag) )Because our proposed index for reading should be closer to a combination of those 3 and we have current solutions like index on all or with Include statement. We should try to find a gap between these three cases.For DML queries  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year INCLUDE(sequence, Scan_ID, issue_flag) ) VS  (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, year,  issue_flag  )Because again the proposed index should be just one and cover all 3 separate ones. If you agree with these cases I will try to find a bigger time frame to compare these two cases deeper. The issue is not high prio but I strongly believe it can help and can be nice feature for even more complicated cases.Best regards.  вс, 31 окт. 2021 г. 
в 21:33, Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\nOn 10/31/21 16:48, Pavel Borisov wrote:\n>        4 columns: 106 ms\n>        6 columns: 109 ms\n> \n>     So there's like 3% difference between the two cases, and even that\n>     might\n>     be just noise. This is consistent with the two indexes being about the\n>     same size.\n> \n> I also don't think we can get great speedup in the mentioned case, so it \n> is not urgently needed of course. My point is that it is just nice to \n> have a multicolumn index constructed on stacked trees constructed on \n> separate columns, not on the index tuples as a whole thing.\n\nWell, I'd say \"nice to have\" features are pointless unless they actually \ngive tangible benefits (like speedup) to users. I'd bet no one is going \nto implement and maintain something unless it has such benefit, because \nthey have to weight it against other beneficial features.\n\nMaybe there are use cases where this would be beneficial, but so far we \nhaven't seen one. Usually it's the OP who presents such a case, and a \nplausible way to improve it - but it seems this thread presents a \nsolution and now we're looking for an issue it might solve.\n\n> At least there is a benefit of sparing shared memory if we don't need\n> to cache index tuples of several similar indexes, instead caching one\n> \"compound index\". So if someone wants to propose this thing I'd\n> support it provided problems with concurrency, which were mentioned\n> by Peter are solved.\n> \n\nThe problem with this it assumes the new index would use (significantly) \nless space than three separate indexes. I find that rather unlikely, but \nmaybe there is a smart way to achieve that (certainly not in detail).\n\nI don't want to sound overly pessimistic and if you have an idea how to \ndo this, I'd like to hear it. 
But it seems pretty tricky, particularly \nif we assume the suffix columns are more variable (which limits the \n\"compression\" ratio etc.).\n\n> These problems could be appear easy though, as we have index tuples \n> constructed in a similar way as heap tuples. Maybe it could be easier if \n> we had another heap am, which stored separate attributes (if so it could \n> be useful as a better JSON storage method than we have today).\n> \n\nIMO this just moved the goalposts somewhere outside the solar system.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Nov 2021 16:24:38 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "On 11/1/21 1:24 PM, Hayk Manukyan wrote:\n> I agree with the above mentioned.  \n> The only concern I have is that we compare little wrong things.\n> For read we should compare  \n>  (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp,\n> year,  issue_flag  ) VS  (job, nlp, year, sequence, Scan_ID, issue_flag)\n> OR  (job, nlp, year INCLUDE(sequence, Scan_ID, issue_flag) )\n> Because our proposed index for reading should be closer to a combination\n> of those 3 and we have current solutions like index on all or with\n> Include statement. \n\nI don't follow.\n\nThe whole point of the experiment was to show the gap between a \"best\ncase\" and \"worst case\" alternatives, with the assumption the gap would\nbe substantial and the new index type might get close to the best case.\n\nAre you suggesting those are not the actual best/worst cases and we\nshould use some other indexes? 
If yes, which ones?\n\n\nIMHO those best/worst cases are fine because:\n\n1) best case (job, nlp, year, sequence)\n\nI don't see how we could get anything better for queries on \"sequence\"\nthan this index, because that's literally one of the indexes that would\nbe included in the whole index.\n\nYes, if you need to support queries on additional columns, you might\nneed more indexes, but that's irrelevant - why would anyone define those\nindexes, when the \"worst case\" btree index with all the columns is so\nclose to the best case?\n\n\n2) worst case (job, nlp, year, scan_id, issue_flag, sequence)\n\nI think an index with INCLUDE is entirely irrelevant here. The reason to\nuse INCLUDE is to define UNIQUE index on a subset of columns, but that's\nnot what we need here. I repeated the benchmark with such index, and the\ntiming is ~150ms, so about 50% slower than the simple index. Sorting on\nall columns is clearly beneficial even for the last column.\n\n\nSo I still think those best/worst cases are sensible, and the proposed\nindex would need to beat the worst case. Which seems challenging,\nconsidering how close it is to the best case. Or it might break the best\ncase, if there's some sort of revolutionary way to store the small\nindexes or something like that.\n\nThe fact that there's no size difference between the two cases is mostly\na coincidence, due to the columns being just 2B each, and with wider\nvalues the difference might be substantial, making the gap larger. But\nthen the new index would have to improve on this, but there's no\nproposal on how to do that.\n\n\n> We should try to find a gap between these three cases.\n> For DML queries \n>  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year\n> INCLUDE(sequence, Scan_ID, issue_flag) ) VS  (job, nlp, year, sequence)\n> AND (job, nlp, year, Scan_ID) and (job, nlp, year,  issue_flag  )\n> Because again the proposed index should be just one and cover all 3\n> separate ones. 
\n> \n> If you agree with these cases I will try to find a bigger time frame to\n> compare these two cases deeper. \n>\n> The issue is not high prio but I strongly believe it can help and can be\n> nice feature for even more complicated cases.\n> \n\nYou don't need my approval to run benchmarks etc. If you believe this is\nbeneficial then just do the tests and you'll see if it makes sense ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 1 Nov 2021 16:03:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "On Tue, Oct 26, 2021 at 11:11 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> If I get what you propose, you want to have a \"top\" tree for (job, nlp,\n> year), which \"splits\" the data set into subsets of ~5000-7000 rows. And\n> then for each subset you want a separate \"small\" trees on each of the\n> other columns, so in this case three trees.\n>\n> Well, the problem with this is pretty obvious - each of the small trees\n> requires separate copies of the leaf pages. And remember, in a btree the\n> internal pages are usually less than 1% of the index, so this pretty\n> much triples the size of the index. And if you insert a row into the\n> index, it has to insert the item pointer into each of the small trees,\n> likely requiring a separate I/O for each.\n>\n> So I'd bet this is not any different from just having three separate\n> indexes - it doesn't save space, doesn't save I/O, nothing.\n\nI agree. In a lot of cases, it's actually useful to define the index\non fewer columns, like (job, nlp, year) or even just (job, nlp) or\neven just (job) because it makes the index so much smaller and that's\npretty important. 
If you have enough duplicate entries in a (job, nlp,\nyear) index to justify create a (job, nlp, year, sequence) index, the\nnumber of distinct (job, nlp, year) tuples has to be small compared to\nthe number of (job, nlp, year, sequence) tuples - and that means that\nyou wouldn't actually save much by combining your (job, nlp, year,\nsequence) index with a (job, nlp, year, other-stuff) index. As you\nsay, the internal pages aren't the problem.\n\nI don't intend to say that there's no possible use for this kind of\ntechnology. Peter G. says that people are writing research papers\nabout that and they probably wouldn't be doing that unless they'd\nfound some case where it's a big win. But this example seems extremely\nunconvincing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Nov 2021 16:06:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\nOn 11/1/21 21:06, Robert Haas wrote:\n> On Tue, Oct 26, 2021 at 11:11 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> If I get what you propose, you want to have a \"top\" tree for (job, nlp,\n>> year), which \"splits\" the data set into subsets of ~5000-7000 rows. And\n>> then for each subset you want a separate \"small\" trees on each of the\n>> other columns, so in this case three trees.\n>>\n>> Well, the problem with this is pretty obvious - each of the small trees\n>> requires separate copies of the leaf pages. And remember, in a btree the\n>> internal pages are usually less than 1% of the index, so this pretty\n>> much triples the size of the index. And if you insert a row into the\n>> index, it has to insert the item pointer into each of the small trees,\n>> likely requiring a separate I/O for each.\n>>\n>> So I'd bet this is not any different from just having three separate\n>> indexes - it doesn't save space, doesn't save I/O, nothing.\n> \n> I agree. 
In a lot of cases, it's actually useful to define the index\n> on fewer columns, like (job, nlp, year) or even just (job, nlp) or\n> even just (job) because it makes the index so much smaller and that's\n> pretty important. If you have enough duplicate entries in a (job, nlp,\n> year) index to justify create a (job, nlp, year, sequence) index, the\n> number of distinct (job, nlp, year) tuples has to be small compared to\n> the number of (job, nlp, year, sequence) tuples - and that means that\n> you wouldn't actually save much by combining your (job, nlp, year,\n> sequence) index with a (job, nlp, year, other-stuff) index. As you\n> say, the internal pages aren't the problem.\n> \n> I don't intend to say that there's no possible use for this kind of\n> technology. Peter G. says that people are writing research papers\n> about that and they probably wouldn't be doing that unless they'd\n> found some case where it's a big win. But this example seems extremely\n> unconvincing.\n> \n\nI actually looked at the use case mentioned by Peter G, i.e. merged \nindexes with master-detail clustering (see e.g. [1]), but that seems \nlike a rather different thing. The master-detail refers to storing rows \nfrom multiple tables, interleaved in a way that allows faster joins. 
So \nit's essentially a denormalization tool.\n\nPerhaps there's something we could learn about efficient storage of the \nsmall trees, but I haven't found any papers describing that (I haven't \nspent much time on the search, though).\n\n[1] Algorithms for merged indexes, Goetz Graefe\n https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.7709\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 1 Nov 2021 22:15:59 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "Tomas Vondra\n> Are you suggesting those are not the actual best/worst cases and we\n> should use some other indexes? If yes, which ones?\n\nI would say yes.\nIn my case I am not querying only the sequence column.\nI have the following cases which I want to optimize.\n1. Select * from Some_table where job = <something> and nlp = <something>\nand year = <something> and *scan_id = <something> *\n2. Select * from Some_table where job = <something> and nlp = <something>\nand year = <something> and *Issue_flag = <something> *\n3. Select * from Some_table where job = <something> and nlp = <something>\nand year = <something> and *sequence = <something> *\nThose are the queries that my app sends to the db; that is why I said that from a *read\nperspective* our *best case* is 3 separate indexes for\n *(job, nlp, year, sequence)* AND *(job, nlp, year, Scan_ID)* and *(job,\nnlp, year,  issue_flag)*  and any other solution like\n (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year )\nINCLUDE(sequence, Scan_ID, issue_flag)  OR just (job, nlp, year) can be\nconsidered as the* worst case *\nI will remind you that in a real-world scenario I have ~50m rows and about *~5k\nrows for each (job, nlp, year )*\n From a *write perspective*, as far as we want to have only one index, our* best\ncase* can be considered any of\n*(job, nlp, year, sequence, Scan_ID, issue_flag)* OR * (job, nlp, year )\nINCLUDE(sequence, Scan_ID, issue_flag) *\nand the* worst case* will be having 3 separate indexes like in the read\nperspective\n(job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp,\nyear,  issue_flag)\n\nSo I think the comparison that we did is not right because we are comparing\ndifferent/wrong things.\n\nFor correct results we need to compare these two cases when we are doing\nrandom queries with 1, 2, 3 and random writes.\n\n\nTue, 2 Nov 2021 at 01:16, Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\n>\n>\n> On 11/1/21 21:06, Robert Haas wrote:\n> > On Tue, Oct 26, 2021 at 11:11 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> If I get what you propose, you want to have a \"top\" tree for (job, nlp,\n> >> year), which \"splits\" the data set into subsets of ~5000-7000 rows. And\n> >> then for each subset you want a separate \"small\" trees on each of the\n> >> other columns, so in this case three trees.\n> >>\n> >> Well, the problem with this is pretty obvious - each of the small trees\n> >> requires separate copies of the leaf pages. 
And remember, in a btree the\n> >> internal pages are usually less than 1% of the index, so this pretty\n> >> much triples the size of the index. And if you insert a row into the\n> >> index, it has to insert the item pointer into each of the small trees,\n> >> likely requiring a separate I/O for each.\n> >>\n> >> So I'd bet this is not any different from just having three separate\n> >> indexes - it doesn't save space, doesn't save I/O, nothing.\n> >\n> > I agree. In a lot of cases, it's actually useful to define the index\n> > on fewer columns, like (job, nlp, year) or even just (job, nlp) or\n> > even just (job) because it makes the index so much smaller and that's\n> > pretty important. If you have enough duplicate entries in a (job, nlp,\n> > year) index to justify create a (job, nlp, year, sequence) index, the\n> > number of distinct (job, nlp, year) tuples has to be small compared to\n> > the number of (job, nlp, year, sequence) tuples - and that means that\n> > you wouldn't actually save much by combining your (job, nlp, year,\n> > sequence) index with a (job, nlp, year, other-stuff) index. As you\n> > say, the internal pages aren't the problem.\n> >\n> > I don't intend to say that there's no possible use for this kind of\n> > technology. Peter G. says that people are writing research papers\n> > about that and they probably wouldn't be doing that unless they'd\n> > found some case where it's a big win. But this example seems extremely\n> > unconvincing.\n> >\n>\n> I actually looked at the use case mentioned by Peter G, i.e. merged\n> indexes with master-detail clustering (see e.g. [1]), but that seems\n> like a rather different thing. The master-detail refers to storing rows\n> from multiple tables, interleaved in a way that allows faster joins. 
So\n> it's essentially a denormalization tool.\n>\n> Perhaps there's something we could learn about efficient storage of the\n> small trees, but I haven't found any papers describing that (I haven't\n> spent much time on the search, though).\n>\n> [1] Algorithms for merged indexes, Goetz Graefe\n> https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.7709\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nTomas Vondra> Are you suggesting those are not the actual best/worst cases and we> should use some other indexes? If yes, which ones?I would say yes.In my case I am not querying only sequence column.I have the following cases which I want to optimize.1. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  scan_id = <something>  2. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  Issue_flag = <something>  3. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  sequence = <something>  Those are queries that my app send to db that is why I said that from read perspective our best case is 3 separate indexes for  (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, year,  issue_flag)  and any other solution like  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year ) INCLUDE(sequence, Scan_ID, issue_flag)  OR just (job, nlp, year) can be considered as worst case I will remind that in real world scenario I have ~50m rows and about ~5k rows for each (job, nlp, year )From write perspective as far as we want to have only one index our best case can be considered any of(job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year ) INCLUDE(sequence, Scan_ID, issue_flag) and the worst case will be having 3 separate queries like in read perspective (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, 
nlp, year,  issue_flag)  So I think the comparison that we did is not right because we are comparing different/wrong things.For right results we need to compare this two cases when we are doing random queries with 1,2,3  and random writes.вт, 2 нояб. 2021 г. в 01:16, Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\nOn 11/1/21 21:06, Robert Haas wrote:\n> On Tue, Oct 26, 2021 at 11:11 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> If I get what you propose, you want to have a \"top\" tree for (job, nlp,\n>> year), which \"splits\" the data set into subsets of ~5000-7000 rows. And\n>> then for each subset you want a separate \"small\" trees on each of the\n>> other columns, so in this case three trees.\n>>\n>> Well, the problem with this is pretty obvious - each of the small trees\n>> requires separate copies of the leaf pages. And remember, in a btree the\n>> internal pages are usually less than 1% of the index, so this pretty\n>> much triples the size of the index. And if you insert a row into the\n>> index, it has to insert the item pointer into each of the small trees,\n>> likely requiring a separate I/O for each.\n>>\n>> So I'd bet this is not any different from just having three separate\n>> indexes - it doesn't save space, doesn't save I/O, nothing.\n> \n> I agree. In a lot of cases, it's actually useful to define the index\n> on fewer columns, like (job, nlp, year) or even just (job, nlp) or\n> even just (job) because it makes the index so much smaller and that's\n> pretty important. If you have enough duplicate entries in a (job, nlp,\n> year) index to justify create a (job, nlp, year, sequence) index, the\n> number of distinct (job, nlp, year) tuples has to be small compared to\n> the number of (job, nlp, year, sequence) tuples - and that means that\n> you wouldn't actually save much by combining your (job, nlp, year,\n> sequence) index with a (job, nlp, year, other-stuff) index. 
As you\n> say, the internal pages aren't the problem.\n> \n> I don't intend to say that there's no possible use for this kind of\n> technology. Peter G. says that people are writing research papers\n> about that and they probably wouldn't be doing that unless they'd\n> found some case where it's a big win. But this example seems extremely\n> unconvincing.\n> \n\nI actually looked at the use case mentioned by Peter G, i.e. merged \nindexes with master-detail clustering (see e.g. [1]), but that seems \nlike a rather different thing. The master-detail refers to storing rows \nfrom multiple tables, interleaved in a way that allows faster joins. So \nit's essentially a denormalization tool.\n\nPerhaps there's something we could learn about efficient storage of the \nsmall trees, but I haven't found any papers describing that (I haven't \nspent much time on the search, though).\n\n[1] Algorithms for merged indexes, Goetz Graefe\n     https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.7709\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 2 Nov 2021 16:04:29 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "\n\nOn 11/2/21 13:04, Hayk Manukyan wrote:\n> Tomas Vondra\n> > Are you suggesting those are not the actual best/worst cases and we\n> > should use some other indexes? If yes, which ones?\n> \n> I would say yes.\n> In my case I am not querying only sequence column.\n> I have the following cases which I want to optimize.\n> 1. Select * from Some_table where job = <somthing> and nlp = <something> \n> and year = <something> and *scan_id = <something> *\n> 2. Select * from Some_table where job = <somthing> and nlp = <something> \n> and year = <something> and *Issue_flag = <something> *\n> 3. 
Select * from Some_table where job = <somthing> and nlp = <something> \n> and year = <something> and *sequence = <something> *\n> Those are queries that my app send to db that is why I said that from \n> *read perspective* our *best case* is 3 separate indexes for\n> *(job, nlp, year, sequence)* AND *(job, nlp, year, Scan_ID)* and *(job, \n> nlp, year,  issue_flag)*  and any other solution like\n>  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year ) \n> INCLUDE(sequence, Scan_ID, issue_flag)  OR just (job, nlp, year) can be \n> considered as*worst case *\n\nI already explained why using INCLUDE in this case is the wrong thing to \ndo, it'll harm performance compared to just defining a regular index.\n\n> I will remind that in real world scenario I have ~50m rows and about \n> *~5k rows for each (job, nlp, year )*\n\nWell, maybe this is the problem. We have 50M rows, but the three columns \nhave too many distinct values - (job, nlp, year) defines ~50M groups, so \nthere's only a single row per group. That'd explain why the two indexes \nperform almost equally.\n\nSo I guess you need to modify the data generator so that the data set is \nmore like the case you're trying to improve.\n\n> From *write perspective* as far as we want to have only one index \n> our*best case* can be considered any of\n> *(job, nlp, year, sequence, Scan_ID, issue_flag)* OR *(job, nlp, year ) \n> INCLUDE(sequence, Scan_ID, issue_flag) *\n> and the*worst case* will be having 3 separate queries like in read \n> perspective\n> (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, \n> year,  issue_flag)\n> \n\nMaybe. It's true a write with three indexes will require modification to \nthree leaf pages (on average). With a single index we have to modify \njust one leaf page, depending on where the row gets routed.\n\nBut with the proposed \"merged\" index, the row will have to be inserted \ninto three smaller trees. 
If the trees are large enough, they won't fit \ninto a single leaf page (e.g. the 5000 index tuples is guaranteed to \nneed many pages, even if you use some smart encoding). So the write will \nlikely need to modify at least 3 leaf pages, getting much closer to \nthree separate indexes. At which point you could just use three indexes.\n\n> So I think the comparison that we did is not right because we are \n> comparing different/wrong things.\n> > For right results we need to compare this two cases when we are doing\n> random queries with 1,2,3  and random writes.\n> \n\nI'm not going to spend any more time on tweaking the benchmark, but if \nyou tweak it to demonstrate the difference / benefits I'll run it again \non my machine etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 2 Nov 2021 15:03:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "вт, 2 нояб. 2021 г. в 16:04, Hayk Manukyan <manukyantt@gmail.com>:\n\n> Tomas Vondra\n> > Are you suggesting those are not the actual best/worst cases and we\n> > should use some other indexes? If yes, which ones?\n>\n> I would say yes.\n> In my case I am not querying only sequence column.\n> I have the following cases which I want to optimize.\n> 1. Select * from Some_table where job = <somthing> and nlp = <something>\n> and year = <something> and *scan_id = <something> *\n> 2. Select * from Some_table where job = <somthing> and nlp = <something>\n> and year = <something> and *Issue_flag = <something> *\n> 3. 
Select * from Some_table where job = <somthing> and nlp = <something>\n> and year = <something> and *sequence = <something> *\n> Those are queries that my app send to db that is why I said that from *read\n> perspective* our *best case* is 3 separate indexes for\n> *(job, nlp, year, sequence)* AND *(job, nlp, year, Scan_ID)* and *(job,\n> nlp, year, issue_flag)* and any other solution like\n> (job, nlp, year, sequence, Scan_ID, issue_flag) OR (job, nlp, year )\n> INCLUDE(sequence, Scan_ID, issue_flag) OR just (job, nlp, year) can be\n> considered as* worst case *\n> I will remind that in real world scenario I have ~50m rows and about *~5k\n> rows for each (job, nlp, year )*\n>\n\n So you get 50M rows /5K rows = 10K times selectivity, when you select on\njob = <somthing> and nlp = <something> and year = <something> which is\nenormous. Then you should select some of the 5K rows left, which is\nexpected to be pretty fast on bitmap index scan or INCLUDE column\nfiltering. It confirms Tomas's experiment\n\n pgbench -n -f q4.sql -T 60\n\n106 ms vs 109 ms\n\nfits your case pretty well. You get absolutely negligible difference\nbetween best and worst case and certainly you don't need anything more than\njust plain index for 3 columns, you even don't need INCLUDE index.\n\n From what I read I suppose that this feature indeed doesn't based on the\nreal need. If you suppose it is useful please feel free to make and post\nhere some measurements that proves your point.\n\n\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nвт, 2 нояб. 2021 г. в 16:04, Hayk Manukyan <manukyantt@gmail.com>:Tomas Vondra> Are you suggesting those are not the actual best/worst cases and we> should use some other indexes? If yes, which ones?I would say yes.In my case I am not querying only sequence column.I have the following cases which I want to optimize.1. 
Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  scan_id = <something>  2. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  Issue_flag = <something>  3. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  sequence = <something>  Those are queries that my app send to db that is why I said that from read perspective our best case is 3 separate indexes for  (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, year,  issue_flag)  and any other solution like  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year ) INCLUDE(sequence, Scan_ID, issue_flag)  OR just (job, nlp, year) can be considered as worst case I will remind that in real world scenario I have ~50m rows and about ~5k rows for each (job, nlp, year ) So you get 50M rows /5K rows = 10K times selectivity, when you select on job = <somthing> and nlp = <something> and year = <something> which is enormous. Then you should select some of the 5K rows left, which is expected to be pretty fast on bitmap index scan or INCLUDE column filtering. It confirms Tomas's experiment   pgbench -n -f q4.sql -T 60106 ms vs 109 msfits your case pretty well. You get absolutely negligible difference between best and worst case and certainly you don't need anything more than just plain index for 3 columns, you even don't need INCLUDE index.From what I read I suppose that this feature indeed doesn't based on the real need. 
If you suppose it is useful please feel free to make and post here some measurements that proves your point.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 2 Nov 2021 18:04:14 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feature request for adoptive indexes" }, { "msg_contents": "Hi All\n\nI did final research and saw that the difference between best and worst\ncases is indeed really small.\nI want to thank you guys for your time and efforts.\n\nBest regards.\n\n\nвт, 2 нояб. 2021 г. в 18:04, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> вт, 2 нояб. 2021 г. в 16:04, Hayk Manukyan <manukyantt@gmail.com>:\n>\n>> Tomas Vondra\n>> > Are you suggesting those are not the actual best/worst cases and we\n>> > should use some other indexes? If yes, which ones?\n>>\n>> I would say yes.\n>> In my case I am not querying only sequence column.\n>> I have the following cases which I want to optimize.\n>> 1. Select * from Some_table where job = <somthing> and nlp = <something>\n>> and year = <something> and *scan_id = <something> *\n>> 2. Select * from Some_table where job = <somthing> and nlp = <something>\n>> and year = <something> and *Issue_flag = <something> *\n>> 3. 
Select * from Some_table where job = <somthing> and nlp = <something>\n>> and year = <something> and *sequence = <something> *\n>> Those are queries that my app send to db that is why I said that from *read\n>> perspective* our *best case* is 3 separate indexes for\n>> *(job, nlp, year, sequence)* AND *(job, nlp, year, Scan_ID)* and *(job,\n>> nlp, year, issue_flag)* and any other solution like\n>> (job, nlp, year, sequence, Scan_ID, issue_flag) OR (job, nlp, year )\n>> INCLUDE(sequence, Scan_ID, issue_flag) OR just (job, nlp, year) can be\n>> considered as* worst case *\n>> I will remind that in real world scenario I have ~50m rows and about *~5k\n>> rows for each (job, nlp, year )*\n>>\n>\n> So you get 50M rows /5K rows = 10K times selectivity, when you select on\n> job = <somthing> and nlp = <something> and year = <something> which is\n> enormous. Then you should select some of the 5K rows left, which is\n> expected to be pretty fast on bitmap index scan or INCLUDE column\n> filtering. It confirms Tomas's experiment\n>\n> pgbench -n -f q4.sql -T 60\n>\n> 106 ms vs 109 ms\n>\n> fits your case pretty well. You get absolutely negligible difference\n> between best and worst case and certainly you don't need anything more than\n> just plain index for 3 columns, you even don't need INCLUDE index.\n>\n> From what I read I suppose that this feature indeed doesn't based on the\n> real need. If you suppose it is useful please feel free to make and post\n> here some measurements that proves your point.\n>\n>\n>\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n\nHi AllI did final research and saw that the difference between best and worst cases is indeed really small.I want to thank you guys for your time and efforts.Best regards.вт, 2 нояб. 2021 г. в 18:04, Pavel Borisov <pashkin.elfe@gmail.com>:вт, 2 нояб. 2021 г. 
в 16:04, Hayk Manukyan <manukyantt@gmail.com>:Tomas Vondra> Are you suggesting those are not the actual best/worst cases and we> should use some other indexes? If yes, which ones?I would say yes.In my case I am not querying only sequence column.I have the following cases which I want to optimize.1. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  scan_id = <something>  2. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  Issue_flag = <something>  3. Select * from Some_table where job = <somthing> and nlp = <something> and year = <something> and  sequence = <something>  Those are queries that my app send to db that is why I said that from read perspective our best case is 3 separate indexes for  (job, nlp, year, sequence) AND (job, nlp, year, Scan_ID) and (job, nlp, year,  issue_flag)  and any other solution like  (job, nlp, year, sequence, Scan_ID, issue_flag) OR  (job, nlp, year ) INCLUDE(sequence, Scan_ID, issue_flag)  OR just (job, nlp, year) can be considered as worst case I will remind that in real world scenario I have ~50m rows and about ~5k rows for each (job, nlp, year ) So you get 50M rows /5K rows = 10K times selectivity, when you select on job = <somthing> and nlp = <something> and year = <something> which is enormous. Then you should select some of the 5K rows left, which is expected to be pretty fast on bitmap index scan or INCLUDE column filtering. It confirms Tomas's experiment   pgbench -n -f q4.sql -T 60106 ms vs 109 msfits your case pretty well. You get absolutely negligible difference between best and worst case and certainly you don't need anything more than just plain index for 3 columns, you even don't need INCLUDE index.From what I read I suppose that this feature indeed doesn't based on the real need. 
If you suppose it is useful please feel free to make and post here some measurements that proves your point.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Fri, 5 Nov 2021 15:17:49 +0400", "msg_from": "Hayk Manukyan <manukyantt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature request for adoptive indexes" } ]
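To make the comparison benchmarked in the thread above concrete, here is a minimal SQL sketch of the two competing approaches that were measured; the table name `some_table` and the parameter placeholders are stand-ins taken from the examples in the discussion, not a tested schema:

```sql
-- "Best case" for reads: a dedicated composite index per query shape,
-- e.g. for lookups that end in "sequence":
CREATE INDEX ON some_table (job, nlp, year, sequence);

-- "Worst case": a single wide index covering all the trailing columns:
CREATE INDEX ON some_table (job, nlp, year, scan_id, issue_flag, sequence);

-- The kind of probe query run with pgbench in the thread:
SELECT * FROM some_table
WHERE job = :job AND nlp = :nlp AND year = :year AND sequence = :seq;
```

As the measurements in the thread show (106 ms vs 109 ms for the pgbench runs), the single wide index performs within noise of the dedicated one for this data distribution, which is why no separate "adaptive" index structure was pursued.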
[ { "msg_contents": "Hello.\n\nI noticed that the following command doesn't leave a connection log in\nthe log file.\n\n> psql \"host=localhost options=-c\\ log_connections=on\"\n\nThe reason is that we log connections before the options are processed. We\nneed to move the code from BackendInitialize to InitPostgres, where\nthose options are processed, if we want that option to work. However,\nI'm not sure we can delay the connection log until that point, since that\nmovement changes the meaning of the log message.\n\nAnother option is to log connections in InitPostgres, if not yet done and\nlog_connections is turned on by connection options.\n\nFurther, another option is that we don't make it work and write in the\ndocumentation that it doesn't work. (I didn't find that, but...)\n\n\nOpinions and suggestions are welcome.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 26 Oct 2021 17:33:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Setting log_connection in connection string doesn't work" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I noticed that the following command doesn't leave a connection log in\n> the log file.\n>> psql \"host=localhost options=-c\\ log_connections=on\"\n\n[ shrug... 
] Why would you expect it to? Should \"-c log_connections=off\"\nbe able to hide a connection from the log?\n\nI don't know. The fact is that it's a superuser-backend variable that\nis silently ignored (but actually seems to be set in the session).\nSetting log_disconnections the same way works (of course the implication\nof this is far less significant than the log_connections case).\n\nIf we want to prevent them from being set at session start (and I think so),\nshouldn't they be changed to SIGHUP? (I forgot to mention this choice\nin the previous mail..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 27 Oct 2021 10:24:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Setting log_connection in connection string doesn't work" }, { "msg_contents": "On Wed, Oct 27, 2021 at 10:24:05AM +0900, Kyotaro Horiguchi wrote:\n> I don't know. The fact is that it's a superuser-backend variable that\n> is silently ignored (but actually seems to be set in the session).\n> Setting log_disconnections the same way works (of course the implication\n> of this is far less significant than the log_connections case).\n\nfe550b2 is the commit that changed both of those parameters to be\nPGC_SU_BACKEND, with the commit log mentioning the case you are\ndescribing. That would be the area of this thread:\nhttps://www.postgresql.org/message-id/20408.1404329822@sss.pgh.pa.us\n\nAs Tom and this thread are saying, there may be a use-case for\nmaking log_connections more effective at startup so that superusers\ncould hide their logs at will. However, honestly, I am not sure that\nthis is worth spending time improving, as the use-case looks\nrather thin to me. Perhaps you are right and we could just mark both\nof those GUCs as PGC_SIGHUP, making the whole easier to understand and\nmore consistent, though. 
If we do that, the patch is wrong, as the\ndocs would also need a refresh.\n--\nMichael", "msg_date": "Wed, 27 Oct 2021 10:55:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Setting log_connection in connection string doesn't work" }, { "msg_contents": "At Wed, 27 Oct 2021 10:55:31 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Oct 27, 2021 at 10:24:05AM +0900, Kyotaro Horiguchi wrote:\n> > I don't know. The fact is that it's a superuser-backend variable that\n> > is silently ignored (but acutally seems to be set in the session).\n> > Setting log_disconnection the same way works (of course the impliction\n> > of this is far less significant that the log_connection case).\n> \n> fe550b2 is the commit that has changed both those parameters to be\n> PGC_SU_BACKEND, with the commit log mentioning the case you are\n> describing. That would be the area of this thread:\n> https://www.postgresql.org/message-id/20408.1404329822@sss.pgh.pa.us\n\nThanks for the pointer. (I didn't remember of that thread..)\n\n> As Tom and this thread are saying, there may be a use-case for\n> making log_connections more effective at startup so as superusers\n> could hide their logs at will. However, honestly, I am not sure that\n> this is worth spending time improving this as the use-case looks\n> rather thin to me. Perhaps you are right and we could just mark both\n\nI tend to agree.\n\n> of those GUCs as PGC_SIGHUP, making the whole easier to understand and\n> more consistent, though. 
If we do that, the patch is wrong, as the\n> docs would also need a refresh.\n\nYeah, this is the full version of the patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 27 Oct 2021 11:53:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Setting log_connection in connection string doesn't work" }, { "msg_contents": "\nThis patch is from October of 2021. I don't see any commitfest entry\nfor it. Should it be applied?\n\n---------------------------------------------------------------------------\n\nOn Wed, Oct 27, 2021 at 11:53:09AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 27 Oct 2021 10:55:31 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Wed, Oct 27, 2021 at 10:24:05AM +0900, Kyotaro Horiguchi wrote:\n> > > I don't know. The fact is that it's a superuser-backend variable that\n> > > is silently ignored (but acutally seems to be set in the session).\n> > > Setting log_disconnection the same way works (of course the impliction\n> > > of this is far less significant that the log_connection case).\n> > \n> > fe550b2 is the commit that has changed both those parameters to be\n> > PGC_SU_BACKEND, with the commit log mentioning the case you are\n> > describing. That would be the area of this thread:\n> > https://www.postgresql.org/message-id/20408.1404329822@sss.pgh.pa.us\n> \n> Thanks for the pointer. (I didn't remember of that thread..)\n> \n> > As Tom and this thread are saying, there may be a use-case for\n> > making log_connections more effective at startup so as superusers\n> > could hide their logs at will. However, honestly, I am not sure that\n> > this is worth spending time improving this as the use-case looks\n> > rather thin to me. Perhaps you are right and we could just mark both\n> \n> I tend to agree.\n> \n> > of those GUCs as PGC_SIGHUP, making the whole easier to understand and\n> > more consistent, though. 
If we do that, the patch is wrong, as the\n> > docs would also need a refresh.\n> \n> Yeah, this is the full version of the patch.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n> >From 11a9612c2590f57f431c3918d5b62c08a5b29efb Mon Sep 17 00:00:00 2001\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Date: Wed, 27 Oct 2021 11:39:02 +0900\n> Subject: [PATCH] Change log_(dis)connections to PGC_SIGHUP\n> \n> log_connections is not effective when it is given in connection\n> options. Since no complaint has been heard for this behavior the\n> use-case looks rather thin. Thus we change it to PGC_SIGHUP, rahther\n> than putting efforts to make it effective for the\n> use-case. log_disconnections is working with the usage but be\n> consistent by treating it the same way with log_connection.\n> ---\n> doc/src/sgml/config.sgml | 8 ++++----\n> src/backend/utils/misc/guc.c | 4 ++--\n> 2 files changed, 6 insertions(+), 6 deletions(-)\n> \n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index de77f14573..64b04a47d2 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -6800,8 +6800,8 @@ local0.* /var/log/postgresql\n> Causes each attempted connection to the server to be logged,\n> as well as successful completion of both client authentication (if\n> necessary) and authorization.\n> - Only superusers can change this parameter at session start,\n> - and it cannot be changed at all within a session.\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line.\n> The default is <literal>off</literal>.\n> </para>\n> \n> @@ -6827,8 +6827,8 @@ local0.* /var/log/postgresql\n> Causes session terminations to be logged. 
The log output\n> provides information similar to <varname>log_connections</varname>,\n> plus the duration of the session.\n> - Only superusers can change this parameter at session start,\n> - and it cannot be changed at all within a session.\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line.\n> The default is <literal>off</literal>.\n> </para>\n> </listitem>\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index e91d5a3cfd..57d810c80d 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -1353,7 +1353,7 @@ static struct config_bool ConfigureNamesBool[] =\n> \t\tNULL, NULL, NULL\n> \t},\n> \t{\n> -\t\t{\"log_connections\", PGC_SU_BACKEND, LOGGING_WHAT,\n> +\t\t{\"log_connections\", PGC_SIGHUP, LOGGING_WHAT,\n> \t\t\tgettext_noop(\"Logs each successful connection.\"),\n> \t\t\tNULL\n> \t\t},\n> @@ -1362,7 +1362,7 @@ static struct config_bool ConfigureNamesBool[] =\n> \t\tNULL, NULL, NULL\n> \t},\n> \t{\n> -\t\t{\"log_disconnections\", PGC_SU_BACKEND, LOGGING_WHAT,\n> +\t\t{\"log_disconnections\", PGC_SIGHUP, LOGGING_WHAT,\n> \t\t\tgettext_noop(\"Logs end of a session, including duration.\"),\n> \t\t\tNULL\n> \t\t},\n> -- \n> 2.27.0\n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 17 Aug 2022 10:23:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Setting log_connection in connection string doesn't work" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> This patch is from October of 2021. I don't see any commitfest entry\n> for it. Should it be applied?\n\nI think we decided not to. 
The original argument for having these\nbe PGC_SU_BACKEND was to try to ensure that you got matching\nconnection and disconnection log entries for any one session,\nand I don't see anything that supersedes that plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Aug 2022 10:29:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Setting log_connection in connection string doesn't work" }, { "msg_contents": "On Wed, Aug 17, 2022 at 10:29:26AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > This patch is from October of 2021. I don't see any commitfest entry\n> > for it. Should it be applied?\n> \n> I think we decided not to. The original argument for having these\n> be PGC_SU_BACKEND was to try to ensure that you got matching\n> connection and disconnection log entries for any one session,\n> and I don't see anything that supersedes that plan.\n\nOkay, thanks for the feedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 17 Aug 2022 10:30:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Setting log_connection in connection string doesn't work" } ]
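The distinction the thread turns on — PGC_SU_BACKEND versus PGC_SIGHUP — can be illustrated with a deliberately simplified model of which sources may set a parameter under each context. This is only a sketch of the idea; the real rules live in PostgreSQL's set_config_option(), and the enum and function names below are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for the two GUC contexts debated above. */
typedef enum { TOY_SIGHUP, TOY_SU_BACKEND } toy_context;

/* Toy stand-ins for where a setting can come from. */
typedef enum { TOY_CONFIG_FILE, TOY_STARTUP_PACKET, TOY_SET_COMMAND } toy_source;

/*
 * A SIGHUP-context parameter may only come from postgresql.conf (server
 * start or reload).  A superuser-backend parameter may additionally be
 * given at connection start (e.g. options=-c ...), but can never be
 * changed by SET inside an established session.
 */
static bool
toy_setting_allowed(toy_context ctx, toy_source src)
{
    if (src == TOY_SET_COMMAND)
        return false;           /* neither context allows in-session SET */
    if (ctx == TOY_SIGHUP)
        return src == TOY_CONFIG_FILE;
    return true;                /* TOY_SU_BACKEND: config file or startup packet */
}
```

Under this model, demoting log_connections to the SIGHUP context is exactly what closes the startup-packet path that the original report exercised.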
[ { "msg_contents": "src/port/snprintf.c: Optimize the common base=10 case in fmtint\n\nfmtint() turns an integer into a string for a given base, and to do this\nit does a divide/modulo operation iteratively.\n\nOn just about any CPU, divides are a pretty expensive operation, generally\n10x to 20x or more expensive than adds or multiplies.\n\nBy special casing the super common case of base==10, the (gcc) compiler can (and will)\nreplace the divide by a multiply with 0xcccccccccccccccd, yielding a lot faster code.\n(fmtint dropped drastically in the perf profiles after this change)\n\nEven though this only shows up in the database creation phase of pgbench and not so much\nduring the normal run time, the optimization is simple and high value enough that\nin my opinion it's worth doing\n\n\n\n\ndiff --git a/src/port/snprintf.c b/src/port/snprintf.c\nindex 7c21429369..5957e6f2aa 100644\n--- a/src/port/snprintf.c\n+++ b/src/port/snprintf.c\n@@ -1076,11 +1076,24 @@ fmtint(long long value, char type, int forcesign, int leftjust,\n \telse\n \t{\n \t\t/* make integer string */\n-\t\tdo\n-\t\t{\n-\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n-\t\t\tuvalue = uvalue / base;\n-\t\t} while (uvalue);\n+\n+\t\t/*\n+\t\t * Special case a base of 10 because it is super common and by special casing the compiler can\n+\t\t * avoid an expensive divide operation (the compiler will use a multiply for this)\n+\t\t */\n+\t\tif (likely(base == 10)) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 10];\n+\t\t\t\tuvalue = uvalue / 10;\n+\t\t\t} while (uvalue);\n+\t\t} else {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n+\t\t\t\tuvalue = uvalue / base;\n+\t\t\t} while (uvalue);\n+\t\t}\n \t}\n\n \tzeropad = Max(0, precision - vallen);\n\n\n", "msg_date": "Tue, 26 Oct 2021 07:57:36 -0700", "msg_from": "Arjan van de Ven <arjan@linux.intel.com>", "msg_from_op": true, "msg_subject": 
"src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "\n\n> On Oct 26, 2021, at 7:57 AM, Arjan van de Ven <arjan@linux.intel.com> wrote:\n> \n> By special casing the super common case of base==10, the (gcc) compiler can (and will)\n> replace the divide by a multiply with 0xcccccccccccccccd, yielding a lot faster code.\n> (fmtint dropped drastically in the perf profiles after this change)\n\nIt appears fmtint only has three options for base, being 10, 16, and 8. Have you profiled with either of the others special cased as well? I don't see much use in optimizing for octal, but hexadecimal is used quite a bit in wal with patterns like \"%08X%08X%08X\".\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 09:45:32 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "Hi,\n\nOn 2021-10-26 07:57:36 -0700, Arjan van de Ven wrote:\n> src/port/snprintf.c: Optimize the common base=10 case in fmtint\n> \n> fmtint() turns an integer into a string for a given base, and to do this\n> it does a divide/modulo operation iteratively.\n> \n> On just about any CPU, divides are a pretty expensive operation, generally\n> 10x to 20x or more expensive than adds or multiplies.\n\nThis has been bothering me too, thanks for doing something about it.\n\n\n> By special casing the super common case of base==10, the (gcc) compiler can (and will)\n> replace the divide by a multiply with 0xcccccccccccccccd, yielding a lot faster code.\n> (fmtint dropped drastically in the perf profiles after this change)\n> \n> Even though this only shows up in the database creation phase of pgbench and not so much\n> during the normal run time, the optimization is simple and high value enough that\n> in my opinion it's worth doing\n\nIt does even show up 
during normal running for me, in readonly pgbench.\n\n\n> diff --git a/src/port/snprintf.c b/src/port/snprintf.c\n> index 7c21429369..5957e6f2aa 100644\n> --- a/src/port/snprintf.c\n> +++ b/src/port/snprintf.c\n> @@ -1076,11 +1076,24 @@ fmtint(long long value, char type, int forcesign, int leftjust,\n> \telse\n> \t{\n> \t\t/* make integer string */\n> -\t\tdo\n> -\t\t{\n> -\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n> -\t\t\tuvalue = uvalue / base;\n> -\t\t} while (uvalue);\n> +\n> +\t\t/*\n> +\t\t * Special case a base of 10 because it is super common and by special casing the compiler can\n> +\t\t * avoid an expensive divide operation (the compiler will use a multiply for this)\n> +\t\t */\n> +\t\tif (likely(base == 10)) {\n> +\t\t\tdo\n> +\t\t\t{\n> +\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 10];\n> +\t\t\t\tuvalue = uvalue / 10;\n> +\t\t\t} while (uvalue);\n> +\t\t} else {\n> +\t\t\tdo\n> +\t\t\t{\n> +\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n> +\t\t\t\tuvalue = uvalue / base;\n> +\t\t\t} while (uvalue);\n> +\t\t}\n> \t}\n> \n> \tzeropad = Max(0, precision - vallen);\n\nSince all the bases are known / set earlier in the function, it seems better\nto just split the function into two, with the new helper doing the conversion.\n\nIt's harder than it should be, because that code is a bit, uh, tangled, but I\nthink I can see a way through...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Oct 2021 09:56:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> It appears fmtint only has three options for base, being 10, 16, and 8. Have you profiled with either of the others special cased as well? 
I don't see much use in optimizing for octal, but hexadecimal is used quite a bit in wal with patterns like \"%08X%08X%08X\".\n\nI'd be inclined to just hard-wire the three allowed cases, and not have\nan arbitrary-divisor code path at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:51:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "On 10/26/2021 10:51 AM, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> It appears fmtint only has three options for base, being 10, 16, and 8. Have you profiled with either of the others special cased as well? I don't see much use in optimizing for octal, but hexadecimal is used quite a bit in wal with patterns like \"%08X%08X%08X\".\n> \n> I'd be inclined to just hard-wire the three allowed cases, and not have\n> an arbitrary-divisor code path at all.\n> \n\nok so feedback is \"Yes please but we want more of it\" :)\n\nI'll go poke at making an updated patch that does 8/10/16 and nothing else.\n\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:13:14 -0700", "msg_from": "Arjan van de Ven <arjan@linux.intel.com>", "msg_from_op": true, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "Hi,\n\nOn 2021-10-26 13:51:55 -0400, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > It appears fmtint only has three options for base, being 10, 16, and 8. Have you profiled with either of the others special cased as well? I don't see much use in optimizing for octal, but hexadecimal is used quite a bit in wal with patterns like \"%08X%08X%08X\".\n> \n> I'd be inclined to just hard-wire the three allowed cases, and not have\n> an arbitrary-divisor code path at all.\n\nYea, I came to the same conclusion. 
But I'd implement it by moving the\ndivision into a separate inline function called from the switch. I tested that\nlocally and it works, but I got sidetracked by [1].\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20211026180454.xcjmu3kwmn3tka57%40alap3.anarazel.de\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:15:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-26 13:51:55 -0400, Tom Lane wrote:\n>> I'd be inclined to just hard-wire the three allowed cases, and not have\n>> an arbitrary-divisor code path at all.\n\n> Yea, I came to the same conclusion. But I'd implement it by moving the\n> division into a separate inline function called from the switch. I tested that\n> locally and it works, but I got sidetracked by [1].\n\nUh, why not just a \"switch (base)\" around three copies of the loop?\nDon't overthink this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 14:33:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" }, { "msg_contents": "Hi,\n\nOn 2021-10-26 14:33:08 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-10-26 13:51:55 -0400, Tom Lane wrote:\n> >> I'd be inclined to just hard-wire the three allowed cases, and not have\n> >> an arbitrary-divisor code path at all.\n>\n> > Yea, I came to the same conclusion. But I'd implement it by moving the\n> > division into a separate inline function called from the switch. I tested that\n> > locally and it works, but I got sidetracked by [1].\n>\n> Uh, why not just a \"switch (base)\" around three copies of the loop?\n> Don't overthink this.\n\nWell, putting the loop into its own function isn't really much more\ncomplicated than duplicating the body. 
And there's also a few more\n\"unnecessarily run-time\" branches that we could get rid of that way.\n\nBut I'm also ok with duplicating, at least for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:58:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: src/port/snprintf.c: Optimize the common base=10 case in fmtint" } ]
[ { "msg_contents": "Hi,\n\nour make dependencies currently are insufficient to trigger client binaries to\nbe relinked when pgport/pgcommon/libpq/... changes.\n\nTo reproduce, you can use something like:\ntouch ~/src/postgresql/src/port/snprintf.c && make -j48 2>&1|grep 'gcc.*pgbench'\nwhich won't show anything, whereas e.g.\ntouch ~/src/postgresql/src/include/postgres_fe.h && make -j48 2>&1|grep 'gcc.*pgbench'\nwill.\n\nThe reason for that is that currently client programs only have order-only\ndependencies on pgport:\n\npgbench: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils\n\t$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\nwhich will dutifully cause pgport to be rebuilt. But then we won't actually\nrebuild $client_program, because it's just an order-only dependency [1].\n\n\nThe same problem does *not* exist for the backend, because there we add\npgport/pgcommon to OBJS, which will cause them to be proper dependencies.\n\nThis does explain some mysterious issues I had over the years with changes\noccasionally requiring a clean/build cycle to fully take. It's especially\nconfusing because after a build cycle one ends up with a build partially using\nthe old and partially the new libraries.\n\n\nI unfortunately don't see a localized fix for this. Afaict we'd need to change\nall client build rules to also have a dependency on the library?\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://www.gnu.org/software/make/manual/html_node/Prerequisite-Types.html\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:04:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "changes in pgport etc doesn't cause client programs to be relinked" }, { "msg_contents": "Hi,\n\nOn 2021-10-26 11:04:54 -0700, Andres Freund wrote:\n> pgbench: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils\n> \t$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\n> I unfortunately don't see a localized fix for this. 
Afaict we'd need to change\n> all client build rules to also have a dependency on the library?\n\nFor a second I thought I had an elegant solution to this: Many linkers these\ndays support --dependency-file, similar to the way we deal with dependencies\nfor compilation.\n\nBut that doesn't work easily either, because we use $^ in at least some of the\nrecipes for building executables, which will contain the generated library\ndependencies after the first build. That's easy enough to fix for things like\npgbench, where $^ is used directly, but there are also some binaries that we\nbuild using prefix rules.\n\nLike\n\n# Replace gmake's default rule for linking a single .o file to produce an\n# executable. The main point here is to put LDFLAGS after the .o file,\n# since we put -l switches into LDFLAGS and those are order-sensitive.\n# In addition, include CFLAGS and LDFLAGS_EX per project conventions.\n%: %.o\n\t$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\nwhich, despite the comment, we use not just with a single object file, but\nalso with multiple ones, afaict. E.g. in src/bin/scripts/Makefile. With a\nsingle object file we could just replace $^ with $@.o, but...\n\nOf course we could filter $^ to only contain .o's, but at some point this\nisn't a simple solution anymore :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Oct 2021 11:31:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: changes in pgport etc doesn't cause client programs to be\n relinked" } ]
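The behavior at the root of this report — an order-only prerequisite's timestamp never forcing a relink — can be modeled with a toy rebuild check. This is an illustrative model of make's decision for a single prerequisite, not GNU make itself, and the names are invented:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of make's rebuild decision.  A normal prerequisite that is
 * newer than the target forces a rebuild; an order-only prerequisite
 * (listed after '|' in a rule) is only guaranteed to be built first,
 * and its timestamp never triggers rebuilding the target.  This is why
 * a freshly rebuilt libpgport does not cause pgbench to be relinked in
 * the report above: the submake-libpgport dependency is order-only.
 */
static bool
toy_needs_rebuild(long target_mtime, long prereq_mtime, bool order_only)
{
    if (order_only)
        return false;           /* timestamp is ignored */
    return prereq_mtime > target_mtime;
}
```

In make terms: `pgbench: $(OBJS) | submake-libpgport` rebuilds the library but never relinks pgbench for it, whereas listing the library file itself as a normal prerequisite would.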
[ { "msg_contents": "Hi,\r\n\r\nSince PostgreSQL 12 (0516c61b756e39) we have allowed for the ability to \r\nset \"clientcert=verify-full\" against various HBA authentication methods. \r\nThis provides a form of \"multi-factor authentication\": e.g. \r\na client must provide both a valid certificate with a CN (or DN) that \r\nmatches the user account and a separate authentication challenge \r\n(e.g. a password).\r\n\r\nWith certificate-based authentication methods and other methods, we \r\nallow for users to specify a mapping in pg_ident, e.g. if one needs to \r\nperform a rewrite on the CN to match the username that is specified \r\nwithin PostgreSQL.\r\n\r\nIt seems logical that we should allow for something like:\r\n\r\n\thostssl all all all scram-sha-256 clientcert=verify-full map=map\r\n\r\nso we can accept certificates that may have CNs that can be mapped to a \r\nPostgreSQL user name.\r\n\r\nCurrently we can't do this, as one will get the error:\r\n\r\n > authentication option \"map\" is only valid for authentication methods\r\n > ident, peer, gssapi, sspi, and cert\r\n\r\nI propose the below patch to add the currently supported password \r\nmethods, scram-sha-256 + md5, to allow for the \"map\" parameter to be \r\nused. I hesitate to add md5 given we're trying to phase it out, so I am open \r\nto debate there.\r\n\r\nIn my testing, this does work when you specify clientcert=verify-full: \r\nPostgreSQL will correctly map the certificate. If you do not have \r\nclientcert=verify-full, the mapping appears to do nothing.\r\n\r\nIf this seems acceptable/valid, I'll add the appropriate documentation \r\nand whatever else may be required.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 26 Oct 2021 14:59:19 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "allowing \"map\" for password auth methods with clientcert=verify-full" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> With certificate-based authentication methods and other methods, we \n> allow for users to specify a mapping in pg_ident, e.g. if one needs to \n> perform a rewrite on the CN to match the username that is specified \n> within PostgreSQL.\n\n> It seems logical that we should allow for something like:\n> \thostssl all all all scram-sha-256 clientcert=verify-full map=map\n> so we can accept certificates that may have CNs that can be mapped to a \n> PostgreSQL user name.\n\nI think this is conflating two different things: a mapping from the\nusername given in the startup packet, and a mapping from the TLS\ncertificate CN. Using the same keyword and terminology for both\nis going to lead to pain. I'm on board with the idea if we can\ndisentangle that, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 15:26:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On 10/26/21 3:26 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> With certificate-based authentication methods and other methods, we\r\n>> allow for users to specify a mapping in pg_ident, e.g. if one needs to\r\n>> perform a rewrite on the CN to match the username that is specified\r\n>> within PostgreSQL.\r\n> \r\n>> It seems logical that we should allow for something like:\r\n>> \thostssl all all all scram-sha-256 clientcert=verify-full map=map\r\n>> so we can accept certificates that may have CNs that can be mapped to a\r\n>> PostgreSQL user name.\r\n> \r\n> I think this is conflating two different things: a mapping from the\r\n> username given in the startup packet, and a mapping from the TLS\r\n> certificate CN. Using the same keyword and terminology for both\r\n> is going to lead to pain. 
I'm on board with the idea if we can\r\n> disentangle that, though.\r\n\r\nHm, don't we already have that already when using \"cert\" combined with \r\nthe \"map\" parameter? This is the main reason I \"stumbled\" upon this \r\nrecommendation.\r\n\r\nBased on what you say and if we're continuing with this functionality, \r\nwould solving the conflation be a matter of primarily documentation?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 26 Oct 2021 16:04:01 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 10/26/21 3:26 PM, Tom Lane wrote:\n>> I think this is conflating two different things: a mapping from the\n>> username given in the startup packet, and a mapping from the TLS\n>> certificate CN. Using the same keyword and terminology for both\n>> is going to lead to pain. I'm on board with the idea if we can\n>> disentangle that, though.\n\n> Hm, don't we already have that already when using \"cert\" combined with \n> the \"map\" parameter? This is the main reason I \"stumbled\" upon this \n> recommendation.\n\nI'm not exactly convinced that the existing design is any good.\nI'm suggesting that we stop and think about it before propagating\nit to a bunch of other use-cases.\n\nPer \"21.2. User Name Maps\", I think that the map parameter is supposed\nto translate from the startup packet's user name to the SQL role name.\nISTM that what is in the cert CN might be different from either\n(particularly by perhaps having a domain name attached). 
So I'd be\nhappier if there were a separate mapping available for the CN.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 18:16:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "\nOn 10/26/21 18:16, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 10/26/21 3:26 PM, Tom Lane wrote:\n>>> I think this is conflating two different things: a mapping from the\n>>> username given in the startup packet, and a mapping from the TLS\n>>> certificate CN. Using the same keyword and terminology for both\n>>> is going to lead to pain. I'm on board with the idea if we can\n>>> disentangle that, though.\n>> Hm, don't we already have that already when using \"cert\" combined with \n>> the \"map\" parameter? This is the main reason I \"stumbled\" upon this \n>> recommendation.\n> I'm not exactly convinced that the existing design is any good.\n> I'm suggesting that we stop and think about it before propagating\n> it to a bunch of other use-cases.\n>\n> Per \"21.2. User Name Maps\", I think that the map parameter is supposed\n> to translate from the startup packet's user name to the SQL role name.\n> ISTM that what is in the cert CN might be different from either\n> (particularly by perhaps having a domain name attached). So I'd be\n> happier if there were a separate mapping available for the CN.\n>\n> \t\t\t\n\n\nPossibly slightly off topic, but\n\nThe cert+map pattern is very useful in conjunction with pgbouncer. 
Using\nit with an auth query to get the password pgbouncer doesn't even need to\nhave a list of users, and we in effect delegate authentication to\npgbouncer.\n\nIt would be nice to have + and @ expansion for the usernames in the\nident file, like there is for pg_hba.conf.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 27 Oct 2021 10:12:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On Tue, 2021-10-26 at 18:16 -0400, Tom Lane wrote:\r\n> Per \"21.2. User Name Maps\", I think that the map parameter is supposed\r\n> to translate from the startup packet's user name to the SQL role name.\r\n\r\nI may have misunderstood what you wrote, but IIUC the startup packet's\r\nuser name _is_ the SQL role name, even when using a map. The map is\r\njust determining whether or not the authenticated ID (pulled from a\r\ncertificate, or from Kerberos, or etc.) is authorized to use that role\r\nname. It's not a translation, because you can have a one-to-many user\r\nmapping (where me@example.com is allowed to log in as `me` or\r\n`postgres` or `admin` or...).\r\n\r\nPlease correct me if I've missed something -- I need to have it right\r\nin my head, given my other patches in this area...\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 27 Oct 2021 16:14:45 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On Wed, 2021-10-27 at 10:12 -0400, Andrew Dunstan wrote:\r\n> Possibly slightly off topic, but\r\n> \r\n> The cert+map pattern is very useful in conjunction with pgbouncer. 
Using\r\n> it with an auth query to get the password pgbouncer doesn't even need to\r\n> have a list of users, and we in effect delegate authentication to\r\n> pgbouncer.\r\n> \r\n> It would be nice to have + and @ expansion for the usernames in the\r\n> ident file, like there is for pg_hba.conf.\r\n\r\n(Probably is off-topic :D but +1 to the concept. Combined with LDAP\r\nmapping that could make some of the ad-hoc LDAP-to-Postgres sync\r\nscripts a lot simpler.)\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 27 Oct 2021 16:49:21 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On 10/27/21 12:14 PM, Jacob Champion wrote:\r\n> On Tue, 2021-10-26 at 18:16 -0400, Tom Lane wrote:\r\n>> Per \"21.2. User Name Maps\", I think that the map parameter is supposed\r\n>> to translate from the startup packet's user name to the SQL role name.\r\n> \r\n> I may have misunderstood what you wrote, but IIUC the startup packet's\r\n> user name _is_ the SQL role name, even when using a map. The map is\r\n> just determining whether or not the authenticated ID (pulled from a\r\n> certificate, or from Kerberos, or etc.) is authorized to use that role\r\n> name. It's not a translation, because you can have a one-to-many user\r\n> mapping (where me@example.com is allowed to log in as `me` or\r\n> `postgres` or `admin` or...).\r\n> \r\n> Please correct me if I've missed something -- I need to have it right\r\n> in my head, given my other patches in this area...\r\n\r\nTo Tom's earlier point, I understand why we may want to pause and think \r\nabout this.\r\n\r\nI don't know the whole history of the \"pg_ident.conf\" file, but judging \r\nby the name, my guess is that the mapping functionality started with the \r\n\"ident\" authentication support, and then it was used for other auth \r\ntypes that could benefit from mapping (cert/gssapi etc.). 
The \r\ndocumentation referenced also skews towards describing what the original \r\nfunctionality for ident does.\r\n\r\nThat said, the existing functionality does match what Jacob is \r\ndescribing and what my own understanding is.\r\n\r\nThe patch I propose just layers on top of the existing functionality -- \r\n you could even argue that it's \"fixing a bug\" that we did not add the \r\ncurrent \"map\" support for the case of \"clientcert=verify-full\" given we \r\ndo introspect the certificate CN to see if it matches the SQL role name.\r\n\r\nIn terms of other user mapping functionality, we have ad hoc support for \r\nFDWs when trying to map to a user in a different server:\r\n\r\nhttps://www.postgresql.org/docs/current/sql-createusermapping.html\r\n\r\nI'm unsure if there is anything we'd want to leverage here, as the \r\noverall goal of this is to provide the ability to establish a connection \r\nwith a remote server.\r\n\r\nI think in the context of doing any new work, I'd step back and ask what \r\nproblem is this solving? The main one I think of is an integration with \r\na SSO system has a credential with an identifier that does not match \r\nit's credential in PostgreSQL? (That would be the case I was working on, \r\nthough said case was borrowed from our docs). Are there other cases?\r\n\r\nThat said, what would make it easier to manage it then? Maybe a lot of \r\nthis is documenting and some expansion on what the pg_ident.conf file \r\ncan do (per Andrew's suggestion). And maybe a new name for said file.\r\n\r\nI don't know if we would want to bring any of this into the catalog or \r\nnot -- but perhaps there may be some advantages to that from an \r\nadministration standpoint.\r\n\r\nAnyway, those are my initial thoughts on the challenge to think a bit \r\nmore deeply about this. I'd still suggest considering the patch I \r\npropose as an \"immediate fix\" for existing versions as, at least to \r\nmyself, I can argue it's a bug. 
We can then do some more work to make \r\nthe overall system a bit easier/clearer to use and maintain.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 27 Oct 2021 12:53:44 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On Wed, 2021-10-27 at 12:53 -0400, Jonathan S. Katz wrote:\r\n> The patch I propose just layers on top of the existing functionality -- \r\n> you could even argue that it's \"fixing a bug\" that we did not add the \r\n> current \"map\" support for the case of \"clientcert=verify-full\" given we \r\n> do introspect the certificate CN to see if it matches the SQL role name.\r\n\r\nWell, also to Tom's earlier point, though, this is a different sort of\r\nmapping. Which \"map\" should we use if someone combines\r\nclientcert=verify-full with an auth method which already uses a map\r\nitself? Does the DBA want to map the auth name, the cert name, or both?\r\n\r\nThe current usermap support is piecemeal and I'd like to see it\r\ncompleted, but I think you may be painting yourself into a corner if\r\nyou fix it in this way. (From a quick look at the patch, I'm also\r\nworried that this happens to work by accident, but that may just be\r\nFUD.)\r\n\r\n> I think in the context of doing any new work, I'd step back and ask what \r\n> problem is this solving? The main one I think of is an integration with \r\n> a SSO system has a credential with an identifier that does not match \r\n> it's credential in PostgreSQL? (That would be the case I was working on, \r\n> though said case was borrowed from our docs). Are there other cases?\r\n> \r\n> That said, what would make it easier to manage it then? Maybe a lot of \r\n> this is documenting and some expansion on what the pg_ident.conf file \r\n> can do (per Andrew's suggestion). 
And maybe a new name for said file.\r\n\r\nI agree that the authorization system is due for a tuneup, for what\r\nit's worth. Some of the comments from Magnus on my LDAP patch [1] kind\r\nof hinted in that direction as well, I think, even if my approach is\r\nrejected in the end.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/1a61806047c536e7528b943d0cfe12608118ca31.camel@vmware.com\r\n", "msg_date": "Wed, 27 Oct 2021 17:37:49 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Wed, 2021-10-27 at 12:53 -0400, Jonathan S. Katz wrote:\n> > The patch I propose just layers on top of the existing functionality -- \n> > you could even argue that it's \"fixing a bug\" that we did not add the \n> > current \"map\" support for the case of \"clientcert=verify-full\" given we \n> > do introspect the certificate CN to see if it matches the SQL role name.\n> \n> Well, also to Tom's earlier point, though, this is a different sort of\n> mapping. Which \"map\" should we use if someone combines\n> clientcert=verify-full with an auth method which already uses a map\n> itself? 
Does the DBA want to map the auth name, the cert name, or both?\n\nMy understanding of the mapping system with pg_ident has always been\nthat it's a mapping from the 'authenticated user' to the 'user in PG'\nand the point is to check if that mapping is allowed but not to actually\nchange anything about who the ultimately logged in user is.\n\nThis is pretty clear and simple when the two are entirely disjoint- that\nis, with 'peer' or 'gssapi' or 'cert', the authenticated user will be\nwhatever the authentication system thinks it is- the Unix username for\npeer, the Kerberos principal for gssapi, the DN or CN for cert.\n\nIn all of the above cases, the username provided to us by the end user\nis the role they're trying to log in as and the mapping is just there to\ncheck if that's allowed based on the *authenticated username*. Perhaps\nnot surprisingly, 21.2 could use some improvement on this, but it's\ncertainly the case that the mapping is only there as a permission check\nand does not actually change who the user is logging into the system as.\nWhatever username is in the startup packet is the role that the user\nwill be logged in as if they're allowed to log in as that role.\n\nNow, when PG is involved in the authentication of the user, then I agree\nthat it gets more interesting. That is- the user wants to log in as\nuser u1, they have a certificate that has a DN of u1/user/domain, and we\nwant to verify that they know the password and present the right\ncertificate- but which password do they need to know?\n\nToday, there's really only one possible answer- the username in the\nstartup packet, because we don't have any concept of \"authenticate to PG\nas user u1 and then be logged in as user u2\". If we were to support a\nmapping for scram or md5, we'd really need the user to tell us both who\nto authenticate as and the user to log into PG as, and then we'd use\npg_ident.conf to ensure that such a mapping is allowed. 
If we wanted to\nimplement something along the lines of \"user authenticates as X but is\nlogged in as Y\" automatically, that would need to be something other\nthan pg_ident.conf, imv. I see that's been discussed a bit on the other\nthread I was explicitly trying to ignore and glad that it's more-or-less\ncoming to the same conclusion.\n\nWhere does that leave us with what Jonathan is suggesting though? For\nmy 2c, we shouldn't allow 'map=' to be used for scram or md5 because\nit'll just confuse users, until and unless we actually do the PGAUTHUSER\nthing and then we can allow 'map=' to check if that mapping is allowed\nand the actual SCRAM PW check is done against PGAUTHUSER and then the\nlogged in user is the user as specified in the startup packet, assuming\nthat mapping is allowed. For Jonathan's actual case though, we should\nadd a 'certmap' option instead and have that be explicitly for the case\nwhere it's scram w/ clientcert=verify-full and then we check the mapping\nof the DN/CN to the startup-packet username. There's no reason this\ncouldn't also work with a 'map' specified and PGAUTHUSER set and SCRAM\nused to verify against that at the same time.\n\n> The current usermap support is piecemeal and I'd like to see it\n> completed, but I think you may be painting yourself into a corner if\n> you fix it in this way. (From a quick look at the patch, I'm also\n> worried that this happens to work by accident, but that may just be\n> FUD.)\n\nI don't think it's an accident that it works, but a few comments and\na more explicit option for the user interface would be good. Admins\nwould be confused if 'map=xyz' was accepted for SCRAM but then didn't\nactually do anything.\n\n> > I think in the context of doing any new work, I'd step back and ask what \n> > problem is this solving? The main one I think of is an integration with \n> > a SSO system has a credential with an identifier that does not match \n> > it's credential in PostgreSQL? 
(That would be the case I was working on, \n> > though said case was borrowed from our docs). Are there other cases?\n> > \n> > That said, what would make it easier to manage it then? Maybe a lot of \n> > this is documenting and some expansion on what the pg_ident.conf file \n> > can do (per Andrew's suggestion). And maybe a new name for said file.\n> \n> I agree that the authorization system is due for a tuneup, for what\n> it's worth. Some of the comments from Magnus on my LDAP patch [1] kind\n> of hinted in that direction as well, I think, even if my approach is\n> rejected in the end.\n\nPerhaps not a surprise, but I continue to be against the idea of adding\nanything more to the insecure hack that is our LDAP auth method. We\nshould be moving away from that, not adding to it.\n\nThat this would also require a new connection option / envvar to tell us\nwho the user wants to authenticate to LDAP as doesn't exactly make me\nany more thrilled with it.\n\nJust my 2c though and I know others don't necessarily agree with me on\nthis point.\n\nThanks,\n\nStephen", "msg_date": "Mon, 8 Nov 2021 15:32:40 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" }, { "msg_contents": "On Mon, 2021-11-08 at 15:32 -0500, Stephen Frost wrote:\r\n> Where does that leave us with what Jonathan is suggesting though? For\r\n> my 2c, we shouldn't allow 'map=' to be used for scram or md5 because\r\n> it'll just confuse users, until and unless we actually do the PGAUTHUSER\r\n> thing and then we can allow 'map=' to check if that mapping is allowed\r\n> and the actual SCRAM PW check is done against PGAUTHUSER and then the\r\n> logged in user is the user as specified in the startup packet, assuming\r\n> that mapping is allowed. 
For Jonathan's actual case though, we should\r\n> add a 'certmap' option instead and have that be explicitly for the case\r\n> where it's scram w/ clientcert=verify-full and then we check the mapping\r\n> of the DN/CN to the startup-packet username. There's no reason this\r\n> couldn't also work with a 'map' specified and PGAUTHUSER set and SCRAM\r\n> used to verify against that at the same time.\r\n\r\nAgreed.\r\n\r\n> Perhaps not a surprise, but I continue to be against the idea of adding\r\n> anything more to the insecure hack that is our LDAP auth method. We\r\n> should be moving away from that, not adding to it.\r\n\r\nParaphrasing you from earlier, the \"authenticate as one user and then\r\nlog in as another\" use case is the one I'm trying to expand. LDAP is\r\njust the auth method I happen to have present-day customer cases for.\r\n\r\n> That this would also require a new connection option / envvar to tell us\r\n> who the user wants to authenticate to LDAP as doesn't exactly make me\r\n> any more thrilled with it.\r\n\r\nForgetting the LDAP part for a moment, do you have another suggestion\r\nfor how we can separate the role name from the user name? The\r\nconnection string seemed to be the most straightforward.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 30 Nov 2021 20:38:14 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: allowing \"map\" for password auth methods with\n clientcert=verify-full" } ]
[ { "msg_contents": "[PATCH v2] src/port/snprintf.c: Optimize the common base=10 case in fmtint\n\nfmtint() turns an integer into a string for a given base, and to do this\nit does a divide/modulo operation iteratively.\nThe only possible base values are 8, 10 and 16\n\nOn just about any CPU, divides are a pretty expensive operation, generally\n10x to 20x or more expensive than adds or multiplies.\n\nBy special casing the base values, the compiler (gcc or other) can (and will)\nreplace the divide by a multiply with 0xcccccccccccccccd (for base 10) or bitops\nfor base 8 and 16, yielding a lot faster code.\n\nI considered a switch statement, but since base 10 is the most common by far,\nI implemented it as a series of if/else statements with a likely() marking the 10 case.\n\nEven though this only shows up in the database creation phase of pgbench and not so much\nduring the normal run time, the optimization is simple and high value enough that\nin my opinion it's worth doing\n\n\n\n\ndiff --git a/src/port/snprintf.c b/src/port/snprintf.c\nindex 7c21429369..547a59d4a0 100644\n--- a/src/port/snprintf.c\n+++ b/src/port/snprintf.c\n@@ -1076,11 +1076,31 @@ fmtint(long long value, char type, int forcesign, int leftjust,\n \telse\n \t{\n \t\t/* make integer string */\n-\t\tdo\n-\t\t{\n-\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n-\t\t\tuvalue = uvalue / base;\n-\t\t} while (uvalue);\n+\n+\t\t/*\n+\t\t * Special case each of the possible base values (8, 10, 16) to avoid an\n+\t\t * expensive divide operation\n+\t\t * (the compiler will use a multiply, shift or boolean ops for this)\n+\t\t */\n+\t\tif (likely(base == 10)) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 10];\n+\t\t\t\tuvalue = uvalue / 10;\n+\t\t\t} while (uvalue);\n+\t\t} else if (base == 16) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 16];\n+\t\t\t\tuvalue = uvalue / 16;\n+\t\t\t} while (uvalue);\n+\t\t} 
else if (base == 8) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 8];\n+\t\t\t\tuvalue = uvalue / 8;\n+\t\t\t} while (uvalue);\n+\t\t}\n \t}\n\n \tzeropad = Max(0, precision - vallen);\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:58:17 -0700", "msg_from": "Arjan van de Ven <arjan@linux.intel.com>", "msg_from_op": true, "msg_subject": "[PATCH v2] src/port/snprintf.c: Optimize the common base=10 case in\n fmtint" }, { "msg_contents": "\nOn Wed, 27 Oct 2021 at 04:58, Arjan van de Ven <arjan@linux.intel.com> wrote:\n> [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case in fmtint\n>\n> fmtint() turns an integer into a string for a given base, and to do this\n> it does a divide/modulo operation iteratively.\n> The only possible base values are 8, 10 and 16\n>\n> On just about any CPU, divides are a pretty expensive operation, generally\n> 10x to 20x or more expensive than adds or multiplies.\n>\n> By special casing the base values, the compiler (gcc or other) can (and will)\n> replace the divide by a multiply with 0xcccccccccccccccd (for base 10) or bitops\n> for base 8 and 16, yielding a lot faster code.\n>\n> I considered a switch statement, but since base 10 is the most common by far,\n> I implemented it as a series of if/else statements with a likely() marking the 10 case.\n>\n> Even though this only shows up in the database creation phase of pgbench and not so much\n> during the normal run time, the optimization is simple and high value enough that\n> in my opinion it's worth doing\n>\n>\n\n\n+\t\tif (likely(base == 10)) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 10];\n+\t\t\t\tuvalue = uvalue / 10;\n+\t\t\t} while (uvalue);\n+\t\t} else if (base == 16) {\n\nWhy do we need likely() for base=10, however, base=16 and base=8 don't need?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 27 Oct 2021 09:36:35 +0800", "msg_from": 
"Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10\n case in fmtint" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Why do we need likely() for base=10, however, base=16 and base=8 don't need?\n\nYeah, I was a little unconvinced about that too. I concur with writing\nit as an if/else chain instead of a switch, but I'm not sure that likely()\nadds anything to that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Oct 2021 21:39:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" }, { "msg_contents": "On 10/26/2021 6:39 PM, Tom Lane wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Why do we need likely() for base=10, however, base=16 and base=8 don't need?\n> \n> Yeah, I was a little unconvinced about that too. I concur with writing\n> it as an if/else chain instead of a switch, but I'm not sure that likely()\n> adds anything to that.\n\nfair enough:\n\n[PATCH v3] src/port/snprintf.c: Optimize the division away in fmtint\n\nfmtint() turns an integer into a string for a given base, and to do this\nit does a divide/modulo operation iteratively.\nThe only possible base values are 8, 10 and 16\n\nOn just about any CPU, generic divides are a pretty expensive operation, generally\n10x to 20x or more expensive than adds or multiplies.\n\nBy special casing the base values, the compiler (gcc or other) can (and will)\nreplace the divide by a multiply with 0xcccccccccccccccd (for base 10) or bitops\nfor base 8 and 16, yielding a lot faster code.\n\nI considered a switch statement, but since base 10 is the most common by far,\nI implemented it as a series of if/else statements for simplicity.\n\nEven though this only shows up in the database creation phase of pgbench and not so much\nduring the normal run time, the optimization is simple and high 
value enough that\nin my opinion it's worth doing\n\n\n\n\ndiff --git a/src/port/snprintf.c b/src/port/snprintf.c\nindex 7c21429369..547a59d4a0 100644\n--- a/src/port/snprintf.c\n+++ b/src/port/snprintf.c\n@@ -1076,11 +1076,31 @@ fmtint(long long value, char type, int forcesign, int leftjust,\n \telse\n \t{\n \t\t/* make integer string */\n-\t\tdo\n-\t\t{\n-\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % base];\n-\t\t\tuvalue = uvalue / base;\n-\t\t} while (uvalue);\n+\n+\t\t/*\n+\t\t * Special case each of the possible base values (8, 10, 16) to avoid an\n+\t\t * expensive divide operation\n+\t\t * (the compiler will use a multiply, shift or boolean ops for this)\n+\t\t */\n+\t\tif (base == 10) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 10];\n+\t\t\t\tuvalue = uvalue / 10;\n+\t\t\t} while (uvalue);\n+\t\t} else if (base == 16) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 16];\n+\t\t\t\tuvalue = uvalue / 16;\n+\t\t\t} while (uvalue);\n+\t\t} else if (base == 8) {\n+\t\t\tdo\n+\t\t\t{\n+\t\t\t\tconvert[sizeof(convert) - (++vallen)] = cvt[uvalue % 8];\n+\t\t\t\tuvalue = uvalue / 8;\n+\t\t\t} while (uvalue);\n+\t\t}\n \t}\n\n \tzeropad = Max(0, precision - vallen);\n\n\n\n", "msg_date": "Wed, 27 Oct 2021 15:18:13 -0700", "msg_from": "Arjan van de Ven <arjan@linux.intel.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" }, { "msg_contents": "On 10/27/21 18:18, Arjan van de Ven wrote:\n> + /*\n> + * Special case each of the possible base values (8, 10, 16) to\n> avoid an\n> + * expensive divide operation\n> + * (the compiler will use a multiply, shift or boolean ops for this)\n> + */\n\n\nWas 'boolean' the intended word there? 
To me it is distinct from 'bitwise'.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 27 Oct 2021 19:50:45 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 10/27/21 18:18, Arjan van de Ven wrote:\n>> + /*\n>> + * Special case each of the possible base values (8, 10, 16) to\n>> avoid an\n>> + * expensive divide operation\n>> + * (the compiler will use a multiply, shift or boolean ops for this)\n>> + */\n\n> Was 'boolean' the intended word there? To me it is distinct from 'bitwise'.\n\nI think the comment is overly specific anyway. We should just say\n\"division by a constant is faster than general-purpose division\".\nOnly compiler geeks will care about the details, and they probably\nknow them already.\n\nPersonally, I failed to measure any speedup at all on pgbench, either\nin the init phase or regular transactions; whatever difference there\nmay be is below the noise level. However, I wrote a simple C function\nwith a tight loop around snprintf(), and that showed about a 2X\nimprovement, so there is some win here.\n\nI went ahead and pushed it with a rewritten comment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Oct 2021 13:46:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" }, { "msg_contents": "Hi,\n\nOn 2021-10-28 13:46:49 -0400, Tom Lane wrote:\n> Personally, I failed to measure any speedup at all on pgbench, either\n> in the init phase or regular transactions; whatever difference there\n> may be is below the noise level. 
However, I wrote a simple C function\n> with a tight loop around snprintf(), and that showed about a 2X\n> improvement, so there is some win here.\n\nOdd - at least with an earlier patch I saw optimized pgbench initialization go\ndown by ~25%.\n\n\n> I went ahead and pushed it with a rewritten comment.\n\nImo the code now is a bit odd, because we first switch (type) setting base,\nand then separately have branches for the different bases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 Oct 2021 13:27:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Imo the code now is a bit odd, because we first switch (type) setting base,\n> and then separately have branches for the different bases.\n\nIt'd be hard to merge, I think, given that the cases in the switch\ndon't line up one-for-one with the different bases. You could\nprobably do something involving falling through between different\ncases, but I think that that would be a lot harder to read;\nand I'm still of the opinion that micro-optimizing this code\nis probably a waste of effort for our usage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Oct 2021 16:34:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2] src/port/snprintf.c: Optimize the common base=10 case\n in fmtint" } ]
[ { "msg_contents": "Generally if a role is granted membership to another role with NOINHERIT\nthey must use SET ROLE to access the privileges of that role, however\nwith predefined roles the membership and privilege is conflated, as\ndemonstrated by:\n\nCREATE ROLE readrole;\nCREATE ROLE role2 NOINHERIT;\nCREATE ROLE brindle LOGIN;\n\nGRANT role2 TO brindle;\nCREATE TABLE foo(i INT);\nGRANT readrole TO role2;\nGRANT ALL ON TABLE foo TO readrole;\n\nGRANT pg_read_all_stats,pg_read_all_settings,pg_read_server_files,pg_write_server_files,pg_execute_server_program TO role2;\n\nLog in as brindle:\n\npostgres=> select current_user;\n current_user\n--------------\n brindle\n(1 row)\n\npostgres=> SELECT * FROM foo;\nERROR: permission denied for table foo\n\npostgres=> SELECT DISTINCT query FROM pg_stat_activity;\n query\n----------------------------------------------\n\n SELECT DISTINCT query FROM pg_stat_activity;\n(2 rows)\n\npostgres=> SET ROLE readrole;\nSET\npostgres=> SELECT * FROM foo;\n i\n---\n(0 rows)\n\nAfter this patch:\n\npostgres=> SELECT DISTINCT query FROM pg_stat_activity;\n query\n--------------------------\n <insufficient privilege>\n(1 row)\n\npostgres=> SET ROLE pg_read_all_stats;\nSET\npostgres=> SELECT DISTINCT query FROM pg_stat_activity;\n query\n----------------------------------------------\n\n SELECT DISTINCT query FROM pg_stat_activity;\n(2 rows)\n\npostgres=> SHOW config_file;\nERROR: must be superuser or have privileges of pg_read_all_settings to examine \"config_file\"\npostgres=> SET ROLE pg_read_all_settings;\nSET\npostgres=> SHOW config_file;\n config_file\n-----------------------------------\n /var/lib/pgsql/15/postgresql.conf\n(1 row)\n\nWith inheritance it works as expected:\n\nALTER ROLE role2 INHERIT;\n\npostgres=> SELECT current_user;\n current_user\n--------------\n brindle\n(1 row)\n\npostgres=> SHOW config_file;\n config_file\n-----------------------------------\n /var/lib/pgsql/15/postgresql.conf\n(1 row)\n\nSigned-off-by: 
Joshua Brindle <joshua.brindle@crunchydata.com>\n---\n src/backend/commands/copy.c | 12 ++++++------\n src/backend/replication/walreceiver.c | 8 ++++----\n src/backend/replication/walsender.c | 8 ++++----\n src/backend/utils/adt/dbsize.c | 8 ++++----\n src/backend/utils/adt/genfile.c | 6 +++---\n src/backend/utils/adt/pgstatfuncs.c | 2 +-\n src/backend/utils/misc/guc.c | 20 ++++++++++----------\n 7 files changed, 32 insertions(+), 32 deletions(-)\n\ndiff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\nindex 53f48531419..e26ff42fd82 100644\n--- a/src/backend/commands/copy.c\n+++ b/src/backend/commands/copy.c\n@@ -80,26 +80,26 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t{\n \t\tif (stmt->is_program)\n \t\t{\n-\t\t\tif (!is_member_of_role(GetUserId(), ROLE_PG_EXECUTE_SERVER_PROGRAM))\n+\t\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_EXECUTE_SERVER_PROGRAM))\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t\t\t errmsg(\"must be superuser or a member of the pg_execute_server_program role to COPY to or from an external program\"),\n+\t\t\t\t\t\t errmsg(\"must be superuser or have privileges of the pg_execute_server_program role to COPY to or from an external program\"),\n \t\t\t\t\t\t errhint(\"Anyone can COPY to stdout or from stdin. \"\n \t\t\t\t\t\t\t\t \"psql's \\\\copy command also works for anyone.\")));\n \t\t}\n \t\telse\n \t\t{\n-\t\t\tif (is_from && !is_member_of_role(GetUserId(), ROLE_PG_READ_SERVER_FILES))\n+\t\t\tif (is_from && !has_privs_of_role(GetUserId(), ROLE_PG_READ_SERVER_FILES))\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t\t\t errmsg(\"must be superuser or a member of the pg_read_server_files role to COPY from a file\"),\n+\t\t\t\t\t\t errmsg(\"must be superuser or have privileges of the pg_read_server_files role to COPY from a file\"),\n \t\t\t\t\t\t errhint(\"Anyone can COPY to stdout or from stdin. 
\"\n \t\t\t\t\t\t\t\t \"psql's \\\\copy command also works for anyone.\")));\n \n-\t\t\tif (!is_from && !is_member_of_role(GetUserId(), ROLE_PG_WRITE_SERVER_FILES))\n+\t\t\tif (!is_from && !has_privs_of_role(GetUserId(), ROLE_PG_WRITE_SERVER_FILES))\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t\t\t errmsg(\"must be superuser or a member of the pg_write_server_files role to COPY to a file\"),\n+\t\t\t\t\t\t errmsg(\"must be superuser or have privileges of the pg_write_server_files role to COPY to a file\"),\n \t\t\t\t\t\t errhint(\"Anyone can COPY to stdout or from stdin. \"\n \t\t\t\t\t\t\t\t \"psql's \\\\copy command also works for anyone.\")));\n \t\t}\ndiff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c\nindex b90e5ca98ea..c8ddb6fc323 100644\n--- a/src/backend/replication/walreceiver.c\n+++ b/src/backend/replication/walreceiver.c\n@@ -1392,12 +1392,12 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)\n \t/* Fetch values */\n \tvalues[0] = Int32GetDatum(pid);\n \n-\tif (!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n+\tif (!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n \t{\n \t\t/*\n-\t\t * Only superusers and members of pg_read_all_stats can see details.\n-\t\t * Other users only get the pid value to know whether it is a WAL\n-\t\t * receiver, but no details.\n+\t\t * Only superusers and roles with privileges of pg_read_all_stats\n+\t\t * can see details. 
Other users only get the pid value to know whether\n+\t\t * it is a WAL receiver, but no details.\n \t\t */\n \t\tMemSet(&nulls[1], true, sizeof(bool) * (tupdesc->natts - 1));\n \t}\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex d9ab6d6de24..4daf1581bc6 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -3486,12 +3486,12 @@ pg_stat_get_wal_senders(PG_FUNCTION_ARGS)\n \t\tmemset(nulls, 0, sizeof(nulls));\n \t\tvalues[0] = Int32GetDatum(pid);\n \n-\t\tif (!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n+\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n \t\t{\n \t\t\t/*\n-\t\t\t * Only superusers and members of pg_read_all_stats can see\n-\t\t\t * details. Other users only get the pid value to know it's a\n-\t\t\t * walsender, but no details.\n+\t\t\t * Only superusers and roles with privileges of pg_read_all_stats\n+\t\t\t * can see details. Other users only get the pid value to know\n+\t\t\t * it's a walsender, but no details.\n \t\t\t */\n \t\t\tMemSet(&nulls[1], true, PG_STAT_GET_WAL_SENDERS_COLS - 1);\n \t\t}\ndiff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\nindex d5a7fb13f3c..95a5d34fdf1 100644\n--- a/src/backend/utils/adt/dbsize.c\n+++ b/src/backend/utils/adt/dbsize.c\n@@ -112,12 +112,12 @@ calculate_database_size(Oid dbOid)\n \tAclResult\taclresult;\n \n \t/*\n-\t * User must have connect privilege for target database or be a member of\n+\t * User must have connect privilege for target database or have privilegs of\n \t * pg_read_all_stats\n \t */\n \taclresult = pg_database_aclcheck(dbOid, GetUserId(), ACL_CONNECT);\n \tif (aclresult != ACLCHECK_OK &&\n-\t\t!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n+\t\t!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n \t{\n \t\taclcheck_error(aclresult, OBJECT_DATABASE,\n \t\t\t\t\t get_database_name(dbOid));\n@@ -196,12 +196,12 @@ 
calculate_tablespace_size(Oid tblspcOid)\n \tAclResult\taclresult;\n \n \t/*\n-\t * User must be a member of pg_read_all_stats or have CREATE privilege for\n+\t * User must have privileges of pg_read_all_stats or have CREATE privilege for\n \t * target tablespace, either explicitly granted or implicitly because it\n \t * is default for current database.\n \t */\n \tif (tblspcOid != MyDatabaseTableSpace &&\n-\t\t!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n+\t\t!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))\n \t{\n \t\taclresult = pg_tablespace_aclcheck(tblspcOid, GetUserId(), ACL_CREATE);\n \t\tif (aclresult != ACLCHECK_OK)\ndiff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c\nindex c436d9318b6..f87f77093a6 100644\n--- a/src/backend/utils/adt/genfile.c\n+++ b/src/backend/utils/adt/genfile.c\n@@ -58,11 +58,11 @@ convert_and_check_filename(text *arg)\n \tcanonicalize_path(filename);\t/* filename can change length here */\n \n \t/*\n-\t * Members of the 'pg_read_server_files' role are allowed to access any\n-\t * files on the server as the PG user, so no need to do any further checks\n+\t * Roles with privleges of the 'pg_read_server_files' role are allowed to access\n+\t * any files on the server as the PG user, so no need to do any further checks\n \t * here.\n \t */\n-\tif (is_member_of_role(GetUserId(), ROLE_PG_READ_SERVER_FILES))\n+\tif (has_privs_of_role(GetUserId(), ROLE_PG_READ_SERVER_FILES))\n \t\treturn filename;\n \n \t/*\ndiff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\nindex ff5aedc99cb..56762a7d98d 100644\n--- a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -34,7 +34,7 @@\n \n #define UINT32_ACCESS_ONCE(var)\t\t ((uint32)(*((volatile uint32 *)&(var))))\n \n-#define HAS_PGSTAT_PERMISSIONS(role)\t (is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n+#define HAS_PGSTAT_PERMISSIONS(role)\t 
(has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n \n Datum\n pg_stat_get_numscans(PG_FUNCTION_ARGS)\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex e91d5a3cfda..e400bfdea13 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -8154,10 +8154,10 @@ GetConfigOption(const char *name, bool missing_ok, bool restrict_privileged)\n \t\treturn NULL;\n \tif (restrict_privileged &&\n \t\t(record->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n+\t\t!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t errmsg(\"must be superuser or a member of pg_read_all_settings to examine \\\"%s\\\"\",\n+\t\t\t\t errmsg(\"must be superuser have privileges of pg_read_all_settings to examine \\\"%s\\\"\",\n \t\t\t\t\t\tname)));\n \n \tswitch (record->vartype)\n@@ -8201,10 +8201,10 @@ GetConfigOptionResetString(const char *name)\n \trecord = find_option(name, false, false, ERROR);\n \tAssert(record != NULL);\n \tif ((record->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n+\t\t!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t errmsg(\"must be superuser or a member of pg_read_all_settings to examine \\\"%s\\\"\",\n+\t\t\t\t errmsg(\"must be superuser or have privileges of pg_read_all_settings to examine \\\"%s\\\"\",\n \t\t\t\t\t\tname)));\n \n \tswitch (record->vartype)\n@@ -9448,7 +9448,7 @@ ShowAllGUCConfig(DestReceiver *dest)\n \n \t\tif ((conf->flags & GUC_NO_SHOW_ALL) ||\n \t\t\t((conf->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t\t !is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n+\t\t\t !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n \t\t\tcontinue;\n \n \t\t/* assign to the 
values array */\n@@ -9515,7 +9515,7 @@ get_explain_guc_options(int *num)\n \t\t/* return only options visible to the current user */\n \t\tif ((conf->flags & GUC_NO_SHOW_ALL) ||\n \t\t\t((conf->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t\t !is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n+\t\t\t !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n \t\t\tcontinue;\n \n \t\t/* return only options that are different from their boot values */\n@@ -9597,10 +9597,10 @@ GetConfigOptionByName(const char *name, const char **varname, bool missing_ok)\n \t}\n \n \tif ((record->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t!is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n+\t\t!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\t errmsg(\"must be superuser or a member of pg_read_all_settings to examine \\\"%s\\\"\",\n+\t\t\t\t errmsg(\"must be superuser or have privileges of pg_read_all_settings to examine \\\"%s\\\"\",\n \t\t\t\t\t\tname)));\n \n \tif (varname)\n@@ -9628,7 +9628,7 @@ GetConfigOptionByNum(int varnum, const char **values, bool *noshow)\n \t{\n \t\tif ((conf->flags & GUC_NO_SHOW_ALL) ||\n \t\t\t((conf->flags & GUC_SUPERUSER_ONLY) &&\n-\t\t\t !is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n+\t\t\t !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n \t\t\t*noshow = true;\n \t\telse\n \t\t\t*noshow = false;\n@@ -9823,7 +9823,7 @@ GetConfigOptionByNum(int varnum, const char **values, bool *noshow)\n \t * insufficiently-privileged users.\n \t */\n \tif (conf->source == PGC_S_FILE &&\n-\t\tis_member_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n+\t\thas_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS))\n \t{\n \t\tvalues[14] = conf->sourcefile;\n \t\tsnprintf(buffer, sizeof(buffer), \"%d\", conf->sourceline);\n-- \n2.31.1\n\n\n\n", "msg_date": "Tue, 26 Oct 2021 15:47:31 -0700", "msg_from": "Joshua Brindle 
<joshua.brindle@crunchydata.com>", "msg_from_op": true, "msg_subject": "[PATCH] Conflation of member/privs for predefined roles" }, { "msg_contents": "On 10/26/21, 3:50 PM, \"Joshua Brindle\" <joshua.brindle@crunchydata.com> wrote:\r\n> Generally if a role is granted membership to another role with NOINHERIT\r\n> they must use SET ROLE to access the privileges of that role, however\r\n> with predefined roles the membership and privilege is conflated, as\r\n> demonstrated by:\r\n\r\nI think it makes sense that INHERIT/NOINHERIT should be respected for\r\nthe predefined roles. I went through some of the old threads and\r\ncommits for predefined roles, and I didn't find any mention of\r\ninheritance, so there might not be a strong reason it was done this\r\nway.\r\n\r\nI saw a few places in the docs that will likely need to be updated as\r\nwell. For example, pg_freespacemap has this note:\r\n\r\n By default use is restricted to superusers and members of the pg_stat_scan_tables role.\r\n\r\nAnd I found at least one test (rolenames.sql) that fails due to the\r\nnew ERROR message.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 27 Oct 2021 17:20:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Conflation of member/privs for predefined roles" }, { "msg_contents": "On Wed, Oct 27, 2021 at 1:20 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/26/21, 3:50 PM, \"Joshua Brindle\" <joshua.brindle@crunchydata.com> wrote:\n> > Generally if a role is granted membership to another role with NOINHERIT\n> > they must use SET ROLE to access the privileges of that role, however\n> > with predefined roles the membership and privilege is conflated, as\n> > demonstrated by:\n>\n> I think it makes sense that INHERIT/NOINHERIT should be respected for\n> the predefined roles. 
I went through some of the old threads and\n> commits for predefined roles, and I didn't find any mention of\n> inheritance, so there might not be a strong reason it was done this\n> way.\n\nThank you for looking into this. We believe this was a mistake and I\nhave a follow-up patch to remove is_member_of_role() from the header\nto avoid it going forward.\n\nAt least one new pre-defined role patch (pg_maintenance) was recently\nsubmitted using has_privs_of_role() so it seems like there is a need\nfor consistency regardless.\n\n> I saw a few places in the docs that will likely need to be updated as\n> well. For example, pg_freespacemap has this note:\n>\n> By default use is restricted to superusers and members of the pg_stat_scan_tables role.\n>\n> And I found at least one test (rolenames.sql) that fails due to the\n> new ERROR message.\n\nI'm new to contributing here but I've been told that the string\nchanges get taken care of later, is that not true?\n\n\n", "msg_date": "Wed, 27 Oct 2021 13:27:14 -0400", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Conflation of member/privs for predefined roles" }, { "msg_contents": "On 10/27/21, 10:28 AM, \"Joshua Brindle\" <joshua.brindle@crunchydata.com> wrote:\r\n> I'm new to contributing here but I've been told that the string\r\n> changes get taken care of later, is that not true?\r\n\r\nI will sometimes leave out tests and docs until I get buy-in on the\r\napproach. 
But for serious consideration, I think the patch has to be\r\nmore-or-less complete.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 27 Oct 2021 17:34:13 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Conflation of member/privs for predefined roles" }, { "msg_contents": "On Wed, Oct 27, 2021 at 1:34 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/27/21, 10:28 AM, \"Joshua Brindle\" <joshua.brindle@crunchydata.com> wrote:\n> > I'm new to contributing here but I've been told that the string\n> > changes get taken care of later, is that not true?\n>\n> I will sometimes leave out tests and docs until I get buy-in on the\n> approach. But for serious consideration, I think the patch has to be\n> more-or-less complete.\n>\n\nThanks, I'll fix those and resubmit both patches in a single email\nsince one depends on the other.\n\n\n", "msg_date": "Wed, 27 Oct 2021 13:54:06 -0400", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Conflation of member/privs for predefined roles" } ]
[ { "msg_contents": "Hi,\n\nA while back I'd reported [1] to the wine bugs list that their popen() doesn't\nquite work. Which I noticed because it made initdb.exe hang in wine.\n\nThat was just fixed. And I verified that with a bleeding edge wine the windows\ninitdb.exe actually succeeds (although I nuked the root/admin check for now).\n\nI mostly was interested in that to make it easier to do changes that affect\nwindows from linux, without having to wait for the buildfarm / cfbot / CI to\ncome back with the inevitable problem.\n\nGreetings,\n\nAndres Freund\n\n[1] https://bugs.winehq.org/show_bug.cgi?id=51719\n\n\n", "msg_date": "Tue, 26 Oct 2021 17:22:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "heads up: initdb.exe now succeeds in wine" } ]
[ { "msg_contents": "I've been investigating the poor performance of a WITH RECURSIVE\nquery, which I've recreated with test data.\n\nThe first thing was to re-write the query, which helped improve\nperformance by about 30%, but the plan was still very bad. With a\nsmall patch I've been able to improve performance by about x100.\n\nThe poor performance is traced to the planner cost estimates for\nrecursive queries. Specifically, the cost of the recursive arm of the\nquery is evaluated based upon both of these hardcoded assumptions:\n\n1. The recursion will last for 10 loops\n2. The average size of the worktable will be 10x the size of the\ninitial query (non-recursive term).\n\nTaken together these assumptions lead to a very poor estimate of the\nworktable activity (in this case), which leads to the plan changing as\na result.\n\nThe factor 10 is a reasonably safe assumption and helps avoid worst\ncase behavior in bigger graph queries. However, the factor 10 is way\ntoo large for many types of graph query, such as where the path\nthrough the data is tight, and/or the query is written to prune bushy\ngraphs, e.g. shortest path queries. The factor 10 should not be\nhardcoded in the planner, but should be settable, just as\ncursor_tuple_fraction is.\n\nI've written a short patch to make the estimate of the avg size of the\nworktable configurable:\n\n recursive_worktable_estimate = N (default 10)\n\nUsing this parameter with the test query results in a consistently\nrepeatable ~100x gain in performance, using\nrecursive_worktable_estimate = 1 for a shortest path query:\n\nUnpatched: 1775ms\nPatched: 17.2ms\n\nThis is because the estimated size of the worktable is closer to the\ntruth and so leads naturally to a more sensible plan. EXPLAINs\nattached - please look at the estimated rows for the WorkTable Scan.\n\nThere are various options for setting the two estimates: just one, or\nother, or both values separately, or both together. 
Note that I\nhaven't touched the estimate that recursion will last for 10 loops. I\nfigured that people would object to two knobs.\n\nThoughts?\n\n\nThere are 2 other ways to speed up the query. One is to set\nenable_seqscan = off, which helps about 20%, but may have other\nconsequences. Two is to set work_mem = 512MB, so that the poor plan\n(hash) works as fast as possible, but that is far from optimal either.\nGetting the right plan is x20 faster than either of those\nalternatives.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 27 Oct 2021 15:58:58 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Wed, 27 Oct 2021 at 15:58, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> The poor performance is traced to the planner cost estimates for\n> recursive queries. Specifically, the cost of the recursive arm of the\n> query is evaluated based upon both of these hardcoded assumptions:\n>\n> 1. The recursion will last for 10 loops\n> 2. The average size of the worktable will be 10x the size of the\n> initial query (non-recursive term).\n>\n> Taken together these assumptions lead to a very poor estimate of the\n> worktable activity (in this case), which leads to the plan changing as\n> a result.\n>\n> The factor 10 is a reasonably safe assumption and helps avoid worst\n> case behavior in bigger graph queries. However, the factor 10 is way\n> too large for many types of graph query, such as where the path\n> through the data is tight, and/or the query is written to prune bushy\n> graphs, e.g. shortest path queries. The factor 10 should not be\n> hardcoded in the planner, but should be settable, just as\n> cursor_tuple_fraction is.\n\nIf you think this should be derived without parameters, then we would\nwant a function that starts at 1 for 1 input row and gets much larger\nfor larger input. 
The thinking here is that Graph OLTP is often a\nshortest path between two nodes, whereas Graph Analytics and so the\nworktable will get much bigger.\n\nSo I'm, thinking we use\n\nrel->tuples = min(1, cte_rows * cte_rows/2);\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 31 Dec 2021 14:10:12 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "> The factor 10 should not be hardcoded in the planner, but should be\nsettable, just as cursor_tuple_fraction is.\n\nI feel considerably out of my depth here, but I like the idea of a working\ntable size multiplier GUC, given the challenges of predicting the number of\niterations (and any adjustments to cardinality per iteration). An\nexponential cost function may lead to increasingly pathological outcomes,\nespecially when estimates for cte_rows are off.\n\nIn the EXPLAINs, it looked like the estimates for knows_pkey were off by an\norder of magnitude or so. It's possible the planner would have chosen the\nNested Loop plan if knows_pkey had estimated to rows=87 (as the WindowAgg\nwould have estimated to roughly the same size as the second plan anyways,\neven with rel->tuples = 10 * cte_rows).\n\nI also wonder if there's a better default than cte_rows * 10, but\nimplementing a new GUC sounds like a reasonable solution to this as well.\n\n--\n\nKenaniah\n\nOn Sat, Jan 22, 2022 at 1:58 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Wed, 27 Oct 2021 at 15:58, Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n>\n> > The poor performance is traced to the planner cost estimates for\n> > recursive queries. Specifically, the cost of the recursive arm of the\n> > query is evaluated based upon both of these hardcoded assumptions:\n> >\n> > 1. The recursion will last for 10 loops\n> > 2. 
The average size of the worktable will be 10x the size of the\n> > initial query (non-recursive term).\n> >\n> > Taken together these assumptions lead to a very poor estimate of the\n> > worktable activity (in this case), which leads to the plan changing as\n> > a result.\n> >\n> > The factor 10 is a reasonably safe assumption and helps avoid worst\n> > case behavior in bigger graph queries. However, the factor 10 is way\n> > too large for many types of graph query, such as where the path\n> > through the data is tight, and/or the query is written to prune bushy\n> > graphs, e.g. shortest path queries. The factor 10 should not be\n> > hardcoded in the planner, but should be settable, just as\n> > cursor_tuple_fraction is.\n>\n> If you think this should be derived without parameters, then we would\n> want a function that starts at 1 for 1 input row and gets much larger\n> for larger input. The thinking here is that Graph OLTP is often a\n> shortest path between two nodes, whereas Graph Analytics and so the\n> worktable will get much bigger.\n>\n> So I'm, thinking we use\n>\n> rel->tuples = min(1, cte_rows * cte_rows/2);\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n>\n>\n>\n", "msg_date": "Sat, 22 Jan 2022 14:33:59 -0800", "msg_from": "Kenaniah Cerny <kenaniah@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On 31.12.21 15:10, Simon Riggs wrote:\n>> The factor 10 is a reasonably safe assumption and helps avoid worst\n>> case behavior in bigger graph queries. However, the factor 10 is way\n>> too large for many types of graph query, such as where the path\n>> through the data is tight, and/or the query is written to prune bushy\n>> graphs, e.g. shortest path queries. The factor 10 should not be\n>> hardcoded in the planner, but should be settable, just as\n>> cursor_tuple_fraction is.\n> If you think this should be derived without parameters, then we would\n> want a function that starts at 1 for 1 input row and gets much larger\n> for larger input. The thinking here is that Graph OLTP is often a\n> shortest path between two nodes, whereas Graph Analytics and so the\n> worktable will get much bigger.\n\nOn the one hand, this smells like a planner hint. But on the other \nhand, it doesn't look like we will come up with proper graph-aware \nselectivity estimation system any time soon, so just having all graph \nOLTP queries suck until then because the planner hint is hardcoded \ndoesn't seem like a better solution. So I think this setting can be ok. 
\n I think the way you have characterized it makes sense, too: for graph \nOLAP, you want a larger value, for graph OLTP, you want a smaller value.\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 10:44:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Tue, 25 Jan 2022 at 14:44, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 31.12.21 15:10, Simon Riggs wrote:\n> >> The factor 10 is a reasonably safe assumption and helps avoid worst\n> >> case behavior in bigger graph queries. However, the factor 10 is way\n> >> too large for many types of graph query, such as where the path\n> >> through the data is tight, and/or the query is written to prune bushy\n> >> graphs, e.g. shortest path queries. The factor 10 should not be\n> >> hardcoded in the planner, but should be settable, just as\n> >> cursor_tuple_fraction is.\n> > If you think this should be derived without parameters, then we would\n> > want a function that starts at 1 for 1 input row and gets much larger\n> > for larger input. The thinking here is that Graph OLTP is often a\n> > shortest path between two nodes, whereas Graph Analytics and so the\n> > worktable will get much bigger.\n>\n> On the one hand, this smells like a planner hint. But on the other\n> hand, it doesn't look like we will come up with proper graph-aware\n> selectivity estimation system any time soon, so just having all graph\n> OLTP queries suck until then because the planner hint is hardcoded\n> doesn't seem like a better solution. 
So I think this setting can be ok.\n> I think the way you have characterized it makes sense, too: for graph\n> OLAP, you want a larger value, for graph OLTP, you want a smaller value.\n>\n\nDo you think there is a case to replace the 10x multiplier with\n\"recursive_worktable_estimate\" for total_rows calculation in the\ncost_recursive_union function too?\n", "msg_date": "Fri, 28 Jan 2022 18:40:20 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Tue, Jan 25, 2022 at 4:44 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On the one hand, this smells like a planner hint. But on the other\n> hand, it doesn't look like we will come up with proper graph-aware\n> selectivity estimation system any time soon, so just having all graph\n> OLTP queries suck until then because the planner hint is hardcoded\n> doesn't seem like a better solution. So I think this setting can be ok.\n\nI agree. It's a bit lame, but seems pretty harmless, and I can't see\nus realistically doing a lot better with any reasonable amount of\nwork.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 09:07:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Fri, 28 Jan 2022 at 14:07, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 4:44 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > On the one hand, this smells like a planner hint. But on the other\n> > hand, it doesn't look like we will come up with proper graph-aware\n> > selectivity estimation system any time soon, so just having all graph\n> > OLTP queries suck until then because the planner hint is hardcoded\n> > doesn't seem like a better solution. So I think this setting can be ok.\n>\n> I agree. 
It's a bit lame, but seems pretty harmless, and I can't see\n> us realistically doing a lot better with any reasonable amount of\n> work.\n\nShall I set this as Ready For Committer?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:42:14 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "Hi,\n\nOn 2022-03-10 17:42:14 +0000, Simon Riggs wrote:\n> Shall I set this as Ready For Committer?\n\nCurrently this CF entry fails on cfbot: https://cirrus-ci.com/task/4531771134967808?logs=test_world#L1158\n\n[16:27:35.772] # Failed test 'no parameters missing from postgresql.conf.sample'\n[16:27:35.772] # at t/003_check_guc.pl line 82.\n[16:27:35.772] # got: '1'\n[16:27:35.772] # expected: '0'\n[16:27:35.772] # Looks like you failed 1 test of 3.\n[16:27:35.772] [16:27:35] t/003_check_guc.pl ..............\n\nMarked as waiting on author.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:04:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Tue, 22 Mar 2022 at 00:04, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-03-10 17:42:14 +0000, Simon Riggs wrote:\n> > Shall I set this as Ready For Committer?\n>\n> Currently this CF entry fails on cfbot: https://cirrus-ci.com/task/4531771134967808?logs=test_world#L1158\n>\n> [16:27:35.772] # Failed test 'no parameters missing from postgresql.conf.sample'\n> [16:27:35.772] # at t/003_check_guc.pl line 82.\n> [16:27:35.772] # got: '1'\n> [16:27:35.772] # expected: '0'\n> [16:27:35.772] # Looks like you failed 1 test of 3.\n> [16:27:35.772] [16:27:35] t/003_check_guc.pl ..............\n>\n> Marked as waiting on author.\n\nThanks.\n\nI've corrected that by adding a line to postgresql.conf.sample, as\nwell as adding docs.\n\n-- \nSimon 
Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 23 Mar 2022 14:53:04 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 25, 2022 at 4:44 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> On the one hand, this smells like a planner hint. But on the other\n>> hand, it doesn't look like we will come up with proper graph-aware\n>> selectivity estimation system any time soon, so just having all graph\n>> OLTP queries suck until then because the planner hint is hardcoded\n>> doesn't seem like a better solution. So I think this setting can be ok.\n\n> I agree. It's a bit lame, but seems pretty harmless, and I can't see\n> us realistically doing a lot better with any reasonable amount of\n> work.\n\nYeah, agreed on all counts. The thing that makes it lame is that\nthere's no reason to expect that the same multiplier is good for\nevery recursive query done in an installation, or even in a session.\n\nOne could imagine dealing with that by adding custom syntax to WITH,\nas we have already done once:\n\nWITH RECURSIVE cte1 AS SCALE 1.0 (SELECT ...\n\nBut I *really* hesitate to go there, mainly because once we do\nsomething like that we can't ever undo it. I think Simon's\nproposal is a reasonable low-effort compromise.\n\nSome nitpicks:\n\n* The new calculation needs clamp_row_est(), since the float\nGUC could be fractional or even zero.\n\n* Do we want to prevent the GUC value from being zero? It's not\nvery sensible, plus I think we might want to reserve that value\nto mean \"use the built-in calculation\", in case we ever do put\nin some smarter logic here. 
But I'm not sure what a reasonable\nnon-zero lower bound would be.\n\n* The proposed docs claim that a smaller setting works by biasing\nthe planner towards fast-start plans, but I don't think I believe\nthat explanation. I'd venture that we want text more along the\nlines of \"This may help the planner choose the most appropriate\nmethod for joining the work table to the query's other tables\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 13:36:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Wed, 23 Mar 2022 at 17:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jan 25, 2022 at 4:44 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >> On the one hand, this smells like a planner hint. But on the other\n> >> hand, it doesn't look like we will come up with proper graph-aware\n> >> selectivity estimation system any time soon, so just having all graph\n> >> OLTP queries suck until then because the planner hint is hardcoded\n> >> doesn't seem like a better solution. So I think this setting can be ok.\n>\n> > I agree. It's a bit lame, but seems pretty harmless, and I can't see\n> > us realistically doing a lot better with any reasonable amount of\n> > work.\n>\n> Yeah, agreed on all counts. The thing that makes it lame is that\n> there's no reason to expect that the same multiplier is good for\n> every recursive query done in an installation, or even in a session.\n>\n> One could imagine dealing with that by adding custom syntax to WITH,\n> as we have already done once:\n>\n> WITH RECURSIVE cte1 AS SCALE 1.0 (SELECT ...\n>\n> But I *really* hesitate to go there, mainly because once we do\n> something like that we can't ever undo it. 
I think Simon's\n> proposal is a reasonable low-effort compromise.\n>\n> Some nitpicks:\n>\n> * The new calculation needs clamp_row_est(), since the float\n> GUC could be fractional or even zero.\n\nTrue, will do.\n\n> * Do we want to prevent the GUC value from being zero? It's not\n> very sensible, plus I think we might want to reserve that value\n> to mean \"use the built-in calculation\", in case we ever do put\n> in some smarter logic here. But I'm not sure what a reasonable\n> non-zero lower bound would be.\n\nAgreed, makes sense.\n\n> * The proposed docs claim that a smaller setting works by biasing\n> the planner towards fast-start plans, but I don't think I believe\n> that explanation. I'd venture that we want text more along the\n> lines of \"This may help the planner choose the most appropriate\n> method for joining the work table to the query's other tables\".\n\nOK, will improve.\n\n[New patch version pending]\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Mar 2022 18:20:09 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Wed, 23 Mar 2022 at 18:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> [New patch version pending]\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 23 Mar 2022 19:37:02 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n>> [New patch version pending]\n\nDo you have any objection if I rename the GUC to\nrecursive_worktable_factor? 
That seems a bit clearer as to what\nit does, and it leaves more room for other knobs in the same area\nif we decide we need any.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 16:25:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Wed, 23 Mar 2022 at 20:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> >> [New patch version pending]\n>\n> Do you have any objection if I rename the GUC to\n> recursive_worktable_factor? That seems a bit clearer as to what\n> it does, and it leaves more room for other knobs in the same area\n> if we decide we need any.\n\nNone, I think your proposal is a better name.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 10:32:00 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Wed, 23 Mar 2022 at 20:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Do you have any objection if I rename the GUC to\n>> recursive_worktable_factor? 
That seems a bit clearer as to what\n>> it does, and it leaves more room for other knobs in the same area\n>> if we decide we need any.\n\n> None, I think your proposal is a better name.\n\nPushed that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 11:48:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parameter for planner estimate of recursive queries" }, { "msg_contents": "On Thu, 24 Mar 2022 at 15:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Wed, 23 Mar 2022 at 20:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Do you have any objection if I rename the GUC to\n> >> recursive_worktable_factor? That seems a bit clearer as to what\n> >> it does, and it leaves more room for other knobs in the same area\n> >> if we decide we need any.\n>\n> > None, I think your proposal is a better name.\n>\n> Pushed that way.\n\nOk, thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 16:04:55 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Parameter for planner estimate of recursive queries" } ]
[ { "msg_contents": "Hi,\n\nIs there a specific reason that we have a generic WARNING \"worker took\ntoo long to start; canceled\" for an autovacuum worker? Isn't it better\nwith \"autovacuum worker took too long to start; canceled\"? It is\nconfusing to see the generic message in the server logs while\ndebugging an issue for a user who doesn't know the internals of\nautovacuum code.\n\nTo be more informative about the message, how about the following:\n1) ereport(WARNING,\n (errmsg( \"worker took too long to start\"),\n errdetail(\"Previous attempt to start autovacuum\nworker was failed, canceled.\")));\nor\n2) ereport(WARNING,\n (errmsg( \"worker took too long to start, canceled\"),\n errdetail(\"The postmaster couldn't start an\nautovacuum worker.\")));\nor\n3) ereport(WARNING,\n (errmsg( \"worker took too long to start, canceled\"),\n errdetail(\"Previous attempt to start autovacuum\nworker was failed.\")));\nor\n4) elog(WARNING, \"postmaster couldn't start an autovacuum worker\");\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 27 Oct 2021 21:56:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Isn't it better with \"autovacuum worker....\" instead of \"worker took\n too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On 10/27/21, 9:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Is there a specific reason that we have a generic WARNING \"worker took\r\n> too long to start; canceled\" for an autovacuum worker? Isn't it better\r\n> with \"autovacuum worker took too long to start; canceled\"? It is\r\n> confusing to see the generic message in the server logs while\r\n> debugging an issue for a user who doesn't know the internals of\r\n> autovacuum code.\r\n\r\nIt looks like it has been this way for a while [0]. 
I don't know if\r\nI've ever seen this message before, and from the comments near it, it\r\nsounds like it is expected to rarely happen.\r\n\r\n> To be more informative about the message, how about the following:\r\n\r\nMy vote is to just change it to\r\n\r\n ereport(WARNING,\r\n (errmsg(\"autovacuum worker took too long to start; canceled\")));\r\n\r\nand call it a day. If we wanted to add errdetail(), I think we should\r\nmake sure it is providing useful context, but I'm not sure what that\r\nmight look like.\r\n\r\nNathan\r\n\r\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=bae0b56\r\n\r\n", "msg_date": "Wed, 27 Oct 2021 19:05:10 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too\n long to start; canceled\" specific to \"auto" }, { "msg_contents": "On Wed, Oct 27, 2021 at 07:05:10PM +0000, Bossart, Nathan wrote:\n> On 10/27/21, 9:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Is there a specific reason that we have a generic WARNING \"worker took\n> > too long to start; canceled\" for an autovacuum worker? Isn't it better\n> > with \"autovacuum worker took too long to start; canceled\"? It is\n> > confusing to see the generic message in the server logs while\n> > debugging an issue for a user who doesn't know the internals of\n> > autovacuum code.\n> \n> It looks like it has been this way for a while [0]. I don't know if\n> I've ever seen this message before, and from the comments near it, it\n> sounds like it is expected to rarely happen.\n\nI was surprised to see that I have only two logs for this in the last 8 weeks.\n\n> > To be more informative about the message, how about the following:\n> \n> My vote is to just change it to\n> \n> ereport(WARNING,\n> (errmsg(\"autovacuum worker took too long to start; canceled\")));\n> \n> and call it a day. 
If we wanted to add errdetail(), I think we should\n> make sure it is providing useful context, but I'm not sure what that\n> might look like.\n\nI think that's fine.\n\nNote that the backend_type is illuminating for those who use CSV logs, or use\nP13+ and log_line_prefix += %b (see 70a7b4776).\n\nsession_line | 1\nerror_severity | WARNING\nmessage | worker took too long to start; canceled\nbackend_type | autovacuum launcher\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 27 Oct 2021 14:26:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "At Wed, 27 Oct 2021 14:26:11 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Oct 27, 2021 at 07:05:10PM +0000, Bossart, Nathan wrote:\n> > My vote is to just change it to\n> > \n> > ereport(WARNING,\n> > (errmsg(\"autovacuum worker took too long to start; canceled\")));\n> > \n> > and call it a day. If we wanted to add errdetail(), I think we should\n> > make sure it is providing useful context, but I'm not sure what that\n> > might look like.\n> \n> I think that's fine.\n\n+1\n\n> Note that the backend_type is illuminating for those who use CSV logs, or use\n> P13+ and log_line_prefix += %b (see 70a7b4776).\n> \n> session_line | 1\n> error_severity | WARNING\n> message | worker took too long to start; canceled\n> backend_type | autovacuum launcher\n\nYeah, the additional \"autovacuum\" is not noisy at all even in that\ncontext. 
Some other messages are prefixed with \"autovacuum\".\n\n \"could not fork autovacuum worker process: %m\"\n \"autovacuum worker started without a worker entry\"\n\nBy a quick look all occurrences of \"launcher\" are prefixed with\n\"autovacuum\" or \"logical replication\", which seems fine.\n\nAs a related topic, autovacuum.c has another use of bare \"worker\"s.\n\n>\ttmpcxt = AllocSetContextCreate(CurrentMemoryContext,\n>\t\t\t\t\t\t\t\t \"Start worker tmp cxt\",\n>\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n\n>\tAutovacMemCxt = AllocSetContextCreate(TopMemoryContext,\n>\t\t\t\t\t\t\t\t\t\t \"AV worker\",\n>\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n\nI'm not sure the former needs to be fixed, but the latter is actually\nvisible to users via pg_log_backend_memory_contexts().\n\nLOG: level: 1; AV worker: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 28 Oct 2021 10:41:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of\n \"worker took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On Thu, Oct 28, 2021 at 7:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 27 Oct 2021 14:26:11 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > On Wed, Oct 27, 2021 at 07:05:10PM +0000, Bossart, Nathan wrote:\n> > > My vote is to just change it to\n> > >\n> > > ereport(WARNING,\n> > > (errmsg(\"autovacuum worker took too long to start; canceled\")));\n> > >\n> > > and call it a day. 
If we wanted to add errdetail(), I think we should\n> > > make sure it is providing useful context, but I'm not sure what that\n> > > might look like.\n> >\n> > I think that's fine.\n>\n> +1\n\nDone.\n\n> > Note that the backend_type is illuminating for those who use CSV logs, or use\n> > P13+ and log_line_prefix += %b (see 70a7b4776).\n> >\n> > session_line | 1\n> > error_severity | WARNING\n> > message | worker took too long to start; canceled\n> > backend_type | autovacuum launcher\n>\n> Yeah, the additional \"autovacuum\" is not noisy at all even in that\n> context. Some other messages are prefixed with \"autovacuum\".\n>\n> \"could not fork autovacuum worker process: %m\"\n> \"autovacuum worker started without a worker entry\"\n>\n> By a quick look all occurances of \"laucher\" are prefixed with\n> \"autovacuum\" or \"logical replcaion\", which seems fine.\n>\n> As a related topic, autovacuum.c has another use of bare \"worker\"s.\n>\n> > tmpcxt = AllocSetContextCreate(CurrentMemoryContext,\n> > \"Start worker tmp cxt\",\n> > ALLOCSET_DEFAULT_SIZES);\n>\n> > AutovacMemCxt = AllocSetContextCreate(TopMemoryContext,\n> > \"AV worker\",\n> > ALLOCSET_DEFAULT_SIZES);\n>\n> I'm not sure the former needs to be fixed, but the latter is actually\n> visible to users via pg_log_backend_memory_contexts().\n>\n> LOG: level: 1; AV worker: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n\nGood catch. I've seen the use of \"AV\" in some of the mem context\nnames, why that? Let's be specific and say \"Autovacuum\". Attached\npatch does that. 
Please review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 28 Oct 2021 08:13:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On Thu, Oct 28, 2021 at 8:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > LOG: level: 1; AV worker: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n>\n> Good catch. I've seen the use of \"AV\" in some of the mem context\n> names, why that? Let's be specific and say \"Autovacuum\". Attached\n> patch does that. Please review it.\n\n+1, the error message and other improvements look good.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Oct 2021 09:52:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On Thu, Oct 28, 2021 at 11:44 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 7:11 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 27 Oct 2021 14:26:11 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > On Wed, Oct 27, 2021 at 07:05:10PM +0000, Bossart, Nathan wrote:\n> > > > My vote is to just change it to\n> > > >\n> > > > ereport(WARNING,\n> > > > (errmsg(\"autovacuum worker took too long to start; canceled\")));\n> > > >\n> > > > and call it a day. 
If we wanted to add errdetail(), I think we should\n> > > > make sure it is providing useful context, but I'm not sure what that\n> > > > might look like.\n> > >\n> > > I think that's fine.\n> >\n> > +1\n>\n> Done.\n>\n> > > Note that the backend_type is illuminating for those who use CSV logs, or use\n> > > P13+ and log_line_prefix += %b (see 70a7b4776).\n> > >\n> > > session_line | 1\n> > > error_severity | WARNING\n> > > message | worker took too long to start; canceled\n> > > backend_type | autovacuum launcher\n> >\n> > Yeah, the additional \"autovacuum\" is not noisy at all even in that\n> > context. Some other messages are prefixed with \"autovacuum\".\n> >\n> > \"could not fork autovacuum worker process: %m\"\n> > \"autovacuum worker started without a worker entry\"\n> >\n> > By a quick look all occurances of \"laucher\" are prefixed with\n> > \"autovacuum\" or \"logical replcaion\", which seems fine.\n> >\n> > As a related topic, autovacuum.c has another use of bare \"worker\"s.\n> >\n> > > tmpcxt = AllocSetContextCreate(CurrentMemoryContext,\n> > > \"Start worker tmp cxt\",\n> > > ALLOCSET_DEFAULT_SIZES);\n> >\n> > > AutovacMemCxt = AllocSetContextCreate(TopMemoryContext,\n> > > \"AV worker\",\n> > > ALLOCSET_DEFAULT_SIZES);\n> >\n> > I'm not sure the former needs to be fixed, but the latter is actually\n> > visible to users via pg_log_backend_memory_contexts().\n> >\n> > LOG: level: 1; AV worker: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n>\n> Good catch. I've seen the use of \"AV\" in some of the mem context\n> names, why that? Let's be specific and say \"Autovacuum\". Attached\n> patch does that. Please review it.\n\n+1. 
The patch looks good to me too.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Oct 2021 16:11:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On Thu, Oct 28, 2021 at 12:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 11:44 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 7:11 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 27 Oct 2021 14:26:11 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > > On Wed, Oct 27, 2021 at 07:05:10PM +0000, Bossart, Nathan wrote:\n> > > > > My vote is to just change it to\n> > > > >\n> > > > > ereport(WARNING,\n> > > > > (errmsg(\"autovacuum worker took too long to start; canceled\")));\n> > > > >\n> > > > > and call it a day. If we wanted to add errdetail(), I think we should\n> > > > > make sure it is providing useful context, but I'm not sure what that\n> > > > > might look like.\n> > > >\n> > > > I think that's fine.\n> > >\n> > > +1\n> >\n> > Done.\n> >\n> > > > Note that the backend_type is illuminating for those who use CSV logs, or use\n> > > > P13+ and log_line_prefix += %b (see 70a7b4776).\n> > > >\n> > > > session_line | 1\n> > > > error_severity | WARNING\n> > > > message | worker took too long to start; canceled\n> > > > backend_type | autovacuum launcher\n> > >\n> > > Yeah, the additional \"autovacuum\" is not noisy at all even in that\n> > > context. 
Some other messages are prefixed with \"autovacuum\".\n> > >\n> > > \"could not fork autovacuum worker process: %m\"\n> > > \"autovacuum worker started without a worker entry\"\n> > >\n> > > By a quick look all occurances of \"laucher\" are prefixed with\n> > > \"autovacuum\" or \"logical replcaion\", which seems fine.\n> > >\n> > > As a related topic, autovacuum.c has another use of bare \"worker\"s.\n> > >\n> > > > tmpcxt = AllocSetContextCreate(CurrentMemoryContext,\n> > > > \"Start worker tmp cxt\",\n> > > > ALLOCSET_DEFAULT_SIZES);\n> > >\n> > > > AutovacMemCxt = AllocSetContextCreate(TopMemoryContext,\n> > > > \"AV worker\",\n> > > > ALLOCSET_DEFAULT_SIZES);\n> > >\n> > > I'm not sure the former needs to be fixed, but the latter is actually\n> > > visible to users via pg_log_backend_memory_contexts().\n> > >\n> > > LOG: level: 1; AV worker: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n> >\n> > Good catch. I've seen the use of \"AV\" in some of the mem context\n> > names, why that? Let's be specific and say \"Autovacuum\". Attached\n> > patch does that. Please review it.\n>\n> +1. The patch looks good to me too.\n\nThanks all for reviewing this. Here's the CF entry -\nhttps://commitfest.postgresql.org/35/3378/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 28 Oct 2021 18:11:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" }, { "msg_contents": "On 10/28/21, 5:42 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Thanks all for reviewing this. 
Here's the CF entry -\r\n> https://commitfest.postgresql.org/35/3378/\r\n\r\nI've marked this one as ready-for-committer.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 10 Nov 2021 19:45:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too\n long to start; canceled\" specific to \"auto" }, { "msg_contents": "On 2021-Oct-28, Bharath Rupireddy wrote:\n\n> Thanks all for reviewing this. Here's the CF entry -\n> https://commitfest.postgresql.org/35/3378/\n\nThanks, pushed. I changed a couple of things though -- notably changed\nthe elog() to ereport() as suggested by Nathan early on, but never\nmaterialized in the submitted patch. I also changed the wording of the\ncontext names, since the proposed ones weren't much more satisfactory\nthan the existing ones.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n", "msg_date": "Mon, 22 Nov 2021 13:25:45 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Isn't it better with \"autovacuum worker....\" instead of \"worker\n took too long to start; canceled\" specific to \"auto" } ]